Compit2006 Oegstgeest
Sponsored by:
Editor:
H.T. Grimmelius
Department of Marine & Transport Technology
Faculty of Mechanical, Maritime and Materials Engineering (3mE)
Delft University of Technology
Mekelweg 2
2628 CD Delft
The Netherlands
Board of reviewers:
Prof. Dr. U. Nienhuis, Delft University of Technology, The Netherlands
Prof. Dr. V. Bertram, ENSIETA, France
Dr. H.T. Grimmelius, Delft University of Technology, The Netherlands
Published by:
IMarEST Benelux Branch
De Krammer 16
4335 XA Middelburg
The Netherlands
Printed by:
Sieca Repro
Turbineweg 20
2627 BP Delft
The Netherlands
ISBN-10: 90-810065-3-3
ISBN-13: 978-90-810065-3-8
The texts of various papers in this volume were set by the authors or under their supervision
by J.M.L. Spannenburg
Copyright © 2006
All rights reserved. No part of the material protected by this copyright notice may be
reproduced or utilized in any form or by any means, electronic or mechanical, including
photocopying, recording or by any information storage and retrieval system, without written
permission from either the authors or the publisher: IMarEST Benelux Branch.
Index

How can we reach the wanted level of reduced risk in managing, archiving and exchanging digital information?
Kristin Dalhaug, Geir Hardt

Optimizing man-hours of Nordseewerke's assembly halls using Genetic Algorithm including space allocation as boundary condition
Marcus Bentin, Urs Henkelmann, Christof Sacher

Automated Discharge Monitoring Report System for Shipyard Compliance with the Clean Water Act
Bhaskar Kura, Karthik Kura

Prediction of ship turning manoeuvre using Artificial Neural Networks (ANN)
Adel Ebada, Moustafa Abdel-Maksoud

Differentiating product model requirements for ship production and product lifecycle maintenance (PLM)
Rolf Oetter, Patrick Cahill

Simulation of material flow processes in the planning of production spaces in shipbuilding
Christian Nedeß, Axel Friedewald, Lars Wagner, Michael Hübler

Combined analysis methods used to investigate the steering capabilities of a river pusher
Razvan Ionas, Valeriu Ceanga

An Analytical Cost Assessment Module for the Detailed Design Stage
Jean-David Caprace, Philippe Rigo, Renaud Warnotte, Sandrine Le Viol

Risk analysis as a base for the alternative method for safety assessment of ships
Miroslaw Gerigk

ISO 17894 - Marine Programmable Electronic Systems and an alternative approach to complying with Lloyd's Register Classification Requirements
Duncan Gould

Automation of the Ship Condition Assessment Process for Accidents Prevention
Philippe Renard, Peter Weiss
How can we reach the wanted level of reduced risk in managing, archiving and exchanging digital information?
Kristin Dalhaug, Det Norske Veritas, Oslo, Norway, [email protected]
Geir Hardt, Det Norske Veritas, DNV Software, Oslo, Norway, [email protected]
Abstract
There is a continuous drive towards paperless production in most businesses, including the Maritime Cluster. The fact that our business is acting globally and around the clock is an important
driver for this. We need to take into account the risks of handling digital information as part of our
communication base with our customers. If the vision is to develop a paperless and digital
organisation, the challenge is to move from a paper-based to a digital work environment in a controlled manner. PKI is an enabler of and support to this vision, ensuring that the security of a paperless production environment is equivalent to or better than that of paper-based production.
The issue of PKI (Public Key Infrastructure) relates to how electronic documents are secured in
storage (short/long term) as well as in transit, to avoid breaches in confidentiality, integrity,
traceability and availability, and how non-internal users of graded information can be authenticated
in a secure manner.
The paper will present results from a rather substantial feasibility study and an outline of the design
of the technical solution and the suggested infrastructure. In the presentation there will also be a
demonstration of the functionality and the architecture of the solution.
1. Introduction
The organisation has requested new and improved functionality related to the handling of electronic documents in business processes. Furthermore, there is an identified need to allow our customers secure access, from an untrusted network, to information inside the organisation network.
The issue of PKI (Public Key Infrastructure) relates to how electronic documents are secured in
storage (short/long term) as well as in transit, to avoid breaches in confidentiality, integrity,
traceability and availability, and how external users are authenticated in a secure manner. The PKI
feasibility study is based on evolved recommendations within the field of information security.
This document contains highlights from the PKI feasibility study. The essence of the document is concentrated in the chapter 'Business vision and requirements'. The body of the document supplements the executive summary and describes more of the results from the feasibility study. At the end of the document, supplementary readings are provided in appendices.
In parts of the business there is a continuous drive towards a paperless production line. This implies
taking into account the risks of handling digital information as part of our communication base with
our customers. The challenge is to move from a paper-based to a digital work environment in a controlled manner. The PKI feasibility study seeks to support this vision by ensuring that the security of paperless production is equivalent to or better than that of paper-based production. One important
acknowledgement captured during the study came from the business:
“We handle a large and increasing number of documents both internally and with customers and
partners. We need solutions to handle the increasing volumes in a controllable fashion.”
The two most important business related requirements identified to handle secure customer
collaboration and internal processes are:
1. Increase efficiency and quality by moving to a paperless production environment.
2. Enhance own services by providing secure exchange of information with clients.
A summary of business requirements identified is presented in chapter 7.
When production processes and information-exchange with clients are no longer based on paper, then
we can say that we have moved to a digital production environment. To reach this development stage
there are some major issues that need to be addressed, one of the most important being the issue of
qualified digital signatures. In the following we will give a short presentation of three processes where
signatures are required. The tables following each process description describe the process key challenges and/or typical activities. The tables describe the processes in three different settings: before, now, and in the possible future with digital signatures. This is a light version of the description of the development of one major business process, given the implementation of a digital signature solution.
To ensure controlled digital storage of signed documents, reports, certificates and other business-critical documents, these are scanned and microfilmed centrally. The microfilming process is time-consuming and costly. Moreover, the process has proved to be complicated to follow up and to carry through in a sustainable manner on the entirety of the document volume.
At present more than 500,000 documents are stored in DocuLive Db without a microfilmed equivalent and without a signature. Ensuring long-term storage procedures is a centralised responsibility in the organisation.
3. Alternative solutions
The PKI Feasibility study has looked closely at PKI as the technology to cover the identified business
requirements. The study has made a comparison between PKI and other alternatives. The main
alternatives are:
1. As is: perform no changes in security related to document exchange and long-term storage.
2. Apply confidentiality mechanisms (encryption) securing documents while they are in transit (channel-based security).
3. Apply integrity and traceability (non-repudiation) mechanisms while documents are in transit and in storage (document/message-based security).
4. Apply the mechanisms of both 2 and 3 above in combination, giving integrity, traceability and confidentiality (applying both channel security and message-based security).
Use of alternative 4, digital signatures to establish integrity and traceability in combination with confidentiality, will meet the business requirements.
PKI as an infrastructure may be used to handle both message based and channel based security. This
is an important simplification compared to handling two or more security related infrastructures in
parallel to achieve channel and message based security.
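To make the message-based security of alternatives 3 and 4 concrete, the following sketch (illustrative only, not part of the feasibility study; it uses the Python "cryptography" package) shows a sender signing a document and a receiver verifying its integrity and origin. In a full PKI the key pair would be bound to an identity by a certificate issued by a Certification Authority.

# Minimal sketch of message-based security (integrity, traceability,
# non-repudiation) with public-key cryptography; illustrative only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In a real PKI this key pair would be certified by a CA; here it is throwaway.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Approved drawing, revision B"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sender: sign the document with the private key.
signature = private_key.sign(document, pss, hashes.SHA256())

# Receiver: verify() raises InvalidSignature if the document was altered.
public_key.verify(signature, document, pss, hashes.SHA256())
print("Signature valid; document unaltered in transit or storage.")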
There is no single, widely accepted, definition of public-key infrastructure (PKI). This is a reflection
of the fact that PKI is peculiarly difficult to define because it is more than a single technology or
product. Rather it is a complex amalgam of technology, people, services, processes and policy.
One commonly accepted definition of PKI is:
A public-key infrastructure (PKI) is the set of policies, people, processes, technology and
services that make it possible to deploy and manage the use of public-key cryptography
and digital certificates on a wide-scale.
An easier way of understanding PKI may be to have a look at the business advantages PKI may give
an organisation.
An organisation using PKI will be able to better:
• Digitally sign information, e.g. when signing documents electronically.
• Ensure that users cannot repudiate their actions at a later date
e.g. when required to confirm the identity of the specific user who sent a particular document
over an untrusted network.
• Detect unauthorised changes to data, e.g. when sending an approved drawing over the
Internet.
• Encrypt communications, e.g. when sending confidential documents over an untrusted
network.
• Positively identify remote users over untrusted networks, e.g. when required to provide access
to sensitive business information over the Internet.
• Installing firewalls and intrusion detection systems (IDS) doesn't save money either.
• Installing power backup (UPS) doesn’t save money.
• Using virtual private networks (VPN) doesn’t save money.
So why should we invest so much in security if we can’t take home a profit from our PKI investment?
The answer is simple: security doesn't directly show up as an ROI.
The question is instead: “How much will I lose if I don’t invest in PKI?”
• Hiring security staff to guard against break-ins.
• Installing video surveillance (CCTV) to protect equipment and employees against theft.
• Installing virus protection to protect our data from computer viruses, Trojan horses and worms.
• Installing firewalls and IDS to maintain privacy and keep hackers out.
• Installing UPS to ensure our data survives power outages, lightning strikes and power surges.
• Implementing VPN to keep our communications networks safe from hackers.
PKI protects data integrity, and ensures identity control, non-repudiation and privacy from internal and external threats. Similar to any 'infrastructure' (such as a railway or telephone network), a PKI will not, in itself, deliver a return on investment; the reliant applications will. The real benefits, however, lie in leveraging PKI technology with business-critical applications; applications that play a core role in your company's day-to-day business activity.
A recent report, published by the PKI Forum, noted that Public Key-enabled applications typically
deliver business benefits within five high-level categories:
1. New Revenue Opportunities
2. Cost Savings
3. Compliance
4. Risk Mitigation
5.3 Compliance
Compliance generally refers to things about which we have very little choice, i.e., things we must do
in order to stay in business as we know it. In some cases, compliance may be related to cost avoidance
(e.g., avoid a fine); in others, it may be related to protecting an existing revenue stream. As it relates
to e-security infrastructure, compliance-based arguments tend to come from one of the following four
categories: Regulatory, Partner, Customer, and Competitive.
• Regulatory compliance: where failure to implement could mean fines, loss of revenues, jail terms, etc., e.g., the Basel II Accord for global financial services, HIPAA regulations for the U.S. healthcare industry, the Gramm-Leach-Bliley bill for the U.S. financial services industry, and Directive 95/46/EC for all enterprises in the EU.
• Partner compliance: where failure to implement could mean losing our ability to participate
with a key partner or group of partners, e.g., a segment of the financial industry moving to the
Identrus model for cross-certification.
• Customer compliance: where failure to implement could mean the loss of a business
relationship with a key account, e.g., “all General Motors suppliers who wish to have their
contracts renewed must implement technology X by a certain date”.
• Competitive compliance: where failure to implement could mean the loss of competitiveness.
6. Business requirements
The most important acknowledgement captured during the PKI feasibility study came from the
Maritime part of the organisation:
“We handle a large and increasing number of documents both internally and with customers and
partners. To handle scalability and security in electronic workflow, implementing PKI is the only long-term solution identified as satisfactory.”
The generalized business related requirements identified to handle secure customer collaboration are:
1. Increasing efficiency and quality by moving to a paperless production environment.
2. Enhancing own services by providing secure exchange of information with clients.
3. Enabling secure digital work routines with customers and partners based on a well-defined and acceptable level of traceability and security.
4. Establishing legally satisfying electronic workflow.
5. Reducing risk related to loss of confidentiality (encryption).
6. Reducing risk related to weak traceability (integrity, authenticity, non-repudiation, authorization).
Activity: Issuing certificates
• Savings: man-hours on printing, scanning and microfilming of certificates. Example: 20 min * 2 000 contracts ≈ 666 hours per year.
• Production efficiency / increased quality: no need to print and scan certificates; streamlined process.

Activity: Survey reports stored in the document repository are unsigned; the signed master is stored on paper at the local station for the regulated time (15 years after deletion)
• Savings: no need for local storage (storage room) at the station; man-hours on archiving at the station. Example: 1 hour per week * number of offices/stations ≈ 150 hours per week * NOK 1 000 = NOK 150 000 per week * 52 weeks ≈ NOK 7 800 000 per year.
• Production efficiency / increased quality: a central digital master secures and streamlines digital preservation; a proper digital signature will render this processing obsolete.

Activity: Microfilming of digital documents
• Savings: approx. NOK 1 million (if microfilming becomes obsolete).
• Production efficiency / increased quality: secure storage in the DMS will reduce the need for microfilming.
We believe these requirements are of a general kind and not specific to the scenarios we have studied.
In the next chapter the requirements above have been rephrased and related to the detailed needs in
DNV Maritime.
The Maritime business unit has taken an active part in the feasibility study, identifying more detailed
requirements related to their business processes. The requirements have been grouped into:
traceability, confidentiality and user friendliness. The requirements are as follows:
7.1 Traceability
• The signatures must be traceable and readable for 15 years after class deletion
• The history of the signatures should be traceable, to see who has been doing what and when
• The role of the signature must be traceable: approval engineer, verifier, line manager.
• It must be extremely difficult to tamper with the signatures
7.2 Confidentiality
• Counter measures must be taken to avoid loss of confidentiality
8. Alternative solutions
Based on an evaluation and comparison of four alternatives, a PKI pilot architecture is suggested. The suggested pilot comprises the integration and generation of digital signatures in NPS, and validation of the signatures in the eApproval service built on DNV exchange. Changes in both applications and the
involved processes are described. We suggest that approval letters and approved drawings should be
the two document types to start with. The feasibility study has resulted in a number of documents.
You will find a complete list of these documents in the appendix.
The study was intended to evaluate whether PKI as a technology would cover the requirements identified. Even though there was a special focus on PKI, a comparison was made between PKI and other alternatives, to assure that PKI really is the best possible solution.
The alternatives are described in the following chapters.
9. Comparison of alternatives
To give an overview of improvements in the alternative solutions, Table 3 below lists the business
requirements compared to the four alternative solutions.
[Table 3: Business requirements compared to alternative solutions 1 to 4]
The table above is a simplified view of a complex situation, but it gives an indication of improvements. The comparison in the table above would be improved if only one business process and the supporting system were evaluated at a time. For further improvement it is possible to build the analysis on the basic concepts within information security.
Notes:
1. Security mechanisms enable electronic workflow in a secure way; the information needs to be electronic up front to get this improvement.
2. eApproval is already using channel security.
3. A better statement than "legally satisfying" would be "reduced risks related to legal disputes and incompatibility". DNV is today not forced by juridical considerations to enroll PKI.
10. Critical factors affecting cost of ownership
A total cost of ownership calculation has not been performed. In the DNV environment, the main costs will probably lie in PKI deployment.
Digital information exchange and use of PKI will introduce some new risks and remove some others, compared to paper-based information exchange.
DNV has to make several decisions on how to use PKI, how to achieve interoperability and how to meet legal and business requirements. Some of these choices will influence cost and ROI.
Examples of such choices and their impact on costs are given in Table 5.
Table 5: Cost estimates (pessimistic / optimistic; H = high, M = medium, L = low)
• Demands related to third parties (parties outside DNV, bilateral agreements): H / L
• Number of applications that need to be changed: M / L
• Number of processes that need to be changed: M / L
• How to handle different jurisdictions when designing and operating processes and systems: H / L
• Requirements on non-repudiation and long-term storage: M / L
• Openness and interoperability: H / L
• Configuration control in interorganisational systems: H / L
• Usability: M / L
• Availability of Trusted Third Party services like Validation services, Time Stamp, Notary etc.: H / L
References
Acknowledgements
[Q]COOL:
A Knowledge-based Cooling Water System Configurator
M. Th. van Hees, Qnowledge B.V., Wageningen, The Netherlands, [email protected]
E.C. van der Blom, IHC Holland Dredgers B.V., Kinderdijk, The Netherlands,
[email protected]
Abstract
Within the scope of the VNSI Open Mind project, IHC Holland Dredgers B.V. and Qnowledge B.V.
carried out a research project into the application of the Quaestor knowledge system in the domain of
project configuration. A cooling water system configuration for dredgers was considered to be of
acceptable complexity and a realistic and ‘typical’ product configuration application. The paper
discusses some of the fundamental concepts of knowledge-based product configuration, the generic
object model of solutions representing various system configurations and presents some examples of
the function and component descriptions in the configurator knowledge base. The configuration
process flow and results are illustrated with examples, and some experiences gained with the practical introduction of this configurator are presented.
1. Introduction
Within the scope of the VNSI Open Mind project, IHC Holland Dredgers BV and Qnowledge BV
jointly developed a novel approach to product configuration using the Quaestor knowledge system.
Quaestor has been developed by the Maritime Research Institute Netherlands (MARIN) and was transferred to Qnowledge BV in 2004. Qnowledge was founded by MARIN, M. van Hees and T. Verwoest as a company dedicated to the management and application of technical and scientific
knowledge. IHC Holland Merwede BV is the world’s market leader in the design, fabrication and
supply of equipment and services for the dredging industry.
The goals of the VNSI Open Mind project were to develop and introduce standards for product,
process and information models for the Dutch maritime industry. An important motivation for this
effort was to improve the exchange of such information between the parties involved in sales,
engineering design and building process of maritime structures.
One of the key concepts that lie at the basis of these models is the IO (integraal ontwerpen/integral
design) approach, in which the design process is decomposed into a collection of functions and function fulfillers as shown in fig. 1. In this graph, referred to as the 'hamburger model', the top of the hamburger depicts the function or task to be performed and the lower part the entity that fulfills that function. The lines between the hamburgers represent 'requires/consists of' relationships. These hamburger diagrams are a representation of the tasks, knowledge and capabilities required to fulfill the topmost function, represented by the upper part of the upper hamburger in the diagram.
The development of a cooling water system configurator was one of the four pilot projects performed
within the Open Mind project. After a review process of several software systems, Quaestor was
selected by IHC Holland Dredgers BV as the basis for the pilot on product configuration. This was also the first involvement of Qnowledge in the Open Mind project, which was already a few years underway at that moment. IHC Holland Dredgers BV had thoroughly investigated the cooling water system design process and had represented the process, amongst others, in the form of the above
diagrams. The results of their analyses formed the basis on which the development of the cooling
water system configurator was started.
When the project started early in 2004, it was considered conceptually possible to use the Quaestor system as a product configuration tool. At that moment the system was already capable of assembling complex multi-level computational models consisting of objects containing data and methods, on the basis of a collection of parameters, relations and constraints (vHees 1992, 1995, 1997, 2003). The
system is used in a variety of applications (Brouwer 1990, vEs 2003, Keizer 1996, vOers 2006, vdNat
1994, Sipkema 2003, Wolff 1994). Therefore, the available technology for the configuration of
computational models was expected to be well suited towards the configuration of abstract structures
representing technical solutions, in this case cooling water systems for dredgers.
In fig. 2, an example from [Q]STAP (Quaestor Sea Trials Analysis Program) is presented. [Q]STAP
is a typical Quaestor application consisting of a collection of sea trial correction methods in the form
of relations, constraints, FORTRAN executables and Excel sheets. It shows a scenario in the workbase of the [Q]STAP knowledge base consisting of a set of sub-solutions, each dealing with a step in the sea trial correction process. Each node in the tree view on the left contains a set of input and
computed data. This example illustrates one of the key symmetries in the Quaestor domain model: the
unification of the parameter and object concepts. In Quaestor parameters are ‘value containers’.
Values can be (vectors of) numerical or nominal values (binary string/document) or objects. Objects contain a
collection of parameter-value combinations in which values can be objects again. The data model of
Quaestor is referred to as fractal. A fractal is a mathematical construct with scale independent
properties (fig. 3).
For Quaestor this means that the object model is capable of describing complex hierarchical structures. Objects can contain multiple instances of values and objects, in which each instance can have different origins or can even be missing. Within a branch of such a structure, multi-level inheritance is possible, i.e. if values are not denominated local, they are accessible or 'visible' deeper in the hierarchy. Another symmetry in the Quaestor domain model is between object and function. Quaestor has its own library of intrinsic functions like SIN(), COS(), LEFT$() etc. Some of these functions return a single numerical or nominal value on the basis of one or more arguments, others return a dataset or computational model.
Fig. 3: Fractal image, courtesy: www.softwarefederation.com
Therefore, the object parameters are also functions as they can receive goals and input and perform
calculations. Quaestor can assemble and execute a computational model in the object instance in the
solution. A simple example is the calculation of a Reynolds number (the F_Re object in fig. 2):
Re_ship = F_Re(@Re, @Length:Lpp, @Speed:Vship)
For Quaestor, F_Re is a parameter of the Object type; all the others are of the numerical type. As soon
as Re_ship is needed, the F_Re object is created if it does not yet exist in the solution and is invoked
with new input. In this case, the F_Re() function returns the results of its top goal Re, indicated by @.
For Quaestor F_Re is one of the objects in the hierarchy forming the solution.
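As an illustration only (Python, not Quaestor syntax), this object-as-function idea can be mimicked by an object that stores its input data and, when invoked, computes its top goal, here Re = V*L/nu:

# Illustrative Python analogue of the F_Re object above; the kinematic
# viscosity value is an assumption (sea water at roughly 15 degrees C).
class FRe:
    def __init__(self, Lpp: float, Vship: float, nu: float = 1.19e-6):
        self.Lpp, self.Vship, self.nu = Lpp, Vship, nu

    def __call__(self) -> float:      # top goal: the Reynolds number
        return self.Vship * self.Lpp / self.nu

Re_ship = FRe(Lpp=120.0, Vship=7.0)()  # roughly 7.1e8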
One of the steps that were needed for the application of Quaestor in product configuration was to
enable objects (functions) to be provided with objects (function fulfillers) as top goals,
whereas previously only numerical and nominal parameters could be used as goals in objects. It is
simple to understand this from the following relation:
Dredging# = Dredging(@Dredge#, @Jet#, @HydPowerPack_cooler#)
In the hamburger model the Dredging# function (string object parameter) is the top of the hamburger
containing the result of the Dredging() object, being the function fulfiller. The @Dredge#, @Jet# and
@HydPowerPack_cooler# top goals are needed to fulfill (compute) the Dredging# function. If each of
the functions has their own relations in a similar form, product configuration becomes feasible.
Objects in Quaestor can represent functions like dredging but also the hardware needed to perform
these functions, such as a hydraulic power pack cooler. In the above example one needs two parameters to describe a function, namely the object and a parameter needed as the left term of a relation, as relations in Quaestor always consist of an equality. In order to reduce the number of parameters, the internal representation of this function/function fulfiller construct was changed into Object=Object(Goals,
Arguments), shown as Object(Goals, Arguments). This allows an extremely compact function syntax
changing the above example into:
Dredging(@Dredge, @Jet, @HydPowerPack_cooler)
The introduction of the function-function fulfiller concept with objects as goals is considered by Qnowledge to be the most important innovation resulting from the project.
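The mechanism this enables can be pictured in a small Python sketch (hypothetical rules, not the Quaestor implementation): each function has one or more candidate fulfillers, a constraint selects the applicable one, and fulfilling a function recursively fulfills its sub-functions, yielding a nested solution.

# function -> list of (constraint, sub-functions) alternatives; hypothetical.
FULFILLERS = {
    "Drive":    [(lambda ctx: ctx["Connection"] == "EMotor", []),
                 (lambda ctx: ctx["Connection"] == "Gearbox", ["Diesel"])],
    "Diesel":   [(lambda ctx: True, [])],
    "Pump":     [(lambda ctx: True, ["Drive"])],
    "Dredging": [(lambda ctx: True, ["Pump"])],
}

def fulfill(function, ctx):
    # Select the first fulfiller whose constraint holds and recursively
    # fulfill its sub-functions, returning a nested solution structure.
    for constraint, subgoals in FULFILLERS[function]:
        if constraint(ctx):
            return {function: [fulfill(g, ctx) for g in subgoals]}
    raise ValueError("no fulfiller applicable for " + function)

print(fulfill("Dredging", {"Connection": "Gearbox"}))
# -> {'Dredging': [{'Pump': [{'Drive': [{'Diesel': []}]}]}]}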
3. Implementation of [Q]COOL
A typical product configuration tool contains a body of knowledge on system components and their
functions in the form of mostly simple rules. Product configuration tools support or enable the design
of systems consisting of varying assemblies of selections of these components. Functions are e.g.
dredging, propulsion and cooling and components can be any hardware such as coolers, electric
motors, valves, pipe sections, etc.
The first step was the decomposition along the lines described in the hamburger diagrams. The
process of defining the hamburger diagrams together with designers and engineers proved to be far more useful than the actual facts the diagrams contained. However, the hamburger diagrams are only a
description of the product and not of the design process. Therefore, the actual design process was
described using flowcharts. The flowcharts are a very detailed description of the steps taken in the
design process, the possible choices made and the information needed and generated. The flowcharts
proved to be useful during the building process of the cooling water system configurator.
The basic input for the design of a cooling water system is a heat balance presenting the power
consumption and heat production of the cooling water users. As a first step the new function-function
fulfiller construct was used in the generation of the heat balance by gathering and dimensioning the
cooling water users. Therefore, the heat balance is not the starting point of the cooling water system
design but rather an intermediate result of the process in which system components are selected and
dimensioned.
The knowledge browser presented in fig. 5 allows the management and maintenance of the collection
of knowledge forming the product configuration tool. The class tree on the left is a folder structure in
which the parameters, relations and functions are stored. A few of the functions in the knowledge base
are visible on the right. Quaestor allows the presence of more than one function fulfiller (relation) for
each function (parameter). In the above list six Heatbalance() functions are available, each connected
to a constraint describing an operational condition (design, dredging, sailing…). By means of
constraints, it is possible to select specific function fulfillers for specific conditions.
Fig. 5: Knowledge browser containing some functions
Please note that the Quaestor inference engine is capable of configuring a computational model on the basis of a collection of model fragments (parameters, relations and constraints). This engine forms the heart of the product configurator; it performs the task of selecting functions from the knowledge base and controls their evaluation by the numerical solver.
An important aspect of this application was the selection mechanism of objects. For instance, if we have placed a dredge pump, how do we drive this pump: by means of an electric motor, by a diesel, through a gearbox? Is a particular bearing or electric motor water-cooled, etc.? Many of these choices
depend on aspects like size, application, etc., i.e. knowledge and experience of the designers. In order
to create a realistic and useful configuration tool, it is necessary to acquire this knowledge by means
of interviewing designers and reviewing existing designs. IHC Holland Dredgers BV judged that
making such relevant design knowledge explicit is not always easy but very useful in order to
improve the traceability, quality, efficiency and the understanding of steps taken during the design
process.
The component selection mechanism can be easily understood by means of the following example.
Assume that in the Dredge object we have the following function representing a dredge pump:
Pump(@ID$, @SymbolText$, @Drive#, @Power:PowerDredgepump)
In which ID$ is the name of the pump, SymbolText$ is the text to be placed in the pump symbol in
the diagram and @Power is the input power requirement of the pump. In this case Drive# is the most
interesting one, for it represents the driving function (component) of the pump.
The Drive# function can be fulfilled by different relations, e.g.
Drive# = Bearing
Drive# = Gearbox
Drive# = Diesel
Drive# = EMotor etc.
Each of the above relations is connected to a constraint:
Connection$ = ComponentName
In which ComponentName indicates one of the above objects. Objects can hold a dedicated list of
driving components. The selection list for Connection$ in the Dredge object, being the parent of Pump, is restricted to:
Dredge Pump connected to
Bearing<EQ>
Gearbox<EQ>
Diesel (dedicated)<EQ>
Main Diesel<EQ>
Electric Motor<EQ>
The <EQ> tags indicate that these options are to be input by means of a combo box, i.e. the input value is limited to one of the list. Fig. 6 shows the part of a configuration in the workbase in which the
Dredge.Pump object is in focus.
As soon as the components are selected, introduced in the product model and dimensioned if
necessary, the heat production can be computed, in most cases on the basis of a simple efficiency
formula. Each heat producing or power transforming component holds its contribution to the heat
balance.
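As a sketch of such a simple efficiency formula (component names, powers and efficiencies below are illustrative, not from the IHC knowledge base), each power-transforming component contributes its dissipated power to the balance:

# Heat rejected to cooling water [kW] from input power and efficiency.
def heat_production(power_in_kw: float, efficiency: float) -> float:
    return power_in_kw * (1.0 - efficiency)

components = {"dredge pump e-motor": (2000.0, 0.96),
              "hydraulic power pack": (500.0, 0.85)}
balance = {name: heat_production(p, eta) for name, (p, eta) in components.items()}
print(balance, "total:", sum(balance.values()), "kW")  # 80 + 75 = 155 kW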
Fig. 7: Excel sheet with cooling water balance
The final composition of the heat balance is only a query over the product model. In [Q]COOL, these data are included by Quaestor in a spreadsheet for reporting purposes (fig. 7). As in any other Quaestor session, a configuration session starts with the selection of a top goal, in this case the Configurate
object (function). The function Configurate is fulfilled by:
Configurate(@TotalLTHeatbalance#, @TotalHTHeatbalance#, @Diesels#, @Design,
@CoolingSystem#, @ProcessVisio) + ADDGOALS("", NrOfPropellers, NrOfDiesels,
NrOfShaftGenerators, NrOfAuxGenerators, NrOfDredgepumps, NrOfJetpumps,
NrOfACCondensors, NrOfProvCondensors, NrOfCompressors, NrOfBowThrusters,
NrOfHydPPcoolers, NrOfMiscCoolers, OptInjectorCooler, PowerDredgepump,
PowerJetpump, PowerMaindiesel, PowerBowthruster, PowerAuxGenerator,
PowerProvRefrig, PowerShaftGenerator, PowerPropeller, PowerAC, MakeDrawing,
MakeTechSpec, T_seaw, T_sea_out, T_LT_in, T_HT_out)
This function includes a number of the primary system choices, generally considered as provided in
the systems specification. This approach was preferred by IHC as it reflects the current design
approach. As can be seen in the above relation, powers and numbers of components are specifically provided. In this way the system is forced to calculate or request these figures at the top level of the session, i.e. at the very beginning of a modeling session. A drawback of this approach is the introduction of a series of parameters that are not properties of the objects they refer to. As an example, consider the parameter NrOfDredgepumps. If represented as a property, we can make it an attribute of the
Dredge object. In fig. 6 we can see that it would become
Configurate.Heatbalance.Dredging.Dredge.Nr
as Nr can be used to determine the number of cases in the Dredge() object of which each case
represents a dredge pump. Another drawback of configuration input in this manner is that it is
uncertain beforehand whether, and how many, objects of a particular type are needed, since the necessity of
using the component at all may depend on the evaluation results of relations in the knowledge base.
One of the problems encountered during the development of [Q]COOL is that information is provided
during the configuration process in objects on multiple locations in the product model, i.e., the input is
distributed all over the resulting product model. The product model is created on the basis of functions of which the contents and structure fully depend on the input provided during the process. This implies that any change in the input requires the functions to be fulfilled again. As input requested by these functions forms part of that function, this input is lost as soon as the function is re-evaluated. In order not to be obliged to provide all input again upon recalculating a solution (a product model representing a design), to e.g. change the number of auxiliary generators or the type of cooling water system, an input recorder was developed. This was realized through the introduction of the reserved object QInput, containing a set of values of the reserved parameters QParameter and QValue, respectively containing the object location and the value or decision provided. If a value is selected to be computed, this is indicated with a "Computed" value. Below, a few records of a session's input are presented.
0
2 "QParameter" "QValue"
"48" "Configurate.Design.LT_cooling.Plate_cooler.1.dP_default" "1"
"49" "Configurate.Design.LT_cooling.Plate_cooler.1.ColdSide.T_in" "32"
"50" "Configurate.Design.LT_cooling.Plate_cooler.1.ColdSide.T_out" "48"
"51" "Configurate.Design.LT_cooling.Plate_cooler.1.ColdSide.Vman" "Computed"
One of the challenges of the project was to develop a generic method for the generation of principle
cooling water diagrams. Initially, it was the intention to use CADMATIC DIAGRAM for this
purpose. Investigation into the scripting capabilities of this CAD system revealed a scripting syntax
that is not object-oriented, which requires a large amount of information and script to generate a drawing. Therefore the use of this system was not considered within the project, given the time and
resources available. In order to investigate the possibility of generating diagrams with Quaestor, and to gain understanding of the process behind the preparation of these diagrams, it was decided to use MS Visio as a development environment. MS Visio contains a full implementation of VBA (Visual Basic for Applications) and has a reasonably powerful macro recorder that can be used to generate the elementary scripts for the components to be placed in the drawing. In addition, MS Visio is object-oriented in the sense that components can be addressed by name and have properties that can be read and set. Another advantage of MS Visio is that, similar to other MS Office products, it is possible to post an externally generated macro which is then executed by Visio. Quaestor offers advanced document
generation capabilities that can be used to create scripts in any language or form.
The approach followed is not to generate a single large script for the complete diagram. Each
component represented in the product model that needs to be included in the diagram generates either its own VBA script or the data needed to generate that script, such as the text to be placed in or next to the symbol. Primarily, the problem is not the generation of these elementary scripts but the overall topology of the diagram. Components and their connections by lines representing cold and warm water pipes need to be located in such a way that a realistic diagram is created.
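The per-component script generation can be sketched as follows (illustrative only; the stencil, master and label names are hypothetical, and the Drop/Text calls are the standard Visio VBA interface rather than the project's actual scripts):

# Each component emits its own small VBA fragment; the fragments are
# concatenated into the macro that is posted to and executed by Visio.
def drop_symbol_vba(master: str, x: float, y: float, label: str) -> str:
    vba = f'Set shp = ActivePage.Drop(Application.Documents("cool.vss").Masters("{master}"), {x}, {y})\n'
    vba += f'shp.Text = "{label}"\n'
    return vba

script = "".join([drop_symbol_vba("Pump", 2.0, 4.0, "Dredge pump"),
                  drop_symbol_vba("Cooler", 4.0, 4.0, "LT central cooler")])
print(script)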
Fig. 10: Topology object tree
As stated above, each case in the tables of the objects represents a component that is connected to its
predecessor with warm and cold pipes. The first component in a table is connected to its parent object,
or rather to the position in the parent table in which the object is located. The sequence and distance
between components are partly properties of the objects and partly determined on the basis of datasets
that were derived from the analysis of existing diagrams and depend on the type of cooling water
system (central, decentral, combined, separated, LT cools HT etc.). In a sense, these datasets contain
the logic of component ordering and connection. The component data not only contain the scripts for
the generation of pipes and components but also the flows, pressures and diameters of each pipe
section. On the basis of a preliminary positioning of the components, it is also possible to estimate
pipe lengths for cost estimation purposes. Figure 11 shows an example diagram that results from the
above topology. The names of the branch objects in the topology tree of fig. 10 are indicated in the
diagram.
[Fig. 11: Example diagram generated from the topology, with the branch object names of fig. 10 (Main, Aux, PS, SB, Fore, EndBranch[6], EndBranch[8]) indicated]
5. Interfacing www.SeaQuipment.com
One of the goals of the Open Mind pilot project was to demonstrate the immediate use of component
data through a direct connection with other parties like the SeaQuipment database, using the Open
Mind information models. For this purpose the Quaestor SERVICE$() function was developed which
makes it possible to communicate with a remote database by exchanging XML data. If a component is
needed by the configuration tool, say a cooling water pump, the first step is to establish a connection with SeaQuipment by requesting a session token:
SessionToken$ = SECTION$(SERVICE$(80, "www.seaquipment.com", SessionRequest$,
"Sending Connection Request", 10, 10,
"No connection established","NullString"), "<sq:session_token>",
"</sq:session_token>", -1)
This function sends a query to www.seaquipment.com and returns a session ID which can be used to
retrieve the query results. The parameter SessionRequest$ contains the component query with the
component selection criteria provided in the dialogue screen shown in fig 12.
After executing the above SERVICE$() function, Quaestor opens the web browser by executing the
function:
SeaQuipmentPage$ = GET$("NullString",
"http://www.seaquipment.com/connector/2005/" + SessionToken$)
The SeaQuipment website shows the results of the query, in this case two pumps fulfill the criteria
provided.
As soon as one of the pumps is selected, SeaQuipment presents the pump properties and saves them
as an XML dataset. By pressing the Quaestor OK button, the selection is accepted, after which Quaestor
retrieves and decodes this dataset with another SERVICE$() function. Subsequently the required data
is parsed from this XML dataset and stored in the product model after which the process continues
with the next component.
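In outline (Python; the tag names and URL scheme are taken from the calls above, while the request payload and response structure are assumptions), the pattern is:

import urllib.request
import xml.etree.ElementTree as ET

def session_token(request_xml: str) -> str:
    # Post the component query and cut the token out of the reply,
    # mirroring the SERVICE$()/SECTION$() combination above.
    req = urllib.request.Request("http://www.seaquipment.com",
                                 data=request_xml.encode("utf-8"), method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        reply = resp.read().decode("utf-8")
    start = reply.index("<sq:session_token>") + len("<sq:session_token>")
    return reply[start:reply.index("</sq:session_token>")]

def fetch_component(token: str) -> dict:
    # Retrieve the XML dataset of the selected component and flatten the
    # properties into a dictionary (element names are hypothetical).
    url = "http://www.seaquipment.com/connector/2005/" + token
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return {child.tag: child.text for child in root}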
Fig. 13: Component selection fulfilling the criteria
6. Conclusions
The cooling water system configurator is the first application of the product configuration method developed within the Open Mind project. The main benefit of this configuration tool is the ability to generate and calculate alternative designs more easily during the starting phase of the design process.
Therefore, the preliminary design choices should be of better quality. This first application of product
configuration with Quaestor indicates that the methodology and structure of knowledge-based
configuration tools are generic. The approach is expected to be applicable in any engineering design
discipline in which systems are assembled from components on the basis of the function-function
fulfiller paradigm, empirical rules and first principle calculation methods. The cooling water system
configurator also demonstrates the technology to immediately introduce component data from a web-
based database into a knowledge-based product modeler. The bottleneck is now the availability of
useful technical component data in such databases. Component suppliers should be made aware of the
commercial advantage of publishing this data in web-based databases like SeaQuipment. The product
configuration capabilities of the Quaestor shell have been greatly improved as a result of this effort
and are already applied in other applications. Practical use of the configuration method will provide
feedback for further improvements and extensions of knowledge-based configuration tools. Although
the project has consumed more time and effort than initially intended, the proof of concept has been
successfully concluded. Similar to earlier applications of Quaestor, it requires effort to persuade the
intended user group of designers and analysts to actually start using the tool. A change management
approach is needed for the implementation of knowledge-based configuration tools like [Q]COOL
and time and resources should be allocated for this by the management.
7. Literature
BROUWER, R., (1990), Description and use of VIBREX, a Quaestor knowledge base to evaluate the
vibration risk in the preliminary design phase of a ship, graduation work T.U. Delft.
BROWN, D.C., CHANDRASEKARAN, B. (1989), Design Problem Solving, Knowledge Structures
and Control Strategies, Pitman, London, Morgan Kaufman Publishers, Inc., San Mateo, California.
CLANCY, W.J. (1982), The Epistemology of a Rule Based Expert System - A Framework for
Explanation, Artificial Intelligence 20 (1983), pp. 215-251.
CHAO, K. M.; SMITH, P.; HILLS W.; FLORIDA-JAINES, B. and NORMAN, P. (1998), Knowledge
Sharing and Reuse for Engineering Design Integration, Expert systems with applications. 14:3, 399.
CULLEY, S. (1998), Design Reuse of Standard Parts, Keynote address, Engineering Design Conference '98, Professional Engineering Publishing Ltd., London.
ES, C. VAN, HEES, M. TH. VAN (2003), Application of knowledge management in the conceptual
naval ship design, COMPIT 2003, 2nd International EuroConference on Computer and IT
Applications in the Maritime Industries, Hamburg.
GOUBAULT, PH. (1999), Preliminary Design in a Modern Computerized Environment, Atma
conference, Paris.
GUESGEN, H.W., HERTZBERG, J. (1992), A Perspective of Constraint-Based Reasoning, An
Introductory Tutorial, Lecture Notes in AI 597, Springer Verlag
HEES, M.TH. VAN (1992), Quaestor: A Knowledge-Based System for Computations in Preliminary
Ship Design, Practical Design of Ships and Mobile Units, Elsevier Science Press, 1992, ISBN 1-
85166-863-2, pp. 2.1284-2.1297.
HEES, M. TH. VAN (1995), “Towards Practical Knowledge-based Design Modelling”, PRADS’95
Symposium, Seoul, Korea.
HEES, M.TH. VAN (1997), Quaestor: Expert Governed Parametric Model Assembling, Doctors
thesis, Delft University of Technology, ISBN: 90-75757-04-2.
HEES, M.TH. VAN (2003), Knowledge-based Computational Model Assembling, Proceedings of
SCSC 2003, Montreal, Canada 20-24 July 2003, ISBN 1-56555-099-4.
HUGHES, J. (1989), Why Functional Programming Matters, The Computer Journal, Vol. 32, No. 2.
KEIZER, E.W.H. (1996), Future Reduced Cost Combatant Study, Status Report, MO 2015 Phase 2,
MARIN.
LELER, W. (1988), Constraint Programming Languages, Their Specification and Generation,
Addison-Wesley Publishing Company, Amsterdam.
OERS, B. VAN, HEES, M. TH. VAN (2006), Combining a Knowledge System with Computer-Aided
Design, 5th International Conference on Computer Applications and Information Technology in the
Maritime Industries COMPIT '06, Oud Poelgeest, Leiden/Netherlands, 8-11 May 2006
MØLDRUP, M., MØLLER, N. (Technical University of Denmark) (2004), Development and
implementation of product configuration systems - a change management perspective, International
Conference on Economic, Technical and Organisational aspects of Product Configuration Systems
(PETO), 28-29 June 2004, Lyngby, Denmark
NAT, C.G.J.M. VAN DER (NEVESBU), HEES, M.TH. VAN (MARIN) (1994), A Knowledge-based
Concept Exploration Model for Underwater Vehicles, International Marine Design Conference
(IMDC), Delft, The Netherlands.
NEWELL, A. (1982), The Knowledge Level, Artificial Intelligence 18, pp. 87-127.
RUMBAUGH, J. (1991), Object Oriented Modelling and Design, Prentice Hall.
SERRANO, D., GOSSARD, D. (1992), Tools and Techniques for Conceptual Design, Artificial
Intelligence in Engineering Design, Vol. 1, Design Representation and Models of Routine Design,
Chapter 3, Academic Press, San Diego.
SIPKEMA, S.F., HEES, M. TH. VAN (2003), KOAS: Knowledge-based Design and Analysis System
for Ship Propellers, Schip & Werf (Dutch).
SIVALOGANATHAN, S. and SHAHIN, T. M. M. (1999), Design Reuse: An Overview, Proceedings
of the Institution of Mechanical Engineers, Part B, Journal of Engineering Manufacture 213:7, 641-
654.
WOLFF, P.A. (Royal Netherlands Navy) (1994), Development of a Remote Controlled Mine Sweeper,
Paper 24, INEC 94 Cost Effective Maritime Defence.
Advanced Simulation in the Work of a Modern Classification Society
Karsten Fach, Germanischer Lloyd, Hamburg/Germany, [email protected]
Abstract
The general development towards simulation-based design has been supported and in some cases
even driven by modern classification society work. Advanced finite-element analysis has long been
part of the services of classification societies. However, more recently the scope and depth of
simulations at Germanischer Lloyd have developed rapidly and a survey of the techniques used as
well as typical applications is given. The article focuses on basics of the techniques, pointing out
progress achieved and current research activities, giving references which describe in more detail the
individual applications and simulation techniques.
Introduction
The word simulation is derived from the Latin word “simulare” which can be translated as “to
reproduce”. The VDI (Society of German Engineers) defines the technical term “simulation” as
follows: “Simulation is the reproduction of a system with its dynamic processes in a running model to
achieve cognition which can be referred to reality”. According to the Oxford dictionary “to simulate”
means “to imitate conditions of a situation or process”, specifically “to produce a computer model of a
process”. In this sense virtually all computer models used in the design and construction of ships
would qualify as simulations. Indeed, we see an ever increasing scope and importance of simulations
in our work. The trend in modern classification society work is also towards simulation-based
decisions, both for design and operation of ships.
Stability analyses were among the first applications of computers in naval architecture. Today, the
naval architect can perform stability analyses in intact and damaged conditions quasi at the push of a
button. Two other “classical” applications of computer simulations for ships are CFD (computational
fluid dynamics) and FEA (finite-element analyses). Both have been used for several decades now to
support ship design, but today’s applications are far more sophisticated than 20 years ago. The
following will review different simulation fields as found in the work of Germanischer Lloyd,
showing how advanced engineering simulations move from research to frontier applications.
1.1 Seakeeping
For many seakeeping issues, the seakeeping behaviour is determined as follows:
1. Representation of the natural seaway as superposition of many regular (harmonic) waves
2. Computation of the ship reactions of interest in these harmonic waves
3. Addition of the reactions in all these harmonic waves to a total reaction (superposition)
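A minimal numerical sketch of steps 1 to 3 (illustrative values; in practice the transfer function comes from a seakeeping code such as the GFM mentioned below): the response spectrum follows from the transfer function (RAO) and the wave spectrum, S_R(w) = |RAO(w)|^2 * S_zeta(w), and its spectral moments give statistics of the total reaction.

import numpy as np

omega = np.linspace(0.2, 2.0, 200)          # wave frequencies [rad/s]

def wave_spectrum(omega, Hs=3.0, Tp=9.0):
    # Two-parameter Pierson-Moskowitz/Bretschneider spectrum for the seaway.
    wp = 2.0 * np.pi / Tp
    return (5.0 / 16.0) * Hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega)**4)

rao = np.ones_like(omega)                   # placeholder transfer function

S_resp = rao**2 * wave_spectrum(omega)      # superposition (step 3)
m0 = np.trapz(S_resp, omega)                # zeroth spectral moment
print("significant response amplitude:", 2.0 * np.sqrt(m0))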
This simplified linear approach is appropriate for many questions in ship seakeeping and is frequently applied. The corresponding tool at Germanischer Lloyd is a 3-d Green function method (GFM). The advantage of this approach is that it is very fast and thus allows the investigation of many parameters
(frequency, wave direction, ship speed, metacentric height, etc.). Computations become considerably
more expensive if this simplification is not made. Non-linear computations are usually necessary for
the treatment of extreme motions; here simulation in the time domain is the proper tool. These
simulations require massive computer resources and allow only the simulation of relatively short periods (seconds to minutes). Intelligently combining linear frequency-domain methods with nonlinear time-domain simulations allows exploiting the respective strengths of each approach, El
Moctar et al. (2004a,b). The approach starts with a linear analysis to identify the most critical
parameter combination for a ship response. Then a non-linear strip method determines the global
motions of the ship, Fig.1, which are then in a final step prescribed in a RANSE simulation which
captures the complex free-surface deformation and local pressures better than any other approach. We
employ the commercial RANSE solver Comet for our free-surface analyses involving complex free
surfaces. Comet is based on a finite-volume method and allows unstructured grids with cell-wise
refinements. The equations for conservation of mass and momentum are solved in integral form. The
free surface is captured by the high-resolution interface capturing (HRIC) scheme. Most recently, the
intermediate step (nonlinear strip method) has been omitted and RANSE simulations for ships free to
move as a result of the hydrodynamic forces have been realised for a variety of ships, El Moctar
(2005), Fig.2.
Fig.1: Non-linear strip method SIMBEL
Fig.2: CFD simulation of fast ship in waves
1.2 Sloshing
Sloshing is a strongly non-linear phenomenon, often featuring spray formation and plunging breakers.
Only surface-capturing methods can reproduce these features. El Moctar (2002), Bertram et al. (2003)
applied Comet to sloshing problems, employing the RNG-k-ε turbulence model. The test cases were
the simple rectangular Ship Research Institute (SRI) tank and the Euroslosh tank of Sirehna, which
includes one internal stiffener. The tanks were investigated in pure sinusoidal horizontal and roll motion, respectively. Fig.3 shows a typical time instant of the sloshing motion with wave breaking and air entrapment. Foam formation observed in the experiments can only be approximated by the present method by employing a rather coarse grid with a smearing between the air and water phases. This also results in larger discretisation errors. Fig.4 compares measured and computed pressures at one point,
indicating the high quality of the simulations. The computed sloshing motion agreed also well with
videos of the experiments.
Extensive experience gathered over the last 5 years today allows the numerical prediction of sloshing
loads in ships with great confidence. While the validation cases are de facto two-dimensional and
simple in geometry, complex three-dimensional tanks as found e.g. in the bow have been modelled
and pose no principal problems for the simulation tools.
Fig.3: CFD simulation of sloshing for validation test case Euroslosh tank
Fig.4: Measured and computed pressures for validation test case SRI tank
1.3 Rudder flows
Diagrams to estimate rudder forces have been popular in classical rudder design. These diagrams
extrapolate model test results from wind tunnels or are based on potential-flow computations.
However, the maximum lift is determined by viscous flow phenomena, namely flow separation (stall).
Potential flow models are generally not capable of predicting stall and model tests predict stall at too
small angles. CFD is by now the most appropriate tool to support practical rudder design, El Moctar
(2001), Bertram et al. (2003). The propeller is typically modelled in these computations in a simplified
way using axial and tangential body forces. These are external forces distributed over the cells which
cover the location where the propeller would be in reality. The sum of all axial body forces is the
thrust. The body forces are assumed to vary in radial direction of the propeller only. This procedure is
much faster than geometrical modelling of the propeller (by two orders of magnitude) at a negligible
penalty in accuracy (about 1%). The procedure has been extensively validated for rudder flows both
with and without propeller modelling, Fig.5. The same approach for propeller and rudder interaction can be applied for podded drives, Fig.6, Junglewitz and El Moctar (2004). Comet also allows the
treatment of cavitating flows, Lindenau and Bertram (2003). For the numerical simulation of
cavitating flows with the RANSE solver, the basic treatment for the two-phase flow is similar to the
treatment of free-surface flows, i.e. an additional transport equation is solved for the volume
percentage of air (or vapour) in each cell. For the modelling of cavitation, the transport equation is adapted by adding a source term, based on classical bubble dynamics models, for the growth and collapse of vapour bubbles. The extensive experience gathered in the last 5 years has resulted in a GL guideline
for rudder design procedures, GL (2005).
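The body-force idea can be sketched in a few lines (the radial weighting ~ r*sqrt(1-r) used below is a common generic choice and an assumption, not necessarily GL's distribution): the known thrust is spread over radial bands of the propeller disk so that the axial body forces sum to the thrust.

import numpy as np

T = 5.0e5                       # total propeller thrust [N]
r = np.linspace(0.2, 1.0, 9)    # non-dimensional radii of band centres (hub to tip)
w = r * np.sqrt(1.0 - r)        # radial weighting of the axial body force
fx = T * w / w.sum()            # axial body force per radial band [N]
print(fx.sum())                 # equals the thrust T by construction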
Fig.7: Model of cargo hold
Fig.8: Temperature distribution in cargo hold with cooled containers
Fig.9: Detailed CFD simulation for single-room, single-fire case
Fig.10: Fire and smoke simulation in engine room
2. Evacuation simulation
Simulation is particularly difficult when it also involves humans. This is for example the case in ship
evacuation simulation. The trend is here to equip each simulated human with a certain perception and
reasoning capability. Such multi-agent systems are subject to research and likely to become
increasingly important for a variety of simulations with relevance to ship design.
Evacuation assessment became a major topic at the International Maritime Organization (IMO) after
the loss of the ‘Estonia’, resulting in new requirements for evacuation analyses in an early stage of the
design process, IMO (2002). Germanischer Lloyd and TraffGo developed the software tool AENEAS
to simulate pedestrian traffic on ships. Evacuation analyses focus on safety, but the tool can be used
also for the optimization of boarding and de-boarding processes, Petersen et al. (2003), or space
requirements for promenades on cruise ships and large RoPax ferries. Since its introduction,
AENEAS has been well accepted by shipyards like Flensburger Schiffbau Gesellschaft (FSG),
Kvaerner Masa-Yards and Meyer Werft, encouraging TraffGo and Germanischer Lloyd to develop
the capabilities further. Like all simulations, pedestrian traffic simulations are a simplified model of a
more complex reality. They can help to estimate durations and queue formation, giving insight into
how to modify designs or procedures.
Fig.11: Agents with different abilities following different routes to their destinations
Fig.12: Steps to the AENEAS model: from CAD model to cells with assigned information
For an AENEAS simulation, the floor plan of the investigated vessel is discretized into a regular grid
of square cells, each representing the average space a person occupies (typically 40 cm × 40 cm),
Fig.11. The coarse grid makes the computation extremely fast, which allows AENEAS to perform a high
number of simulation runs, typically in the order of 500 within one hour, to gain a broad basis for
statistical evaluation. By using various cell types like accessible floor, doors and stairs as well as non-
accessible cells representing obstacles and walls the general arrangement of a ship can be represented
in detail. Passengers and crew are represented by intelligent agents. These model individual persons
with properties ranging from walking speed to stochastically distributed characteristics like dawdling
and swaying. By varying the parameters, the composition of the population can be adjusted to the
scenarios under consideration. Commuters using a ferry daily will move in a very purposeful
way, while tourists often have plenty of time and thus tend to dawdle while moving. Similar to a
chessboard, the agents move across the accessible cells towards their assigned destination interacting
with others, avoiding obstacles (non accessible cells) and being influenced by their individual
parameters. Since the number of cells an agent occupies depends on its speed of movement, the
relation between flow and density is derived through self-organization. The user defines the routes
that the agents will follow, Fig.12. According to this input, a so-called guided potential is distributed
through the geometry by which the agents assigned to this route determine their direction of
movement.
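As an illustration of this grid-based movement idea, the following minimal Python sketch moves agents across a cell grid along a distance potential computed from the exit. The geometry, speeds and update rule are simplified assumptions; this does not reproduce the AENEAS implementation.

```python
import numpy as np
from collections import deque

grid = np.array([          # 0 = floor, 1 = wall/obstacle, 2 = exit
    [1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 0, 1],
    [1, 0, 1, 1, 1, 0, 1],
    [1, 0, 0, 0, 0, 0, 2],
    [1, 1, 1, 1, 1, 1, 1]])

# Distance potential to the exit via breadth-first search over accessible cells
dist = np.full(grid.shape, np.inf)
exits = list(zip(*np.where(grid == 2)))
queue = deque(exits)
for c in exits:
    dist[c] = 0
while queue:
    y, x = queue.popleft()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < grid.shape[0] and 0 <= nx < grid.shape[1]:
            if grid[ny, nx] != 1 and dist[ny, nx] > dist[y, x] + 1:
                dist[ny, nx] = dist[y, x] + 1
                queue.append((ny, nx))

agents = [(1, 1), (3, 1), (1, 5)]          # one agent per cell
for step in range(20):
    new_agents, occupied = [], set(agents)
    for (y, x) in agents:
        if dist[y, x] == 0:                # agent has reached the exit
            continue
        # move to the free neighbouring cell with the lowest potential
        best = min(((dist[y + dy, x + dx], (y + dy, x + dx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if grid[y + dy, x + dx] != 1
                    and (y + dy, x + dx) not in occupied),
                   default=(dist[y, x], (y, x)))
        target = best[1] if best[0] < dist[y, x] else (y, x)
        occupied.discard((y, x)); occupied.add(target)
        new_agents.append(target)
    agents = new_agents
    if not agents:
        print(f"all agents evacuated after {step + 1} steps")
        break
```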
In most cases, a hazard changes in time, often affecting passengers and crew in a variety of ways
during different stages of development. Then we should simulate both the development of the hazard
and its effects on the evacuation. Examples are progressive flooding leading to heel angles and
progressive fire and smoke development. Results of fire modelling can be considered in the
evacuation simulation. The numerical fire simulation provides time lines for the development of
critical conditions (e.g. smoke, temperature) at predefined escape points, such as escape doors at the
ends of corridors. Because of the extensive engineering effort necessary for fire simulation, the
interaction with AENEAS is arranged in a simplified manner, in which escape routes are blocked
sequentially due to smoke or fire. For the process of analyzing fire designs, Germanischer Lloyd has
developed an integrated methodology called NESTOR, Petersen and Voelker (2003), combining fire
simulations with the Multi Room Fire Code, evacuation simulation with AENEAS and an Event Tree
Analysis for risk assessment.
Since the evacuation takes place on ships, the most frequently raised question concerns the influence of
ship motions on the movement of the pedestrians onboard. Meyer-König et al. (2005) coupled sea
keeping simulations and evacuation simulations in a semi-empirical approach to find the influence of
ship motions on evacuation times. Since trim and pitch angles are usually relatively small, their effect
is mostly negligible. Roll motions were found to be less critical than static heel for evacuation time.
3. Structural analyses
Fig.13: FEA for collision of two ships
Fig.14: Detailed FEA model of hatch corner
distribution as input. The mass distribution considers the ship, the cargo and the hydrodynamic 'added'
mass. The added mass reflects the effect of the surrounding water and depends on the frequency. Its
determination is problematic. One can either use estimates based on experience or employ
sophisticated hydrodynamic simulations. Determination of the stiffness is also not trivial. Stress
distributions in the stiffened bottom and deck plates depend on vibration modes. Again either
estimates based on experience or complex FEAs are employed.
Resonance problems often appear for local ship structures. This can affect human comfort, but can also
induce fatigue problems in structures. The vibration analysis of these local structures is similar to that for
the ship hull and often based on FEA, Fig.16. Because of the high natural frequencies of local
structures, FEA models must be detailed including also the bending stiffness of structural elements.
The amount of work required for the creation of such models is considerable despite modern pre-
processors with parameterized input possibilities and graphic support. Beam grillage models suffice
usually for the lowest vibration modes. For higher vibration modes, 3-d models of higher precision are
needed.
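As a hedged illustration of such a resonance check, the short sketch below estimates the first bending natural frequency of a simply supported stiffener from Euler-Bernoulli beam theory, as is often done before a detailed FEA. All input values are assumed example numbers, not data from the paper.

```python
import math

E = 2.1e11      # Young's modulus of steel [N/m^2]
I = 2.0e-5      # second moment of area of stiffener + effective plate [m^4] (assumed)
mu = 60.0       # mass per unit length incl. added water and outfitting [kg/m] (assumed)
L = 2.4         # span between supports [m] (assumed)

# Simply supported Euler-Bernoulli beam: omega_n = (n*pi/L)^2 * sqrt(E*I/mu),
# hence f_1 = pi/(2*L^2) * sqrt(E*I/mu)
f1 = (math.pi / (2.0 * L**2)) * math.sqrt(E * I / mu)
print(f"first natural frequency: {f1:.1f} Hz")

# A design check keeps f1 well away from the main excitation frequencies
# (propeller blade rate and engine orders).
```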
Fig.15: Global FEA of vibrations of containership
Fig.16: Local FEA of vibrations of ship aftbody and deckhouse
3.3 Acoustics
The prediction of structure-borne sound propagation in ships is difficult for a number of reasons. The
large number of modes participating in any state of high frequency vibration makes it impossible to
treat the global sound propagation problem as a vibration problem today. For a typical passenger
vessel at a frequency of 1000 Hz, an FEA vibration model would lead to several million degrees of
freedom. Since predictions for the mean propagation of structure-borne noise are usually required in a
particular frequency band, vibration computations would have to be repeated for many frequencies.
However, the very fact that information is required only averaged over a frequency band allows an
alternative, far more efficient approach based on statistical energy analysis. The Noise Finite Element
Method (NoiseFEM) of Germanischer Lloyd, Cabos and Jokat (1998), Cabos et al. (2001), is based
on a related approach. NoiseFEM predicts the propagation of noise by analyzing the exchange of
energy between weakly coupled subsystems.
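NoiseFEM itself is only related to statistical energy analysis, but the underlying idea of balancing energy flows between weakly coupled subsystems can be illustrated with the classical SEA power balance. The sketch below solves it for three subsystems; the frequency, loss factors and input power are assumed values for illustration only.

```python
import numpy as np

omega = 2 * np.pi * 1000.0              # band centre frequency, 1000 Hz
eta = np.array([0.01, 0.01, 0.02])      # damping loss factors (assumed)
# coupling loss factors eta_ij (i -> j), assumed small (weak coupling)
eta_c = np.array([[0.0,   0.002, 0.0],
                  [0.001, 0.0,   0.003],
                  [0.0,   0.002, 0.0]])
P_in = np.array([1.0, 0.0, 0.0])        # input power only into subsystem 1 [W]

# Power balance per subsystem i:
#   P_in_i = omega * (eta_i*E_i + sum_j(eta_ij*E_i - eta_ji*E_j))
n = len(eta)
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = eta[i] + eta_c[i].sum()
    for j in range(n):
        if j != i:
            A[i, j] = -eta_c[j, i]
E = np.linalg.solve(omega * A, P_in)    # band-averaged subsystem energies
print("subsystem energies [J]:", E)
```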
The practical application to complex ship structures requires efficient grid generation procedures.
Compared to existing software for noise prediction, FEA pre-processors support the generation of
complex structural models much better. Nevertheless, even with leading commercial FEA pre-
processors, the assembly of a ship model is very time-consuming. Therefore, Germanischer Lloyd has
cooperated with the software vendor MSC to adapt their product Patran for optimal support of FEA
model generation for ships. Using this tool, FEA models are then built such that they can be used for
global strength analysis, global vibration analysis and the prediction of structure-borne noise with
NoiseFEM. In particular, structural attributes are stored together with the FEA model; these attributes
need to be handled differently depending on the type of analysis.
Validation against measurements on full-scale mock-ups shows that the accuracy of NoiseFEM is
sufficient for typical structure-borne sound predictions in the frequency range between 80 Hz and
2000 Hz, Wilken et al. (2004). Typical ‘coarse’ FEA models as used by Germanischer Lloyd for
global vibration analysis of ships have proven well suited for NoiseFEM simulations.
Fig.17: Prediction of structure-borne noise in Blohm&Voss cruise vessel
4. Final remarks
The technological progress is rapid, both for hardware and software. Simulations for numerous
applications now often aid decisions, sometimes ‘just’ for qualitative ranking of solutions, sometimes
for quantitative ‘optimization’ of advanced engineering solutions. Continuous validation feedback
serves both to improve simulation tools and to build confidence in them.
Personally, I am convinced that several expensive failures in ship design could have been avoided by
simulations. However, advanced simulation software alone is not enough. Engineering is more than
ever the art of modelling, finding the delicate balance between level of detail and resources (time,
man-power). This modelling often requires intelligence and considerable (collective) experience. The
true value offered by advanced engineering service providers lies thus not in software licenses or
hardware, but in the symbiosis of highly skilled staff and these resources.
Acknowledgements
Many colleagues at Germanischer Lloyd have supported this paper with their special expertise,
supplying text and/or figures, namely (in alphabetical order) Christian Cabos, Bettar El Moctar,
Holger Mumm, Stefan Nusser, Ulf Petersen, Helge Rathje, Pierre Sames, Leshan Zhang. Special
thanks are due to Volker Bertram for his support in writing this paper.
Literature
ASMUSSEN, I.; MUMM, H. (2001), Ship vibration, GL technology, Germanischer Lloyd, Hamburg,
http://www.gl-group.com/brochurepdf/0E094.pdf
ASMUSSEN, I.; MENZEL, W.; MUMM, H. (1998), Schiffsschwingungen, Handbuch der Werften
XXIV, Hansa-Verlag, pp.75-147
BERTRAM, V.; EL MOCTAR, O.M.; JUNALIK, B.; NUSSER, S. (2004), Fire and ventilation
simulations for ship compartments, 4th Int. Conf. High-Performance Marine Vehicles (HIPER), Rome,
pp.5-17
BERTRAM, V.; CAPONNETTO, M.; EL MOCTAR, O.M. (2003), RANSE simulations for unsteady
marine two-phase flows, RINA CFD Conf., London
BREHM, A.; EL MOCTAR, O.M. (2004), Application of a RANSE method to predict temperature
distribution and gas concentration in air ventilated cargo holds, 7th Num. Towing Tank Symp.
(NuTTS), Hamburg
CABOS, C.; EISEN, H.; KRÖMER, M. (2006), GL.ShipLoad: An Integrated Load Generation Tool
for FE Analysis, 5th Int. Conf. Computer and IT Applications to the Maritime Industries, Leiden
CABOS, C.; JOKAT, J. (1998), Computation of structure-borne noise propagation in ship structures
using noise-FEM, 7th Int. Symp. Practical Design of Ships and Mobile Units (PRADS), The Hague,
pp.927-934
CABOS, C.; WORMS, C.; JOKAT, J. (2001), Application of an energy finite element method to the
prediction of structure borne sound propagation in ships, Int. Congr. Noise Control Engineering, The
Hague
EL MOCTAR, O.M. (2001), Numerical computations of flow forces in ship manoeuvring, Ship
Technology Research 48, pp.98-123
EL MOCTAR, O.M. (2003), Numerische Simulation von Sloshing in Tanks, Schiff&Hafen 10,
pp.201-208
EL MOCTAR, O.M. (2005), Computation of slamming and global loads for structural design using
RANSE, 8th Num. Towing Tank Symp. (NuTTS), Varna
EL MOCTAR, O.M.; BERTRAM, V. (2002), Rudder loads for a fast ferry at unusually high rudder
angles, 3rd High-Performance Marine Vehicles Conf. (HIPER), Bergen, pp.127-136
EL MOCTAR, O.M.; BREHM, A.; SCHELLIN, T.E. (2004), Prediction of slamming loads for ship
structural design using potential flow and RANSE codes, 25th Symp. Naval Hydrodyn., St. John’s
EL MOCTAR, O.M.; BREHM, A.; SCHELLIN, T.E.; BERTRAM, V. (2004), A multi-stage
approach to ship slamming load, 7th Num. Towing Tank Symp. (NuTTS), Hamburg
GL (2005), Recommendations for preventive measures to avoid or minimize rudder cavitation,
Germanischer Lloyd, Hamburg
IMO (2002), Interim guidelines for evacuation analyses for new and existing passenger craft,
MSC/Circ.1033, International Maritime Organization
JUNALIK, B.; BERTRAM, V.; EL MOCTAR, O. (2003), Preliminary investigations for CFD fire
simulations in ship rooms, 6th Num. Towing Tank Symp. (NuTTS), Rome
JUNGLEWITZ, A.; EL MOCTAR, O.M. (2004), Numerical analysis of the steering capability of a
podded drive, Ship Technology Research 51/3, pp.134-145
JUNGLEWITZ, A.; EL MOCTAR, O.M.; STADIE-FROHBÖS, G. (2004), Loads on podded
drives, 9th Int. Symp. Practical Design of Ships and Mobile Units (PRADS), Lübeck-Travemünde, pp.
894-901
LEHMANN, E.; EGGE, E.D.; SCHARRER, M.; ZHANG, L. (2001), Calculation of collision with the
aid of linear FE models, 8th Int. Symp. on Practical Design of Ships and Other Floating Structures
(PRADS), Shanghai, Vol. II, pp. 1293-1300
LINDENAU, O.; BERTRAM, V. (2003), RANSE simulation of cavitating flow at a foil, Ship
Technology Research 50, pp.51-65
MEYER-KÖNIG, T.; VALANTO, P.; POVEL, D. (2005), Implementing ship motion in AENEAS -
Model development and first results, 3rd Int. Conf. Pedestrian and Evacuation Dynamics, Vienna
MUZAFERIJA, S.; PERIC, M.; SAMES, P.; SCHELLIN, T. (1998), A two-fluid Navier-Stokes solver
to simulate water entry, 22nd Symp. Naval Hydrodyn., Washington
PETERSEN, U.; MEYER-KÖNIG, T.; POVEL, D. (2003), Optimising boarding and de-boarding
processes with AENEAS, 7th Int. Conf. Fast Sea Transportation FAST, Ischia, pp.9-16
PETERSEN, U.; VOELKER, J. (2003), Deviating from the rules – ways to demonstrate an equivalent
level of safety, World Maritime Technology Conf., San Francisco
POVEL, D.; NUSSER, S.; VOELKER, J.; MEYER-KÖNIG, T. (2004), Analysing escape &
evacuation concepts, 1st Int. Conf. Escape, Evacuation & Recovery, London, pp.1-15
WILKEN, M.; CABOS, C.; SEMRAU, S.; WORMS, C.; JOKAT, J. (2004), Prediction and
measurement of structure-borne sound propagation in a full scale deckhouse-mock-up, 9th Int. Symp.
Practical Design of Ships and Mobile Units (PRADS), Lübeck-Travemünde, pp. 653-659
ZHANG, L.; EGGE, E.D.; BRUHNS, H. (2004), Approval procedure concept for alternative
arrangements, 3rd Int. Conf. Collision and Grounding of Ships (ICCGS), Tokyo, pp.87-96
Hydrodynamic Aspects of AUV Design
Volker Bertram, ENSIETA, Brest/France, [email protected]
Alberto Alvarez, IMEDEA, Esporles/Spain, [email protected]
Abstract
The design of a specific AUV for oceanographic research in the Mediterranean revealed the lack of
general design guidelines for AUVs. During a cooperation between AUV designers and naval
architectural hydrodynamicists, some design guidelines for AUVs were compiled. This compilation
exemplifies the design approach, combining empirical estimates where available with advanced
hydrodynamic simulations. Some general guidelines for hull shape and some empirical manoeuvring
coefficients for torpedo-like geometries are given.
1. Introduction
In general, civilian and military ocean observations are mostly based on sensors and observing
platforms. The interdisciplinary character of the measurements basically depends on the sensors. A
strong development of sensor technology during the last years has led to sensors able to measure
important physical, chemical, biological, optical and acoustical properties of the sea, Dickey (1991).
Most of these sensors are small and have low energy consumption, which facilitates their integration
on different ocean observing platforms.
Spatial and temporal resolution of ocean observations depends on the observing platform employed.
Ships and buoys have traditionally been used for oceanographic observations. Both allow inter-
disciplinary measurements of the ocean, but not with the spatio-temporal resolution required.
Autonomous mobile ocean platforms constitute a new, emerging technology that can have a profound
impact on studying, preserving and managing the marine environment. Specifically, they allow
continuous surveying of the ocean at low cost (compared with ships), providing data in real time. The
finer spatio-temporal data resolution will improve our present knowledge of marine areas like the
coast, where data changes rapidly, Fig.1. Such vehicles include gliders (= unpropelled underwater
vehicles gliding down in depth and using buoyancy to surface), Autonomous Underwater Vehicles
(AUVs) and Autonomous Surface Vehicles (ASVs). The basic idea is that a network of small,
intelligent and cheap platforms constitutes the most efficient and economic way to sample ocean data,
Fig.2, Kunzig (1996).
Fig.1: Oceanographic data gathered
Fig.2: Multi-modal data gathering with AUVs
ASVs can only provide data at the ocean surface and are not considered here. AUVs are more
suitable than gliders for sampling ocean areas where strong energetic processes are present. AUVs are
submarine robots that carry out sampling campaigns autonomously. They are widely employed in
research studies and marine industries, with more than 60 operational AUVs around the world, Janes
(1999). The main limitations of AUVs are related to battery duration, submarine positioning and
navigation.
In coastal environments, depth requirements rarely exceed 60 m. These particularities allow designing
low-cost vehicles made of polystyrenes or polyesters able to withstand the required depths. Commu-
nications can be established by GPRS (general packet radio service) technology and sampling
methodologies do not require employing sophisticated inertial navigation and underwater positioning
systems. Thus, a novel ocean observing platform scaled to coastal problems can be proposed.
Specifically, a marine robot with the capability to move at the ocean surface and to dive at predetermined
points to sample water column profiles is considered.
The ‘Cormoran’, Fig.3, is a simple low-cost coastal water observing platform, a hybrid between AUV
and ASV. It moves at the sea surface and dives to make vertical profiles of the water column follow-
ing an established plan. Gathered data is transmitted in real time to the laboratory. The prototype has a
torpedo shape with a total length of 1.5 m, a diameter of 16 cm, and a displacement of 25 kg. The
speed of 1.3 m/s results in a Froude number of Fn=0.34. Its dimensions allow easy handling from a
human operator. The sampling strategy followed by the platform is close to the traditional monitoring
procedure for oceanographic variables, where measurements are carried out at grid nodes. The surface
motion of the platform between measurement points allows GPS (global positioning system)
positioning and direct communication and data transmission with the laboratory through mobile
phone. In the locations determined in the cruise planning, the platform carries out a vertical diving to
sample the water column. Similar to gliders, immersion is obtained by changing buoyancy through a
piston. The platform will be completed with the intelligence required to stay long periods at sea
without human intervention, energy management and possibilities of recharging energy from the
environment (solar energy, wave motion), anti-collision systems, automatic evasion strategies and
emergency procedures when facing dangerous situations. The platform can then be designed with
enough robustness to allow long periods of autonomous operation.
The optimization of the resistance and propulsive efficiency of an AUV is essential because of their
influence on cruising range and maximum speed. Thus the resistance of the hull including appendages
and control surfaces should be minimized and the propulsive efficiency maximized by proper matching
of hull and propeller. The Cormoran will operate predominantly near the water surface. Therefore
it has to be designed for good performance in near-surface condition. For AUVs or submarines oper-
ating in deep submergence, no wave making occurs and thus the wave resistance vanishes. For snor-
keling at low speeds, the wave resistance is also negligibly small.
The resistance could be substantially reduced by maintaining a laminar boundary layer as long as pos-
sible. Attempts to develop hull shapes analogous to laminar profiles in aerospace engineering
failed. While American model tests in the late 1960s suggested theoretical drag reductions
of up to 65% for small hulls, largely laminar flow proved to be impossible in real operating conditions
due to the impurities in sea water, Friedman (1984).
Systematic model test series for streamlined axi-symmetric bodies performed in the David Taylor
Model Basin in Washington, Gertler (1950), Landweber and Gertler (1950), give some guidelines for
the design of deeply submerged bodies. The offsets of the models were derived from a 6th degree
polynomial, i.e. a shape without parallel midbody. The investigation varied the fineness ratio L/D, the
prismatic coefficient CP = ∇/(¼π⋅D²⋅L), the non-dimensional nose radius r0 = R0⋅L/D², the
non-dimensional tail radius r1 = R1⋅L/D², and the non-dimensional distance of the maximum cross
section from the nose, m = x/L. ∇ is the displaced volume, D the maximum diameter, L the length of the
body. The results of the
investigations together with statements taken from Arentzen and Mandel (1960) are summarized:
• The fineness ratio L/D influences substantially the resistance of submarines, since the wetted
surface depends strongly on it for a given volume. It is customary to decompose the resistance
of the naked hull of a deeply submerged submarine into skin friction resistance and form resis-
tance. The skin friction resistance is of the order of 60% to 70% of the total resistance for typical
submarines. The skin friction resistance is due to the viscous shear of water flowing over the
hull. It is essentially related to the exposed surface area. Therefore reducing the wetted surface
reduces the resistance. For a given volume, a sphere (L/D=1) has the smallest surface and thus
the smallest skin friction resistance.
The form of the submarine induces a local flow field with velocities sometimes higher and
sometimes lower than the average velocity. The average of the resulting shear stresses is then
higher. Also energy losses in the boundary layer, vortices and flow separation prevent the in-
crease to stagnation pressure in the aft body as predicted by ideal fluid theory. The form resis-
tance can be minimized by having slowly varying sections along the body. A needle-shape
would be good in respect of form resistance.
Optimizing for only skin friction resistance or only form resistance thus leads to opposing re-
quirements. In consequence there is an optimum, albeit a very flat one (a numerical sketch of this
tradeoff follows this list). This optimum is shifted if appendages are considered. In reality we want to
design (optimize) for propulsive power, where the flow changes yet again. Gertler (1950) found the
optimum for his series at L/D=6.5 for bare hulls, L/D=7 if the control surfaces are taken into account.
Submarines and torpedoes usually feature L/D ≈ 9…11 due to constraints on maximum diameters. As
the total resistance curve over L/D is rather flat, there is little penalty involved in moving to such
fineness ratios.
• The prismatic coefficient significantly influences the resistance. Gertler (1950) found CP=0.61
as optimum for bodies of revolution having constant volume and constant L/D. Considering
also control surfaces does not materially alter the position of the optimum CP. However, drag
depends very much on the slope of the body lines. A greater CP without substantial resistance
increase can be achieved by inserting a parallel mid body, which is also desirable for produc-
tion and docking aspects. Special care is required in the detailed shaping of the lines for greater
CP values. The following table lists favourable combinations of CP, CP,e and Lx:
CP is the total prismatic coefficient, CP,e the prismatic coefficient of the residual hull without the
parallel mid body, Lx the length of the parallel mid body, L’x=Lx/L, where L is the overall
length.
• The models were investigated with different non-dimensional nose radii r0 = 0...1. This radius is
comparatively small for submarines where values of r0 = 2.5 are typical due to constraints of in-
ternal arrangement of sonar equipment and torpedo tubes. However, a realistic comparison with
Gertler (1950) should consider only the length without the parallel midbody, and then values of
r0 ≈ 1 are typical for submarines, and smaller values for AUVs. Gertler’s model tests show a re-
sistance minimum at r0 = 0.5, with a resistance increase at r0 = 1 by 1.3% and at r0 = 0 by 1.5%
for models with control surfaces. Slender forebody shapes are good for low resistance. There is
no consensus on the importance of the forebody shape. Some claim that typical forebodies differ
substantially from an optimum shape in resistance terms. Others claim that a forebody designed
carefully to avoid separation will contribute only marginally to total resistance.
• The shape of the aft body cannot be considered apart from the propeller as the propeller modi-
fies considerably the flow over the aft body. The best form for a propelled body is not the best
form for a towed body. For the body in unpropelled condition, a stern cone angle of θ=20° can
be regarded as a limit for flow separation for a parabolic outline of the aft body, and θ=18° for an
aft body composed of a stern cone up to a diameter of half the maximum breadth, followed by a
parabolic transition with its vertex at the cylindrical mid body. For the submarine in pro-
pelled condition, the flow acceleration due to the propeller prevents separation for much higher
cone angles. A thicker aft body is desirable for various reasons (internal arrangement, manoeu-
vrability, decreased frictional resistance due to smaller wetted surface). A cone plus parabola
allows a greater fullness of the aft body and therefore this shape can be regarded as more fa-
vourable.
• The distance of the maximum cross section from the nose is m ≈ 0.37 for minimum resistance
(with or without control surfaces) for a body of revolution having L/D=7, CP=0.65, r0=0.5, and
r1=0.1. The resistance increase is 4.4% for m=0.52, and already 2.7% for m=0.34 (small shift
ahead). There is no statement concerning the incorporation of a parallel mid body. So again the
statements of Gertler (1950) have to be taken with caution.
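The skin-friction/form-resistance tradeoff mentioned in the first bullet point can be sketched numerically. The Python sketch below combines the ITTC'57 friction line with Hoerner's empirical form factor for streamlined bodies of revolution; speed, volume and the wetted-surface coefficient are assumed values, so the absolute numbers are only indicative, but a flat minimum around L/D = 6…7 emerges in line with Gertler's findings.

```python
import numpy as np

rho, nu = 1025.0, 1.19e-6   # sea water density [kg/m^3] and viscosity [m^2/s]
V = 2.0                     # speed [m/s] (assumed)
vol = 0.02                  # displaced volume [m^3] (assumed)
C_P = 0.65                  # prismatic coefficient (from the series range)
C_S = 0.75                  # wetted surface S ~ C_S*pi*D*L (rough assumption)

for LD in np.arange(3.0, 12.0, 1.0):
    # constant volume: vol = C_P * (pi/4) * D^2 * L with L = LD * D
    D = (4.0 * vol / (np.pi * C_P * LD)) ** (1.0 / 3.0)
    L = LD * D
    S = C_S * np.pi * D * L                        # approximate wetted surface
    Re = V * L / nu
    C_F = 0.075 / (np.log10(Re) - 2.0) ** 2        # ITTC'57 friction line
    k = 1.5 * (D / L) ** 1.5 + 7.0 * (D / L) ** 3  # Hoerner form factor
    R_T = 0.5 * rho * V**2 * S * C_F * (1.0 + k)   # total viscous resistance
    print(f"L/D = {LD:4.1f}:  R_T = {R_T:6.2f} N")
```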
Generally, appendages and openings contribute disproportionately to the total resistance. Absolutely
necessary appendages (control foils, signal mast, etc.) should be streamlined and made as small as
possible. Masts should be lens-shaped rather than having foil cross-sections to avoid blunt leading
edges which create much wave-breaking.
Propeller efficiency increases with propeller diameter. To avoid damage to the propeller, typically a
diameter slightly smaller than the AUV diameter should be chosen. The number of propeller blades in
naval submarines follows cavitation and noise considerations, which are irrelevant for AUVs.
Therefore standard three-bladed propellers can be recommended as they are simple to produce and
easier to balance. Propeller ducts, sometimes seen on naval submarines, serve mainly noise and
vibration aspects. For typical AUV applications they can and should be avoided, as they will
only increase power requirements and thus reduce maximum speed.
4. Manoeuvring
4.1. Body
Manoeuvring simulations for submarines follow the same principle as for surface ships, Bertram
(2000), but have 6 degrees of freedom. Bohlmann (1990) describes in detail the techniques specific
for submarine manoeuvring. The general approach employs body force coefficients, coupling input
variables (like speed, yaw and pitch angles, foil deflection angles, etc.) and generalized forces acting
on the body. The resulting differential equations are relatively easy to integrate in time, allowing
manoeuvres to be simulated. The essential difficulty is then to find a sufficiently large and accurate set of
body force coefficients. These are determined best in model tests (for AUVs ideally at full scale).
Submarine and torpedo manoeuvring data are usually classified. However, Lambert (1956) investi-
gated the manoeuvring of deeply submerged torpedoes, considering only motions in the vertical plane
and neglecting buoyancy and trim effects. The coefficients for the horizontal plane follow
correspondingly for bodies of revolution. We follow the notation of Bertram (2000) here, giving in
Table I the manoeuvring coefficients in the horizontal plane. α is the angle of attack defined as the
angle between the axis of rotation of the body and the velocity of the centre of gravity of the body, δ
is the angle between the control foil and the axis of rotation of the body, r=∂ψ/∂t the rate of turn. Y is
the side force, N the moment about the centre of gravity of the body. Torpedo A is directionally very
stable, requiring large rudders to turn. Torpedo B is directionally moderately stable.
Table I: Manoeuvring coefficients in the horizontal plane for torpedoes A and B

            A        B
 ∂Y/∂α   -11866   -13223
 ∂N/∂α    -2957   +65785
 ∂Y/∂δ    -2688    -2287
 ∂N/∂δ   -19891   -27095
 ∂Y/∂r    -1888    -2511
 ∂N/∂r   -11967   -24738
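As an illustration of how such derivatives can be used, the sketch below checks directional stability from the eigenvalues of a linearized sway-yaw model built from Table I. The virtual mass, yaw inertia and speed are assumed values chosen only to make the point, and some coupling terms are neglected, so this is not Lambert's model.

```python
import numpy as np

U = 10.0                    # forward speed (assumed)
m_y, I_z = 5000.0, 20000.0  # virtual sway mass and yaw inertia (assumed)

coeffs = {                  # derivatives from Table I
    "A": dict(Ya=-11866, Na=-2957,  Yr=-1888, Nr=-11967),
    "B": dict(Ya=-13223, Na=+65785, Yr=-2511, Nr=-24738),
}

for name, c in coeffs.items():
    # state x = [v, r]; alpha ~ v/U; control fixed (delta = 0);
    # the m*U*r coupling term is neglected in this simplified sketch:
    #   v' = (Ya*v/U + Yr*r)/m_y ,  r' = (Na*v/U + Nr*r)/I_z
    A = np.array([[c["Ya"] / (U * m_y), c["Yr"] / m_y],
                  [c["Na"] / (U * I_z), c["Nr"] / I_z]])
    eig = np.linalg.eigvals(A)
    stable = np.all(eig.real < 0)
    print(f"torpedo {name}: eigenvalues {eig}, directionally stable: {stable}")
```

With these assumptions, both torpedoes come out stable, with the destabilizing ∂N/∂α of torpedo B visibly reducing its stability margin, consistent with the qualitative statements above.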
• Near the hull the boundary layer reduces the velocity and thus stagnation pressure. The
boundary layer thickness may reach values of 1/3 of the foil width.
• The hull changes the local flow direction and thus the local angle of attack and velocity for
the foil sections.
The influence of a cylindrical hull on a foil has been investigated extensively in experiments and sim-
ple potential flow computations. Outside the boundary layer, the local flow velocity near the cylinder
is much higher than far away or in uniform flow. Thus the foil experiences spanwise-varying local
angles of attack. Computations with classical flow theories (e.g. conformal mapping techniques, lift-
ing-line methods, etc.) have shown good agreement (for the simple geometries investigated and small
angles of attack) with experiments. Thus systematic computations with such classical methods have
been compiled into curves for fast engineering estimates as required in early design. For strongly
asymmetric foil configurations, complex geometries, or large angles of attack, experiments or ad-
vanced CFD simulations are necessary to investigate the flow in detail.
5. Free-surface aspects
Most of the time, the Cormoran will operate in snorkelling condition. The main body is then close
enough to the water surface to make waves, and the mast pierces the water surface creating its own
small wave system.
The wave resistance in snorkelling and surfaced condition can be determined using advanced wave
resistance codes, Bertram (2000), Fig.4. These codes neglect viscosity and the action of the propeller,
but determine iteratively the position of the free surface. Advanced codes like ν-Shallo of HSVA are
capable of handling ‘moderate’ nonlinearities like partial surfacing of initially (at calm water level)
submerged parts of the body, but cannot handle breaking waves which appear in surfaced submarines,
Fig.5. For naval submarines, wave making in snorkelling condition is critical due to detection of the
created wave system and thus simulations are commonly employed to reduce wave making by
geometrical design changes or predict wave making to give operational advice. For our civilian AUV,
the wave making was very moderate and a standard non-linear wave resistance approach as described
by Bertram (2000) could be employed. The code was implemented in Matlab. The approach solves
the Laplace equation for potential flow, keeping the submerged body fixed, but iterating at the free
surface, until the non-linear free-surface conditions (atmospheric pressure and no-penetration condi-
tion) are both fulfilled at the actual free surface. Convergence is rapid with typically 3 or 4 iterations.
The computational time on a Pentium IV processor machine of 3.06 GHz is typically 40 s for a grid of
1000 elements.
Fig.4: Flow around surfaced submarine computed with ν-Shallo, source: HSVA
Fig.5: Complex flow around surfaced submarine
Fig.6: Flow around surface-piercing masts of submarine
Fig.7: Flow computed around surface-piercing mast using RANSE solver Comet, source: HSVA
The flow around surface-piercing appendages (masts) features high Froude numbers, typically Fn > 2,
often associated with steep, massively breaking waves, Fig.6. If we are just interested in a resistance
estimate, Michell's integral may be employed, Bertram (2000). A corresponding Fortran routine can
be downloaded from www.bh.com/companions/0750648511. For detailed analyses, free-surface
RANSE codes based on surface-capturing methods are the tool of choice. Such computations are able
to reproduce very realistically the flow around snorkeling masts, Fig.7, but only few specialists can
perform such advanced analyses, El Moctar and Bertram (2001), Bertram et al. (2003).
In snorkelling and surfaced condition, the AUV will also be subject to the seaway. In surfaced
condition, the simulation of sea keeping may appear to be straightforward, as for regular ships.
However, the circular shape of the AUV with very little ‘freeboard’ as opposed to largely wall-sided
ships means that traditional linear sea keeping methods are at least questionable already in moderate
sea states. This leaves nonlinear strip methods as the appropriate tool if sea keeping aspects form
part of the design considerations.
In snorkelling condition, the effect of waves is decreased (exponentially with depth of submergence),
but there are no longer hydrostatic restoring forces for heave, pitch, and roll. A realistic simulation
requires appropriate consideration of viscous damping terms and the action of control foils, i.e.
coupling manoeuvring and sea keeping in time-domain simulations with at least some empirical
correction for viscous damping effects. However, standard linear sea keeping tools like strip methods,
Bertram (2000), may be employed to solve the diffraction problem, artificially keeping the AUV at a
given depth and computing just the forces on the body due to waves. These forces would need to be
counteracted by the control foils to keep the AUV at this position, and the simplified diffraction
analysis can serve to dimension the foils.
We split the body in three simple segments of respective lengths La, Lc, Lf, for aft, centre, and front
part, Fig.8. The aft part and the front part follow from:
ra = R⋅[1 − ((La − x)/La)^na]          rf = R⋅[1 − ((x − La − Lc)/Lf)^nf]^(1/nf)
r is the radius at the position x and R is the radius of the central cylinder.
Fig.8: Geometry for optimization
Fig.9: Grid for wave resistance computation
The body was optimized for minimum total resistance averaged for 0.9, 1, and 1.1 design speed. The
total resistance is computed as the sum of the wave resistance near the free surface and the frictional
resistance following ITTC’57. Wave resistance was computed with the fully non-linear Matlab wave resistance
code mentioned above. This approach corresponds to the state of the art. A total of 843 source
elements were distributed on the body hull (343) and free surface (520), employing symmetry in y by
mirror images of the elements, Fig.9. Desingularization was applied to the free-surface elements.
Constraints for the optimization were constant displacement volume and maximum length of the
vehicle of 1.5 m. We used a simulated annealing optimization algorithm which proved to yield better
results than the standard sequential quadratic programming optimization routine of Matlab version 7.
Comparative calculations revealed that the objective function has shallow and slightly oscillating
contour lines making heuristic optimization algorithms more suitable than gradient based algorithms.
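A sketch of this optimization setup is given below. The fully non-linear wave resistance code is not reproduced here; as a stand-in objective only the ITTC'57 frictional resistance is used, with volume and length constraints as penalty terms, and scipy's dual_annealing takes the role of the simulated annealing algorithm. The exponents na, nf and all numerical values are assumed for illustration.

```python
import numpy as np
from scipy.optimize import dual_annealing

rho, nu, V = 1025.0, 1.19e-6, 1.3     # sea water, Cormoran design speed
vol_target = 25.0 / rho               # displaced volume for 25 kg displacement
L_MAX = 1.5                           # maximum vehicle length [m]
na, nf = 3.0, 2.0                     # shape exponents (assumed fixed here)

def radius(x, R, La, Lc, Lf):
    """Local radius of the three-segment body (aft cap, cylinder, nose)."""
    if x < La:
        return R * (1.0 - ((La - x) / La) ** na)
    if x < La + Lc:
        return R
    xi = min((x - La - Lc) / Lf, 1.0)
    return R * (1.0 - xi ** nf) ** (1.0 / nf)

def objective(p):
    R, La, Lc, Lf = p
    L = La + Lc + Lf
    if L > L_MAX:
        return 1e6                              # length constraint as penalty
    xs = np.linspace(0.0, L, 400)
    rs = np.array([radius(x, R, La, Lc, Lf) for x in xs])
    dx = xs[1] - xs[0]
    vol = np.sum(np.pi * rs**2) * dx            # displaced volume
    S = np.sum(2.0 * np.pi * rs) * dx           # wetted surface (slender-body approx.)
    Re = V * L / nu
    C_F = 0.075 / (np.log10(Re) - 2.0) ** 2     # ITTC'57 friction line
    R_F = 0.5 * rho * V**2 * S * C_F
    return R_F + 1e4 * (vol - vol_target) ** 2  # volume constraint as penalty

# bounds for R, La, Lc, Lf roughly bracketing the Cormoran's dimensions
bounds = [(0.05, 0.15), (0.2, 0.8), (0.0, 0.8), (0.2, 0.8)]
result = dual_annealing(objective, bounds, seed=1, maxiter=200)
print("R, La, Lc, Lf =", result.x, " objective =", result.fun)
```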
Table II and Fig.10 summarize the results. The optimized hull shape is shorter, with a much shorter
parallel mid body and larger diameter, which also improves propulsion as a larger propeller diameter
can be chosen. The power requirements then are effectively reduced by approximately 20%, allowing
a smaller engine or longer autonomy. For comparison, Table III, Fig.11, give the corresponding
results if the simple Michell integral, Bertram (2000), is used to approximate the wave resistance. In
both computational approaches, the general trend towards decreasing the length-to-diameter ratio of the
body is reflected, but the nonlinear wave resistance approach tends to lengthen the forward part and
to shorten the parallel mid body more clearly than the (less accurate) Michell approach, reducing
effectively the required power. The next applications released na and nf as free parameters to allow
more arbitrary shapes. Table IV and Fig.12 give the results for the fully nonlinear wave resistance code and the simple
Michell integral. In this case the differences become very pronounced. The tendency is to eliminate a
parallel mid body completely which is feasible for a very small platform like the Cormoran, that does
not require flat docking facilities.
Table II: Results of optimization for nonlinear wave resistance code, na=3
Original Case 5 (nonlinear) Case 6 (Michell)
7. Conclusion
AUVs should be designed following a mixture of empirical rules of thumb, numerical simulation,
and full-scale tests. Generally, most AUV designs appear to have considerable potential for
improvement in their hydrodynamics.
Acknowledgements
We thank Dr. Hans-Jürgen Bohlmann of Howaldtswerke-Deutsche Werft GmbH in Kiel for valuable
discussions and literature references. We thank Bartolomeo Garau of IMEDEA for extensive help in
vectorization of the wave resistance code and Alberto Gomez for performing some of the
computations. The support from the Spanish National project REN2003-07787-C02-01 and the
Govern Balear UGIZC Project is gratefully acknowledged.
References
ARENTZEN, E.S.; MANDEL, P. (1960), Naval architectural aspects of submarine design, SNAME
Trans., pp.622-692
BERTRAM, V. (2000), Practical Ship Hydrodynamics, Butterworth+Heinemann, Oxford
BERTRAM, V.; CAPONNETTO, M.; EL MOCTAR, O.M. (2003), RANSE simulations for unsteady
marine two-phase flows, RINA CFD Conf., London
BOHLMANN, H.J. (1990), Berechnung hydrodynamischer Koeffizienten von Ubooten zur Vorher-
sage des Bewegungsverhaltens, IfS-Report 513, Univ. Hamburg
BURCHER, R.; RYDILL, L. (1999), Concepts in Submarine Design, Cambridge Univ. Press, 2nd Ed.
DICKEY, T. (1991), Concurrent high resolution physical and bio-optical measurements in the upper
ocean and their applications, Review of Geophysics 29, pp.383-392
EL MOCTAR, O.M. (2001), Numerical computation of flow forces in ship manoeuvring, Ship
Technology Research 48, pp.98-123
EL MOCTAR, O.M.; BERTRAM, V. (2001), RANSE simulations for high-Fn, high-Rn free-surface
flows, 4th Numerical Towing Tank Symposium (NuTTS), Hamburg
FRIEDMAN, N. (1984), Submarine Design and Development, Conway Maritime Press
GERTLER, M. (1950), Resistance experiments on a systematic series of streamlined bodies of
revolution - for application to the design of high-speed submarines, DTMB Report C-297, Bethesda
JANES (1999), Janes Underwater Technology 1999-2000
KUNZIG, R. (1996), A thousand diving robots, Discover Magazine, April, pp.60
LAMBERT, J.D. (1956), The effect of changes in the stability derivatives on the dynamic behaviour
of a torpedo, ARL report ARL/R3/HY/13/0
LANDWEBER, L.; GERTLER, M. (1950), Mathematical formulation of bodies of revolution, DTMB
Report 719, Bethesda
Task Coordination of Automated Guided Vehicles in a Container Terminal
Abstract
This paper introduces a new generation of Automated Guided Vehicles, called IPSI® AGVs, which
are evaluated in the operations of a container terminal. The objective is to identify the number of
automated guided vehicles and cassettes required so that a given number of cranes will not be idle
during ship loading and unloading operations. A protocol called Contract Net is evaluated in the
coordination of cassettes with automated guided vehicles, which are represented as agents in a
simulator. The simulation tests show that under various scenarios a high throughput of container
handling is achieved using cassettes, which act as a moving buffer while coordinating with IPSI AGVs.
1. Introduction
The transport of containers is continuously growing and many container terminals are coping with
congestion and capacity problems. In many ports, especially in Europe, the amount of available land
is restricted, mostly because many of the ports are located in major cities, such
as Hamburg, London, Marseille, and Rotterdam. Thus many managers in ports and terminals are
searching for more efficient means of handling containers.
One area in which container terminal management has been focusing its search for solutions is the
landside transport in container terminals. Often, terminals use trucks, reach stackers, straddle carriers
and Automated Guided Vehicles (AGVs) to move the containers from the marine side of the terminal,
called the quay, to stacks located in the terminal yard. The pictures in Fig.1 show how these various
transport systems look.
Following two European Union sponsored projects, IPSI (Improved Port Ship Interface) and
INTEGRATION (Integration of Sea Land Technologies), a system for handling containers using
cassettes and AGVs has been developed. The IPSI AGVs and cassettes, steel platforms which
containers can be set on, are built by TTS AB for the Roll-On Roll-Off (RORO) shipping sector. A
picture of an IPSI AGV is shown in Fig.2 transporting a cassette loaded with two containers.
The purpose of this paper is to introduce a new development in container handling technology and
study the use of a protocol called Contract Net in the coordination of cranes, cassettes and IPSI AGVs
during the loading and discharging of containers. HENESEY (2006) suggests that many container
terminal managers view the interface between the quay cranes and the yard as a major problem when
considering plans or solutions. An objective that many container terminal managers share is to
keep the assigned quay cranes from being idle and to avoid interruptions during ship operations, so as to
quickly service (turn around) a ship.
This paper presents simulation as a tool using agent technology which can offer container managers
an alternative approach in understanding and testing decisions for transhipping containers. The
simulation approach offers the power of problem decomposition and parallelism for problems that are
considered to be complex, such as in the operations of transhipping containers in a container terminal.
In addition, simulation provides a method of evaluating a concept that has not been used in the real
world, LAW and KELTON (2000). Therefore, the evaluation and testing of the new AGV system by
simulation provides more robust results than spreadsheet analysis.
The remainder of the paper is organized as follows: section 2 gives a general description of the container
terminal operations. In section 3, an overview of the methodology and model is presented. Section 4
provides a description of the simulation experiments. The results are presented and discussed in
section 5. A discussion and conclusion is presented in section 6 with pointers for future work.
The number of Twenty-foot Equivalent Unit containers (TEUs) shipped world-wide has increased
from 39 million in 1980 to 356 million in 2004 and growth is still projected at an annual growth rate
of 10 per cent till 2020, DAVIDSON (2005). In order to handle such volumes, larger container ships
are being designed and built with capacities of 12,000+ TEUs. Often due to both physical and
economic constraints, big container ships are calling at a smaller number of ports. Many shipping
companies are trying to serve a geographic region, such as Europe, by establishing two or three main
hubs from which smaller container ships will “feed” containers to and from ports in the region. With
the large flow of containers being transhipped, a segment of the shipping business called feedering is
increasing, and thus the number of containers being transhipped is also increasing. In a study by Ocean
Shipping Consultants (OSC), the total transhipment throughput for Europe and the Mediterranean has
increased more than threefold over 1995-2004, and by 58 per cent over 2000-2004, to 22.5 million TEU,
OSC (2006). The container throughput for nine port regions is presented in Table I, indicating growth
in number of containers handled.
In considering future demand, the study suggests that North European transhipment demand will
increase over 2004-2010 by 56-68 per cent to 14.73-15.87 million TEU and in South
Europe/Mediterranean region by 80-97 per cent to 23.5-25.7 million TEU, OSC (2006). For
2010-2015, the study indicates that transhipment demand in North Europe will increase by 31-42 per
cent and in South Europe/Mediterranean, a further increase of 41-55 per cent to 33.2-39.8 million
TEU OSC (2006).
This description of container shipping indicates that many ports will be handling a large number of
transhipment containers, both for ‘feeder’ vessels and for serving bigger ‘deep-sea’ vessels. There is a
growing number of container terminals around the world in which transhipping containers is the
dominant activity, such as Malta, Gioia Tauro, Salalah, Algeciras, Singapore, etc., BAIRD (2005). To
meet increasing demand, ports and container terminals will try to create additional capacity. Many of
the solutions considered can be classified as either physical expansion or increasing terminal
performance. Some of the solutions are:
• Increase the length or number of berths
• Increase the productivity at the berth by acquiring new technologies or machinery
• Increase the time that berths can be operated (i.e. 24 hours, 7-days a week)
Some types of physical expansion solutions are purchase of new or additional equipment, hiring more
labour, development and purchase of land. Solutions that can be classified as increasing terminal
performance are:
• Improve the efficiency in allocation of resources
• Improve policies and management decisions, which often have non-optimal objectives
In this paper we will focus on improving the efficiency in the allocation of resources as a part of our
research work in developing an Intelligent Decision Support System for container terminal
operations. In the next section we will describe the model and processes that we have considered for
developing the simulation model.
3.1 Methodology
ISODA (2001) describes that mapping real world entities into programming languages has been one
of the greatest desires of software developers. Object-oriented Simulation (OOS) modelling can be
used for example in mapping the behaviour of interacting objects over time. The simulations built
with these tools possess the benefits of an object-oriented design, including the use of encapsulation,
inheritance and polymorphism, FISHWICK (1995). Mapping problems from the real world to the
statements of a computer programming language has always been the trickiest part of the work for
developers.
Computer programming languages such as C++ and Java implement the concept of an object,
which has attributes and operations, just like an entity in the real world. Like object-oriented
simulation, MABS provides a close match between the entities of the reality, the entities of the model
and the entities of the simulation software. MABS is not a completely new and original simulation
paradigm DAVIDSSON (2000), it is influenced by and partially builds upon: 1) Traditional dynamic
micro-simulation, 2) Parallel and distributed DES, and 3) Object-oriented simulation. However, in an
OOS, the simulated entities are typically purely reactive, not using any communication language,
stationary, static, and not modelled using mentalistic concepts DAVIDSSON (2000). MABS differs
from other kinds of computer-based simulation in that (some of) the simulated entities are modelled
and implemented in terms of agents.
That is why we use the term agent for these real world entities, while in the programming domain we
implement them as objects. During the simulation, all these objects work like processes in an
operating system and perform their tasks in parallel with each other. This behaviour closely
replicates the real world scenario, where the agents of the system continuously perform their work,
coordinating with each other to finish the overall job. These objects are directly
mapped to the agents (entities) of the port, like quay crane, AGV, and cassette.
Based on this direct mapping, we used DESMO-J as our preferred library. DESMO-J is
an open-source library written in the Java programming language and is available for
download from the University of Hamburg, Germany, U.H. (2006). DESMO-J provides a runtime
process-based simulation engine that can be used to map port agents (entities) and to simulate the
coordination of these processes. Agents use the Contract Net protocol to coordinate tasks. This protocol
is implemented because it is a fairly good means of distributing tasks and of self-organisation of the
group of agents. The protocol is suitable for our model since we describe a hierarchy of
well-defined tasks. Also, the problem of moving containers from quay crane to the yard in the
container terminal is characterised by a coarse-grained decomposition.
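The essence of the Contract Net protocol can be sketched in a few lines: a manager announces a task, contractors bid, and the task is awarded to the best bid. The sketch below is a hypothetical, stripped-down Python illustration with invented names and timings; the actual simulator is implemented in Java on top of DESMO-J.

```python
import random

class AGV:
    """Contractor agent: bids its estimated completion time for a task."""
    def __init__(self, name):
        self.name = name
        self.busy_until = 0.0

    def bid(self, now, task):
        # estimated completion = earliest start + assumed transport time
        transport = random.uniform(4.0, 6.0)
        return max(now, self.busy_until) + transport

class CraneManager:
    """Manager agent: announces tasks and awards them to the best bidder."""
    def __init__(self, agvs):
        self.agvs = agvs

    def announce(self, now, task):
        bids = [(agv.bid(now, task), agv) for agv in self.agvs]  # call for bids
        best_time, winner = min(bids, key=lambda b: b[0])        # award
        winner.busy_until = best_time
        print(f"t={now:4.1f}: task {task} awarded to {winner.name} "
              f"(done at {best_time:.1f})")

random.seed(1)
manager = CraneManager([AGV("AGV-1"), AGV("AGV-2"), AGV("AGV-3")])
for i, t in enumerate(range(0, 20, 2)):      # a container every 2 time units
    manager.announce(float(t), f"container-{i}")
```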
The system that we have modelled is partly illustrated in Fig.3, where a container terminal layout is
presented, showing a ship docked along a quay of a container terminal. We have followed the general
simulation process described by LAW and KELTON (2000) (attached as an
Appendix). The simulation process is currently at stage 5, in that we are testing a
prototype with real data. In formulating questions to evaluate, we have focused on modelling the
operations between the quay cranes and the transporters that transfer containers between the stacks
and quay. Also shown in Fig.3 are text boxes listing questions that a container terminal manager
would ask in allocation of cranes, cassettes and AGVs to serve a ship that will be loaded/unloaded
with containers. The main questions that would be considered in deciding the resource allocation
would be in the following sequence:
1. How many containers in the ship?
2. How many cranes should be assigned to work the ship?
3. How many cassettes, acting as a buffer, should be used?
4. How many AGVs should be allocated?
4. Agents in the model
This section defines the agents of the system along with their attributes, functions and messages.
4.3 Container
The total number of containers in the simulation experiment is specified by the user before the start of the
experiment. Each container has a unique name assigned to it.
Functions
The AGV is responsible for transporting the container assigned to it from the quay crane buffer to
the container stack. The time required to transport a container is a random number drawn from a range
specified by the user.
5. Experiment description
The simulation tool offers a command prompt interface to execute a simulation. The input parameters
are stored in a text file from which the AGV simulator reads the parameters and executes the
simulation until all the containers are put on the stack. The output of the simulation is a set of files
generated by DESMO-J library based on the progress of the simulation. The files contain the
information of all events taken place during the simulation. The file also prints the overall
performance of each machine (agent) involved in the simulation.
An industrial partner provided us with a scenario in which a ship arrives and a number of
decisions must be made on the allocation of quay cranes, cassettes and AGVs. To demonstrate the
experiment in this paper we are using the following settings:
Based on the above values we execute the simulation; the result is shown in Fig.4 as a time graph for
ease of understanding.
The time graph shows the activities of the entities involved in the simulation. The graph shows that
the crane was not idle for the first 15 time units, but after that it was frequently idle. AGV
service time and container arrival time were taken as constant values of 5 units and 2 units
respectively. In the actual simulation, each is a random number drawn from a fixed range, which
makes the results more realistic, as there may then be more idle time for the crane. From
the above results one can conclude that one more AGV needs to be introduced in the simulation
to keep the crane busy throughout the simulation. As the crane operating cost is higher than
the AGV operating cost, these results are very helpful for the supervisor at the port in deciding how
many cranes and AGVs to allocate to a ship to complete the work in minimum time.
Fig.5 shows in more detail the problem of a single quay crane unloading or loading
containers onto a cassette that will be transferred by an AGV. The simulator helps us to determine the
best combination of terminal resources, utilization rate and the service time necessary for serving a
ship.
Fig.5: Quay crane, cassette loaded with containers, and AGV
From the detailed illustration in Fig.5, we can see how the various terminal machines are used
together for transferring containers within the container terminal. Obviously the addition of more
cranes, cassettes and AGVs adds more complexity in finding solutions that will provide a fast turn-
around time for a container ship while using the equipment efficiently.
In the next section we will describe a scenario that was tested from data provided by industrial
partners. The data and information used as parameters and settings were initially checked for
confidence to ascertain correct values for cycle times for the quay cranes and AGVs.
6. Simulation Results
A series of simulation tests were conducted to identify the best combination of terminal resources;
quay cranes, cassettes and AGVs for serving a ship with 493 containers to be unloaded/loaded. A
container handling rate is obtained by dividing the number of containers handled by a quay crane by
the operating time; e.g. 300 containers handled in 10 hours would yield a container handling rate of 30
containers per hour. The utilization rates for the quay cranes, cassettes and AGVs are determined by
dividing the average service times by the total time needed to serve a ship, which is often called turn-
around time. In the graph presented in Fig.6 the results for various configurations or combinations of
container terminal equipment are tested in order to compare the container handling rate during
operations within a 24 hour interval. The left hand side of the graph shows the handling rate as units
of containers handled per hour in a simulation time (turn-around time). The simulation times vary as
the number and types of configurations of equipment are experimented. To compare the handling
rates, we compare the handling rate per hour for the 24 hour period of time. Therefore, a quay crane
having a handling rate of 40 containers per hour, moving 493 containers in 14.7 hours,
achieves the same handling rate as 4 quay cranes moving the same load of containers in
3.17 hours. Thus, we can see from the graph that having additional cranes provides a
higher handling rate, but this must be compared to the utilization rates for various machine resources
on the right hand side of the graph. Having one quay crane handle a task leads to higher
utilization rates for itself, the cassettes and the AGVs when compared to using 3 or more
quay cranes. It appears that the utilization rates start to stabilize as more quay cranes are
added.
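For clarity, the two measures can be written out as a small calculation, using the example numbers from the text (the crane service time is an assumed value):

```python
# Handling rate: containers handled divided by the operating time
containers, hours = 300, 10.0
handling_rate = containers / hours             # = 30 containers per hour

# Utilization: average service time divided by the turn-around time
service_time_h, turn_around_h = 12.0, 14.7     # assumed example values
utilization = service_time_h / turn_around_h   # ~ 0.82
print(handling_rate, round(utilization, 2))
```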
Fig.6: Container handling rates (containers per hour, left axis) and utilization rates of quay cranes, AGVs and cassettes (right axis) for all tested combinations of 1-4 quay cranes, 3-4 cassettes and 2-4 AGVs during a 24 hour time period
The tests further indicated that often introducing an additional cassette or AGV would offer the same
or better handling rate than introducing another crane and its group of cassettes and AGVs. The
possible use of IPSI AGVs and the cassette system lies in stacking a number of containers on a
cassette for transport by an AGV. The cassettes appear to offer the AGVs higher service time levels,
which leads to a higher utilization rate. The utilization rates for cassettes and AGVs seem to follow a
similar pattern.
The initial results and testing of the prototype AGV simulator provide some interesting observations
in determining the quantity and configuration combination to test. The pilot runs that we have
conducted are also creating further questions that require more investigation. Naturally, the notion of
adding more quay cranes to have better handling rates is not applicable when costs are considered.
According to various sources, the capital cost of a quay crane can be nearly 7 million euro or more, DE
MONIE (2005). The pilot runs have given us much insight into the relationships among various
terminal equipment types that can be used in container terminal operations.
The introduction of the cassette system and AGVs offers some advantages in that the cassette can act
as a 'floating' buffer, allowing the AGVs to keep servicing a quay crane without having to wait.
Waiting time is lower for the AGVs and thus they obtain a better utilization rate.
This paper proposes that a new technology using cassettes and AGVs can be conceptually modelled
and simulated in the context of a container terminal system. The modelling approach using MABS has
provided a better granularity in modelling the entities and having them communicate and coordinate
with other entities. Further experiments will be designed and data will be verified. The parameters
and settings will also be investigated and validated for confidence. Our agents in the simulation
function as purely reactive agents; more work is planned for the cassette agents, introducing logic
that assists these agents in assigning tasks to AGVs.
Acknowledgements
This work has been partially funded by Karlshamn Municipality. The following port industry
representatives have provided useful information necessary for the development of the IPSI AGV
Simulator: TTS AB in Göteborg, Sweden; Lennart Svensson, Bjørn O. Hansen and Michel Lyrstrand.
References
BAIRD, A. (2005), Optimising the container transhipment hub location in northern Europe, J.
Transport Geography, article in press
DAVIDSON, N. (2005), A global capacity assessment and needs analysis, 39th Terminal Operating
Conference, Antwerp, Belgium
DAVIDSSON, P. (2000), Multi Agent Based Simulation: Beyond Social Simulation, 2nd International
Workshop on Multi Agent Based Simulation (MABS’2000)
DE MONIE, G. (2005), Environmental Scanning in Ports, ITMMA Private Public Partnerships in
Ports, Antwerp, Belgium
FISHWICK, P.A. (1995), Simulation Model Design and Execution, Building digital worlds, Prentice-
Hall
HENESEY, L. (2006), Agent Based Simulation for Evaluating the Operational Policies in the
Transhipping of Containers, Blekinge Institute of Technology working paper
ISODA, S. (2001), Object-oriented real-world modeling revisited, J. Systems and Software 59/2, pp.
153-162
LAW, A.M.; KELTON, W.D. (2000), Simulation Modeling and Analysis, McGraw-Hill International,
Boston
MULLER, J.P. (1996), The Design of Intelligent Agents: A Layered Approach, Springer-Verlag,
Berlin
OCEAN SHIPPING CONSULTANTS (2006), European and Mediterranean Containerport Markets
to 2015. Surrey, UK, Ocean Shipping Consultants, Ltd.
UNIVERSITY OF HAMBURG, Department of Computer Science (2006), DESMO-J, http://www.desmoj.de
WOOLDRIDGE, M. (2002), An Introduction to Multi Agent Systems. West Sussex, England, John
Wiley and Sons, Ltd.
Appendix 1:
1. Formulate problem and plan the study: Every simulation begins with a statement of the problem. It is important that the simulation analyst and the client agree on the problem definition in transshipping operations and object relationships in a container terminal.
2. Collect data and define the model: Collect data on terminal layout and procedures, specify parameters, decide on the level of model detail, and discuss with key experts. Set an overall project plan that includes a statement about the various transshipment scenarios that will be investigated.
8. Make production runs: Production runs of transshipment operations in a container terminal simulator.
9. Analyze output data: Determine the absolute performance of certain system configurations and compare alternative system configurations for estimating measures of performance for the simulated scenarios.
10. Document, present, and use results: Documentation of assumptions and results of all the analysis should be reported clearly and concisely. Results can be used in the validation processes to promote credibility for the simulation model.
Combining a Knowledge System with Computer-Aided Design
Bart van Oers, Delft University of Technology, Department of Marine and Transport
Technology, Delft, The Netherlands, [email protected]
Martin van Hees, Qnowledge B.V., Wageningen, The Netherlands, [email protected]
Abstract
This paper discusses the implementation and application of the knowledge-based system
Quaestor linked to the NURBS-modeller Rhinoceros. The research improves the computer-
aided design of ships and other technical objects; several applications in the field of ship
design show the potential of the approach. The main benefit is the integration of different
types of knowledge in a single software environment, creating a flexible, integrated and
detailed design model. The resulting model is capable of feeding data to state-of-the-art
prediction tools, while still allowing large design changes, making it well suited for the design
and analysis of new concepts.
1. Introduction
The knowledge-based system Quaestor, Van Hees (1997), is used to design technical products
such as ships and propellers, Sipkema and Van Hees (2002). Quaestor‘s inference engine
assembles, from the design relations in its knowledgebase, a computational model based on
the requested goals and applies an enhanced Newton-Raphson solver. The computational
model consists of any combination of parameters, relations, constraints and links to external
programs, e.g. Matlab, Excel or legacy software. Quaestor allows an integrated,
multidisciplinary design approach and, through the reduction of time constraints, enables the
rapid evaluation of multiple concept designs.
From its inception, Quaestor was employed extensively to assist with the design of naval
vessels, e.g. frigates, Keizer (1998) and submarines, Van der Nat (1999). Though these early
applications proved highly successful, their main drawback was a parameter-based
description of the design, limiting the integration of geometrical information in the design
process. Van der Nat (1999) partly remedied this through the purpose-built space-allocation
algorithm SUBSPACE, which provided an accurate estimation of compartment dimensions
based on the components placed inside. Nevertheless, support to create, analyse and export
free-form shapes was sorely lacking. This resulted in the extensive use of shape factors, while
export capabilities to other programs, e.g. prediction tools were limited. Another important
drawback was the lack of means to visualise designs.
To remove these shortcomings, Van Oers (2004) developed a two-way link between Quaestor and
the commercial NURBS-modeller Rhinoceros, McNeel (2004). It enables the generation of
free-form geometry in Rhinoceros based on information from Quaestor. Moreover,
information derived from the geometry in Rhinoceros, e.g. hydrostatics, feeds back into
Quaestor for inclusion in the calculation process. More recently, Van Hees and Van der Blom
(2006) report a further expansion of Quaestor, extending the capability to handle binary data,
e.g. images and CAD-files. More importantly, they introduce an object-oriented structure,
further streamlining data management. The new capabilities create a powerful integrated
design model, able to design ships and other artefacts at higher levels of detail. In addition to
integration benefits, Quaestor increases, through its inference engine, the flexibility of the
geometrical model compared to existing parametric CAD-software. This paper provides an
overview of these developments. Section 2 discusses both Quaestor and Rhinoceros, together
with the data-exchange. Section 3 discusses the potential capabilities, while Section 4
evaluates these with several test cases. Finally, Section 5 draws conclusions and presents
topics for future research.
2. Approach
2.1 Quaestor
Quaestor, Van Hees (1997), is a knowledge-based system shell consisting of an inference
engine coupled to a numerical solver, operating in a network database. Inside the shell, the
user defines three types of objects: relations, parameters and constraints.
Relations, such as the simple example in eq. (1), describe a design and are either of the
numerical, nominal (dealing with text) or binary type (e.g. accessing external applications).

B_wing = ( B_hull − B_hold ) / 2    (1)

Parameters form the building blocks of relations, contain information and can be of any of the
following types: numerical, characters (binary) or object. The object type can hold value
collections of any of the three types. The parameters in eq. (1) are B_hull, B_hold and B_wing.
Constraints connected to a relation provide a condition to be fulfilled before the inference
engine can use the relation in the computational model. For example, eq. (1) should only be
applied when the left-hand side is larger than zero; this could be enforced with a constraint.
After feeding the knowledgebase with appropriate relations, the user can select one or more
parameters as goals. To be able to solve for these goals, the inference engine uses the
available relations in the knowledgebase to assemble a coherent computational model. In
addition to retrieving relations, the inference engine also prompts the user to provide input for
the requested parameters. If multiple relations are deemed suitable by Quaestor, the user is
asked to select the appropriate one. Quaestor applies a set of heuristic rules to establish the
sequence in which relations are proposed. Upon completion of the computational model
assembly, it is solved using a multi-dimensional Newton-Raphson solver, capable of reducing
degrees of freedom by term rewriting, and decomposing large systems of equations into
smaller ones, hence determining the requested goals. Though the term computational model is
used here, it consists of a combination of relations, parameters and constraints and can deal
with numerical values, text and binary data in a single design or analysis model.
A key feature of Quaestor is the separation of the computational model and data of individual
designs, i.e. knowledge and values. For example, an existing computational model can be re-
used with different input; alternatively, the computational model can be rebuilt to investigate
different goals. The latter capability allows for a flexible re-use of existing relations, unlike,
for example, a spreadsheet.
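The goal-directed assembly can be illustrated with a much-simplified Python sketch. The relations below (including a breadth relation using the parameters of eq. (1)) are only plausible stand-ins for a real knowledgebase, and direct recursive evaluation replaces Quaestor's Newton-Raphson solver and term rewriting.

relations = {  # parameter -> (inputs, function); names are illustrative only
    "B_hull":       (("B_hold", "B_wing"), lambda hold, wing: hold + 2.0 * wing),
    "displacement": (("L", "B_hull", "T", "Cb"), lambda l, b, t, cb: l * b * t * cb),
}

def solve(goal, inputs):
    """Recursively assemble and evaluate the computational model for goal."""
    if goal in inputs:                  # value supplied by the user
        return inputs[goal]
    deps, fn = relations[goal]          # retrieve a suitable relation
    return fn(*(solve(d, inputs) for d in deps))

user_input = {"B_hold": 20.0, "B_wing": 2.5, "L": 180.0, "T": 9.0, "Cb": 0.7}
print(solve("displacement", user_input))   # 28350.0, derived via B_hull = 25.0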
Van Hees and Van der Blom (2006) present an object approach, which stores design data and
provides a structured design overview, such as the one shown in Fig. 1. In addition, the object
structure enables the use of dynamic arrays containing the data of multiple parameters of a
single type. For example, instead of defining the parameters related to the bulkheads shown in
Fig. 1 for each bulkhead individually, the object structure allows the designer to define them
only once and re-use them as required. All ship designs exhibit dependencies between different
design aspects, e.g. both hull form and general arrangement influence the initial stability,
and therefore data is shared between different object branches. This allows the designer to
take dependencies in the design into account. Van Hees (1997) and Van Hees (2003) offer a
thorough introduction to Quaestor; Van Hees and Van der Blom (2006) discuss the most
recent developments.
Fig. 1: Object structure containing design data
2.2 Rhinoceros
Rhinoceros, McNeel (2004), is an inexpensive and powerful general-purpose CAD-package. It uses
Non-Uniform Rational B-Splines (NURBS) to define curves, surfaces and solids. It also
includes a large set of analysis tools, e.g. to establish area, volume and curvature properties. A
powerful scripting language, based on Visual Basic script, offers the automated creation and
analysis of geometry. Though no inherent parametric capability exists, the scripting enables
the parametric design of geometry. Most importantly, using an ActiveX object, the scripting in
Rhinoceros is exposed to other programs, enabling the real-time exchange of scripting
commands and data derived from the CAD-model to and from other programs. In addition to
the scripting, purpose-built plug-ins, written in Visual C++, enable further expansion of
capabilities.
Fig. 2: Data exchange and workflow between Quaestor and Rhinoceros: Quaestor assembles the script from a script template and input values of the computational model and writes it to file; Rhinoceros creates and analyses the geometry and exports the resulting data to file for return to Quaestor
Fig. 2 shows the data exchange and workflow. The script files are generated using a template,
shown in Fig. 3. All characters after the `~', i.e. `length', `width' and `output_file$', are
replaced with parameter values from Quaestor. In addition to defining input parameters, the
template also contains script commands to create and analyse geometry inside Rhinoceros, in
this case to draw a square. Fig. 3 and Fig. 4 also show the possibility to analyse the geometry,
i.e. determine the square's area, and write it to an output file to export the data back to
Quaestor. This allows results to be processed in the calculation process. The relation shown in
Fig. 4 writes the script file, executes it in Rhinoceros, reads the resulting output file and
stores the data in `data_from_rhino$'. The inference engine in Quaestor ensures the correct
order of execution.
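The substitution mechanism can be pictured with a short Python sketch; the template text and the parameter values are invented examples, not the actual template files.

def fill_template(text, values):
    # replace the longest names first so that ~output_file$ is not clipped
    for name in sorted(values, key=len, reverse=True):
        text = text.replace("~" + name, str(values[name]))
    return text

template = (
    "Call Rhino.AddLine(Array(0,0,0), Array(~length,0,0))\n"
    "Call Rhino.AddLine(Array(~length,0,0), Array(~length,~width,0))\n"
    "' ... analyse the geometry and write the area to ~output_file$\n"
)

params = {"length": 10.0, "width": 10.0, "output_file$": "area_out.txt"}
print(fill_template(template, params))   # script text ready to run in Rhinoceros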
Fig. 5: A ship model consisting of curves, surfaces and solids
3.2 Export
The design and engineering process of a ship relies on a series of different programs, each
with its own data format. To handle this exchange of data, a large number of export formats
must be available; these are provided by Rhinoceros. Moreover, the scripting capability of
Rhinoceros offers the opportunity to create custom export formats, e.g. to provide input for
prediction tools.
First, replacing the manual generation of geometry by automation using scripts greatly speeds
up the process. The user provides the input, instead of actually drawing. Secondly, separating
the parametric relations from the geometry enables the re-use of both scripts used to create
geometry, e.g. to draw a deck, and the parametric relations themselves, e.g. to position the
deck. Provided multiple parametric relations are available, this enables a rapid re-
configuration of the design, by allowing the designer to choose the relation deemed most
applicable. Thirdly, building a geometrical model driven by the requested goals, e.g. predicted
radar cross-section, reduces the scope of the model. Geometry is generated only when
requested, thereby preventing excessive detail and hence reducing the amount of input
provided by the designer. Fourthly, the inference engine of Quaestor deals with any
dependencies in the design, e.g. data needs to be entered only once, ensuring both a consistent
design model and a reduction of user input. Related to this is the ability of Quaestor to reverse
relations depending on the requested goals, which ensures that only the input needed to
determine the independent parameters is requested; the dependent parameters are derived
from the independent ones, Van Hees (1997). Fifthly, should even more flexibility be
required, different levels of human interaction are allowed for. This could include the
generation of components using scripts, with the designer being responsible for the actual
positioning of components. The position of components is analysed by a script upon
completion and results are used in the remainder of the calculation process in Quaestor, e.g.
in a stability calculation.
Together, these improvements in flexibility increase the opportunity for the designer to
investigate design variations, by allowing bigger design changes and by reducing the time
required to implement them.
4. Examples
Three examples are discussed. First, the use of Quaestor and Rhinoceros as a parametric
CAD-system applied to hull form design. Secondly, the design model is used to provide input
for prediction tools for topside design, i.e. positioning weapon and sensor systems aboard
naval vessels. A last example shows the integrated design of a Joint Support Ship, a new type
of naval auxiliary vessel.
solely due to the parametric definition of the curves (shown in Fig. 6), but more importantly
due to the selection from the different types of curves in the knowledgebase. This is similar
to the parent hull form approach, but by relying on the curves instead of the deformation of an
existing hull form, far larger changes are possible. Among such changes are those that affect
the surface topology of a patched surface, e.g. removing a bulb, or changing from a
traditional stern with a single propeller to a pram-type aft-body with two propellers.
Moreover, the resulting hull form modeller incorporates easily into an overall design model,
maintaining the integrated design approach.
Fig. 7: One of the concept designs for the Joint Support Ship, from Bons (2006)
The geometrical model in Rhinoceros provides data on hydrostatics, available areas, volumes
and centres of gravity. The geometry includes the hull surface, bulkheads, decks, tanks and the
payload, e.g. a Goalkeeper close-in weapon system and replenishment-at-sea masts. In
addition, it offers a visualisation of concept designs, as shown in Fig. 7. Quaestor provides the
integration environment and performs both pre- and post-processing functions, i.e. the
dimensioning of components, the creation of script files for drawing them and the processing of
the results after placement, to include them in the weight and stability analyses.
4.3 Generating input for prediction tools
The third example deals with generating input for state-of-the-art prediction tools. The most
ambitious application so far was part of the Integrated Topside Design (ITD) project,
conducted by the Royal Netherlands Navy in cooperation with TNO Defence (Netherlands
Organisation for Applied Scientific Research). Reported in Bos et al. (2004), it aimed at the
integration of the analysis of infrared and radar signatures and electro-magnetic interference
during the conceptual design of naval vessels. The combination of Quaestor and Rhinoceros
provided the geometry for an initial test case. Custom scripts generated the meshes for three
prediction tools: EOSTAR for infrared (IR) signature prediction, RAPPORT for radar cross-
section prediction and EMENG for electro-magnetic interference.
Fig. 8 and Fig. 9 show the mesh used for the IR signature prediction together with the
calculated IR radiance distribution. As this was an initial test case, the calculations were run
manually using the generated meshes; the results from the analysis were not returned to
Quaestor for further processing. Despite this, the project still showed the possibilities of the
approach.
5.1 Conclusions
Originally developed as a small visualisation tool with a data-feedback capability, the
approach has proved to be surprisingly scalable. The three main benefits are:
Several limitations were also encountered, the two most important ones are:
• The use of Quaestor as a mixed numerical/geometrical constraint solver had limited
success. An example would be to determine the shape of a tank enclosed by the hull,
two decks, a longitudinal and two transverse bulkheads, with the distance between the
two transverse bulkheads being the variable. This approach poses stringent demands
on computational power, as one is defining and analysing geometry in an iterative
way. One of the causes is the time-consuming interpretation of Visual Basic scripts by
Rhinoceros. A possible solution would be to write purpose-built Visual C++ plug-ins
to speed up such iterative geometry operations.
• Generating input for prediction tools from the geometrical model would occasionally
run into problems. This was mainly due to the lack of truly generic conversion scripts,
resulting in incorrect meshes for the signature prediction tools.
Acknowledgements
We would like to thank ir. R. Brouwer, ing. A.S. Visser, ing. L.F. Galle, and A. Bons, all
from the Defence Materiel Organisation, Netherlands Ministry of Defence, for the fruitful
cooperation during the research and the permission to publish the results.
Literature
BONS, A. (2006), Development Concept Design Model, MSc Thesis, Delft University of
Technology, Department of Marine and Transport Technology
BOS, A.D., et al. (2004), Automated Analysis in Support of ITD, TNO Physics and Electronics
Laboratory, TNO-DV1 2005 A058
GORIS, B. (2005), Rapid Hull Modeling in Rhinoceros, http://www.rhinocentre.nl/
VAN HEES, M.T. (1997), Quaestor: Expert Governed Parametric Model Assembly, PhD
Thesis, Delft University of Technology
VAN HEES, M.T. (2003), Knowledge-based Computational Model Assembling, Summer
Computer Simulation Conference 2003, Society for Modeling and Simulation International
VAN HEES, M.T.; VAN DER BLOM, E.C. (2006), A Knowledge-based Dredger Cooling
System Configurator, Fifth International Conference on Computer Applications and
Information Technology in the Maritime Industries, Oude Poelgeest, The Netherlands
KEIZER, E.H.W. (1998), Future Reduced Cost Combatant Study (NATO Restricted)
MCNEEL, R., et al. (2004), Manual of Rhinoceros 3.0, http://www.rhino3d.com
VAN DER NAT, C.G.J.M. (1999), A Knowledge-Based Concept Exploration Model For
Submarine Design, PhD Thesis, Delft University of Technology
VAN OERS, B.J. (2004), Manual of the link between Quaestor and Rhinoceros 3.0 (in Dutch)
SIPKEMA, S.F.; VAN HEES, M.T. (2002), KOAS: An innovative propeller design system
(in Dutch); received the Dutch Timmersprijs for the most promising innovation in maritime
design tools
Optimizing man-hours of Nordseewerkes’ assembly halls using Genetic
Algorithm including space allocation as boundary condition
Marcus Bentin, Nordseewerke GmbH, Emden/Germany, [email protected]
Urs Henkelmann, Nordseewerke GmbH, Emden/Germany, [email protected]
Christof Sacher, Nordseewerke GmbH, Emden/Germany, [email protected]
Abstract
A tool is needed to optimize the planning of the assembly halls. At Nordseewerke these halls are
bottlenecks. The resources used are space and man-hours, which influence each other. Nordseewerke
has three assembly halls, and modules can pass through assembly processes in more than one hall
(e.g. one module can be the child of the next bigger module that is assembled in another hall). Hence
it is a dynamic problem: lateness in one hall can cause lateness in another hall.
Today, periods of overcrowded halls are followed by periods of less allocated space. This results in a
fluctuating usage of man-hours; a nearly constant usage of man-hours would be more efficient.
Therefore, Nordseewerke developed a tool to optimize/level the man-hours, including the space
allocation as boundary condition. The design variable in this system is the assembly duration of
each module. The boundary condition is checked with a space allocation simulation using eM-Plant
from Tecnomatix/UGS. Graphical User Interfaces are developed to enable the user to model space
allocation problems as required. The user can fix the position of modules in an assembly hall at a
certain position or he can put the module into another assembly hall. The system uses data from the
bill of material (BOM), planning data from the planning department (budget and planning dates) and
gets the area information of each module by a dxf file from our CAD System.
The objective of the optimization is to level the man-hours. It can be enhanced by penalty factors
for tardiness of modules and for exceeding a maximum man-hour capacity.
The paper will describe the new space allocation system as well as the optimization system based on
Genetic Algorithm. In the end current results are given.
1. Introduction
Nordseewerke started in 2003 to develop simulation tools for operative and tactical planning. The first
simulation tool was built for the cutting workshop and panel line. The tool is used by the foreman to
organize the daily business of the workshop. The simulation window into the future is about four
weeks. The planning department uses this tool to verify its plans.
One of Nordseewerke's main restricted capacities is the erection space in the assembly halls (Hall 17,
117, 119) together with the available personnel (shipbuilders and welders). Hence this area has to be
the focus of planning. If the used/planned capacity in these assembly halls is smoothed, the used
capacity in the predecessor workshops is smoothed too, and the expensive ups and downs of the used
capacity, resembling a fever curve, no longer occur. The challenge is to build a plan that takes
care of both bottlenecks, space and personnel. Systems introduced in the past focused on the
management of time windows and assembly space but neglected the personnel resource, e.g. Langer et
al. (2005), Massow et al. (2004).
In general the assembly of modules at NSWE can be continued in another assembly hall before the
module is erected on the building slip; e.g. a module built in assembly hall 17 can move to assembly
hall 119, where it is joined with another module to form a bigger module. This makes it a
dynamic problem that can only be solved by simulation, because lateness in one assembly hall can
cause lateness in the subsequent assembly hall. The space utilization of the assembly halls is very
high. Space planning with boxes that describe a module will lead to wrong solutions; the new tool
has to use the real required building surface as accurately as possible. The restricted personnel
resource can be addressed in two ways. The first way would be an assembly simulation that knows at
least how many welders and shipbuilders are used to build a module in a certain time. This can go
into a detailed simulation of each process necessary to assemble the module. If such information is
not readily available, a second approach is of interest. The target is to smooth the personnel
resource curve. Hence an optimization can be helpful that smoothes the curve directly by changing the
duration of the assembly time window and the assembly sequence of the modules. The work content
of each module is described by linearized budgets for welders and shipbuilders.
The developed tool is integrated in the NSWE IT environment. The new space allocation planning
tool can be used by the foremen to solve their daily space allocation problems. Therefore a program
was developed that collects all the necessary data and presents it to the user. The foreman works with
a powerful graphical interface that displays the surface of each module he has to build in a specific
week and the surface of the assembly hall. The foreman can place the modules by drag and drop onto
the surface of the assembly hall. Besides this, the program generates information for the simulation. A
simulation model is built that describes the material flow between the assembly halls including the
part supply, conservation halls and transport. The erection space in the halls is modeled using the
space allocation component from the STS (Simulation Toolkit Shipbuilding), Nedeß (2004),
Steinhauer (2005). The planner can use this tool to optimize his assembly plan. He has to generate a
module structure of the new ship and has to assign the assembly surface figures to each module. He
can use the optimization tool that is connected to the simulation model and is based on a genetic
algorithm. Its design variables are the sequence and duration of module assembly. The optimization
objective is to minimize the variance with respect to a mean or a given resource curve. Therefore a
fast-calculating simulation model is developed.
The tool connects the user with design, production control and planning information. This makes the
tool effective.
Fig. 1: Basic concept of the space allocation planning and optimization tool FPS
(FlächenPlanungsSystem), connecting planning, simulation and optimization
Fig. 2: Data collected and connected by the FPS and displayed for each ship: DXF files and module placing data (turning, hall) from TRIBON; BOM (module hierarchy), production control (production status, real dates), budgets (planned budgets, real man-hours), buffer store (availability) and conservation (duration, place) data from ORACLE; planned dates from MS Project
Fig. 3: Schedule information per object: planned start, planned end, simulated start and simulated end dates for the specified objects and parts
Fig. 4: Data viewer FPS
One function of the FPS is to display data in a hierarchical assembly tree. In this tree structure,
details can be shown for each module, and schedules as well as other attributes can be changed.
Another important function is to show the user which modules have to be assembled in a certain week
according to the schedule. For this purpose a table is generated. The different colors of the modules
give an overview of the availability of the parts needed to assemble the module. Green means all parts
are available in the store; yellow shows that the cutting workshop has simulated this working package
and that the parts will arrive in time. Black shows that no clear information can be given. The
missing parts can be printed.
Fig. 6: Space allocation with FPS
The modules can be placed by drag and drop into the assembly hall, Fig. 6. Below the figure of the
assembly area of the hall, the user finds the modules that have to be assembled in that hall in that
week. If a module is assembled over several weeks, it will hold the position it was once given until
the user models a move to another area of the hall. These positions and time windows are stored as
move operations in the module object and can be passed to the simulation, which simulates these
data. This is helpful to define a starting situation for the simulation or to give special places to some
modules that the simulation would not choose by itself. In the end the user can print this figure at a
large scale to communicate his plan to his workers.
The budget of each module is linearized over the given assembly period. The required man-hours of
all modules present in the hall in a week are then cumulated and can be displayed as a curve. This
curve supports the foreman in his workforce planning, Fig. 7.
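A sketch of this linearization with assumed module data: each budget is spread evenly over the module's assembly weeks and the weekly shares of all modules are cumulated.

from collections import defaultdict

modules = [  # (name, budget in man-hours, first week, last week) - example data
    ("M101", 1200.0, 1, 4),
    ("M102",  800.0, 3, 6),
    ("M103",  500.0, 5, 6),
]

weekly_hours = defaultdict(float)
for name, budget, start, end in modules:
    weeks = end - start + 1
    for week in range(start, end + 1):
        weekly_hours[week] += budget / weeks   # linearized share per week

for week in sorted(weekly_hours):
    print(f"week {week}: {weekly_hours[week]:7.1f} man-hours")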
When the user has checked the data and has modeled the starting situation for his assembly hall, he
can start the simulation. The FPS prepares the data for the simulation model and starts and
controls the model using the COM interface. The simulation software used is “eM-Plant” from
Tecnomatix (UGS).
Fig. 9: Detailed simulation model of NSWE hall 17
3.3 Optimization
The Optimization is done by a Genetic Algorithm developed by Tecnomatix. The GA is ideal to
combine with the simulation model. The model developer has to define the chromosomes and
methods to calculate the fitness value of each generation. The design parameters of the GA are:
- Duration of assembly of each module
- Starting date of assembly for each module (sequence)
The GA works with evolution mechanisms like selection, mating, crossover and mutation, e.g.
Goldberg (1999). As in Darwinian evolution, the fittest solution/individual is favoured. Hence this
algorithm can find good solutions, but the true optimum itself can only be found by chance and is not
the target; the target is to give good solutions to the user. The GA generates solutions for the
parameters that are set in the simulation model. Then the simulation model is started, and in the end
the objective function is calculated, which represents the fitness of the solution. If the solution has a
high fitness (in case of minimization the system takes the reciprocal), it is more likely to reproduce
itself in the next generation. This is the way the GA develops in the direction of good solutions. In
general the GA works as displayed in Fig. 10.
In the GA two chromosomes are used. One models the assembly duration and the other one is used
for changing the sequence of the module assembly. Both chromosomes assign an array of min/max
numbers to each module. The starting date, e.g., can be changed between ±30 days (Fig. 11). The GA
might assign +10 days to a module; hence its starting date is postponed and another module can start
earlier: the sequence of assembly is changed.
Fig. 10: Workflow of the GA: initialising the first generation; evaluation of the generation
(simulation); selection & mating; crossover & mutation; evaluation of the new generation; stop;
optimized solution
The objective is to minimize the variance of the man-hours of each week with respect to a mean
value, eq. (1). The mean value can be calculated, or a desired man-hours curve can be defined. If the
mean is calculated, the personnel resource is not fixed!
Fitness = 1 / ( Σ_i ABS( Man-hours_Week_i − μ ) + Σ_i Tardiness_i )    (1)
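A minimal sketch of this fitness evaluation following eq. (1) as reconstructed above; the small guard term in the denominator is an addition to avoid division by zero, and the numbers are invented.

def fitness(weekly_man_hours, tardiness_per_module, target=None):
    # mean man-hour level, or a given target curve value
    mu = target if target is not None else sum(weekly_man_hours) / len(weekly_man_hours)
    deviation = sum(abs(h - mu) for h in weekly_man_hours)
    penalty = sum(tardiness_per_module)          # punishment for late modules
    return 1.0 / (deviation + penalty + 1e-9)    # reciprocal for minimization

print(fitness([400.0, 520.0, 480.0, 600.0], [0.0, 0.0, 5.0]))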
The boundary condition is that no date of the planned slipway assembly is allowed to be missed. This
is added as a penalty to the objective function to be minimized. The boundary condition of limited
space is tested by a simulation. The objective function is calculated after each simulation of an
individual; one individual is a set of solutions for the design parameters. To speed up the optimization
calculation, the simulation of the space allocation is done with a more abstract model. This dispenses
with the nesting and manages the assembly area in a table, so that at each point in time it can be
calculated how much assembly area is available. The model is tuned by a space utilization factor that
accounts for the space losses of a nesting. Hence, after optimization it is necessary to check the
solution with the space allocation simulation including nesting.
4. Application
4.1 Operative workflow
Fig. 12: Operative workflow of the foreman: download data; check the data (OK?); position the
modules in the hall; print the arrangement of the hall; start working; report the used man-hours; set
parameters (assembly duration, start)
The foreman can check his current plan and decide how he wants to arrange all the modules in his
hall. Then he can check the man-hours curve for the next weeks. This curve depends on the dates the
foreman sets in the production control system. If the foreman wants, he can start a simulation just to
see what the simulation suggests for the next weeks. The automated nesting performed by the
simulation model is not important for the foreman, but the information whether he has to expect a
bottleneck situation in the future is. In the end the foreman can print the planned nesting arrangement
for his hall to communicate his plans to his workers.
4.2 Tactical workflow
The FPS is also a tool for the planner. With the FPS he can check the future availability of assembly
area. For this, he can use the automated nesting, which makes the planning more comfortable. In
addition, the optimization suggests to him the assembly duration and start dates for each module for a
harmonized utilization of the personnel resource.
Fig. 13: Tactical workflow of the planner: download data; check the result (OK?); if necessary,
change the scheduling of the ship
4.3 Optimization
The optimization is very easy to use. The user just has to define how many generations he wants to
calculate and how large one generation is. Then he can start the optimization. In the end the GA sets
the best solution into the simulation model. In the following, several optimization results are
presented. The parameter changed in this case study is the assembly duration; the sequence of
assembly is not changed. The optimization is done with the fast-calculating simulation model without
nesting. Fig. 14 displays the starting situation that has to be improved.
The man-hour curve in Fig. 14 is characterized by many ups and downs. This makes working
inefficient: hiring and firing personnel on a weekly basis is not possible. Hence, there is too much
personnel from time to time, or the planned dates cannot be reached.
Fig. 14: Man-hours starting situation
Fig. 16 shows the results of optimizing the man-hours with the slipway assembly dates of each
module as boundary condition. The big difference between Fig. 16 and Fig. 15 is the peak in the ninth
week caused by the slipway assembly dates; the rest of the variance is similar to Fig. 15. These results
prove the effectiveness of the new system. Finally, Fig. 17 shows the typical behavior of the GA in
improving the fitness value.
5. Conclusion
The introduced new space allocation tool for the NSWE assembly halls is a highly integrated system
that can be used by the foreman and the planner. The foreman can manage his daily business and plan
his assembly hall with real geometry figures of the modules. He sees the modules he has to assemble
according to the schedule and can visualize the man-hours he will need to assemble all the modules.
If he makes changes, he does this in the production control system; hence the information is displayed
to all others who have to know about the change. Getting quick information about the availability of
the material makes it easy for the foreman to plan the operative sequence of the module assembly.
The planner gets a powerful tool that helps him at an early stage to optimize his plans with respect
to personnel and assembly area utilization. He needs a BOM structure, budgets, a first schedule and
the surface figures of the modules; then he can start. The results presented in the last chapter show
that the GA is capable of minimizing the variance of the required man-hours. In the end, the planner
has to check the feasibility of the results.
Literature
GOLDBERG, D.H. (1999), Genetic Algorithms in Search, Optimization and Machine Learning,
Massachusetts USA
LANGER, Y., BAY, M., CRAMA, Y., BAIR, F., CAPRACE, J.-D., RIGO, P. (2005), Optimization
of Surface Utilization Using Heuristic Approaches, COMPIT 2005, Hamburg Germany, pp. 419-425
MASSOW, C.; SIKSNE-PEDERSEN, I. (2004), Computer Integrated Planning and Resource
Management in Shipbuilding, COMPIT 2004, Siguenza Spain, pp. 378-390
NEDEß, C., FRIEDEWALD, A., HÜBLER, M. (2004), Simulation im Schiffbau – Zukunftssicherung
durch Planungsgenauigkeit, Hansa International Maritime Journal 2004 Jan, pp. 15-17
STEINHAUER, D. (2005), SAPP-Simulation Aided Production Planning at Flensburger, COMPIT
2005, Hamburg Germany, pp. 391-398
An Application for the Management of Standardized Parts in
Steel Structural Design
Abstract
Today, the design process in the maritime industries is characterized by a complex interaction of many
partners. Often, identical or similar design tasks are performed by different partners working in parallel. A
significant amount of time is needed for information exchange and for the coordination of design activities.
Within the collaborative research project Context Sensitive Structural Components (KonSenS), research
is carried out with the objective to develop IT-based methods for the management and application of
standards for steel structural design. As a result, improved standardization is achieved, which leads to
the exploitation of series effects; manufacturing costs are reduced. In this paper a flexible solution for
the handling of standards is presented.
Access to an electronic catalog is provided to all partners concerned via network connections. Following
a single-source approach, only one standards database is needed for many different design projects.
Configuration on a per-project basis allows for the tailoring of the standards to the problem at hand. In
addition, a subset of the catalog information is selected for certain design tasks or design contexts. These
selection methods restrict the options available to the user; the probability of incorrect design decisions is
reduced. For common design tasks, workflows for the selection of optimal solutions are defined. Taking
up ideas defined in the standardized WfMC formats, these workflows incorporate strength, manufacturing
and cost aspects. If required, a computation module is used to evaluate the performance of a solution.
Using the standards database, designers working on identical problems will find identical results;
standardization is improved.
1 Motivation
As part of the ongoing trend towards globalization, many engineering processes are outsourced to
specialized partners. This behavior can also be seen in the design process in the maritime industry. Here,
in a complex outsourcing scenario every partner involved has its own specific tasks.
A close and successful cooperation requires the use of evolved communication strategies. In ship design,
information about design procedures, best-practice recommendations and specifications about the problem
at hand needs to be communicated. For this purpose standardization is used as a tool to define guidelines
for all engineers involved. Though, even with elaborate documentation about standards etc., the
interpretation of the mostly paper-based information is left to the engineer. For identical problems, different
solutions might be developed based on the background, knowledge and interpretation of a person. A
reduction of series effects and thus an increase of costs can be registered. For approaches to a collaborative
design process and their application in the industry see [1]. With the ”KonSenS Electronic Catalog”,
in the following called KonSenS, an application server for the management of standardized parts based
on [2, 3] is developed. An IT-based solution allows the direct retrieval of up-to-date information about
applicable solutions by an engineer. With interfaces to CAD-systems, solutions are directly linked to the
standards database in the application server KonSenS. Methods for the application of workflow-based
design wizards and for the handling of related documentation or other information are also provided.
2.1 Architecture
The main objective of the research project is the centralized storage of all information for access by all
partners involved in a certain project. For this purpose a client–server–architecture was chosen. Thin
clients are used to access information objects on the server, i. e. the information is stored on the server.
Using the SOAP protocol for communication, clients access the application logic, like design wizards or
rules via web services, see section 2.4. By using a standardized communication protocol clients can be
implemented in many programming languages. The integrability into third–party systems like CAD–
systems is eased. For a sample implementation see section 3.2.
An efficient data model is a fundamental requirement for the fast access to information objects and for
the communication between server and client. Also, flexibility is a key requirement for an efficient storage
of all types of information objects. Therefore, the data model uses three layers of abstraction, namely the
Abstract Model, the Meta Model and the Instance Model.
On a generic level the Abstract Model is used to define associations between any type of information
object stored in the application server. The structure of the Abstract Model is shown in figure 1 in UML
notation.
The classes Entity and EntityGroup are super classes of all other classes used in the data model.
Entity: This class is the parent class of all classes used to represent data objects in the more specific lower
layers of the data model, i. e. it acts as a very abstract container class for the storage of information
about e. g. brackets. Entities have a name, a level-unique identifier and one or more associated
EntityGroups. Associations to EntityInformation and derived objects can also be stored.
EntityGroup: Entities are grouped by properties, e. g. referring profiles or yard names, using
EntityGroup objects. EntityGroups have a name, a level-unique identifier and a parent group, which
also is an EntityGroup. Tree-like structures for the organization of standardized solutions into a
hierarchy can be defined using EntityGroups. For each attribute that should be sortable, like e. g.
the thickness of a plate, an EntityGroup-based tree is defined.
Figure 1: The Abstract Model
EntityInformation: EntityInformation is the abstract superclass of all objects used to store additional
information for a standard object like comments, revision tags, links etc. EntityInformation objects
have an author and a time stamp depicting the creation date.
Using these general classes a structure for search in the standards database can be defined. Grouping
of standard objects based on certain attributes is possible, too. Any kind of object can be stored and
indexed. Associations to arbitrary pieces of information can be defined.
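Since the project's prototype clients are written in Python (section 3.2), a sketch of the Abstract Model in Python may illustrate the structure; the field names follow the description above, not the actual source code.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class EntityInformation:            # comments, revision tags, links etc.
    author: str
    created: datetime = field(default_factory=datetime.now)

@dataclass
class EntityGroup:
    name: str
    uid: int                        # level-unique identifier
    parent: Optional["EntityGroup"] = None   # enables tree-like structures

@dataclass
class Entity:                       # abstract container for standard objects
    name: str
    uid: int
    groups: List[EntityGroup] = field(default_factory=list)
    information: List[EntityInformation] = field(default_factory=list)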
Based on the Abstract Model, the Meta Model, as shown in figure 2, is used to describe the structure of
all standardized objects stored in the database. Comparable to an object-oriented programming language,
objects and attributes can be defined. With such an approach, types of standardized parts are defined.
For the Meta Model the following classes are used:
MetaObjectGroup: This class is derived from the class EntityGroup. It groups the MetaObjects
which are associated with a MetaObjectGroup object. This structure is the base for browsing the
database, see chapter 2.4.
MetaObject: The class MetaObject, derived from Entity, represents the description of a real world
object like a bracket, a bulkhead or a stiffener. Attributes are associated with it. Grouping is done
using MetaObjectGroup objects.
MetaAttributeGroup: Instances of this class are used for the grouping of attributes, i. e. to link
certain attributes, like e. g. the thickness of a plate, to a group named e. g. ”geometric attributes”.
It is also derived from EntityGroup.
MetaAttribute: MetaAttributes define the attributes for the Meta Model. Therefore, a data type is
given to define the type and range of attribute values. Values for any attribute are stored at the
Instance Model level. MetaAttributes are associated to MetaObjects, so a MetaObject ”BCB” has
a MetaAttribute ”MAT”, which describes the thickness of the bracket ”BCB”.
E. g. for brackets a shipyard defines, amongst others, the bracket types BCB, BLK and BKB as standard
types with a certain shape, material, thickness and further information. For this purpose in the meta
model three different MetaObject instances are created; relevant attributes are defined by MetaAttribute
instances.
While the Meta Model defines attributes etc. for a type of a standardized object, at the level of the
Instance Model values are assigned to each object. So, instances have attribute values that are either
calculated by the rule-based computing system or saved as static values in the database.
Figure 3 shows the associations of all classes at the Instance Model. Classes are:
Figure 4 shows two InstanceObjects for a MetaObject named ”BCB”, a bracket type. This bracket
type ”BCB” has three MetaAttributes. Also, two InstanceObject instances ”BCB 120” and ”BCB 140”,
representing the real brackets with their geometric parameters, are shown. Instances of the class
InstanceAttribute are associated with ”BCB 120” and ”BCB 140” for the storage of attribute values.
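The meta/instance split can be sketched with plain dictionaries; the attribute names are taken from the text and the values are invented for illustration.

meta_bcb = {                        # MetaObject: the bracket *type* "BCB"
    "name": "BCB",
    "attributes": {"A": "INT", "MAT": "FLOAT", "NOTCH": "STRING"},
}

instances = [                       # InstanceObjects: the real brackets
    {"meta": "BCB", "name": "BCB 120", "values": {"A": 120, "MAT": 8.0, "NOTCH": "R35"}},
    {"meta": "BCB", "name": "BCB 140", "values": {"A": 140, "MAT": 10.0, "NOTCH": "R35"}},
]

def conforms(instance, meta):
    # an instance may only carry attributes its MetaObject defines
    return set(instance["values"]) <= set(meta["attributes"])

print(all(conforms(i, meta_bcb) for i in instances))   # True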
For navigation within the data model two different methods are implemented. Firstly, navigation in the
structure defined by the group–type–instance structure of the standards, i. e. browsing, is available.
The second alternative uses search to extract the relevant information and its structure. Combining both
methods is also possible.
2.3.1 Browsing
Browsing is the navigation through the Meta Model with its associated instances; it is shown in the
left part of figure 5. First, the root nodes are shown. In this example only one root node, ”part”, is
presented. By selecting one of the root nodes, the related child nodes ”bracket” and ”cutout” are
retrieved from the standards database. These nodes are instances of the MetaObjectGroup class. These
objects also have child nodes, either MetaObjectGroup or MetaObject instances. MetaObject instances,
here ”BCB” and ”KL”, are leaf nodes in the Meta Model tree, but they have child nodes on the Instance
Model level (BCB 120, BCB 140, KL 140) that are of type InstanceObject. On this level the end of
the tree is reached; that means instances of InstanceObject are always leaves of the browsing tree.
Once the leaves of the browsing tree are reached, all associated attributes with their values (A, B,
MAT, NOTCH, ...) and presentations can be read from the database. The information is presented by
the client, as shown in figure 6.
2.3.2 Searching
With browsing the hierarchical structure of the data model is used to retrieve a structured representation
of the standards database. Often, for certain problems the engineer at the shipyard or in the design office
needs to retrieve only a subset of a group of instances. Also, for certain design wizards the retrieval of
standardized objects with certain values for some parameters is required.
For this purpose, search for InstanceObjects is based on the grouping defined by instances of type
InstanceGroup. Using these instances, tree-structured graphs are created where each tree represents a
property of the searchable InstanceObject objects. The root nodes of a search tree are called systematics.
On the right of figure 5, examples of search trees are shown. E.g. using the systematics ”Yard”, all
standard objects available for a given yard can be found. For a flexible definition of queries for information
retrieval a query language is supported. Omitting basic definitions for comparators and parameter values,
the characteristics of the query language, defined in a simplified EBNF, are as follows:
As an example, a query retrieves all standard objects of type (S=part, G=bracket), i. e. all brackets,
that can be assembled on flat bars (S=design, G=refprofile, G=FP, G=200) and that have a plate
thickness of less than 12 mm (mat < 12.0). The definition of more complex queries can be performed
using logical operators like AND, OR and NOT.
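A sketch of how such a query could be evaluated against the instance data; the data set and the tiny operator table are illustrative, not the actual query engine.

brackets = [  # invented instance data with systematics groups and attributes
    {"name": "BCB 120", "groups": {("part", "bracket"), ("design", "FP200")}, "mat": 8.0},
    {"name": "BCB 140", "groups": {("part", "bracket"), ("design", "FP200")}, "mat": 12.5},
    {"name": "KL 140",  "groups": {("part", "clip")},                         "mat": 6.0},
]

def query(objects, required_groups, attr, op, value):
    ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b, "=": lambda a, b: a == b}
    return [o["name"] for o in objects
            if required_groups <= o["groups"] and ops[op](o[attr], value)]

# all brackets on flat bar 200 with plate thickness below 12 mm:
print(query(brackets, {("part", "bracket"), ("design", "FP200")}, "mat", "<", 12.0))
# -> ['BCB 120']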
2.4 Communication
As mentioned above, the communication between server and clients is based on the SOAP [5] protocol.
SOAP is a lightweight XML-based messaging protocol used to encode the information in web service
request and response messages before sending them over a network. SOAP messages are independent of
any operating system or protocol and may be transported using a variety of Internet protocols, including
SMTP, MIME and HTTP. A SOAP message is modelled as a head-body pattern. A SOAP envelope
acts like a container insofar as it stores a message header and a message body. The header holds
information for authorization, routing etc., while the body contains the user data for the web service.
Because firewalls are normally used to protect the internal networks of shipyards etc., the more
comfortable or more powerful communication protocols [4] for client-server applications like Java RMI,
CORBA or DCOM are not used in the scenario described. These protocols, while offering more advanced
features like remote method invocation and the passing of complex objects, require complex configuration
if used across firewalls or proxies.
For browsing, search and more advanced features, web services are published using the Web Service
Description Language (WSDL) [6] and are used by client applications to access the data and application
logic of the application server KonSenS. With such a thin-client based approach, enhancements to the
capabilities of the system or bug fixes to the application logic can often be deployed without the complex
and time-consuming need to update all clients.
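A minimal sketch of such a SOAP request using only the Python standard library; the endpoint, namespace and operation name are assumptions, not the actual KonSenS web service interface.

import urllib.request

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <search xmlns="urn:konsens">   <!-- hypothetical operation -->
      <query>S=part;G=bracket;mat&lt;12.0</query>
    </search>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "http://server.example/konsens",          # hypothetical endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:konsens#search"},
)
# response = urllib.request.urlopen(request)  # would return the SOAP response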
Figure 5 shows a part of the data model used for the application in the steel structural design process.
Shown are three InstanceObjects ”BCB 120”, ”BCB 140” and ”KL 140”. To the left of these objects
the corresponding Meta Model is shown. On the right side the systematics used for searching are
presented. Here, the objects can be grouped by:
As seen in the chapter above, the Meta Model defines the main syntax and semantics of all objects used.
For the usage in steel structural design the following MetaObjects and -Groups are defined:
MetaGroups: These objects are used for browsing the tree to the InstanceObjects, i. e. they define a very
general structure for all standardized parts. The following groups are defined:
Part: Parts are main structures used in steel structural design. These are solid structures. Currently,
the following parts are implemented as child groups:
• brackets,
• clips.
Figure 5: Sample from the KonSenS Data Model
Feature: Features are geometric details of parts like holes or cutouts, i. e. these are objects that
modify the geometry of existing parts. As an example cutouts are currently implemented as
children of features.
MetaObjects: For steel structural design MetaObjects are used to define parts used. These are stored as
child nodes in the Meta Model tree. In figure 5 the nodes ”BCB” and ”KL” represent MetaObjects.
The KonSenS Meta Model is defined by an XML file. As an example, the following excerpt of a definition
file shows the definition of a MetaObject called ”BCB”, a certain type of bracket.
<metastructure>
...
<attributegroups>
<attributegroup name="geometry">
<attribute name="A" type="INT"/>
...
<attribute name="OFF" type="FLOAT"/>
</attributegroup>
<attributegroup name="common">
<attribute name="name" type="STRING"/>
</attributegroup>
...
</attributegroups>
<objectgroups>
<objectgroup name="part">
<objectgroup name="bracket">
92
<object id="bcb">
<ref_presentations>
<presentation_ref type="TRIBON"/>
<presentation_ref type="IMAGE"/>
</ref_presentations>
<ref_attributes>
<attribute_ref name="name"/>
<attribute_ref name="A"/>
<attribute_ref name="MAT"/>
<attribute_ref name="NOT"/>
<attribute_ref name="NOA"/>
<attribute_ref name="OFF"/>
</ref_attributes>
</object>
</objectgroup>
</objectgroup>
<objectgroup name="feature"/>
...
</objectgroups>
...
</metastructure>
The MetaObject shown has several attributes with names based on the Tribon syntax. As seen, the
MetaObject ”BCB” is a child node of the MetaGroup ”bracket”, which in turn is a child of ”part”.
In the example two presentations are defined: one is used for the integration in Tribon, the other one
is an associated image. With multiple representations a single-source approach can be used, i. e. one
common data set is used where for each target system - this can be a CAD-system, a visualization
tool etc. - only the relevant attributes are shown.
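Reading such a definition file is straightforward with standard XML tools; the following sketch parses a trimmed version of the excerpt above and lists each object with its referenced attributes.

import xml.etree.ElementTree as ET

xml_text = """<metastructure>
  <objectgroups>
    <objectgroup name="part">
      <objectgroup name="bracket">
        <object id="bcb">
          <ref_attributes>
            <attribute_ref name="A"/><attribute_ref name="MAT"/>
          </ref_attributes>
        </object>
      </objectgroup>
    </objectgroup>
  </objectgroups>
</metastructure>"""

root = ET.fromstring(xml_text)
for obj in root.iter("object"):
    attrs = [a.get("name") for a in obj.iter("attribute_ref")]
    print(obj.get("id"), attrs)        # -> bcb ['A', 'MAT']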
At present 35 MetaObjects, i. e. 35 different types of standardized objects, are stored in the database.
The Instance Model stores all InstanceObject instances of the formerly described MetaObject instances.
Within the collaborative research project KonSenS information about standardized parts is delivered by
the project partners, three major shipyards in Germany. Table 1 gives an overview of the number of
different parts managed.
Similar to the Meta Model, instances are also defined in an XML file. Administrators are given the option
to automatically generate spreadsheet files from this description that can then be distributed to the
engineers for data entry. Completed spreadsheets are imported into the system. Changes to the data
model remain possible, i. e. round-tripping is supported.
3.2 Clients
Within the research project two clients are implemented as prototypes. These are used for demonstration
purposes. Both are implemented in Python; the pyQT framework is used for visualization and to generate
the user interface.
Figure 6: Screenshot of the Standalone Client
Standalone Client: This client is used as its name suggests, i. e. it provides an interface to the
standardized parts and related information stored in the standards database. By browsing the MetaGroup
tree all parts can be accessed. The client is also used as the front end for the document management
system.
Tribon integrated client: For the usage in steel structural design this client is integrated in the CAD-
system Tribon using the Vitesse application programming framework. Basic workflows like the
standards-conformant construction of brackets and cutouts are implemented. For this purpose
the structural model of the CAD-system is evaluated; possible solutions for a chosen problem are
retrieved using search and systematics. For a design task like the definition of a bracket, attribute
values are entered automatically as far as possible, thus reducing the number of user inputs required.
Due to the open design and the consolidation of the complete application logic into the server, the
development of additional clients that integrate CAD-systems or further applications into the system is
possible.
5 Acknowledgement
The work presented in this paper is supported by the German Federal Ministry of Education, Science,
Research and Technology under grant 03SX163D. The authors acknowledge the valuable discussions with
the partners of the project Konstruktionsstandards für schiffbauliche Strukturen zum Einsatz in
CAD-Systemen, KonSenS.
References
[1] Hyeong–cheol Kim, et al., Introduction to DSM Hull Modelling System ”COSMOS” based on
TRIBON M3, ICASS 2005
[2] M. Zimmermann, R. Bronsart, K. Stenzel, Knowledge Based Engineering Methods for Ship
Structural Design, ICASS 2005
[3] R. Bronsart, M. Zimmermann, Knowledge Modelling in Ship Design using Semantic Web
Techniques, COMPIT 2005
[4] Young-Soon Yang, A Study on the Web-based Distributed Design Application in the
Preliminary Ship Design, ICASS 2005
[5] SOAP Spec, http://www.w3.org/TR/SOAP
[6] Web Services, http://www.w3.org/2002/ws
Simulation Aided Production Planning in Block Assembly
Dirk Steinhauer, Flensburger Schiffbau-Gesellschaft mbH & Co. KG,
Flensburg/Germany, [email protected]
Stephanie Meyer-König, Flensburger Schiffbau-Gesellschaft mbH & Co. KG,
Flensburg/Germany, [email protected]
Abstract
At Flensburger Shipyard (FSG) simulation is now well established as the main tool for supporting
decisions in production and planning. The basic condition for this step was the development of the
Simulation Toolkit Shipbuilding (STS), which enabled the simulation team of Flensburger to build up
and maintain simulation models of the production effectively and efficiently. This toolkit is now being
developed and used within an international co-operation of shipyards and universities called
SimCoMar.
The main focus of using the simulation models at Flensburger is not the design of processes or
investment planning but continuous production planning. The possibility of considering the
dynamic relationships between the product and the production flow results in a reliable plan. Alternative
plans can be derived and evaluated very quickly, which leads to an optimum plan. The impact of
disturbances and changes can be shown immediately and the most cost-effective reaction can be
chosen. And last but not least, the communication of the plan to the people who have to make it reality
is much easier with simulation.
Recently another production area was added to the ones already being simulated: the block assembly
station. In this production station blocks are assembled and outfitted, to be painted later in the
painting halls and erected on the slipway as the hull. A simulation model has been built with the STS
considering the special characteristics of this production area: the space allocation and the control of
the various assembly processes. For using the model in the different planning phases, the interfaces
to the simulation database and the analysis functionalities have been further developed. With this tool
the plan for block assembly or its variations can be analysed and verified easily. The tool is not used
by simulation experts but by the planners and foremen on the shop floor in their daily business.
The paper presents the implemented tool for Simulation Aided Production Planning (SAPP) in block
assembly, pointing out its results and benefits.
1. Introduction
The main focus of using production simulation at Flensburger is not the design of processes or
investment planning but continuous production planning (SAPP, simulation aided production
planning), Steinhauer (2005). The possibility of considering the dynamic relationships between the
product and the production flow results in a reliable plan. Alternative plans can be derived and evaluated
very quickly, which leads to an optimum plan. The impact of disturbances and changes can be
shown immediately and the most cost-effective reaction can be chosen. And last but not least, the
communication of the plan to the people who have to make it reality is much easier with simulation.
The goal of the simulation activities at FSG is a simulation model of the whole shipyard, to enable a
holistic optimisation of the shipbuilding process and to evaluate the dependencies between the
different stations. Recently another production area was added to the ones already being simulated:
the block assembly station. The block assembly station at FSG is a pure assembly station on building
sites using the same big shipbuilding hall as the slipway. All part types from steel fabrication or
outfitting are assembled in this station. The blocks are afterwards carried by a heavy goods
transporter to a buffer station prior to conservation.
A staff of workers with a wide variety of qualifications works in this station: shipbuilders, welders,
pipe fitters, electricians, scaffolders and insulation workers. Part of the staff is provided by suppliers,
who have to be coordinated.
A special dependency of this station is the quality of the delivered parts. This quality can have major
effects on the process times or even on the assembly procedure.
Building up simulation models of a shipbuilding production can be done easily and quickly using the
Simulation Toolkit Shipbuilding (STS). The development of this library of reusable simulation tools
for shipbuilding production started at FSG in the year 2000. The STS is now further developed and
used within the international cooperation community SimCoMar (Simulation Cooperation in the
Maritime Industries). Members of SimCoMar are FSG, Nordseewerke Emden, Delft University of
Technology, Technical University of Hamburg-Harburg and the Center of Maritime Technologies.
The Simulation Toolkit Shipbuilding offers a variety of tools to model assembly processes. The main
aspects of the block assembly can be considered by combining the simulation tools assembly
control and space.
2.1 Assembly control tool
The basic assembly control is done by an assembly strategy which can be defined by the user. These
assembly strategies contain a certain sequence of assembly stages. The assembly stages consist of
process steps for part types which optionally can be worked on in parallel.
By these strategies different assemblies can be standardised while keeping the possibility to define
individual strategies for special assembly procedures.
The general functions of the AssemblyControl can be tailored to the specifics of every application by
programming user-defined controls. By these controls the assembly process can be modified in many
ways to fit the problem's needs.
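As a sketch of this stage/step structure (class names, part types and durations are invented and do not reproduce the actual STS interface):

# Sketch of the strategy/stage/step structure described above; all names,
# part types and durations are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    part_type: str
    hours: float
    parallel: bool = True        # may run alongside other steps of the stage

@dataclass
class AssemblyStage:
    steps: list = field(default_factory=list)

    def duration(self):
        par = [s.hours for s in self.steps if s.parallel]
        seq = [s.hours for s in self.steps if not s.parallel]
        return max(par, default=0.0) + sum(seq)

strategy_902 = [
    AssemblyStage([ProcessStep("plate", 6.0), ProcessStep("profile", 4.0)]),
    AssemblyStage([ProcessStep("pipe", 3.0, parallel=False)]),
]
total = sum(stage.duration() for stage in strategy_902)   # 6.0 + 3.0 = 9.0 h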
Fig.: Structure of the assembly control (settings, data tables, assembly control, assembly model)
2.2 Space tool
The space tool models the allocation of production areas by constructions or parts of different sizes
Steinhauer, Hübler and Wagner (2005). The production area is approached as a rectangular matrix
with a flexible size of matrix fields to be able to consider the allocation as accurate as needed. Within
the space special areas can be defined for different purposes e. g.
• blocked areas where nothing can be placed (crane posts, buildings, ways, etc.)
• areas for special purposes (building sites)
The space tool provides automatic allocation of the space by predefined rules. Additionally a
graphical functionality enables the user to place certain constructions or parts manually.
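A minimal sketch of such an allocation matrix with blocked areas and a simple first-fit placement rule (grid size, markers and coordinates are invented for illustration; the STS rules themselves may differ):

# Sketch of a rectangular allocation matrix with blocked areas and a
# simple first-fit placement rule; dimensions and markers are invented.
import numpy as np

FREE, BLOCKED, USED = 0, 1, 2
space = np.zeros((20, 50), dtype=int)      # production area as a matrix
space[:, 24:26] = BLOCKED                  # e.g. a crane post or a way

def place(space, rows, cols):
    """Return the top-left cell of the first free rows x cols patch, or None."""
    h, w = space.shape
    for r in range(h - rows + 1):
        for c in range(w - cols + 1):
            if (space[r:r+rows, c:c+cols] == FREE).all():
                space[r:r+rows, c:c+cols] = USED
                return (r, c)
    return None

print(place(space, 8, 12))   # allocate a building site for one block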
To use the simulation model in support of production planning, various current data have to be
available to the simulation model:
• Assembly tree
• Assembly part description (delivery date, geometry, weight)
• Planning data
• Manning level
• Production status
Regarding the block assembly, the assembly tree is needed, describing the different assemblies by their
part lists. The parts to be assembled are then related to the block. This relation is augmented with part
information such as delivery dates from suppliers or previous production stations, geometry, weight,
etc. This data is taken from an internally developed tool for digital planning of building methods
called DigiMeth.
The planning data contains the assembly activities per block with planned dates and the allocation of
building sites. This data is taken from the network planning tool ACOS plus 1.
Resource data about the manning level is taken from the simulation database. Based on ERP data the
number of workers with certain qualifications is defined there by the foremen.
The production status is needed to synchronise the simulation model with reality. Simulation runs
can thereby be based on the actual situation in the production. The production status contains the real
start and end dates for the operations. For block assembly the dates are collected per block and for
certain crucial process steps within the assembly.
The development of a simulation model requires an intensive analysis of the processes. As a result a
process model was visualized and the typical assembly types were structured. This was the input
needed to define the right parameters for the simulation tools in the model. After building up an
executable model, validation followed to assure that the simulation model behaves sufficiently close
to reality.
Eleven assembly types were defined, e.g.
• Side blocks consisting of closed sections
• Side blocks without closed sections and superstructure blocks
• Blocks covering the whole ship’s width (bow and ring blocks)
The last step in analysing the processes was the determination of suitable process times and
parameters. In a first round this was done by collecting calculation data from the planning
department and by asking the workers in the station. The values were afterwards adjusted in the
validation phase.
Each of the stages contains a number of process steps for different part types which can be run in
parallel where reasonable. Assembly stage 5 of strategy 902, for example, contains 24 process
steps for assembling sub-assemblies, plates, profiles, mass components, pipes, brackets and fittings.
Fig. 3: Screenshot of the simulation model showing the allocation of the station
Most of the simulation tools of the STS also offer 3d functionality so that the model can very easily be
animated in 3d (Fig. 4).
4.4 Validation
For the validation of the simulation model special output features were implemented. Part statistics
were collected on process level and an automatic comparison of real and simulated duration was
programmed. Supported by this statistics the deviations were contemporarily analysed until the
simulation result fits the performance of the production in reality sufficiently. The main set screws in
the simulation model were the assembly strategies, the process times and the priority rules.
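A sketch of such an automatic comparison might look as follows; the data layout and the 10 % tolerance are invented:

# Sketch: compare real vs. simulated durations per block and flag large
# deviations; the record layout and the 10 % tolerance are invented.
records = [  # (block id, real hours, simulated hours) - sample data
    ("B101", 420.0, 401.0),
    ("B102", 365.0, 455.0),
]
for block, real, sim in records:
    deviation = (sim - real) / real
    flag = "CHECK" if abs(deviation) > 0.10 else "ok"
    print(f"{block}: {deviation:+.1%} {flag}")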
The simulation model of the block assembly station is now supporting production planning as one
of the SAPP tools at FSG. The general procedure of SAPP can be described as follows: for block
assembly, as a first step the production status is collected and written to the simulation database.
The simulation run is then started in the past and synchronized with reality up to the current date.
This assures that the actual situation in the production is reflected in the simulation model.
For analysing the simulation results the simulation database provides various output functions. One
example is the Gantt chart showing bars either for the blocks or for the allocation of building sites,
as shown in Figure 5. The building sites starting with a 'T' are temporary sites in the slipway area
which can only be used for a certain time after launching.
Another important output feature is the utilisation chart of personnel, as shown in Figure 6.
Bottlenecks caused by personnel of a certain qualification become visible in those charts.
6. Outlook
The usage of simulation within the planning process will be widened by adding automatic or
semi-automatic allocation of the building space.
Another field of work will be the strategic process improvement in the block assembly station and in
its collaboration with the other stations. One question recently raised by the production management
was how to define the best shift arrangement for the different qualifications. Other problems to be
solved by simulation result from the use of the same crane resources as the hull erection on the
slipway and from the dependencies on other transport means. These questions are to be answered by
connecting the simulation model of block assembly with other simulation models of FSG's
production areas.
Getting the best simulation results in terms of costs, stability in utilization, keeping the schedule or
reducing the production cycle times can take a lot of time, depending on the number of parameters to
vary. To increase the efficiency of using the simulation, and in some cases to be able to handle the
simulation problem at all, the coupling of simulation and optimisation will be researched soon. A
research project funded by the German Ministry of Education and Research has been initiated by
FSG and will be carried out together with Nordseewerke Emden and Technical University of
Hamburg-Harburg.
Literature
Automated Discharge Monitoring Report System
for Shipyard Compliance with the Clean Water Act
Bhaskar Kura, University of New Orleans, New Orleans, LA, USA, [email protected]
Karthik Kura, SofTek Systems, Inc., Metairie, LA, USA, [email protected]
Abstract
Shipbuilding and ship repair facilities are involved in a variety of manufacturing and repair
activities that result in wastewater which requires treatment prior to discharge into surface waters.
Important wastewater pollutants discharged by shipyards include heavy metals, organics, oil and
grease, total suspended solids, and others. Many of these wastewater contaminants exert oxygen
demand, which is measured in terms of biochemical oxygen demand (BOD) and chemical oxygen
demand (COD). Because of the potential to damage the water quality of the receiving stream,
pollutants discharged by shipyards require proper management.
The Clean Water Act (CWA) of the United States employs a variety of tools to restore and maintain
the physical, chemical, and biological integrity of the nation’s surface waters. Under the CWA
authorization, the US Environmental Protection Agency (EPA) uses the National Pollutant Discharge
Elimination System (NPDES) permit program to control the water pollution by regulating point
sources that discharge pollutants into surface waterbodies. Under this NPDES program each
shipyard that discharges pollutants into the surface waters is required to obtain an NPDES permit
which is administered by the state agencies under authorization from EPA. These NPDES permits
have numerical limits for each pollutant discharged from each discharging outfall.
To demonstrate compliance with the NPDES permit requirements, shipyards have to monitor each
outfall for each permitted pollutant as well as the quantity of wastewater discharged at a specified
frequency and report the results in a standardized fashion. These discharge monitoring reports
(DMRs) can be tedious and time consuming for big shipyards having numerous outfalls. Shipyards
are often faced with many questions which include:
• How to manage NPDES compliance?
• How to avoid penalties?
• How to identify treatment plants that are showing declining performance?
• How to demonstrate historical performance to the regulating agency?
• How to minimize costs for complying with NPDES and CWA requirements?
To address this problem, UNO researchers have developed a knowledge-based intelligent system, an
Automated DMR System for shipyards. This paper presents its various components, user-friendly
features, and benefits, and shows how it can be adopted not only in the United States but anywhere in
the world.
Keywords: Shipyard wastewater discharges, NPDES Compliance, Automated DMR System, CWA
Compliance, Wastewater compliance costs.
1. Introduction
The United States has over 400 shipyards involved in the construction and repair of ships. Medium to
large size shipyards generate as much as seven million gallons of (process) wastewater annually,
excluding sanitary wastewater and storm water (KURA, 1998c). Such an enormous quantity calls for
efficient management of wastewater in a way that meets the environmental standards and compliance
requirements and protects aquatic life. Efficient management of shipyard
wastewater requires understanding of the pollutants present, their concentration variations, how they
compare to the discharge limits, and what factors (source specific contributions) might be responsible
for increased pollutant discharges.
Operations in the shipbuilding and repair industry are large scale and complex, and these activities
generate significant amounts of multimedia emissions (solid, liquid, and air); this paper, however,
focuses on management of wastewater through an Automated DMR System. Major shipyard operations that
generate wastes/pollutants include surface preparation, metal plating and surface finishing, solvent
cleaning and degreasing, machining and metalworking, and vessel cleaning (KURA, B. et al., 1996,
1998a, 1998b, 1999a).
Waste streams can be characterized based on knowledge of the processes/operations in a shipyard
that have the potential to generate wastewater. The wastewater generated from surface
preparation operations consists of spent abrasive, paint chips, and other surface contaminants.
Wastewater from metal plating and surface finishing operations comprises alkaline and acidic
cleaning solutions. Similarly, wastewater streams generated by cleaning and degreasing operations
contain aqueous cleaners and spent organic solvents. Other processes/operations in shipyards
generate wastewater whose constituents are determined by the method(s) employed in the various
operations.
Pollutants present in shipyard wastewater include biochemical oxygen demand (BOD), chemical oxygen
demand (COD), total suspended solids (TSS), phenol, oil & grease, and heavy metals. The type and
strength of the pollutants present in the wastewater streams determine the type of treatment that can be
efficiently applied to bring the wastewater below the discharge standards (KURA, 1998c, 1999a;
U.S. EPA, 1997).
Currently, most shipyards store information pertaining to environmental performance in the form of
flat files like Excel spreadsheets, Word documents, or hard files. Tracking and assessing
environmental releases using these methods is difficult, time consuming, and labor intensive. A
significant portion of environmental management personnel time is invested in tracking outfall
monitoring data, calculating average values, and generating monthly compliance reports. The
Automated DMR System presented in this paper should be a valuable tool in improving efficiency,
ensuring compliance with the applicable environmental regulations, and in overall cost savings.
2. Clean Water Act (CWA) and Applicability to Shipyard Discharges (U.S. EPA (a))
The basic statutes that govern the shipbuilding industry can be reviewed through a water quality
checklist, which addresses the following water-related requirements:
• Operations involving point source discharges (e. g., pipes, outlets, ditches) to navigable
waters;
• Operations involving marine equipment (e.g., boats, vessels, port facilities, drydocks, floating
drydocks, and building ways);
• Discharges to public sewers;
• On-site wastewater treatment works or cooling towers with discharges to public sewers or
navigable waters;
• Stormwater discharges; and
• Construction activities in navigable waters.
The basic framework for the regulations used in the checklist is the CWA, which was enacted in 1977
as comprehensive amendments to the Federal Water Pollution Control Act (enacted in 1956 and
amended in 1972). The CWA was most recently amended by the Water Quality Act of 1987. The CWA is
implemented through the NPDES permit program, which is the key component to control discharges
from industrial facilities and POTWs to surface waters of the United States. Under the NPDES permit
program regulations, the EPA may delegate authority to individual states to administer their own
permit program in lieu of the federal program.
In the absence of federal categorical standards for shipyards, CWA discharge limits are often
established on the basis of Best Management Practices (BMPs). Discharge permits are required for
all “point source” discharges of pollutants into waters of the U.S., including wetlands. Permits may
also be required for indirect discharges of pollutants into municipal collection and treatment systems.
The discharges are controlled under local or state pretreatment program requirements.
Permits are required from the Corps of Engineers for work in navigable waters and for disposal of
dredge or fill material in waters of the U.S. Finally, the CWA contains regulations on oil spill
prevention and run-off control from oil and hazardous substance storage areas through requirements
for SPCC plans. The provisions of 40 CFR, Part 112, establish requirements for equipment and
methods to prevent non-transportation-related discharges of oil from onshore and offshore facilities
that could reasonably be expected to discharge oil into navigable waters.
3. National Pollutant Discharge Elimination System (NPDES) Permit (U.S. EPA (b))
The CWA requires wastewater dischargers to have an NPDES permit issued by the concerned
regulatory agency establishing pollution limits and specifying monitoring and reporting requirements.
Among other sources, the NPDES requires all industrial point sources to obtain a permit to discharge
any type of wastewater into any receiving water body.
The NPDES permit outlines the discharge limits and also the testing and recordkeeping requirements
for the outfalls involved. The main objective of the NPDES permit program is to protect human health
and aquatic life and to see that every facility treats wastewater. The permits tend to be case specific as
well as outfall specific. Also, the permit outlines site specific compliance monitoring and reporting
requirements.
For example, a shipyard having three outfalls may have different discharge limits and monitoring
requirements for the three outfalls, depending on the type of wastewater involved. Apart from the
NPDES permit, the shipyard also needs other permits like the pretreatment permit, sanitary
wastewater discharge permit, etc.
Normally, the primary purpose of an NPDES permit is to establish enforceable effluent limitations. In
addition to effluent limitations, the NPDES permit establishes a number of other enforceable
conditions, such as monitoring and reporting requirements, a duty to properly operate and maintain
systems, upset and bypass provisions, recordkeeping, inspection, and entry requirements.
4. Concept of Knowledge-based Approach for Automated DMR System
The concept used in developing the Automated DMR System is illustrated in Fig. 1. This intelligent
system has the built-in knowledge base necessary to perform wastewater management tasks such as (1)
estimating storm water quantities based on catchment area type, (2) calculating average values for
wastewater parameters based on the number of samples collected, and (3) charting with minimum
effort so that years of data can be used to draw conclusions. The system can be divided into three
main parts: part 1 contains user-entered one-time data, part 2 contains continuous input of
wastewater quantity and characteristic data, and part 3 contains the reporting and decision-making
components.
Fig. 1: Concept of the Automated DMR System (compliance evaluation and decision support)
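One common way to estimate storm water quantities from catchment surface types is the rational method, Q = C·i·A; the sketch below assumes this approach (the system's internal model may differ), with invented runoff coefficients and areas:

# Rational-method sketch for storm water quantity per outfall; the runoff
# coefficients and catchment areas below are illustrative assumptions.
RUNOFF_C = {"roof": 0.95, "paved": 0.90, "gravel": 0.50, "grass": 0.20}

def peak_runoff_cfs(catchments, intensity_in_per_hr):
    """Q = C * i * A with A in acres, i in in/hr -> Q in cubic feet per second (approx.)."""
    return sum(RUNOFF_C[surface] * intensity_in_per_hr * acres
               for surface, acres in catchments)

outfall = [("paved", 3.0), ("roof", 1.2), ("grass", 0.8)]
print(f"{peak_runoff_cfs(outfall, 2.5):.1f} cfs")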
The original system was a desktop version with Visual Basic as the front end and Access as the back
end; it was then redeveloped as a Web-based application using the .NET framework. The application
as it stands now can run using just an Internet browser. The Automated DMR can run on any
operating system and can use any database, including SQL Server, Oracle, Sybase, etc. Because
.NET is compatible with ERP (Enterprise Resource Planning) applications and legacy systems,
interfacing with SAP and LIMS (Laboratory Information Management System) is easy and
efficient.
It is important to recognize that this system is a tailored application designed to meet all industrial
wastewater management requirements. It is a knowledge-based intelligent system developed through
years of collaborative research and development involving academia, shipbuilding partners, and the
shipbuilding research panel, by evaluating the best wastewater management options. Northrop
Grumman Avondale Industries, Inc. served as the industry collaborator during its initial development.
The UNO faculty members involved in the initial development were also active participants of the
National Shipbuilding Research Program (SP1 Environmental Panel) who had ready access to the
shipbuilding industry's domain knowledge and the environmental issues relevant to the industry
sector.
The decision support system developed has numerous advantages in improving productivity, saving
costs, and improving public image. The software can be customized for shipyards in any state by
changing the applicable state-approved reporting forms. Selected features of the system are
explained through screen shots and user options in the following sections.
To make modifications to existing outfall details, the user clicks the Edit button of the corresponding
outfall in the update column, makes the necessary changes and then clicks Update to save the
changes (Fig. 3).
Fig. 3: Outfall Details form being edited by the User for Outfall BDOUT01
New outfall details can be entered in the above form. Details include (1) outfall identification, (2)
permit number, (3) discharge number, (4) receiving water body, and (5) the outfall type and location
details. The user clicks Add Outfall to save the information on the new outfall (Fig. 4).
Fig. 4: Form to Add a New Outfall for Wastewater or Stormwater
Catchment (watershed) area details should be added if the outfall type is storm water. The user enters
the surface type of the catchment area details for the storm water outfall. This information is required
to quantify surface runoff during a rainfall event. If a selected outfall has catchments, their details can
be viewed by clicking on Show Catchment Area Details (Fig. 5).
Fig. 5: User Entry Form for Entering Catchment Area Details for Stormwater Quantity Computations
For an existing catchment area, the user selects the catchment by clicking the Edit button of the
corresponding row whose information needs to be modified, and clicks Update to save the changes
made to the outfall (Fig. 6).
Fig. 7: Outfall Selection Form for Entering the Discharge Limits (Onetime Input Data)
When an outfall is selected by the user, the discharge limits of the various pollutants of that
particular outfall are displayed. Loading limits are applicable only to pollutants whose concentration
units are 'mg/l' or for which the selected parameter is flow; if flow is selected as the parameter,
concentration limits do not apply. Sampling requirements have to be met, and both concentrations
and loading values have to be below the limits specified in this form to demonstrate compliance
(Fig. 8).
Fig. 8: Discharge Limits of a Specific Outfall with Permit Date Tracking Feature
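The loading corresponding to a measured concentration follows from the outfall flow; the sketch below uses the standard conversion loading [lb/day] = concentration [mg/l] x flow [MGD] x 8.34, with invented limits and measurements (the system's internal checks may differ):

# Sketch of a loading/concentration compliance check per pollutant; the
# limits and measurements are invented; the 8.34 lb/day factor for
# mg/l x million gallons/day is the standard conversion.
def loading_lb_per_day(conc_mg_l, flow_mgd):
    return conc_mg_l * flow_mgd * 8.34

conc, flow = 28.0, 0.45                 # TSS in mg/l, outfall flow in MGD
CONC_LIMIT, LOAD_LIMIT = 30.0, 120.0    # permit limits (invented)

load = loading_lb_per_day(conc, flow)
compliant = conc <= CONC_LIMIT and load <= LOAD_LIMIT
print(f"load = {load:.1f} lb/day, compliant = {compliant}")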
The user can enter discharge limits for a pollutant and select the loading units, concentration units,
sample type and reporting requirement of the pollutant from the drop-down list. Limits entered for the
pollutant can be saved by clicking the Add Parameter button. Once a new limit is added for an
already existing parameter, the earlier limit is terminated and the newly added limit is set as the
effective limit (Fig. 9).
Fig. 9: User Entry Form for Entering Wastewater Parameter and its Permit Limits
Fig. 10: Form for Viewing and Editing Wastewater Treatment Units and Its Details
When the user selects a particular wastewater treatment unit, the operating specifications of that unit
are displayed. The user is then prompted to enter or update the manufacturer's recommended
operating values for the parameters which determine the performance of the treatment unit (Fig. 11).
Fig. 11: Form for Viewing and Editing Design Data of a Selected Wastewater Treatment Unit
Existing operating specifications can be updated by clicking the Edit button; the user clicks the
Update button in the 'Operating Specification' frame to save the data (Fig. 12).
New operating specifications can be added in the same frame: a process control can be selected from
the drop-down list and limits can be added. The user clicks the Add Waste Treatment Method button
in the 'Operating Specification' frame to save the data (Fig. 13).
Fig. 13: Form for Viewing and Editing Wastewater Treatment Unit Details
For existing unit data, the user clicks Edit for the corresponding process control in the drop-down
list whose data is to be modified. After the modifications, the user clicks the Update button to save
the changes (Fig. 14).
Fig. 14: Form for Adding a New Wastewater Treatment Unit Details
For new unit data, the user enters the data. The units can be either rectangular or circular, which
determines the physical dimensions required. The user selects the type of treatment unit, enters its
physical dimensions, and clicks Add Wastewater Unit Detail to save them (Fig. 15).
Fig. 15: Form for Adding a New Wastewater Treatment Unit and Its Catchment Area Detail
Fig. 16: A Ready-to-Send DMR Generated by the Automated DMR System
Similarly, Fig. 18 shows the TSS variation of a process outfall as an x-y plot (TSS concentration vs.
time). The application's forms were designed so that the user can analyze the information with a
minimum number of clicks; for example, the form shown in Fig. 18 allows the user to select any
outfall, any wastewater parameter, and any time period with three clicks, which provides enormous
time savings. The user becomes knowledgeable about the various wastewater generating sources, the
wastewater treatment systems, and the seasonal variations without wasting time, as all the information
is stored and processed intelligently by the Automated DMR System. The user has more time to take
corrective actions rather than wasting time on data analysis.
The Automated DMR System presented in this paper should serve shipyards as well as other
industrial facilities. The authors welcome collaborative opportunities with international scientists,
application developers, and potential users of this system, as well as others within environmental
engineering and science, to promote quality of life by reducing life cycle costs, conserving
energy/natural resources, protecting worker/public health, and preventing environmental pollution.
Acknowledgements
This is the offspring of the Expert EMS application, which was originally sponsored by the Office of
Naval Research through the Gulf Coast Maritime Resources and Information Center (GCRMTC) and
the Maritime Environmental Resources and Information Center (MERIC). This project, Automated
DMR System was sponsored by SofTek Systems, Inc. Investigators gratefully acknowledge financial
and software support received from SofTek Systems, Inc. and its associates during the development of
the Automated DMR System without which it would not have been possible to complete this project.
References
KURA, B., and LACOSTE, S. (1996), Typical Waste Streams in a Shipbuilding Facility, Proceedings
of Air & Waste Management Association’s 89th Annual Meeting & Exhibition, Nashville, TN, June
24-28, 1996, pp. 96-WP70DB.04/1-14.
KURA, B., LACOSTE, S., and PATIBANDA, P. (1998a), Multimedia Pollutant Emissions from
Shipbuilding Facilities, United States Japan Natural Resources (UJNR) Conference, Washington,
D.C., 1998, pp. 297-318.
KURA, B., TADIMALLA, R., and SAHA, S. (1998b), Wastewater from Shipyards -
Characterization, Minimization, and Treatment, Proceedings of the Water Environment Federation,
1998.
KURA, B. (1998c), Multi Sector General Permit for Shipyard Storm Water: A Review of the Mid
Atlantic and the Gulf-Coast State Regulatory Agencies, MISSTAP, Biloxi, MS, 1998.
KURA, B. and TADIMALLA, R. (1999a), Characterization of Shipyard Wastewater Streams,
Proceedings: Oceans 99, Seattle, Washington, September 14, 1999.
U.S. EPA (1997), Profile of the Shipbuilding and Repair Industry, EPA/310-R-97-008, November
1997.
U.S. EPA (a), Clean Water Act, http://www.epa.gov/r5water/cwa.htm (Accessed: March 20, 2006).
U.S. EPA (b), National Pollutant Discharge Elimination System, http://cfpub.epa.gov/npdes/
(Accessed: March 20, 2006).
VOC-HAP Compliance Management System for Shipyard Painting
Operations
Abstract
Painting is a major process in shipyards which provides corrosion protection and/or improves the
appearance of the substrate, and it is generally distributed throughout the yard. Painting activity can
be divided into two major categories, painting and equipment cleaning, both of which result in
emissions of volatile organic compounds (VOCs) and hazardous air pollutants (HAPs). Surfaces are
generally spray painted, and some parts are hand painted. Most topside and interior paints are not
as toxic as anti-fouling bottom paints, which generally contain toxic pigments such as chromium,
titanium dioxide, lead, copper, and tributyl tin compounds. By employing paint application
equipment with high transfer efficiency, the amount of paint lost to overspray, and with it the
VOC/HAP emissions, can be minimized.
VOCs in the presence of oxides of nitrogen and sunlight contribute to ozone formation, which is a major
health concern. In addition, many VOCs are considered HAPs due to their inhalation-induced
toxicity. Because of these health and environmental effects of VOCs/HAPs emitted from shipyard
painting operations, these emissions are strictly regulated through the NESHAPs (National Emission
Standards for Hazardous Air Pollutants) program within the United States. Under NESHAPs,
shipyards are required to track the substrate painted, paint quantities and composition, thinner
quantities and composition, cold/hot weather conditions, and much more. These records are used in
estimating emissions of VOCs/HAPs as well as in compliance evaluation of each paint usage
scenario. This compliance activity is resource demanding and burdens shipyards with high costs.
The authors present a VOC-HAP Compliance Management System (VOC-HAP CMS) that was recently
developed to address shipyard compliance with NESHAPs. Further, the paper discusses the smart
features incorporated into the knowledge-based system and how shipyards benefit from these
features. This intelligent decision-support system is readily scalable to any painting facility, whether
it is located in the United States or elsewhere.
Keywords: VOC-HAP Compliance of Painting; Emissions from Painting; Environmental
Management of Painting Operations; Paint NESHAPs; Environmental Costs of Shipyard Painting.
1. Introduction
Most vessels constructed or repaired at medium to large size shipyards are made of thick metal plates
that are prone to corrosion, with the associated deterioration of life and serviceability. Paints are
essential to prevent corrosion as well as the growth of marine organisms (on vessel bottoms), thus
increasing vessel life and fuel efficiency. To ensure proper adhesion of the protective paints, all metal
surfaces must be prepared and/or cleaned prior to coating application. Surface preparation includes
removing all dirt and other surface contaminants that may interfere with the adhesion of paints.
Various methods are available to prepare metal surfaces for painting; the most popular method of
surface preparation is dry abrasive blasting using a variety of abrasives such as coal slag, copper slag,
garnet, steel shot, steel grit, sand, and hematite.
Painting activity can be divided into two major activities, painting and equipment cleaning. Interior
and exterior painting is carried out to provide corrosion protection and/or to improve appearance.
Surfaces are generally spray painted, and some parts are hand painted. Most topside and interior
paints are not as toxic as anti-fouling bottom paints, which generally contain toxic pigments such as
chromium, titanium dioxide, lead, copper, and tributyl tin compounds (KURA et al., 1996).
Many hull paints are anti-fouling coatings containing toxic biocides to prevent or minimize the marine
growths that eventually foul hulls. Most of these toxic agents are heavy metals or organo-metallic
compounds, such as cuprous oxide, lead oxide, and tributyl tin compounds. Most importantly,
painting activity involves significant air emissions, and these are regulated by the federal, state, and
local environmental regulatory agencies. Air pollutant emissions from painting are volatile organic
compounds (VOCs), most of which are considered hazardous air pollutants (HAPs) due to their health
effects among exposed individuals, both workers and the public (U.S. EPA, 1991, 1993, 1994, 1997).
Also, VOCs are precursors of ambient ozone and are heavily regulated in urban areas that are in
non-attainment with respect to ozone. Paint emissions of industrial facilities are regulated by various
mechanisms in the United States under the Clean Air Act (CAA). Any facility with a significant
potential to emit air pollutants has to have a facility air permit to operate, and painting operations are
regulated by the National Emission Standards for Hazardous Air Pollutants (NESHAPs) (KURA,
1998a, 1998b).
1.1 NESHAPs for the Shipbuilding Surface Coating (U.S. EPA, 1995; KURA, 1998c)
Section 112 of the CAA as amended in 1990 promulgates NESHAPs for shipbuilding and ship repair
(surface coating) operations. The NESHAPs require existing and new major sources to control HAP
emissions using the Maximum Achievable Control Technology (MACT). Major sources, which emit
10 tons/year or more of any single HAP or 25 tons/year or more of any combination of HAPs, are
covered by these guidelines.
Surface coating operations at shipyards are the focus of the NESHAPs, as a variety of HAPs are used
as solvents in marine coatings. The HAPs emitted by the facilities covered by this rule include xylene,
toluene, ethyl benzene, methyl ethyl ketone, ethylene glycol, and glycol esters. All of these pollutants
can cause reversible and irreversible toxic effects following exposure. The potential toxic effects
include irritation of the eye, nose, throat, and skin and damage to blood cells, heart, liver, and kidneys.
These standards limit volatile organic hazardous air pollutant (VOHAP) emissions from indoor and
outdoor coating operations. The VOHAP emissions result largely from solvent evaporation from the
coatings. These emissions occur during application and drying/curing. Due to the size of the ships
and their components, most coatings are applied outdoors. These standards also reduce VOHAP
emissions from handling, transfer, use, and storage of VOHAP-containing materials through work
practice measures. These emissions also occur as a result of solvent evaporation.
These standards impose limits on the VOHAP content of 23 types of coatings used at shipyards,
which are listed in Table I (U.S. EPA, 1995). Compliance with the VOHAP limits must be
demonstrated on a monthly basis. The final standards also require that all handling and transfer of
VOHAP-containing materials to and from containers, tanks, vats, vessels, and piping systems be
conducted in a manner that minimizes spills and other causes of emissions. In addition, containers of
thinning solvent or waste that hold any VOHAP must normally be kept closed (to minimize
evaporation) unless materials are being added to or removed from them.
Compliance Procedures
The NESHAP rule would allow affected sources to choose among several options for demonstrating
compliance with the VOHAP standards. Regardless of the option chosen, affected sources would first
determine the coating category (e.g., general use, air flask, antenna, etc.), the applicable VOHAP
limit, and the VOC content for each batch of coating received from the manufacturer.
Affected sources would be allowed to use the following methods to demonstrate compliance and so
avoid testing every container of coating; however, any analysis of an individual container of coating
using Method 24 (U.S. EPA) would take precedence in verifying a violation. The NESHAPs
compliance demonstration is illustrated in Fig. 1.
Table I: NESHAPS for Marine Paint Categories
The options (Options 1 through 3) available to shipyards to demonstrate compliance are briefly
discussed below:
Option 1:
Shipyards can demonstrate compliance based on the as-supplied VOC content as certified by the
manufacturer. If the as-supplied coating is used without adding any thinner, shipyards can certify that
the as-applied VOC content of the batch of coating is identical to the as-supplied VOC content. If the
certified VOC content is less than the VOHAP limit, compliance is demonstrated. As mentioned
earlier, Table I shows the VOHAP limits for 23 categories of marine coatings.
Fig. 1: NESHAPs compliance demonstration (Options 1 to 3). For each batch the VOC content and
the VOHAP limit are determined. If no thinner is added, the as-applied VOC content of each batch is
certified and compared with the VOHAP limit (Option 1). If thinner is added, the maximum allowable
thinning ratio (MATR) is determined and the painters are notified, either per batch on a
coating-by-coating basis (Option 2) or per group of coatings (Option 3); the actual thinner volume is
then compared with the allowed volume, and the result is reported.
Option 2:
Shipyards can demonstrate compliance if the actual volume of thinner used is less than the maximum
allowable volume of thinner on a coating-by-coating basis.
Option 3:
Shipyards can demonstrate compliance by comparing the actual volume of thinner used to the
maximum allowable volume on a "group" basis, where a group of coatings is defined as those which
use the same thinner.
An affected source may choose to use only one of the options for all coatings at the facility or a
combination of options. Option 2, for coatings to which thinning solvent will be added (coating-by-
coating), is discussed in detail below to explain the calculations required for compliance
demonstration.
(i) Prior to the first application of each batch, designate a single thinner for the coating and calculate
the maximum allowable thinning ratio for each batch using eq. 1 below:

\( R = \dfrac{V_s \cdot VOHAP\_Limit - m_{VOC}}{D_{th}} \)  (1)

Where:
R = Maximum allowable thinning ratio for a given batch (L thinner/L coating as supplied)
Vs = Volume fraction of solids in the batch as supplied (L solids/L coating as supplied)
VOHAP_Limit = Maximum allowable as-applied VOHAP content of the coating (g VOHAP/L
solids)
mVOC = VOC content of the batch as supplied (g/L coating as supplied)
Dth = Density of the thinner (g/L)
If Vs is not supplied directly by the coating manufacturer, the owner or operator shall determine Vs
using eq. 2 listed below:
\( V_s = 1 - \dfrac{m_{volatiles}}{D_{avg}} \)  (2)
Where:
mvolatiles = Total volatiles in the batch, including VOC, water, and exempt compounds (g/L
coating)
Davg = Average density of volatiles in the batch (g/L)
(ii) Prior to the first application of each batch, notify painters and other persons as necessary of the
designated thinner and the maximum allowable thinning ratio for each batch of the coating, by
affixing a label to each container of coating.
(iii) By the 15th day of each calendar month, determine the total allowable volume of thinner for the
coating used during the previous month using eq. 3 listed below:
\( V_{th} = \sum_{i=1}^{n} (R \cdot V_b)_i + \sum_{i=1}^{n} (R_{cold} \cdot V_{b\text{-}cold})_i \)  (3)
Where:
Vth = Total allowable volume of thinner for the previous month (L thinner)
Vb = Volume of each batch, as supplied and before being thinned, used during non-cold-weather
days of the previous month (L coating as supplied)
Rcold = Maximum allowable thinning ratio for each batch used during cold-weather days of
the previous month
Vb-cold = Volume of each batch, as supplied and before being thinned, used during cold-weather
days of the previous month
i = Each batch of coating, and
n = Total number of batches of the coating
(iv) By the 15th of each calendar month, determine the volume of thinner actually used with the
coating during the previous month.
(v) If the volume of thinner actually used with the coating is less than or equal to the allowable
volume of thinner for the coating, then compliance is demonstrated for the coating for the previous
month.
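A compact sketch of the Option 2 calculation chain defined by eqs. (1) to (3) is given below; all batch values are invented, and the thinner is assumed to consist entirely of VOC:

# Sketch of the Option 2 calculation chain (eqs. 1-3); all batch values
# are invented, and the thinner is assumed to consist entirely of VOC.
def max_thinning_ratio(v_s, vohap_limit, m_voc, d_th):
    """Eq. 1: max allowable L thinner per L coating as supplied."""
    return (v_s * vohap_limit - m_voc) / d_th

def volume_solids(m_volatiles, d_avg):
    """Eq. 2: volume fraction of solids if not supplied by the manufacturer."""
    return 1.0 - m_volatiles / d_avg

v_s = volume_solids(m_volatiles=450.0, d_avg=900.0)               # -> 0.5
r = max_thinning_ratio(v_s, vohap_limit=340.0, m_voc=150.0, d_th=880.0)

# Eq. 3 for one coating over one month (no cold-weather days here):
batches = [(r, 200.0)]         # (max ratio, L coating as supplied) per batch
allowed = sum(ratio * volume for ratio, volume in batches)
actual_thinner = 4.0           # L thinner actually used (invented)
print(f"allowed: {allowed:.2f} L, compliant: {actual_thinner <= allowed}")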
The rule requires Method 24 (U.S. EPA) be used as the reference method to determine compliance if
the VOC content is used as a surrogate for VOHAP.
Any VOHAP content exceeding the applicable limit as measured by these procedures would be an
enforceable violation of the emission limits of the standard. The rule requires that, in addition to the
initial notification by the industry, affected sources submit a notification of compliance status on a
quarterly basis.
2. VOC-HAP Compliance Management System (VOC-HAP CMS)
In order to assist shipyards and navy yards, as well as other industry sectors involved in
painting/coating operations, with their emission compliance and health risk reduction, a VOC-HAP
Compliance Management System (VOC-HAP CMS) was developed, which is described briefly in this
section. The VOC-HAP CMS working mechanism and concept are illustrated in Fig. 2:
As can be seen from Fig. 2, the system incorporates all the features necessary for calculating air
emissions and hazardous wastes from painting operations, providing the calculations needed for
compliance achievement and compliance demonstration. The application has a built-in knowledge
base and smart features on maritime painting operations, the NESHAPs, the equations discussed in
the previous section, and other environmental engineering/science calculation procedures.
Data input by the user can be divided into one-time input and continuous input. One-time input
consists of facility information such as the number of painting locations, shop details, longitude and
latitude, stack height, air pollution controls, facility air permit requirements, and a few others.
For example, in the paint characteristics form, the user enters data on the paints and thinners used in
the facility. The information entered in this form is obtained from the material safety data sheets
(MSDS) provided by the manufacturer with the paints and thinners. An MSDS provides information
on raw materials pertaining to chemical composition, physical characteristics, handling practices,
health hazards of the chemicals present in the material, and actions to be taken in case of a spill or
exposure to the raw material. The chemical and physical information is generally used in quantifying
and speciating wastes.
Similarly, for adding new paint usage data, the user enters the quantity, selects the process conditions
(Air Pollution Control, Solids Control, Coating Category, Job ID) and enters the usage data. The paint
used is selected from the Paint/Thinner drop-down list. The 'Storage ID' is populated with the storage
locations where the selected paint ID is stored. The quantity of paint used is checked against the
quantity available in the material storage location; if it is exceeded, the user is warned about excess
paint usage. The user clicks the Add Paint Details button to save the paint usage information in the
database and is then prompted to add the thinner usage details for the same transaction.
Continuous input consists of the location, the amount of paint used, the thinner used, the type of job,
etc. The application can also propose the thinner volume necessary for each job and/or accept the
value entered by the user. The VOC-HAP CMS assists shipyards in compliance achievement and
compliance demonstration with the NESHAPs and Title V permit requirements. In addition, it
performs assessment of the health risk resulting from air toxics inhalation. Numerous decision
support features are also available to help make the right decisions from time to time.
VOC-HAP CMS outputs include emission inventory reports (for any time period; for any source
and/or source group combinations; for any material combinations), NESHAPS reports (for each paint
transaction; for each paint group; for each job; for each source), Toxics Release Inventory Reports
(TRI), and health risk assessment reports.
For quantification of the air emissions and solid waste released from painting operations, the user
uses the quantification module. Compliance with the NESHAP regulation is also checked for each
paint usage transaction. This frame shows the paint usage data between the start date and end date
selected by the user. The user selects the start and end dates and clicks the Transaction Details button
to display all paint usage transactions in the grid which fall between the selected dates. Similarly, the
user clicks the NESHAP Compliance, Air Emissions or Solid/Hazardous Waste buttons to view the
compliance with the NESHAP regulation, the air emissions, or the solid waste released for each paint
usage transaction. The user clicks the Air Emissions button to display the air emissions generated
from paint usage transactions. The total VOC released and the quantity of each VOC chemical
released during the painting process are displayed in the grid, as shown in Fig. 3.
Similarly, the user clicks the NESHAPs Compliance button to evaluate the compliance of each paint
usage transaction with the NESHAPs rule. NESHAPs compliance is checked for each paint usage
transaction. If thinner is used in a particular paint usage transaction, the quantity of allowable thinner
is compared with the actual thinner used; if the allowable thinner is less than the actual thinner usage,
non-compliance with the NESHAPs regulation is triggered. If no thinner is used, the VOC content of
the paint is compared with the allowable VOC content based on the coating category selected for the
paint application; if the allowable VOC content is exceeded by the VOC content of the paint,
non-compliance with the NESHAPs regulation is triggered. Fig. 4 shows the NESHAPs compliance
report.
The analysis and decision support module includes various features which help shipyards determine
and understand (1) historical trends, (2) emission comparisons (year to year; source to source; paint
type to paint type; job to job), (3) actual emissions vs. the emission limits specified by the facility air
permit (at any time of the year based on pro-rated emission limits, or at the end of the year), (4)
planning details and pollution prevention options, (5) troubleshooting to identify the most
important/critical sources, jobs, and materials, and (6) what-if analysis for hypothetical scenarios
using the emissions calculator features. Fig. 5 and Fig. 6 show some of the decision support features
of the VOC-HAP CMS.
Fig. 5: Decision Support – Verification of Actual Air Emissions with Time Weighted Air Emission
Limits
Fig. 6: Decision Support – Historical Trend of a Specific Air Pollutant Emission from a Specific Paint
Used
The VOC-HAP CMS is a Web-based application developed using the Microsoft .NET framework
with an SQL Server database. As it is Web-based, no application needs to be installed on the client
system; only an Internet browser is necessary, and the application can be hosted on a central server.
Another advantage is that data sharing among various companies within the industry is easy, and
simultaneous entry of data from remote locations is possible. The VOC-HAP CMS, a knowledge-based
intelligent system, offers numerous benefits to facilities involved in painting and solvent degreasing
operations, some of which are listed in Fig. 7.
3. Summary and Conclusions
The Web-based intelligent system VOC-HAP CMS presented in this paper offers numerous benefits
to the shipbuilding and ship repair sector, which is heavily involved in painting and coating
operations. The most important advantage of this system is that it saves the facility owners time and
effort by avoiding repetitive tasks, calculations, and procedures. The architecture of the system is
designed such that data, once entered, never needs to be re-entered and is readily accessible to all
modules. Compliance is ensured by identifying possible future failure scenarios and recommending
corrective measures well before a problem occurs.
Though this system is currently designed for the painting process in the shipbuilding industry sector,
the concept is scalable to any industry sector and any industrial process. Applicable environmental
regulations, air permit requirements, hazardous waste management procedures and standard work
practices of any facility can easily be incorporated into this decision support system, whether the
facility is in the United States or elsewhere. The authors continue to improve this system and are
involved in developing many other intelligent systems that focus on worker health
protection/industrial hygiene, total facility environmental management, life cycle costing/life cycle
assessment, and health risk assessment. The authors welcome collaborative opportunities with
international scientists, application developers, and potential users to further the role of
knowledge-based intelligent systems in advancing the quality of life by reducing life cycle costs,
conserving energy/natural resources, protecting worker/public health, and preventing environmental
pollution.
Acknowledgements
This is the offspring of the Expert EMS application, which was originally sponsored by the Office of
Naval Research through the Gulf Coast Maritime Resources and Information Center (GCRMTC) and
the Maritime Environmental Resources and Information Center (MERIC). This project, VOC-HAP
CMS was sponsored by SofTek Systems, Inc. Investigators gratefully acknowledge financial and
software support received from SofTek Systems, Inc. and its associates during the development of
VOC-HAP CMS without which it would not have been possible to complete this project.
References
KURA, B., and LACOSTE, S. (1996), Typical Waste Streams in a Shipbuilding Facility, A&WMA
Annual Conference, June 1996.
KURA, B. and TADIMALLA, R. (1998a), Pollution Prevention Technologies for Shipyards, United
States Japan Natural Resources (UJNR) Conference, Washington, D.C., 1998, pp. 329-354.
KURA, B., LACOSTE, S., and PATIBANDA, P. (1998b), Multimedia Pollutant Emissions from
Shipbuilding Facilities, United States Japan Natural Resources (UJNR) Conference, Washington,
D.C., 1998, pp. 297-318.
KURA, B. (1998c), Air Quality Regulations Applicable to Shipyards & Boatyards in the Mid-
Atlantic and Gulf Coast States, MISSTAP Conference, Biloxi, MS, 1998, pp. 1-17.
U.S. EPA (Year Unknown), 40 CFR Part 60 Appendix A Method 24: Determination of Volatile
Matter Content, Water Content, Density, Volume Solids, and Weight Solids of Surface Coatings.
U.S. EPA (1991), Guides to Pollution Prevention - The Marine Maintenance and Repair Industry,
EPA/625/7-91/015, U.S. Environmental Protection Agency, October 1991.
U.S. EPA (1993), Shipbuilding and Ship Repair Industry: Background Information for Control
Techniques Guidelines (CTG), 1993.
U.S. EPA (1994), Alternative Control Techniques Document: Surface Coating Operations at
Shipbuilding and Ship Repair Facilities, EPA 453/R-94-032, U.S. Environmental Protection Agency,
April 1994.
U.S. EPA (1995), 40 CFR Part 63, Subpart II – National Emission Standards for Shipbuilding and
Ship Repair (Surface Coating).
U.S. EPA (1997), Profile of the Shipbuilding and Repair Industry, EPA/310-R-97-008,
November 1997.
Prediction of ship turning manoeuvre using Artificial Neural Networks (ANN)
Adel Ebada, Technical University Hamburg-Harburg, Hamburg/Germany, [email protected]
Moustafa Abdel-Maksoud, Technical University Hamburg-Harburg, Hamburg/Germany,
[email protected]
Abstract
The use of an Artificial Neural Network (ANN) as a new practical tool to predict the turning track of
a manoeuvring ship is described. A navigation simulator was used to generate the data required for
the training and validation of the developed ANN patterns. For this purpose, data sets of more than
one hundred manoeuvres were recorded and documented. More than sixty percent of these data files
were used for training. A second set of more than thirty percent of the data files was used for the
validation of the calculated results of blind manoeuvres. A blind manoeuvre is one for which only the
initial conditions are known. The results of a blind manoeuvre prediction include the different
parameters which describe the characteristics of the different phases of a turning manoeuvre at any
time, such as turning track information, velocity and acceleration.
The Parallel Artificial Neural Network (PANN) was applied as a new practical method to enhance the
manoeuvring simulation of turning motion. The developed method was validated for different ship
types (two container ships, a bulk carrier and a tanker). Training and testing of the system were
repeated several times with different random selections of the data sets used. The results show that
the developed ANN patterns deliver sufficient accuracy in simulating the turning motion at any time
and for any rudder angle.
1. Introduction
The goal of the study is to develop a new practical model to predict ship motions by means of non-
traditional strategies. After investigating different Artificial Intelligence (AI) approaches, the
artificial neural network (ANN) was selected for the application presented in this paper.
The first step of the work was the application of an ANN to predict the limits of turning circle
manoeuvres, i.e. maximum advance and total diameter [1]. As the work progressed, predicting the
ship turning manoeuvre track became the evident second step [2]. The third step was the prediction of
all parameters of the turning track, such as accelerations, velocities, travelled distance, yaw rate, drift
angle and number of revolutions per minute of the propeller, at any time during the motion.
Faller et al. applied Recursive Neural Networks (RNN) as a simulation tool for submarine manoeuvring [3]. A recursive network is one that employs feedback; namely, the information stream issuing from the outputs is redirected to form additional inputs to the network. An initial formulation of the problem using an RNN model for use with ships is described in [4]. RNN simulations have been created using data from both model- and full-scale submarine manoeuvres. In the latter case, incomplete data measured on the full-scale vehicle was augmented by using feed-forward neural networks as virtual sensors to estimate the missing data [5]. The creation of simulations at both scales permitted the exploration of scaling differences between the two vehicles, which is described in [6]. The technique was further developed for accurate prediction of tactical circle and horizontal overshoot manoeuvres. Different applications and benefits of ANN can be found in [7,8,9].
2. Turning Manoeuvre
Turning circle tests are performed to both port and starboard, at approach speed, for the investigated ships with maximum as well as small rudder angles. The IMO requirements and the recommendations of the Maritime Safety Committee have been taken into consideration [11,12].
The applied coordinate systems are shown in figure 1a. U is the actual ship velocity, which can be decomposed into an advance velocity u and a transversal velocity v. The total speed U is measured relative to the fluid. The ship also has a rotation velocity with respect to the z-axis. This axis is normal to the xy plane and passes through the reference point (midship). β is the angle between U and the x-axis and is called the drift angle. ψ is the ship heading angle and δr is the rudder angle. The data that define the motion of the manoeuvring ship are the velocities (u and v), the angular velocity r, the accelerations (u̇ and v̇) and the angular acceleration ṙ. The trajectory is defined by (x and y); surface motions are described by the longitudinal axis x, positive towards the bow, and the transverse axis y, positive to starboard. The angular motion is taken around the z-axis.
Control data that propel and direct the vessel are the propeller rotation speed n, the rudder deflection angle δr (negative to starboard side and positive to port side), the rudder area AR and the ordered rudder angle δRO.
Data of the ships investigated are given in table 1, where the trim angle is neglected due to the even keel loaded condition. Draft, block coefficient and displacement have been modified to the actual loaded condition.
Once the reference systems have been defined, the ship is considered as a solid body with three degrees of freedom: surge, sway and yaw. Roll, pitch and heave are not considered in this case. Roll begins to be important at high Froude numbers, when the roll angle is large and affects the manoeuvrability.
Understanding the parameters affecting the motion is important for developing a successful ANN. The equations of motion for three degrees of freedom can be written with the help of the moving reference system shown in figure 1a as follows:

m(u̇ − v·r) = X_H + X_P + X_R + X_A (1)
m(v̇ + u·r) = Y_H + Y_P + Y_R + Y_A (2)
I_z·ṙ = N_H + N_P + N_R + N_A (3)

where m is the ship mass and I_z the mass moment of inertia about the z-axis. The subscripts stand for: H hull, P propeller, R rudder and A aerodynamic. X and Y are the external forces and N is the external moment acting on the ship. The terms on the right side of the three equations are the ship resistance, the propeller force, the rudder force and the aerodynamic force respectively. All forces have components in the X and Y directions, while the moment acts about the vertical axis at the ship's reference point.
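As an illustration of how equations (1)-(3) can be stepped forward in time, the following minimal Python sketch integrates the three degrees of freedom with a simple Euler scheme. The force functions are hypothetical linear placeholders, not the force models of the navigation simulator used in this study.

```python
import math

def simulate_turn(m, Iz, u0, X, Y, N, dt=5.0, steps=720):
    """Euler integration of the 3-DOF equations (1)-(3).
    m: ship mass [kg], Iz: yaw moment of inertia [kg m^2],
    u0: approach speed [m/s]; dt matches the 5 s recording interval.
    X, Y, N are callables returning the summed hull, propeller,
    rudder and aerodynamic contributions."""
    u, v, r = u0, 0.0, 0.0           # surge velocity, sway velocity, yaw rate
    x, y, psi = 0.0, 0.0, 0.0        # earth-fixed position and heading
    track = []
    for _ in range(steps):
        du = X(u, v, r) / m + v * r  # from m(du/dt - v r) = X
        dv = Y(u, v, r) / m - u * r  # from m(dv/dt + u r) = Y
        dr = N(u, v, r) / Iz         # from Iz dr/dt = N
        u, v, r = u + du * dt, v + dv * dt, r + dr * dt
        psi += r * dt
        x += (u * math.cos(psi) - v * math.sin(psi)) * dt
        y += (u * math.sin(psi) + v * math.cos(psi)) * dt
        track.append((x, y, psi))
    return track

# Toy usage with made-up damping and rudder force terms:
path = simulate_turn(m=5.0e7, Iz=2.0e10, u0=8.0,
                     X=lambda u, v, r: -1.0e6 * (u - 8.0),
                     Y=lambda u, v, r: -1.0e7 * v + 5.0e6,
                     N=lambda u, v, r: -4.0e9 * r + 2.0e8)
```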
In the experiments, all of the investigated ships are arranged with a right-handed propeller. At the beginning of the turning manoeuvre, the ship is on a steady ahead course without any rate of turn to port or starboard. The recording of all manoeuvring data starts at time t0, when the rudder angle is ordered. The heading of the ship changes in response to the rudder deflection. The rate of turn increases gradually until it reaches a certain value. All data are recorded until the manoeuvre ceases after at least one complete turn. The environmental condition applied is calm weather; the current and the wind speed are zero, which allows all manoeuvre details to be recorded undisturbed.
The most fundamental ship manoeuvre is the turning manoeuvre. The behaviour of the ship during the turn is the consequence of the resulting forces and moments, which are produced by the flow on the rudder, hull and propeller. The turning manoeuvre can be divided into three phases. At the beginning, the rudder angle is zero and the ship has a steady course and speed. In this case, there are no resultant forces, since the propeller thrust counteracts the total drag of the ship. The first phase begins when the rudder starts to be deflected. The rudder forces will direct the ship's stern to the port side when it is required to turn the ship to starboard.
As a result of the high transverse acceleration v̇ and the angular acceleration ṙ, a quick rise of the drift angle β and the rotation velocity r takes place. With the introduction of these parameters, the ship enters the second phase of turning. In this phase all parameters change dramatically, and surge, sway and yaw accelerations occur. Finally, in the third phase, the turning ends with the establishment of the final equilibrium of the forces and moments, and the ship settles down to a turn of constant radius as shown in figure 1b.
In the last phase of the turning manoeuvre, the transversal velocity v and the turn velocity r as well as the drift angle β are constant. The transversal acceleration v̇ and the angular acceleration ṙ equal zero, and the path of the ship is circular. Figure 1b shows a definition diagram for the turning path of a ship. Generally, it is characterized by four measured values: advance, transfer, tactical diameter and steady turning radius.
The results required for training the system were generated using a navigation simulator. All trial information was recorded and stored as data files. More than one hundred runs (manoeuvres) were recorded and documented. Sixty percent of the data files were provided to the system for training, while the rest (more than thirty percent) were used as input data for blind manoeuvres and for validation. The time interval for recording data was 5 or 10 seconds. The following items were recorded: simulation time t, starting position as latitude and longitude, speed U, propeller revolutions per minute n, ship's ground course (ψ − β), ship heading ψ, turn rate r, rudder angle δr, travel distance, and acceleration. All data were recorded until the manoeuvre ceased after one complete turn, i.e. 360 degrees on the track course plus the actual drift angle β.
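The roughly 60/40 partition of the recorded runs, repeated with different random selections as described above, can be sketched as follows; the file naming is hypothetical.

```python
import random

def split_runs(run_files, train_frac=0.6, seed=0):
    """Randomly assign the recorded manoeuvre files to a training set
    and a validation ('blind') set. Repeating with different seeds
    reproduces the repeated random selection used in the study."""
    files = list(run_files)
    random.Random(seed).shuffle(files)
    n_train = int(round(train_frac * len(files)))
    return files[:n_train], files[n_train:]

# Example with hypothetical file names for ~100 recorded manoeuvres:
train_set, blind_set = split_runs([f"run_{i:03d}.dat" for i in range(108)])
```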
For developing the ANN mathematical model, it is important to fulfil the following specifications:
− Numerical terms of the model should be related to the physical meaning as closely as possible.
− Each term must be able to be evaluated experimentally and theoretically.
− The ANN model must be formed logically, in order to extend the model to wider prediction applications.
− The model design must have the ability and flexibility to accept and predict changes of the input or output data, such as changes of the motion behaviour due to modifications of the main particulars of the ship or the main data of the ship's rudder.
5. System Mathematical Model Architecture
The network architecture is built as a parallel multi-layer model to predict and simulate ship turning manoeuvres. A large number of ANNs is used in parallel to describe the different ship tracks and the other variables that simulate the characteristics of the different phases of the turning manoeuvres. The parallel ANNs describe the ship's trajectory as different positions of the ship's track during its motion with any rudder angle, see figures 7-11. The number of ANNs depends on travel distance, travel time, track and application purposes. Similar techniques are applied to the other variables of interest that describe the characteristics of the different phases of a ship turning motion to both sides (port and starboard), see figures 12-13. The architectures of the ANNs used in the PNN differ from each other. Figure 2 shows a block diagram of the developed model.
To construct a successful ANN pattern, it is important to observe and measure the system behaviour. After analysis of the system, it is possible to define and generate the measured data required as input for the learning process of the neural network. With the help of the information gained during the learning process, the behaviour of the system can then be predicted for any possible input data. The performance of the model can be optimized to achieve high prediction accuracy for unknown patterns by minimizing the error between the predicted and the measured system output.
The performance of the neural network model prediction is influenced by some important factors such as the computational effort, the training sample size and the ability to generalize. Moreover, in the case of on-line training and prediction, other factors are also important, such as the memory effect of a time-evolving system: the outputs of such a system depend not only on the current input, but also on the system outputs at previous times, as is the case for the dynamic system of a marine vessel.
The computational effort depends on the number of layers, the number of nodes in each layer and the number of training samples. Generally, the networks emulate the system better if these numbers are increased. Obviously, the computational effort increases dramatically with the number and size of the training samples, the number of layers and the number of nodes. The computational effort has a certain limit, and any increase above this limit will negatively affect the prediction accuracy.
The size of the training samples should be sufficient for the intended purposes. The correct choice and a sufficiently large number of samples are important for capturing the characteristics and behaviour of the system. Knowledge of ocean engineering, naval architecture and ship manoeuvring practice supports the selection of the samples.
Moreover, the number of samples required depends on the understanding of the neural network process and on how the network will be trained and used. In the present paper, off-line training has been applied, which means that a large number of training samples is needed. It has been reported that the number of samples required for the control of marine vessels can be in the range of about 1000 to 10000 or even more. In off-line training, the network only needs to be trained once to predict the future behaviour of the system. Generally, the prediction of the needed pattern takes, in some cases, a short time compared to other prediction methods, and the weights remain unchanged in the prediction of the future behaviour of the system.
5.2 Parallel Neural Networks (PNN) Architecture
A multilayer network technique is applied. Each layer in the network has its own role and function. The layer that produces the network output is called the output layer. All other layers are called hidden layers, except the first layer, which is defined as the input layer.
A number of ANNs are used in parallel to describe the different parameters of the turning manoeuvres. For example, the number of ANNs which describe the ship track depends on the travel distance, travel time, track and application purposes, figure 3.
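A minimal sketch of this parallel arrangement is given below: each query is dispatched to the network responsible for the relevant segment of the turning track. The segmentation by elapsed time and the stand-in networks are assumptions made for illustration only.

```python
from bisect import bisect_right

class ParallelTrackANN:
    """Dispatches a prediction request to one of several ANNs, each
    trained for one segment of the turning track. `nets` maps a
    segment index to a callable trained network."""
    def __init__(self, nets, segment_ends):
        self.nets = nets                  # {0: net0, 1: net1, ...}
        self.segment_ends = segment_ends  # cumulative segment end times [s]

    def predict(self, t, features):
        seg = min(bisect_right(self.segment_ends, t), len(self.nets) - 1)
        return self.nets[seg](features)

# Toy usage with stand-in 'networks' and invented segment boundaries:
pann = ParallelTrackANN(nets={0: lambda f: ("net0", f),
                              1: lambda f: ("net1", f)},
                        segment_ends=[120.0, 600.0])
print(pann.predict(200.0, [0.5, 0.1]))    # handled by the second network
```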
Within one network architecture, a node in each layer receives signals from the nodes in the layer on its left and passes the modified (weighted) signals to the nodes in the layer on its right. The design task is to determine the weights so that the output of the network structure closely predicts the output of the real system. The system behaviour is observed and measured in order to train the neural network on the measured data. The future behaviour of the system can then be predicted with the help of the information gained during the learning steps, as described in the following paragraph.
Figure 4 shows the architecture of one NN contained in the PNN; it uses the back-propagation feed-forward neural network form. The structure is given for the case that the pattern consists of four layers: one input layer, two hidden layers and one output layer. The inputs are fed to the neurons placed before hidden layer number one. Each layer has a weight matrix W, a bias vector b and an output vector z. To distinguish between the weight matrices, output vectors, etc. of these layers, the number of the layer is appended as a superscript to the variable of interest. The calculations are performed in the hidden layers and the output layer. The nodes in the four layers are fully connected by weighted links. Further details are available in [10].
The network mentioned above has M1 inputs, which could be extended for wider applications of motion prediction. The input data for the built ANN-based mathematical model are shown in table 1. The network in figure 4 has N1 neurons in the first layer, N2 neurons in the second layer, etc. The number of neurons in each hidden layer was varied in this application from 3 to 21. A constant input of one is fed to the biases of each neuron to reflect the correct magnitude of any decision output value. Obviously, the outputs of each intermediate layer are the inputs of the next layer. An intermediate layer such as layer two can be analyzed as a one-layer network: N1 represents its inputs, N2 the neurons in this layer, the multiplication factors between N2 and N1 are included in the weight matrix W2, the input to layer two is z1 and the output is z2. Each layer can be treated as a single-layer network on its own using similar notation. The three-layer notation can be expressed by the following equations (4)-(7):
z1 = f1(iw1,1 p + b1) (4)
z2 = f2(lw2,1 z1 + b2) (5)
z3 = f3(lw3,2 z2 + b3) (6)
Equivalently, z3 can be written as:
z3 = f3(lw3,2 f2(lw2,1 f1(iw1,1 p + b1) + b2) + b3) (7)
Each layer consists of neurons, which contain a nonlinear transfer function; it processes the input to the node and produces a decision output. The binary sigmoid function used in this work to produce the decision outputs is defined by:

g = f(net) = A / (1 + e^(−λ·net)) − D (8)

where A can equal 2 or 1 and D can equal 1 or 0, depending on the pattern problem and the desired decision outputs; f is the non-linear function, net is the neuron input and λ is the slope parameter.
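A short Python sketch of equation (8), together with the layer-by-layer evaluation of equations (4)-(7), is given below. The layer sizes and the random stand-in weights are illustrative assumptions; in practice the weights come from training.

```python
import numpy as np

def g(net, A=1.0, D=0.0, lam=1.0):
    """Binary sigmoid of equation (8): g = A / (1 + exp(-lam*net)) - D.
    A=1, D=0 gives outputs in (0, 1); A=2, D=1 gives (-1, 1)."""
    return A / (1.0 + np.exp(-lam * net)) - D

def forward(p, layers):
    """Feed-forward pass of equations (4)-(7); `layers` is a list of
    (W, b) pairs applied as z_k = g(W_k z_{k-1} + b_k)."""
    z = np.asarray(p, dtype=float)
    for W, b in layers:
        z = g(W @ z + b)
    return z

rng = np.random.default_rng(0)
sizes = [8, 9, 9, 1]          # M1 inputs, two hidden layers, one output
layers = [(rng.normal(size=(n_out, n_in)), rng.normal(size=n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]
print(forward(rng.normal(size=8), layers))
```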
It is common to normalize all variables (inputs and outputs). It is also possible to combine two or more
inputs together to form one input after normalization, which has the advantage of reducing the number
of inputs while their effect on the results is taken into account.
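For instance, scaling each variable to [0, 1] and merging two normalized inputs could be sketched as follows; the combination rule (averaging) is a hypothetical choice, since the paper does not state how inputs are merged.

```python
import numpy as np

def normalize(col):
    """Scale one variable to [0, 1]; a constant column maps to zeros."""
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col)

def combine(a, b):
    """Merge two normalized inputs into a single input, reducing the
    input count while retaining the effect of both variables."""
    return 0.5 * (normalize(a) + normalize(b))
```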
The inputs of the network are processed by the weights and the non-linear functions at the various nodes until they arrive at the output layer of the network. The difference between the target and the predicted output is a measure of the error of the prediction:

E_p = ½ Σ_j (t_pj − o_pj)² (9)

where E is the error function (least squares), p is the pattern index, j is the index of the output neuron, t_pj is the target output for pattern p on node j and o_pj is the actual output at that node.
The purpose of training is to gradually reduce the error over subsequent iterations. The training algorithm used in this research is back-propagation feed-forward, which is a gradient descent algorithm.
Back-propagation is the most commonly used training algorithm for neural networks. The weights are updated using the delta rule (steepest descent), which minimizes the error from the hidden-to-output neurons, see the following equation:

Δw_ij(t) = −η ∂E(t)/∂w_ij(t) + α Δw_ij(t−1) (10)

where η is the learning rate, α is the momentum and t is the iteration number.
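A compact sketch of this update rule, applied to a single weight for clarity, might look as follows; the learning rate and momentum values are arbitrary illustrations.

```python
def delta_rule_step(w, grad_E, prev_delta, eta=0.1, alpha=0.5):
    """One update of equation (10):
    delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1)."""
    delta = -eta * grad_E + alpha * prev_delta
    return w + delta, delta

# Toy usage on a scalar weight with E = w^2, so dE/dw = 2w:
w, d = 3.0, 0.0
for _ in range(20):
    w, d = delta_rule_step(w, grad_E=2.0 * w, prev_delta=d)
print(w)   # approaches the minimum at w = 0
```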
Batch gradient descent (traingd) is used in the current training: the weights and biases are updated in the direction of the negative gradient of the performance function.
For the case shown in figures 5 and 6, the ANNs need about 100 000 epochs to reach the specified goals for some manoeuvring points, where an epoch is defined as one presentation of the time series of inputs and outputs for one specific case in the training set. The number of epochs trained is chosen with a specific, reasonable goal, depending on realistic and probabilistic considerations for the worst condition. The worst condition is the case where the measurement error adds to the error of the ANN, i.e. the two errors have the same sign, so that the overall error is maximized. Even in this case the results have to remain within an acceptable level in order to increase the reliability of the process. During training, the process is paused every 200 epochs and the net is tested for its ability to generalize; consequently, the weights are written to a data file every 200 epochs. The process is continued until the errors lie below an acceptable level.
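The training schedule described above can be summarized in a small driver loop; the callbacks stand in for the actual back-propagation epoch, the generalization test and the weight-file output, all of which are assumptions here.

```python
def train(run_epoch, generalization_error, save_weights,
          max_epochs=100_000, goal=1e-3, check_every=200):
    """Off-line training driver: run one epoch at a time, pause every
    200 epochs to test generalization and write the weights to a data
    file, and stop once the error falls below the acceptable goal."""
    for epoch in range(1, max_epochs + 1):
        run_epoch()                        # one batch gradient descent epoch
        if epoch % check_every == 0:
            save_weights(epoch)            # checkpoint the current weights
            if generalization_error() < goal:
                return epoch               # acceptable error level reached
    return max_epochs

# Toy usage with a synthetically shrinking error:
state = {"err": 1.0}
stopped_at = train(lambda: state.update(err=state["err"] * 0.999),
                   lambda: state["err"], lambda epoch: None,
                   max_epochs=20_000)
```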
7. Results
The applied method has been verified for three types of ship, represented by four different vessels (two container ships, a bulk carrier and a V.L.C.C.). In the calculations the rudder angle was varied from 4 to 35 degrees. The minimum ship speed was 6.9 m/s and the maximum 13.1 m/s. The highest investigated speeds were those of the container ships and the lowest those of the bulk carrier.
The trajectories of turning manoeuvres with small and maximum rudder angles are shown in figures 7-11. The training results for the trajectories of the turning manoeuvre of container ship 1 are presented in figures 7.1-7.2 for two different rudder angles to the port side. The comparison between the calculated and the measured results shows good agreement, except in the region where the heading angle equals −180°. The trajectory for a rudder angle of 5° to the starboard side is presented in figure 7.3. The results of the simulations in figures 7.1-7.3 are for the same ship speed; the comparison between the three figures therefore gives an indication of the influence of the rudder angle on the tactical diameter and the maximum advance. With increasing rudder angle, the limits of the turning manoeuvre are reduced. Figures 7.4-7.6 include similar results for the other three ships investigated. As can be seen, good agreement between the training results and the measured results has been achieved.
Some representative results for the trajectory of blind turning manoeuvres of the V.L.C.C., together with a comparison with measured data, are shown in figures 8.1-8.3. Figure 8.4 includes the corresponding results for container ship 2. The results in figures 8.1-8.4 confirm that the applied ANN is able to recognize the difference in ship behaviour when the ship turns to port or to starboard and to predict the trajectory for both sides with high accuracy.
The interaction between the propeller and the rudder has a noticeable influence on the maximum advance and the tactical diameter. For a right-handed propeller, the force induced by the propeller flow on the rudder part above the rotation axis of the propeller is directed to starboard, and below this axis to port. The direction of the resultant force depends on the circumferential thrust distribution of the propeller and the shape of the rudder. For a spade-shaped rudder, the rudder area above the propeller axis is larger than below it, which means the resultant force is directed to the starboard side. Consequently, the ship will have a smaller maximum advance and tactical diameter when turning to port than to starboard. When the rudder has another shape, for example a constant chord length, the resultant force is directed to the port side. In this case, a ship fitted with a constant chord length rudder may show the opposite behaviour compared to a ship fitted with a spade rudder.
Many turning manoeuvres were carried out to investigate the capability of the developed PANN to distinguish between the different behaviours of ships turning to starboard and to port. For these simulations, container ship 2 and the bulk carrier were selected. Container ship 2 has a spade rudder, and the bulk carrier has a rudder with a constant chord length. Figure 9 shows a comparison between the calculated and the measured trajectories of blind turns to starboard and port. The corresponding results for the bulk carrier can be seen in figures 10 and 11. For all six manoeuvres, good agreement has been achieved between the calculated and the measured results. The comparison between the trajectories for turning manoeuvres to starboard and to port shows that, for container ship 2, the turning manoeuvres to port have a smaller maximum advance and tactical diameter than those to starboard. The bulk carrier shows the opposite tendency. Figures 10 and 11 include the results for the same ship velocity at two different rudder angles, 30° and 10° respectively. The comparison between the two figures shows that the effect of the propeller-rudder interaction on the maximum advance and the tactical diameter is reduced by decreasing the rudder angle. This effect is also correctly predicted by the developed PANN structure.
The main parameters describing the turning manoeuvre are: acceleration, speed, heading and drift angles, angular velocity, distance run, propeller revolutions per minute (R.P.M.), etc. For the simulation of the turning manoeuvre, it is not only important to calculate the trajectory with high accuracy, but it is also necessary to predict precisely the variation of these main parameters with respect to time. Figures 12 and 13 show a comparison of the calculated time dependency of the main parameters for turns to port and to starboard. The number and name written on each curve are the rudder angle and the ship investigated respectively. Figures 12 and 13 each contain 14 sub-figures; the odd numbers present the test results, while the even numbers present the training results.
Figures 12.1, 12.2, 13.1 and 13.2 show the change of the acceleration during the manoeuvre. All curves start at zero acceleration because the manoeuvre begins from a constant velocity condition. After that, a strong negative acceleration takes place. The peak of the negative acceleration increases with increasing rudder angle. The sharp reduction of the acceleration then gradually diminishes. The final value of the acceleration is zero, which means that a steady turn condition has been reached.
The change of the ship velocity during the turning manoeuvre is shown in figures 12.3, 12.4, 13.3 and 13.4. With increasing rudder angle, the final velocity of the ship during the steady turn is reduced. The same can also be seen for the propeller revolutions, see figures 12.5, 12.6, 13.5 and 13.6. When the ship has a non-zero drift angle, the resistance of the ship and the required torque of the propeller increase.
The turning velocity rises dramatically as the ship's course alters, see figures 12.7, 12.8, 13.7 and 13.8. At low rudder angles, the turning velocity increases until a maximum value is reached, and this value remains the final value. At higher rudder angles, the maximum value of the turning velocity is much higher than the final value. According to the applied definition, the turning angle is positive to starboard and negative to port. Figures 12.7 and 12.8 show the same tendency as figures 13.7 and 13.8, but with the opposite direction of rotation. The same is also valid for the drift and heading angles. All heading angle curves for a turn to starboard begin at 0° and end at 360°; the corresponding curves for a turn to port begin at 360° and end at 0°. After changing the rudder angle from zero, the heading angle remains unchanged for a short period, after which it increases nearly linearly. The slope of the curve increases with increasing rudder angle, see figures 12.11, 12.12, 13.11 and 13.12.
The length of the trajectory during the turning manoeuvre for a certain ship depends on the approach
speed and the rudder angle. Figures 12.13, 12.14, 13.13 and 13.14 show the length of the trajectory in
nautical miles.
The results in figures 12 and 13 confirm that the applied PANN is not only able to predict the coordinates of a complete turn with its advance distance and turning diameter, but is also able to simulate the turning motion in real time. Therefore, the developed method is suitable for application in navigation simulators.
8. Conclusion
Artificial neural network mathematical models have been developed which use the physical and operational data of a ship as inputs to predict turning manoeuvres. The Parallel Neural Network application in this work is based on the Back Propagation Feed Forward Neural Network (BPFFNN) method. The system was trained and tested with different ships, and the results obtained from the training process and from testing blind manoeuvres showed a satisfactory level of accuracy.
Although the Multi-Layer Perceptron (MLP) network can have an arbitrary number of layers, one layer of hidden units is sufficient to approximate any function with finitely many discontinuities to arbitrary precision. In this study a model with two hidden layers, and in some cases three hidden layers, was applied. The output layer has a varying number of neurons, between one and twenty-one. The developed structure dramatically decreases the computational time; moreover, the performance and the system accuracy have been improved.
The main future contribution of the ANN mathematical model is the provision of advanced tools to assist ship handling training courses and to improve the skills of ship masters, engineers, marine officers and pilots. Moreover, with regard to port safety, the developed model could be extended to predict the behaviour of arriving and departing vessels for the enhancement of vessel traffic services (V.T.S.).
9. Bibliography
[1] Ebada, A., Abdel-Maksoud, M., “Applying Artificial Intelligence (A.I) to Predict the Limits of
Ship Turning Manoeuvres”, STG meeting, News from Hydrodynamics and Manoeuvring,
Hamburg, September 2005.
[2] Ebada, A., Abdel-Maksoud, M., “Applying Neural Networks to Predict Ship turning track
manoeuvring” , 8th Numerical Towing Tank Symposium, Varna, Bulgaria, October 2005.
[3] Faller, W.E., Smith, W.E., and Huang, T.T. “Applied Dynamic System Modelling: Six Degree-
Of-Freedom Simulation Of Forced Unsteady Manoeuvres Using Recursive Neural Networks”,
35th AIAA Aerospace Sciences Meeting, Paper 97-0336, 1997, pp. 1-46.
[4] Faller, W.E., Hess, D.E., Smith, W.E. and Huang, T.T., “Full-Scale Submarine Manoeuvre
Simulation,” 1st Symposium on Marine Applications of Computational Fluid Dynamics, U.S.
Navy Hydrodynamic / Hydroacoustic Technology Center, McLean, Va., May 1998.
[5] Faller, W. E., Hess, D. E., Smith, W.E., and Huang, T.T. “Applications of Recursive Neural
Network Technologies to Hydrodynamics”, Proceedings of the Twenty-Second Symposium on
Naval Hydrodynamics, Washington, D.C., Vol. 3, August 1998, pp. 1-15.
[6] Hess, D. E., Faller, W. E., “Using Recursive Neural Networks for Blind Predictions of
Submarine Manoeuvres”. 24th Symposium on Naval Hydrodynamics Fukuoka, Japan, 13 July
2002.
[8] Charytoniuk, W., Chen, M. S., “Very short-term load forecasting using artificial neural
network”, IEEE transactions on Power Systems, Vol. 15, No.1., February 2000.
[9] Fukuda, T., Shibata, T., “Theory and applications of Neural Network for industrial control systems”, IEEE Trans. on Industrial Electronics, Vol. 39, No. 6, 1992, pp. 472-489.
[10] Hirose, A., “Complex-Valued Neural Networks, Theories and Applications”, World Scientific
Publishing Co. Pte. Ltd, 2003.
[11] International Maritime Organization, MSC/Circ.1053, “Explanatory notes to the standards for
ship manoeuvrability”, IMO Instruments, London-IMO, 16 December 2002.
[12] Resolution MSC.64(67), “Adoption of new and amended performance standards” (adopted on 4 December 1996), retrieved June 25, 2001, from http://www.Nortek.net/learning_center/comm./imo_standards.htm
Table 1: Main ship parameters and input data (values for the four investigated ships)

Number of rudders: 1 / 1 / 1 / 1
Type of rudder: Semi-Spade / Semi-Spade / Full-balanced / Full-Spade
Position: in CL / in CL / in CL / in CL
Area of each rudder, incl. ½ horn (m²): 52.6 / 74.60 / 45.5 / 82.9 *
100 × total rudder area / (LBP × T): 2.25 / 2.35 / 1.62 / 1.34
Turning velocity of rudder (°/sec): 2.50 / 2.80 / 2.50 / 2.50
Max. rudder angle (°): 35 / 35 / 35 / 35
In addition to the input data mentioned above, the following data are used as input to the ANN: number of propeller blades (z), propeller revolutions per minute (n), rudder angle (δr) and ship's speed (U).
Figure 5: Epochs and goal of one case of PANN
Figure 6: Results of a training goal
Figure 7.3: V.L.C.C., δr = 30°, turn to the port side
Figure 7.4: Bulk carrier, δr = 35°, turn to the port side
Figure 7: Training results of the turning manoeuvre motion of different ships using various rudder angles to port and starboard side
Figure 8: Test results of the turning manoeuvre of different ships using various rudder angles to port and starboard side
Figure 9: Test results of turning manoeuvre of container ship 2, δr = 5° to both sides
Figure 10: Test results of turning manoeuvre of bulk carrier, δr = 30° to both sides
Figure 11: Test results of turning manoeuvre of bulk carrier, δr = 10° to both sides
Figure 12: Training and test results of the main parameters of the turning manoeuvre to the starboard side
Figure 13: Training and test results of the main parameters of the turning manoeuvre to the port side
Artificial Neural Networks – Application to freight rates
George Bruce, University of Newcastle upon Tyne, Newcastle, United Kingdom,
[email protected]
Gary Morgan, Lloyd's Register, London, United Kingdom, [email protected]
Abstract
This paper reports on research which explored the possibility of accurately predicting the VLCC market with Artificial Neural Networks by modelling VLCC freight rates, the VLCC orderbook and VLCC newbuilding prices. The results of the analysis offer an insight into the potential effectiveness of artificial neural networks as a forecasting tool and, hence, as a useful assistant in decision making.
1. Introduction
The shipping industry is one of the most dynamic economic systems in the world, Stopford (2005). Those involved in the industry will be fully aware of the importance attached to decision making. A blend of experience, historical analysis and sheer instinct may be harnessed as a means of determining the nature of a decision. So what of forecasting? It is a certainty that forecasting is essential as an aid to decision making. However, forecasting has a notoriously poor record, and this phenomenon is not particular to the shipping industry. Indeed, forecasting is shrouded in uncertainty and has historically delivered diverse outcomes which are often wildly inaccurate. The fabric of the industry is such that those who are involved are predominantly equipped with vast experience from many years of association and operation. Forecasting within the shipping industry pursues the addition of value to decision making and is, therefore, considered to be a means of providing support for decision making. Of course, the complex nature of the shipping industry does inflict difficulties upon forecasting, Bruce (1999). To date there have not been any tools capable of analysing the complex nature of the shipping industry.
Artificial neural networks have become more commonly used in business in recent times due to the effective manner in which they manage complex, multiple-input situations and provide a single output. It is well known that neural networks are being used in many of the financial markets, including stocks, bonds, international currencies and commodities, Nakamura (2005). Essentially, a complex set of data can be modelled using artificial neural networks, which are ideal for establishing patterns in such systems. Freight rates are very important within shipping because they effectively determine the ship owners' income from trading their ships. The derivation of freight rates is very complicated, and determining what affects freight rates is a daunting task since so many variables can be cited as influential.
The oil tanker market is typical of the volatile shipping markets, and this paper seeks to explore the
possibility of accurately modelling its heart beat, freight rates, using a relatively innovative technique.
Artificial Neural Networks are harnessed with the intention of modelling VLCC freight rates, the
VLCC Orderbook & VLCC Newbuilding Prices, so as to ascertain the effectiveness of the technique
for predictive purposes within the domain of shipping. The results obtained are very promising and
provide an ideal platform for further study. The data was provided by Clarkson’s plc, without whose
assistance, this study would not have been possible.
2. Background & Methodology
Artificial Neural Networks (ANN) are, essentially, relatively crude electronic models based on the neural structure of the brain, Haykin (1994). They account for variability and uncertainty in the impact of the input parameters on which an outcome is dependent. This uncertain relationship is evident when analysing freight rates in the context of shipping and the variables which contribute to the determination of the rates. In fact there are many possible variables which could be associated with having an impact on freight rates, thus making artificial neural networks an ideal tool for their analysis. The process by which ANN learn relationships provides one of the principal reasons for utilising the model. Complex relationships are learned without the need to propose any mathematical models to correlate the various variables, Veelenturf (1995).
The Artificial Neural Network modelling approach offers a major advantage over other models through the analysis of each input parameter against the resulting model output, Li and Parsons (1996). Subsequent to this, a sensitivity analysis can be generated whereby the sensitivity of an input parameter in relation to the network output can be ascertained. The sensitivity analysis will illustrate which input parameters affect freight rates, the orderbook and newbuilding prices, and to what degree. Herein lies a similarity between ANN and statistical models, but the ANN approach is potentially much more rigorous and accurate, El Saba et al. (1999).
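A one-at-a-time sensitivity analysis of this kind can be sketched as follows; the central-difference formulation and the perturbation size on normalized inputs are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def sensitivity(model, x_ref, delta=0.05):
    """Perturb each input of a trained model around a reference
    exemplar, holding the others constant, and record the change in
    the output; `model` is any callable network."""
    x_ref = np.asarray(x_ref, dtype=float)
    out = {}
    for i in range(x_ref.size):
        up, dn = x_ref.copy(), x_ref.copy()
        up[i] += delta
        dn[i] -= delta
        out[i] = (model(up) - model(dn)) / (2 * delta)  # central difference
    return out  # larger magnitude: output more sensitive to input i

# Toy usage with a stand-in linear 'network':
print(sensitivity(lambda x: float(x @ np.array([1.0, 2.0, 3.0])),
                  [0.2, 0.5, 0.7]))
```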
2.2. Network Architecture
Irrespective of the type of ANN that is utilised, any such system will comprise a large number of processing elements (PE), nodes or neurons, which communicate with one another over a series of weighted connections. The basic architecture of an ANN encompasses a number of components arranged within a hierarchical structure.
Each processing element computes its output via a transfer function which is either linear or non-linear. ANN have many characteristics which give them advantages over traditional mathematical and statistical models for modelling complex situations. One of the most important is their ability to “learn” from experience and thus undergo a process of generalisation on the basis of prior learning.
3. VLCC Case Study
An analysis of the various types of neural network revealed that the most effective network was the MLP neural network. This was evident from the fact that the mean square error approached its threshold more quickly than for the other networks, implying greater speed and, therefore, greater efficiency. In order to gain an understanding of the effects of the various parameters on the network, each of the parameters was tested at various levels whilst holding all of the other model parameters constant. The various parameters are listed below:
It must be noted that using neural networks is not an exact science, due mainly to the nature of the learning process. Outputs will be different virtually every time; however, in general, it was found that the most effective combination of parameters within the MLP network was:
Table 1: Input variables for modelling freight rates in the VLCC market

− VLCC average earnings, 1990/91-built ($/day)
− VLCC average earnings, 1970s-built ($/day)
− Ras Tanura-Chiba VLCC average earnings, built 1990/91 ($/day)
− 300K DWT VLCC newbuilding prices, incl. change ($ million)
− VLCC D/H 300K 5-yr-old second-hand prices ($ million)
− VLCC fleet development (million DWT)
− VLCC demolition (DWT)
− Industrial production, Europe (% yr/yr)
− Industrial production, Japan (% yr/yr)
− Industrial production, OECD (% yr/yr)
− Crude oil imports, USA (M bpd)
− Crude oil imports, total (M bpd)
− Mid-East OPEC oil exports (M bpd)
− Arab Light crude oil price ($/bbl)
− Brent crude oil price ($/bbl)
− Foreign exchange average USD/JPY
− Foreign exchange average USD/KRW
− Foreign exchange average USD/GBP
The variables basically cover the present and future supply of VLCCs (considering newbuilding and scrapping rates), oil prices, industrial output as a determinant of oil demand, oil supply and the foreign exchange markets.
Figure 2 shows the output obtained for the freight rates between the Middle East and Japan and
demonstrates the accuracy of the neural network modelling of the freight rates.
Fig. 2: Graphical representation of the optimal neural network for modelling VLCC freight rates
This is a graphical representation of the actual VLCC freight rates versus the network output. It shows that the model performs well and captures the highly volatile freight rates with great accuracy. The average error between the network output and the actual VLCC freight rates is 0.18%, which is very impressive given the nature of this problem. There are 29 inputs to the model; that the neural network can assimilate such an accurate relationship demonstrates the power and potential of this technique for forecasting and decision making.
Figure 3 demonstrates that when freight rates are high, so too is OPEC crude oil production. This indicates that increasing oil production is a result of increasing demand for oil. In turn, the implication is that as oil demand increases, so too does the demand for sea transport; hence, freight rates rise.
[Fig. 3: Network output(s), Ras Tanura Chiba VLCC 260K Worldscale Rates versus OPEC crude oil production]
The network produced remarkable results given the number of inputs to the network. The error between
the actual VLCC freight rates and those derived by the model is impressively low and offers genuine
promise that ANN are a suitable approach and make a case for further study.
4. Concluding Remarks
The authors consider that artificial neural networks do provide a credible and valuable tool which can be utilised in modelling the shipping industry. Further study will be required in order to ascertain their effectiveness as a predictive tool for further sectors of the shipping industry. The authors believe ANN will prove to be, as a minimum, no less effective than current methodology and, at best, a powerful industry tool. One of the main advantages of using ANN has been the identification of key factors by the model, which notifies the user which of the inputs most affect the model output. As an aid to decision making, ANN may become an important tool for those involved with the shipping industry.
When decision making is being considered by those in industry, it is wholly necessary to understand the forces which have influence over the industry. Individuals exert influence upon the shipping industry by virtue of the fact that their decisions shape and affect its fabric. Therefore, consideration must be given to the behaviour of individuals in a psychological, cognitive sense as well as an economic one. In the long term, behavioural economics will become a more prominent feature in understanding the shipping industry, and more complex models will therefore be required in attempts to represent such behaviour.
The role of speculative investment in the shipping industry serves as a sound example of a complex
causal relationship between the shipping industry and individuals. What are the reasons why an
individual decides to invest in the shipping industry? What are the factors involved in the decision
making process? Historically investment in the shipping industry has been seen to be a risky business
and one which may involve long periods of low returns on investment. Generally, however,
expectations play a pivotal role in investment decisions. Expectations of price changes, trading
conditions, political issues and many other disturbances can influence investment decisions.
However, returns can be of staggering magnitude and this is one of the reasons for investment.
Nevertheless, average returns in the long run are similar to those derived via investment on the stock market or through other financial vehicles, with the addition of very turbulent times to wrestle with. It may be that individual involvement in the industry is of a romantic nature and that association is sentimentally motivated.
The development of economic models which are able to draw attention to potential economic crises
and the circumstances which would act as a vehicle for such economic turbulence provides a
challenging long term goal. An understanding of the causal reasons and conditions which invoke a
financial crisis is absolutely necessary for stable decision making. The financial systems which have
been developed and implemented are complex, and it is important to be able to fully explain them and
their consequences.
The authors believe that a more fundamental understanding and comprehension of the systems and
structures around which the industry operates and performs is necessary in order that forecasts can be
developed with more credibility. The role of economics is an integral one which must always be
accounted for fully. The establishment of a set of possible scenarios is vital in order that the domain
of possibility is appreciated and therefore accounted for. The behaviour of individuals with roles as
actors within the shipping industry’s framework of economic activity and possibility is an area in
need of exploration. Individual behaviour is not as simple as many econometric economic models
assume. Decisions are compound cognitive ones which are comprised of multifaceted inputs, not just
yes or no. The role of behavioural economics is one which has a place in economics in general,
closely linked with psychology. It seeks to model individual behaviour more realistically than classical axiomatic preference revelation. The shipping industry has been served relatively well by manual forecasting techniques in the past, but forecasts are increasingly inaccurate. The use of
artificial neural networks has been demonstrated in the study reported in this paper to be an
impressive modelling tool. This gives the authors considerable optimism regarding their future use as
a predictive tool.
Acknowledgements
The data used for the case study was provided by Clarkson’s plc, without whose assistance, this study
would not have been possible. The authors are grateful to Simon Chattrabhuti, Galbraith’s London,
who provided invaluable advice in respect of the shipping markets.
References
Differentiating product model requirements for ship production and
product lifecycle maintenance (PLM)
Rolf Oetter, ShipConstructor Software Inc., Victoria, B.C., Canada, [email protected]
Patrick Cahill, ShipConstructor Software USA, Inc., Saraland, AL, USA
[email protected]
Abstract
Ship design has steadily evolved from development of two dimensional drawings representing the
vessel from a series of different perspectives to a full three dimensional computer generated product
data model, which includes not only geometry representations, but also a full complement of pertinent
data compiled in the Product Data Model database. Because of the capabilities of product modeling
software it is now possible to generate an exquisite level of detail along with an overwhelming
amount of data in the product model. However, most ship designers and shipbuilders are only
concerned with modeling enough information to build and classify the vessel. On the other hand, most
ship owners and operators can achieve tremendous benefits in lifecycle operations cost reduction by
having extremely detailed product models coupled with third party maintenance, training and
logistics support applications.
The issue remains unsolved as to where modeling for production stops and modeling for PLM starts,
and whether it is possible to do both in the same Product Data Model in a cost effective manner. This
paper addresses the requirements for production design compared to the requirements for lifecycle
design, and proposes a series of design guidelines to use based on the long term intentions for the
Product Data Model.
1. Introduction
Ship and boat design has steadily moved into the computer age, and the relatively recent availability
of affordable computer workstations coupled with powerful modeling tools has generated widespread
interest in using computer aided design tools for all types of ship and boat design and construction.
Three dimensional modeling allows designers to create precise virtual replicas of the finished product,
down to levels of detail that mimic reality. However, extreme detail in design is not always necessary
to support production of a vessel, and creating the details lengthens design time and increases
engineering costs. On the other hand, levels of detail not required for production may be desirable, or
even necessary, for lifecycle uses of the model, including visualization, training, operations and
maintenance. In addition, details not necessary for production may be necessary to support
engineering calculations and approval requirements. It is incumbent on design and engineering
management to define the scope of modeling prior to the start of design in order to develop reasonably
accurate engineering budgets and schedules. This paper presents issues for consideration in
determining design work scope.
The current state of computer aided ship and boat design technology is the three dimensional product
data model, or PDM. There are a number of software systems that support true 3-D product modeling
on PC workstations, including, but not limited to ShipConstructor, Tribon, Catia, Intergraph’s ISDP
and Intelliship, and FORAN. Each of these systems combines computer generated graphics with
attributes stored in a database. The systems are all modular to some degree, with the software modules
corresponding to design discipline areas, such as hull design, structural detailing, equipment design
and outfit design, including piping, HVAC, electrical and hull outfitting. Although the model is
integrated, the geometry model and the data model serve different purposes, and the scope of the
modeling effort includes both graphic design and data entry.
Fig. 1: Product Model Data and Geometry
Fig. 2: Product Model Information Structure
Most of the ship design software systems on the market today support library or macro driven data
generation, along with automation of reports for data output from the PDM. Similar to the geometry
model, the amount of information contained in the data model is determined by the ultimate use of the
information, which determines the level of effort required to populate the data model.
Understanding that the PDM is a two-part computer model is the first step towards scoping the modeling effort when starting a project. The designer or engineer has two tasks: (1) developing the geometry to describe and visualize the vessel, and (2) populating the database to define and quantify the physical properties of the vessel. The two tasks can often overlap when using a state of technology product modeling system which automatically populates the database while defining the geometry.
4. Product Modeling for Construction
The most basic, and primary need for a product model is to support construction of the vessel. At the
most rudimentary level this means producing enough information, usually in the form of dimensioned
drawings with a basic bill of material, to provide production crafts with the necessary information to
fabricate and assemble the components of the vessel. State of technology provides for developing
geometry and associated information to support engineering analysis, advanced manufacturing
including CNC cutting and robotic welding, direct planning support through a build strategy interface
and support for material definition, acquisition and distribution.
With a 3-D PDM the traditional drawings are an offshoot from the greater content of the model; in most systems they are produced by defining the visualization plane and then generating the graphics for the components in that plane. The content of the PDM is directly related to the capabilities and the
production techniques of the end user of the model. A shipyard like Odense that has extensive robotic
welding capabilities needs detailed structural geometry, broken down to assembly and sub assembly
levels, with edges and tool path defined for the production systems. A shipyard that is building push
boats on the Gulf of Mexico that subcontracts cutting work and assembles all parts on the ways needs
the ability to produce NC part definition (nest tapes) and may not even need detailed assembly
drawings to put the vessel together. However, it is the contention of the authors that transitioning from
production of NC part data to the development of detailed assembly drawings that are automatically
generated by the product modeling software is a natural step that results in tremendous production
savings while costing a small amount of additional engineering effort.
Fig. 4: ShipConstructor Generated Assembly Drawing
Modeling for production goes beyond structure. Modeling of piping systems results in development of
spool drawings which are used to fabricate pipe spools for pre-outfitting, and can be used to develop
subsystem assemblies that can be shop tested prior to installation in the vessel. The same applies to
HVAC modeling, and development of ductwork spools. Equipment modeling defines the precise
location of structural and distribution system connection points, enabling the shipbuilder to define
secondary and foundation structures, as well as the precise locations of piping, electrical and
ventilation systems that connect to various components.
If the naval architects or engineers plan on using simple engineering models to perform primarily
hand calculations, there is little need to have detailed models to export system designs to more
capable CAE tools. If the more powerful tools are to be used it may be possible to significantly reduce
the engineering effort by increasing the level of design effort and creating the analysis models while
developing the PDM.
Production engineering documents usually consist of a series of drawings that detail the fabrication and assembly of the vessel's component parts, with a bill of materials and pertinent production information such as weld types, coatings and special instructions. Drawings are created for structural assembly, pipe spool and HVAC ductwork fabrication and installation, electrical cableways, equipment locations and foundations, and a whole array of other purposes. Production drawings were first created using hand drafting, which slowly evolved into 2-D computer aided drafting, then 3-D computer aided design, and eventually to the current state of technology, where a 3-D product data model is created and the production drawings are extracted from the model.
The more detail that is developed during the modeling phase of design, the less has to be added to the
drawings extracted from the PDM. If the structural model is not sufficiently detailed it is likely that
additional drafting man hours will be involved in adding the details to the production output of the
system. The same is true for each of the other system models.
A further consideration in modeling for production support is the need to develop files to support
CNC equipment. The most common application is the development of nest drawings and “nest tapes”
or NC cutting files to drive plasma, oxy-fuel, laser or water jet cutting machines. Typical control files
include information on external profiles, internal contours, reference line marks and text marking of
piece numbers and, in some cases, other production information. Some control files include
intelligence to support bevel cutting as well as square edge cutting.
Fig. 6: Structural Plate Nest Drawing
The information in the control files is derived from the nested parts, which are obtained from the
lofted parts, which are extracted from the product model. The more information that is included in the
product model the less information needs to be manually added to the nest drawings prior to
generating the NC codes.
As shipyards slowly progress towards robotics the need for a comprehensive 3-D product model
becomes even greater. Robotic welding of structural assemblies requires a complete 3-D model from
which tool paths can be inferred. The same is true for robotic pipe welding or any other multi-axis
robotically controlled process, which can include burning, welding, painting or even lifting and
handling. The model that is fed to the robotic controller needs to have sufficient detail of all
surrounding structures and components (even if they are not part of the robotic weldments) in order to
create a robotic control code that eliminates collisions and identifies unreachable parts of the
structure. Equally important is that the model must be representative of the assembly at the time of
welding. The most effective way to do this is using a build strategy concept, where objects are linked
into the model in the same sequence that they are installed in the production environment.
The production crews need complete bills of material in addition to drawings. The most efficient way
to present the bill of material is also in a series of staged views, where the material is listed that
applies to only the production activities shown on a particular drawing sheet. This is where the
integrated PDM becomes critical. The graphical representation is linked to the data in the database,
which includes the piece-part descriptions, dimensions, stock identifiers, coating systems, end
preparations and other pertinent information. The more information that is input to the data model, the
more comprehensive the information provided to the production crews.
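As a sketch of how such a staged bill of material could be pulled from the PDM database, consider the following query; the schema (table and column names) is purely illustrative and not the actual layout of any of the systems named above.

```python
import sqlite3

def stage_bom(db_path, stage):
    """Return the bill of material restricted to one production stage,
    i.e. only the parts shown on that stage's assembly drawing.
    Hypothetical schema: parts(piece_no, descr, dims, stock_id,
    coating, end_prep, assembly_stage)."""
    with sqlite3.connect(db_path) as con:
        return con.execute(
            "SELECT piece_no, descr, dims, stock_id, coating, end_prep "
            "FROM parts WHERE assembly_stage = ? ORDER BY piece_no",
            (stage,)).fetchall()
```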
It is obvious that a detailed PDM is necessary to automate the development of detailed production schedules, but the increased level of engineering work has to be balanced against the work saved in production planning and the increased speed with which a plan can be established. Increased speed results in decreased cycle times, which in turn results in overall productivity improvements.
The PDM is a comprehensive repository of all the material in the vessel. The PDM can be used to populate
an MRP system with the Bill of Materials (BOM) for the entire vessel. Using a build strategy tool
breaks down the BOM into the logical production stages, which have an association to start and finish
dates when linked to a Production Planning system. The production need dates can be used to back-
schedule the delivery, order and Request for Quote dates needed by the materials procurement
department.
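A minimal sketch of the back-scheduling step: starting from a production need date, the procurement milestones are computed by subtracting lead times. The function name and all lead-time values below are assumptions for illustration, not figures from the paper.

```python
from datetime import date, timedelta

def back_schedule(need_date: date, delivery_lead: int, order_lead: int, rfq_lead: int):
    """Back-schedule procurement milestones from a production need date.
    All lead times (in days) are illustrative assumptions."""
    delivery = need_date - timedelta(days=delivery_lead)  # material on site before the stage
    order = delivery - timedelta(days=order_lead)         # purchase order precedes delivery
    rfq = order - timedelta(days=rfq_lead)                # request for quote precedes the order
    return {"rfq": rfq, "order": order, "delivery": delivery}

# A block assembly stage starting 2006-09-01, with assumed lead times:
print(back_schedule(date(2006, 9, 1), delivery_lead=14, order_lead=60, rfq_lead=30))
```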
Direct links to third-party applications such as the U.S. Navy funded Common Parts Catalog system provide enterprise-wide visibility of material specifications, availability, cross references with other shipyards' excess inventory systems and other useful procurement-related information. Once again,
the level of effort that is put into detailed development of the PDM and the associated Bill of
Materials can be offset by the reduction in effort required for materials procurement activities.
Fig. 10: Product Model Data Reporting
5. Lifecycle Support
The primary purpose of this paper is to provide some insight to designers and engineers as to how to
differentiate between modeling for vessel construction and modeling for lifecycle support. The first
part of the paper demonstrated how development of a more detailed PDM can result in savings in
other areas of ship construction by linking supporting systems into the PDM and re-using the data
rather than creating it in separate and disparate supporting systems.
The PLM, however, is used at a higher management level than the baseline PDM. The PLM can be
used to provide configuration management over the life of a ship or class of ships. It can be used to
develop photorealistic renderings of all or portions of the vessel, with links to the PDM database for
data queries to support both operations and maintenance activities. The PLM can also be used as a
training tool in ways that have only begun to be explored, primarily at the system and subsystem
level, but with extensibility to whole ship operations, environmental simulations, location specific
applications and an unlimited number of other areas.
5.2 Configuration Management
Configuration management using PDMs and PLMs, particularly across a multiple hull class of
vessels, is deserving of its own paper. However, it needs to be addressed at a high level here, as it is a
very important function of the PLM which is not well understood or implemented. Configuration
management falls into one of two categories: single-hull change management or multiple-hull configuration management. The PDM and PLM have unique and differing roles depending on the case.
Once the vessel is delivered the owners will likely incorporate a variety of changes to the vessel
during its lifetime, ranging from minor subsystem or outfitting changes to replacement of major
equipment components. If the owner has the foresight to pay for the development of a detailed PDM
and include the PDM as a contractual delivery item then he can save significant time and money in
ship checks and alteration design by (1) maintaining the PDM as a PLM, (2) developing changes in
the PLM to ensure seamless compatibility with the as-built design, and (3) using the PLM to update
the logistical database for the vessel with new information once the changes are incorporated.
It should be apparent that the PDM and PLM are important deliverables with a new vessel, but few
owners and operators have recognized the utility of the model due to being accustomed to receiving
conventional drawings which do little more than accumulate dust once delivered. Obviously, the more detail that is incorporated into the PDM, the more functional the PLM will be for future change
management of the vessel. The vessel owner should consider the lifecycle costs of ship checks and
change design when negotiating the baseline modeling effort.
In a conventional PDM the model can be updated, but the integrity of the model is quickly lost as
numerous split applicability changes are incorporated. It becomes virtually impossible to maintain a
single model for multiple configurations of the same basic hull. Maintaining separate models for each
hull can be cost prohibitive with conventional PDM/PLM tools. Each change has to be created in each
model for each hull. This can result in labour hour multipliers into the hundreds when incorporating a
dozen or more changes into 10 or more hulls.
DDROM, or Data Driven Relational Object Modeling, is a new feature in ShipConstructor 2006 that
could completely change PLM based configuration management. With DDROM, the geometry and
the attributes are stored in the database, rather than having the geometry stored as a linked drawing
file in some native CAD format. Split applicability changes can be performed in a single PLM, and then updated as a database change in the other affected hull PLMs. Prior to updating, the boundaries
of the change have to be identified, related to database entries, compared for exactness across affected
hulls, and then updated where applicable. With this capability, change management in multiple hulls
of a single class becomes effective and consistent, while maintaining a true and correct PLM of each
hull in the class.
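The following is a conceptual sketch only, not ShipConstructor's actual DDROM implementation: object attributes live in database rows keyed by hull, and a change made in one hull is propagated to every hull whose matching object is still identical to the pre-change source, a simple stand-in for the boundary comparison described above.

```python
# Conceptual sketch only -- not the real DDROM mechanism.
objects = [
    {"hull": h, "obj_id": "PIPE-0457", "attrs": {"dn": 80}}
    for h in ("H001", "H002", "H003")
]

def propagate_change(objects, source_hull, obj_id, new_attrs):
    source = next(o for o in objects
                  if o["hull"] == source_hull and o["obj_id"] == obj_id)
    reference = dict(source["attrs"])      # snapshot before any row is mutated
    updated = []
    for o in objects:
        if o["obj_id"] == obj_id and o["attrs"] == reference:
            o["attrs"] = dict(new_attrs)   # one database update per affected hull
            updated.append(o["hull"])
    return updated

print(propagate_change(objects, "H001", "PIPE-0457", {"dn": 100}))  # all three hulls
```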
In order to make this possible the PLM must be detailed to the level of expected changes, which
means that the baseline PDM must be well developed, incorporating all systems and components as a
true virtual representation of the vessel.
The operations, maintenance and training aspects of the PLM are value added aspects of the
development of the model that should be accounted for when estimating the cost of modeling and
determining the total scope of the modeling effort. It is imperative that the engineers or designers
provide the owner/operator with an accurate assessment of the value of the PLM and the cost of the
extra effort required to develop an effective PLM.
5.3.1 Operations
Shipboard operations require a complex interaction of multiple systems and human interfaces. Most
modern systems have some degree of computer or electronic control, and the interfaces are generally
computer screens with a GUI, or Graphical User Interface. Many of these systems require additional
modeling or database population of the ship’s components as input. Linking a well developed PLM to
the systems can save substantial time and cost in starting up the operations systems. The modeling can be performed as part of the base design to support fabrication, installation and testing, or it can be done after the fact at additional cost if the system design is not performed in the baseline model.
The PLM can also be used to support operational simulations, such as cargo loading and off-loading,
container stacking and nesting, rolling stock parking, stability analysis including automated weight,
trim and heel calculations, port entry and docking simulation and analysis, sea state effects simulation,
and a whole host of other operational scenarios.
5.3.2 Maintenance
The PLM can be especially valuable when integrated into a well designed maintenance program. The
model holds a graphical representation and attributes data of every component on the ship that is
modelled. Each of these components can be linked to a multitude of other databases, including
maintenance schedules, maintenance records, maintenance manuals, vendor websites, warehouse
records for spare or replacement parts, purchasing systems to place orders for parts, real-time
monitoring systems for trend analysis, as well as other databases or systems.
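A minimal sketch of such linking, with placeholder identifiers and URI schemes that do not refer to any real system: each modelled component carries a set of typed links into external maintenance-related databases.

```python
# Illustrative linking of one modelled component to external maintenance data;
# every identifier and URI scheme below is a placeholder, not a real system.
component = {
    "tag": "SW-PUMP-01",
    "links": {
        "maintenance_schedule": "cmms://schedules/SW-PUMP-01",
        "maintenance_records": "cmms://history/SW-PUMP-01",
        "manual": "docs://manuals/pump_model_x.pdf",
        "vendor": "https://example.com/pumps/model-x",
        "spares": "warehouse://stock?part=SW-PUMP-01",
    },
}

def resolve(component, kind):
    """Follow one of the component's typed data links (here a simple lookup)."""
    return component["links"].get(kind)

print(resolve(component, "maintenance_records"))
```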
The PLM can be uploaded to tablet PCs, handhelds or other devices for portable fast access and data
entry. Developments in modeling and visualization technology allow large and complex models to be
incrementally uploaded to such systems, providing paperless access to maintenance data and
information.
The PLM is especially handy when incorporating changes on an existing vessel. The PLM allows the
existing vessel configuration to be compared to the as-designed (and, if done properly, as built)
configuration. Any differences can be incorporated into the model, and the rip-out and installation
plans can be developed from the model, providing continuity to the logistics databases and an
accurate model for the remaining life of the vessel. This could be especially valuable to an
owner/operator on sale of the vessel, since the PLM is a value added asset.
The amount of data and the quality of data links that can be accessed for maintenance and logistics is
entirely dependent on the quality of the model. If the model is not created with sufficient detail
(including every significant piece of equipment), the data links for maintenance are inadequate.
The following series of pictures demonstrates a small portion of what can be done by exporting a
model from ShipConstructor to NavisWorks JetStream, adding Smart Tags, and hyperlinking the
smart tags to other data sets or files.
Fig. 11: Product Model Exported to NavisWorks with Selected System View
Fig. 13: Smart Tag Hyperlinks
Fig. 15: Hyperlink to Vendor Web Page
5.4 Training
Microsoft Flight Simulator is an excellent example of a software tool that allows a user to enter a
virtual environment to learn how to operate a complex system such as an aircraft. Simulators using
computer generated environments are becoming increasingly more common as hardware and software
become more affordable. Combat troops train in photorealistic rendered environments that precisely
model real combat zones, particularly in urban environments, and learn to deal with unexpected
actions that would result in their deaths in the real world. Ship operators and crew face similar
technically complex and physically hostile environments, and can benefit immensely from training in
realistic virtual environments.
The PDM is usually completed some months (or longer) before the vessel is delivered, and, as a PLM,
can be used to provide crew training. The training can vary from complex simulations to simple
visualizations of system components and distributed system locations. A detailed PLM will show
every detail on a console, gauge or mechanical component, while a less detailed, production oriented
model will not.
An example of using the PLM to visualize a system location is shown in the figure above, where the
PLM of an OSV is exported from ShipConstructor to NavisWorks, and all components in a particular
electrical system (identified by a system code) are visualized in red.
6. Conclusions
The scope of design and engineering of a vessel will vary depending on the intent and ultimate use of
the design. The Product Lifecycle Model is a tool that has only recently become available to ship
owners and operators as a result of advances in computer software and hardware technology. The
PLM can be an affordable and valuable asset to the owner/operator if the potential uses are
understood and it is budgeted into the funding for the vessel when contracting the design and
engineering effort. If the uses of the PLM are specified in the vessel contract as a functional
specification (similar to the actual ship specifications) the engineering scope can be accurately defined
and estimated prior to starting the project. Developing a PLM as a post-delivery item adds time and
costs similar to negotiating changes after start of construction.
Acknowledgements
The authors would like to thank Bender Shipbuilding and Repair Co., Inc. and Robert Allan, Ltd. for
the project file images used in the paper.
The Competence Monitor as a Management Information System –
Controlling the Strategic Development of Competencies
in the Maritime Industry
Christian Nedeß, Hamburg University of Technology, Hamburg/Germany, [email protected]
Axel Friedewald, Hamburg University of Technology, Hamburg/Germany, [email protected]
Mathias Kurzewitz, Hamburg University of Technology, Hamburg/Germany, [email protected]
Abstract
European shipyards try to strengthen their position in the international shipbuilding market by focusing on technological leadership through process and product innovation based on highly skilled employees. This requires a systematic approach to maintain and further expand the technical competencies of a shipyard's workforce. A concept to support a shipyard's management in controlling this process has been developed and realised as an information system. Technical competencies that are necessary to perform a certain activity in the value-added chain have to be derived based on an integrated multidimensional life phase model of a vessel. This model integrates process, product, knowledge, resource and organisational data into one representation. Furthermore, competence profiles which describe the ability of a certain team to perform a given task have been developed by assessing the team's competencies. In this way different competencies, like the capability to perform an evacuation simulation, which is necessary to produce the general arrangement for a new vessel design, can be evaluated. For this, an indicator-based approach using output quality (e.g. required space per passenger), process quality (e.g. number of iteration loops) etc. is applied. Assessment indicators and the life phase model have been prototypically implemented in a relational database. This database can be queried through the Competence Monitor for analyses of teams, departments or processes on different aggregation levels of competencies. In order to acquire the necessary data, several strategies like manual evaluation or automatic data transfer have been implemented in a workflow application which manages the competence controlling process.
1. Introduction
Strong international competition forces European shipyards on the one hand to provide their
customers with innovative vessel concepts and on the other hand to continuously improve the
processes of the value-added chain. Both product and process innovation strongly rely on the
competencies of the shipyards’ employees.
Although technical skills and knowledge play a very important role in the overall development of a shipyard, these resources are to date not controlled as systematically as other issues such as the assembly progress of each ship or the financial situation of the company. Nevertheless, sufficient and
target-oriented actions for a sustainable competence development within a shipyard can only be taken
if an effective controlling process is implemented. Competencies and strategic goals have to be
synchronized and the significant data has to be generated to inform management and the heads of the
departments about the current demands and deficits.
In the following, a concept for controlling a shipyard's competence situation is introduced and its prototypical application is presented. Starting from an analysis of the current situation in the maritime industry, characteristics of competence are elaborated. Existing approaches for competence assessment and reporting processes are discussed, and from this analysis requirements for the controlling concept are derived. The concept that is introduced consists of the process-oriented identification of competencies, their assessment and reporting strategies. Finally, the Competence Monitor and its integration into a workflow application that controls the reporting process are presented as a prototypical management information system.
2. Competence Management in Shipbuilding
In this chapter the potentials of strategic competence controlling for producing companies are
analysed based on a study of the current situation in the shipbuilding industry and a discussion of
existing approaches for managing competencies and reporting processes.
2.2. Characteristics of Knowledge and Competence
Developing technical competence in line with a company’s strategic aim is a crucial element for
designing and producing competitive products. The efficient management of the required
competencies is an important topic in research. Approaches to this issue have been developed from a
management theory perspective – the enhancement of the resource-based view to the competence-
based view – as well as from a knowledge management perspective. Both perspectives stress the
interrelation of knowledge and competence.
The term “competence” is defined in several ways and in different levels of detail. Most of these
definitions find a close relation between competence and knowledge. Knowledge is seen as e.g. “a
fluid mix of framed experience, values, contextual information, and expert insight that provides a
framework for evaluating and incorporating new experiences and information”, Davenport, Prusak
(1998). In this paper knowledge is understood in a more solution-oriented manner: it enables a person to solve problems without any further informational instructions, Koch (2004). North defines competence as an enhancement of knowledge through ability and action, North (2002). Schiller adds legitimation as a further resource attribute; bundled, these attributes create competitive advantages, Schiller (2000). These approaches of adding up resource attributes to competence support the relationship of knowledge and competence. Nevertheless, one can neither specifically conclude the nature of this relationship nor derive a way in which competencies can be utilized in competition. Sanchez provides a more strategic emphasis, explaining competence as "the ability of an organization to sustain coordinated deployments of assets and capabilities in ways that help the organization achieve its goals", Sanchez (2001). Freiling adds the ability to combine available input goods in processes that
are oriented on market requirements, Freiling (2001). These definitions interpret competence and
knowledge as resources which serve as input to the value-added process of a company. Yet it is not
clarified how competence can be incorporated into the value-added process. Other authors introduce a person-based focus, in which the goal-oriented application of knowledge, capability and other resources to operational activities leads to competence. The personal reference can originate from persons or organizational units which are owners of a certain competence, Thurnes (2003). The assignment of competence to a single person is also stressed by Metzenthin, Metzenthin (2002). He defines competence as all knowledge-based capabilities bound to a single person who, under the precondition of motivation and organizational legitimation, is able to fulfil a specific task for the value creation within a company. The analysed approaches to define and characterize competence can be summed up in the following attributes of competence:
Knowledge-based
Bound to a person
Depending on the personal capabilities of the competence owner
Necessary to perform tasks within the value-creation process
Basis for competitiveness and realization of strategic goals of a company
Bound to motivation and legitimation
Therefore, in the context of developing competencies in line with a company's strategic goals, competence is understood as the knowledge-based capability of one or several persons which represents the basis for performing a certain task and emerges when applied within a company's value-added process. Aspects of motivation and legitimation will not be considered in the following, as the aim is to specify the interrelations of competence and not to create a system for incentives or organizational legitimation.
2.3. Competence Controlling
The more strategic approaches of managing competencies have been derived from the resource-based
view of a company. Based on this management theory competencies have been evaluated regarding
their characteristics in order to create competitive advantages. Accordingly, a core competence is seen
as a bundle of competencies that provides access to a variety of markets, increases perceived customer benefits and is difficult for competitors to imitate, Prahalad, Hamel (1990). A number of
different ways to assess core competencies have been introduced and criteria have been defined for
industries with series production, e.g. Kraus (2005) and Ittner (2004). Although these approaches can
improve the process of developing product and market strategies the following problems for assessing
the performance of design groups in a shipyard can be brought up:
Core competencies as bundle of many sub-competencies are too general to identify precise
measures for development actions.
Only focusing on core competencies neglects important other key competencies which are
crucial for ship design, fabrication and assembly.
The concept does not consider the detailing into different competencies on the employee
level.
Approaches from human resource management concentrate on the measurement of competencies of
employees on a very detailed level. However, these approaches focus mainly on social competencies
and not on technical competencies. There are few concepts dealing with methodological and
professional competencies available, Rosenstiel, Erpenbeck (2003). However, these have been developed for large groups of employees with comparable qualification and working requirements, such as several hundred call centre agents or the clerks in a bank. Therefore these concepts cannot be applied to the shipbuilding industry, where often only very few designers are scheduled to be able to work on the same task.
acquisition including the integration of different systems, the steering of the workflow as well as the
integration of the monitoring tool itself possesses great potentials for competence controlling. Figure
2 shows these potentials for a workflow application.
Fig. 3: Levels of detail in the life phase model
Figure 3 shows the characterization of each hierarchy level. It is obvious that the life phase level is too general to derive competencies; the result would be very broad and inaccurate, like "design competence". The focus on the detailed process level can be too precise and, in combination with that, very time-consuming, since each process step has to be analysed. Therefore the level of aggregated functions promises to be an efficient and accurate starting point. Nevertheless, focusing only on processes might lead to neglecting important competencies which a company needs but which cannot easily be identified from the process perspective. Further dimensions for deriving competencies have to be examined in order to enlarge the perspective.
Dimensions of Analysis
Following the approach of integrated modelling of an organization, the perspectives that have already been utilized are product data and resources (including knowledge). To achieve a more sophisticated
approach these shall be separated into input and output product data as well as into resources like
application systems, machines etc. and knowledge as a resource. The following figure shows the
different dimensions of analysis including the activities from the process perspective and an example
for each perspective.
Fig. 4: Dimensions of analysis for competence identification
Quantifying a Competence
In order to provide the management or the head of a department with sufficient information about the competence situation of a team, competencies have to be quantified. A first approach to measuring a competence uses a simple input-output relation between the incoming product description and the quality of the produced result. The input for a task is measured with indicators such as the quality and precision of the input data, the output with indicators such as the quality of the result and the time spent on the task. By relating these sets of indicators for input and output of a conducted task, a performance indicator can be found. However, this indicator does not regard the quality of the process execution itself, such as the degree of documentation and traceability of the solution. Additionally, parameters like the availability of a knowledge database to learn from past experiences cannot be integrated into such a concept. Meyer and Mattmueller have developed a three-dimensional approach to assess the quality of service processes. By measuring quality in a more differentiated way than just input and output, more precise information can be regarded. Therefore the approach of 3D-service quality from service engineering has been adopted to measure the competence to perform a task, Meyer, Mattmueller
(1987). The three dimensions are process quality, potential quality and output quality. In combination
with the attributes of competence the following dimensions can be identified:
Potential:
o knowledge based: experience in task, education, systems developed, training, fairs
and conferences visited
o personal capabilities, like the ability to solve problems etc., are regarded in conventional employee evaluations, which are already available and have been extensively analysed in both theory and practice
o Availability of knowledge, quality of input data
o …
Process:
o Documentation of solution
o Number of iteration loops
o Design method used
o …
Output:
o Space needed per passenger in 2nd class
o Tons steel per lane meter
o Fabrication hours per lane meter
o Modifications due to design failures
o Number of claims
o Cost per m2 public space
o …
The presented indicators have to be weighted in order to be aggregated for each dimension. Therefore
the indicators are compared in a matrix to determine which indicators are more important than others.
The following figure 5 shows the computation of weights as well as the way each indicator is
evaluated. The evaluation is based on a university grade scale which is very easy to understand for the
users of the system. For each grade a value or occurrence has to be defined to allow an impartial
evaluation for every single indicator.
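The weighting and grading step can be sketched as follows, with indicator names, comparison entries and grades all invented for illustration: pairwise comparisons are counted row-wise and normalised into weights, and the graded indicator values (on a 1-to-5 university grade scale, 1 being best) are aggregated into one weighted value per dimension.

```python
# Sketch of the weighting and grading step with invented indicators.
indicators = ["space_per_passenger", "steel_per_lanemeter", "design_failures"]
comparison = [
    [0, 1, 1],   # space_per_passenger rated more important than both others
    [0, 0, 1],   # steel_per_lanemeter rated more important than design_failures
    [0, 0, 0],
]
row_sums = [sum(row) for row in comparison]
total = sum(row_sums) or 1
weights = [s / total for s in row_sums]   # simplest counting scheme; practical
                                          # schemes usually avoid a zero weight

grades = {"space_per_passenger": 2, "steel_per_lanemeter": 1, "design_failures": 3}
output_score = sum(w * grades[n] for w, n in zip(weights, indicators))
print(round(output_score, 2))   # weighted grade for one dimension, e.g. "output"
```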
As a result of the introduced method a competence such as the cable plan design can be quantified in a
three-dimensional way (process, output and potential) based on a weighted indicator approach.
Furthermore, the same indicators can be used to compute a planned value for each dimension based on the yard's own targets or, if the data is available, on an industry average or even a best-practice example. For reporting purposes the distance to the planned value has to be evaluated. Nevertheless, deriving
planned values strongly depends on the strategic targets of a shipyard which will be addressed in the
following.
Strategic Relevance
The strategic relevance of a competence can be deduced from the attributes that describe a
competence on a more global level within a company's value creation system. Using the life phase model to identify a competence already establishes its necessity within the value-added process.
Furthermore the character regarding competitiveness and realization of strategic goals has to be
analysed. Companies define strategies for different areas like product strategies, personnel strategies,
procurement strategies etc. Usually strategic goals are defined and detailed for the affected
departments and business units. To determine the relevance to fulfil a strategic goal a matrix on a
department or team level is used to assign a competence to different strategic goals. The more
strategic goals are affected by a single competence the more emphasis should be put on this one. In
order to distinguish the strategic relevance three categories from low to high significance are used for
a ranking.
Although the concept of core competencies is too general to be used for detailed competence
assessment their characteristics have to be regarded when determining if a competence is important
for a shipyard or not. Especially the capability to create a sustainable competitive advantage needs to
be part of the assessment. Otherwise the yard could possibly lose critical competencies which in the
future might be of essential benefit for being successful in competition. Therefore the relative strength
of a competence compared to other shipyards has to be evaluated. An analysis on a very detailed level
of a yard’s own and its competitor’s performance is not feasible due to insufficient information. Thus,
competencies have to be bundled to fields which can be subject to a comparison with others.
Profile Structure
The introduced methods to quantify and determine the strategic relevance of a competence are the
basis to provide management and supervisors with necessary information about the competence
situation within teams, departments or the company as a whole. Therefore competence profiles have
to be developed which convey the significant information very quickly and do
not distort the situation by an inappropriate aggregation of the indicators.
Profiles must be defined for competence areas such as the design of accommodation and cargo decks
for a ropax vessel. The competencies that are necessary to perform a task within this area are listed and
described by the computed weighted values for output, potential and process. For an improved
overview of the current competence situation a traffic light system is used as described in figure 7.
Furthermore the strategic relevance is listed in the profile. In this way an overview with an adequate
level of detail is presented. It can be detailed by examining the computation of each value for the
dimensions output, potential and process.
3.3.2. Reporting
Reporting Structure
The reporting about the competence situation of a shipyard relies on a hierarchical structuring of the
relevant information as well as the possibility of drilling down critical information to the single
indicators that do not match the plan. In three steps different hierarchy levels of competence are
aggregated to a management view on the current situation of the yard:
Single competencies like the development of the cargo deck layout have to be evaluated based on the
three dimensions potential, process and output. As output and potential are seen as more important
factors than process, more significance is set on a negative development of these indicators. If the
performance of a task is not very well documented this is not as relevant as an insufficient outcome or
a lack of specific knowledge due to the loss of experienced employees. In figure 8 these aggregation
rules are defined in a portfolio where the colours green, orange and red represent the aggregated value
for each combination of the values of the indicators for process, output and potential. In the example
potential and output are green, process is orange. The three dimensions can be summed up to an
overall green for the competence “design cargo deck layout”.
These single competencies are then aggregated to competence areas. Their evaluation is depending
on both the single occurrence as well as their strategic relevance. The aggregation is conducted in two
steps. First each single competence is evaluated on the portfolio. This classification is based on a
competence’ strategic relevance and its value. In the second step the worst combination of strategic
relevance and competence value is aggregated at the next level. In the example in figure 8 the
competence area “design accommodation and cargo decks” is set on orange because the single
competence “integration competence structure and GA” is orange. The single competence “design
cargo deck layout” is set on green because it has a medium strategic significance and a green overall
value.
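A minimal sketch of this two-step aggregation, with simplified stand-ins for the portfolio of figure 8; the threshold logic below is an assumption calibrated only to the worked example in the text.

```python
# Two-step traffic-light aggregation (simplified stand-in for figure 8).
ORDER = {"green": 0, "orange": 1, "red": 2}

def dimension_rollup(potential, process, output):
    """Output and potential dominate; a merely degraded process does not pull
    the overall colour down (cf. the example: green/orange/green -> green)."""
    worst_major = max(potential, output, key=ORDER.get)
    if ORDER[process] - ORDER[worst_major] >= 2:   # only a far worse process matters
        return "orange"
    return worst_major

def area_rollup(singles):
    """The worst (relevance, colour) combination among the single competencies
    sets the colour of the competence area."""
    area = "green"
    for relevance, colour in singles:
        if relevance in ("medium", "high") and ORDER[colour] > ORDER[area]:
            area = colour
    return area

print(dimension_rollup("green", "orange", "green"))            # -> green
print(area_rollup([("medium", "green"), ("high", "orange")]))  # -> orange
```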
Competence areas including their evaluation are displayed in the Competence Monitor which gives a
broader overview on a department level.
4. Controlling Shipyards’ Strategic Competence Development
The concept for competence controlling in a shipyard which has been introduced in chapter 3 has
been realized prototypically by the Institute of Production Management and Technology at Hamburg
University of Technology. It serves as a management information system to support the management
and heads of different departments in developing their employees in a systematic and target-oriented
manner. This application is the basis for a project called “KOMPASS” - funded by the German
Federal Ministry of Education and Research - which involves two major German shipyards and one of
their suppliers. The project targets the further development of the presented concept for an
implementation into a shipyard’s IT and controlling environment. In the following the Competence
Monitor which holds the controlling data and the process control of the reporting process itself are
presented.
Using the developed framework to create competence profiles and reports for different teams and
departments of a shipyard, a database structure has been defined. Different perspectives on a
company’s internal structure are regarded for designing the data model. The following views are used
as a starting point:
Process model of an organisation (e.g. producing the general arrangement of a vehicle deck as
part of the ship design)
Organizational structure (e.g. the design team for e-systems as part of the outfitting
department)
Competence areas, single competencies and the according indicators (e.g. fabrication cost per
lanemeter as an indicator for the vehicle deck development which is part of the design
competence for accommodation and cargo decks)
These views are used as entity types within the entity relationship model and are described by
attributes in order to create the necessary tables within the relational database. An interface to enter
data has to support the user by providing at first the selection of the right indicators for the
competence area to be evaluated – ideally using tick boxes for a more friendly use of the system. Then
values for each indicator have to be entered and the automatic data transfer from other systems has to
be triggered. The query functionalities have to regard that different types of information have to be
obtained from the database. Profiles for competence areas are supported as well as a monitoring at
different levels of detail. Furthermore a drill down into single competencies and indicators is
supported in order to allow the analysis of reasons for plan deviations on a very detailed level. Figure
9 shows the database structure and the functionalities of the user interface.
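A minimal relational sketch of such a structure, using SQLite from Python; all table and column names are assumptions based on the views listed above, not the project's actual schema. The final query drills down from a competence area to the indicators that miss their plan (higher grades being worse).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE org_unit   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE process    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE comp_area  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE competence (id INTEGER PRIMARY KEY, area_id INTEGER REFERENCES comp_area,
                         org_id INTEGER REFERENCES org_unit,
                         process_id INTEGER REFERENCES process, name TEXT);
CREATE TABLE indicator  (id INTEGER PRIMARY KEY, comp_id INTEGER REFERENCES competence,
                         name TEXT, dimension TEXT, weight REAL, value REAL, plan REAL);
""")
db.execute("INSERT INTO comp_area VALUES (1, 'design accommodation and cargo decks')")
db.execute("INSERT INTO competence VALUES (1, 1, NULL, NULL, 'design cargo deck layout')")
db.execute("INSERT INTO indicator VALUES (1, 1, 'fabrication cost per lanemeter', 'output', 0.4, 2.0, 1.5)")

# Drill down from a competence area to the indicators that deviate from plan:
rows = db.execute("""
    SELECT c.name, i.name, i.value, i.plan
    FROM comp_area a JOIN competence c ON c.area_id = a.id
    JOIN indicator i ON i.comp_id = c.id
    WHERE a.id = ? AND i.value > i.plan
""", (1,)).fetchall()
print(rows)
```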
Fig. 9: Structure of the Competence Monitor
Fig. 10: Controlling process and workflow implementation
In figure 12 the querying of the Competence Monitor is demonstrated. The user selects a competence
area and drills down from the single competencies to the indicator level. Each drill down level is
integrated into the web page so that information can be directly displayed and no problems occur
when browsers suppress pop-up windows.
The presented Competence Monitor supports the maritime industry in controlling and managing the
development of the technical competencies of the employees. In this way skills and knowledge can be
enhanced with more target-orientation and the competence development is accelerated. Possible
current and future gaps in the competence configuration are identified at an early stage and the
relevance of different competencies is addressed. For the future the Competence Monitor will be
extended with different features. A stronger focus on product orientation will be implemented and
standardized reports will be defined.
References
DAVENPORT, T. H., PRUSAK, L. (1998): Working knowledge: How organizations manage what
they know, Harvard Business School Press, Boston, Mass.
ERDL, G., SCHOENECKER, H. (1995): Workflowmanagement, FBO-Verl., Wiesbaden
ERPENBECK, J., ROSENSTIEL, J. VON (2003): Handbuch Kompetenzmessung, Schaeffer-
Poeschel, Stuttgart
FITZEK, D. (2002): Kompetenzbasiertes Management – Ein Ansatz zur Messung und Entwicklung
von Unternehmenskompetenzen, Universität St. Gallen, St. Gallen
FREILING, J. (2001): Terminologische Grundlagen des Resource-based View. In: Bellmann, K.,
Freiling, J., Hamann, P., Mildenberger, U. (eds.): Aktionsfelder des Kompetenz-Managements, Dt.
Univ.-Verlag, Wiesbaden, 4-28
FREILING, J. (2004): Competence-based view der Unternehmung, in: Die Unternehmung, 58. Jg.
(2004) 1, 5-25
ITTNER, T. (2004): Quantitative Bewertung von Kernkompetenzen in der Automobilzulieferindustrie
am Beispiel des Presswerkzeugbaus, Shaker, Aachen
KOCH, J. B. (2004): Unterstützung der schiffbaulichen Projektierung durch Repräsentation von
Erfahrungswissen, Dissertation TUHH, Hamburg
KRAUS, R. (2005): Strategisches Wertschoepfungsdesign, Dt. Univ.-Verlag, Wiesbaden
METZENTHIN, R. (2002): Kompetenzorientierte Unternehmungsakquisitionen: eine Analyse aus
Sicht des Kompetenzlueckenansatzes, Dt. Univ.-Verlag, Wiesbaden
MEYER, A., MATTMUELLER, R. (1987): Qualitaet von Dienstleistungen, in: Marketing ZFP (3),
August 1987, 187-195
NEDESS, C. ET AL. (2003): Workflowmanagement-Systeme am Beispiel des Beteiligungscontrolling -
Eine Marktuntersuchung, TUHH
NEDESS, C., FRIEDEWALD, A., KURZEWITZ, M. (2005): The Knowledge Lounge as an
Application to Increase Customer’s Benefit, ICCAS 2005
NORTH, K. (2003): Wissensorientierte Unternehmensführung, Vol. 3, Gabler-Verlag, Wiesbaden
PRAHALAD, C. K., HAMEL, G. (1990): The Core Competence of the Cooperation, Harvard
Business Review, 68 (3), Boston, Mass.
SANCHEZ, R. (2001): Managing Knowledge into Competence: The Five Learning Cycles of the
Competent Organization. In: Sanchez R. (ed.): Knowledge Management and Organizational
Competence, Oxford University Press, Oxford
SCHILLER, T. (2000): Kompetenz-Management für den Anlagenbau : Ansatz, Empirie und
Aufgaben, Gabler-Verlag, Wiesbaden
THURNES, C. (2003): Konzept zur modellbasierten Ermittlung von Kompetenzbedarfen, Dissertation
Uni Kaiserslautern, Kaiserslautern
Simulation of material flow processes in the planning of production spaces
in shipbuilding
Abstract
Manufacturing, assembly and outfitting operations in shipbuilding as well as the product itself
require a large amount of space in the production facilities of a shipyard. Due to the fact that the
required space is a resource for time-critical processes with restricted capacities on shipyards and
other shipbuilding companies, the space allocation has to be handled as one of the most important
tasks in production planning. Production topologies of shipbuilding enterprises consist of assembly
and storage spaces, such as building grounds, slipway or external storage areas, as well as the
production spaces on board. In the majority of cases the available space can not be completely used
for the building process, because the stacking of products is not possible by reasons of weight and
stableness. Therefore calculations of the occupied and available floor space are necessary for
production planning. These models’ level of detail is the most crucial factor for the quality of the
planning results, because the more detailed the models are, the more accurate the planning results
will be. This aspect was implemented in the simulation module for space allocation within the
Simulation Toolkit Shipbuilding (STS). The 2-dimensional perspective of the simulation module was
expanded by additional attributes like the height of products and operations, in order to enable the
simulation of stacking as well as 3-dimensional production and transport operations. The simulation
of transport operations on floor space, which represents moving vehicles and employees, was integrated with path-finding and basic collision control functionalities. The development and
integration of all functionalities has led to a complex simulation module for the space allocation.
Because of the growing complexity of the simulation module the system performance decreased. In
order to manage the high complexity of the simulation models, the space allocation module can be
integrated into distributed simulation processing. Additionally optimisation methods are developed to
increase the system performance by advanced allocation and path finding algorithms.
1. Introduction
Simulation of material flow processes in production enables pre-estimation and testing of production
scenarios. Additionally the evaluation of alternative scenarios in an adequate realistic simulation
model lead to significant cost savings. Because of this, process simulation is regarded as an effective
tool for the modelling of material flow processes and efficient production planning and control
Dawood, Marsini (2002).
In most cases a special simulation application is utilized for industry specific functions. The
development of these tools requires much time and effort. The reusability of the simulation models is
one of the most important criterions in their development. The concept of a simulation toolset can
help to achieve this objective Chong, Sivakumar, Gay (2002).
The development of an industry-specific simulation toolset is a methodical process which has to be handled accurately because of the precision of the tool needed in production planning and control. Three steps in the development process have to be regarded to lower the initial development efforts:
• The typological characteristics of the enterprise have to be analysed and used in the tool conception.
• The state of the art in industry-specific planning tools provides information about specific solutions and approaches.
• During the conceptual design and implementation stages the requirements of user groups have to be taken into account by means of interviews, in order to determine their needs for practical utilization.
2. Shipbuilding industry characteristics require special simulation modules
According to the rules of methodical modelling, some aspects have to be regarded when
developing a simulation module for production resources:
• Accuracy of representation
• Reusability
• Transferability
The quality of the modelling and the adequate solutions are affected by the typological characteristics
of enterprises. In addition the typological characteristics of shipbuilding differ in some fundamental
aspects from other industries, as shown in Fig. 1.
The average size of shipbuilding enterprises varies from small to large companies, even though the
small and medium sized enterprises prevail in the maritime industry Seefeldt, Pekrul (2005). The
mainly traditional organizational structures and the grown infrastructures of the shipyards have led to
a great number of specific supporting tools for production processes.
The material flow in shipbuilding production is dependent on a network of building site, job shop and
line production. In contrast to the listed industries of plant construction, aircraft construction and the automotive industry, this complex network of the three areas of material flow is one of the most
crucial differences. The resulting complexity of material flow increases the requirements for the
production planning and control, because the dependencies in the material flow cannot be easily
determined. Therefore a tool like production simulation software offers efficient support for
production planning and control, if it is supplemented by an industrial specific simulation toolkit.
Therefore the included simulation modules have to be adaptable.
Additionally the mainly one-of-a-kind and small series products impose high demands on the
production control, so that besides the production planning this aspect has to be regarded in particular.
Only in the plant construction industry does production control have to deal with a greater amount of one-of-a-kind products.
The product range in shipbuilding is mainly determined by a customer’s specification, so that in most
cases the ships differ in construction characteristics and equipment. Therefore the production process
model for a shipyard has to be aligned to every single product, as there are crucial differences in the
product attributes.
In comparison with the automotive industry, where the transfer lines, which consist of machine tools
and conveyors, are in most cases the capacity bottlenecks, in shipbuilding as well as in plant and
aircraft construction the essential resource is the production space; in this context it is called the bottleneck resource.
In Fig. 2 an overview of the material flow process in shipbuilding is shown. The percentages of man
hour budgets, which are indicated for three areas of the material flow, are counted for those process
steps and operations with production space in use. According to these shares it is assumed that the
operating efficiency of shipbuilding is highly dependent on the space utilization, so that space
allocation has to be planned and controlled accurately. The process steps in which space is planned and utilized are emphasized. Some examples of production process steps with space utilization are:
In Prefabrication single parts and semi-finished products are cut from metal sheets, palletised and
containerised. Here a space allocation has to be made, according to the required amount of space for
storing the produced parts and the transport equipment. Afterwards the palletised products are
transported with ground vehicles, like fork lifts, to the assembly sites.
Block and module assembly is one of the most important process steps with space utilization. Here the
products are placed on special building sites, which support the assembly and welding of the blocks
and modules. In block and module assembly planning space allocation, scheduling and order
dispatching are based on several levels of detail in planning data. The transport operations of the
blocks and modules, which are accomplished by gantry cranes and module carriers, have to be
regarded under consideration of vacant transport routes.
During outfitting mechanical, piping and electrical systems are installed and furnishing processes are
conducted. This work takes place in separate rooms on board of the ships, so the space of each room
is one of the resources which has to be taken into account for production planning. In addition some
installations pass through several rooms or special installations are spread widely throughout the
whole ship. In this stage of shipbuilding the production process has to follow the specified order
sequence, which is defined by the technological assembly levels and has to be adjusted to the
deployment of transportation and personnel.
4. Requirements for the development of a specific simulation module derived from the
production planning and control (PPC) core functions.
As explained above almost every production process step in shipbuilding requires an allocation of
assembly space. From the characteristics of the different process steps, like prefabrication, block
assembly and outfitting, criteria for PPC core functions regarding planning tools and methods can
be derived:
The available data for planning provides the base for the production planning and control functions.
Accurate planning data is needed to ensure an adequate degree of reliance for the planning results. In
addition simulation also requires an adequate data structure to run properly. Therefore the available
data is important for the development of a simulation module, which defines the practicable basic
structure of this module.
The order dispatching manages the order sequencing and scheduling. Herein the technological
sequences and the capacity requirements are planned and controlled.
The resource planning aims at fulfilling the material requirements. Therefore all available resources
and their dependencies have to be regarded.
In the transport management the logistics within the enterprise have to be planned. Transport
management has to take the available transport resources as well as the store for accessibility of
products and material into account for the planning procedures.
The planning horizon characterizes the available amount of time for planning as well as the needed
degree of accuracy in planning.
The typological characteristics of shipbuilding enterprises show that the bottleneck resource “space”
has to be regarded in each core function of the PPC-model. For the development of the simulation
module for production space in shipbuilding three core functions of the Aachen PPC Model Luczak,
Eversheim (1998) were examined.
During production program planning the sales plan is checked for feasibility as well as the
customer-anonymous and the customer-specific planning is adjusted. Therefore the products, which
have to be built, are assigned to a specific planning interval according to their type, quantity and date
of delivery. The planning horizon varies in general from 0.5 to 2 years, but it highly depends on the
type and range of the products. In the majority of cases the planning horizon is counted as long-term.
The production requirements planning aims at the realization of the production program, and
therefore has to plan and control the resource in a medium-term planning horizon. In this planning
function the capacities are balanced to reach an equalised resource utilization and to ensure the
resource availability for the in-house production planning and control. This PPC core function
breaks down the in-house production requirements to orders for the manufacturing and assembly.
Here breaks in production and the current production status have to be regarded to ensure the
adherence to the delivery dates Luczak, Eversheim (1998).
In Fig. 3 the PPC core functions were characterized by planning criteria to assess shipbuilding-specific requirements for the development of the space simulation module.
The different initial situations of the planning process in the PPC core functions result in several
levels of detail for available planning data. In short-term in-house production planning and control the planning data is usually available in high detail, while the production program planning is mostly based on rough data from past experience, such as the production data of formerly built ships of the same class. Therefore a requirement for a simulation module has to be a flexible
data structure for adaptable modelling.
In production program planning the customer-anonymous pre-planning has to be adjusted to the
customer-specific order planning. Herein the order planning is regarded for different order allocation strategies. Different order control strategies, such as first-in-first-out or last-in-first-out etc., have to be included in a simulation module for space allocation.
Derived from the resource planning criteria, three different requirements for the resource allocation in
the simulation module of production space are defined. The rough-cut resource planning uses only the
approximate quantity of capacity load from the planned process, therefore the simulation module has
to provide a rough-cut capacity load calculation, which only computes the approximate space that is occupied by a product or a process. In contrast, in production control the space allocation has to be planned accurately, so that a function for specified allocation is required in the space module.
In middle-term production requirement planning each production space is regarded as one single
capacity, which is planned in terms of capacity load. Herein no exact space allocation has to be
accomplished, because the available data is mainly based on estimates, so that a capacity balancing is sufficient. Therefore an automated space allocation is required in the space module, which
can evaluate and determine the lead time scheduling by a simulation model.
The transport management has to plan the in-house logistics, which connects the single production
sites. The planning of transports is almost totally accomplished in short-term production planning and
control, where transport orders are managed under consideration of accessibility and actual production
requirements. Therefore a method for transport routing on production space has to be implemented
in the space module.
The criterion of the planning horizon shows, on the one hand, the need for adequate system performance depending on the urgency of the planning process and, on the other hand, the demand for an acceptable degree of reliability.
As a part of the national research project SIMBA, funded by the German Federal Ministry of
Education and Research, the TUHH in cooperation with FSG shipyard developed the simulation
module “space” (Fig. 4). This simulation module was integrated into the Simulation Toolkit
Shipbuilding (STS), which is a software tool developed and further maintained by the Simulation
Cooperation of Maritime Industries (SimCoMar).
Fig. 4: Space - a resource module of the STS and its functions – Nedeß, Friedewald, Hübler (2004)
The STS was developed around a core of computer simulation based on commercial software. This
commercial software provides important functions for simulation modelling of material flow
processes:
• Object-oriented modelling: hierarchical simulation modelling is needed to decrease complexity and improve transparency of the models.
• Event-driven simulation: the process models are mainly characterized by the events in the process flow.
• 2D-simulation and 3D-animation: as a result of the data requirements, 2-dimensional modelling of the production processes is more flexible than a 3-dimensional model data structure, since geometrical data has to be more specific in 3D than in 2D.
The STS contains several modules for production simulation in shipbuilding, such as product, process
and resource modules as well as supporting modules for the simulation process itself. The space
simulation module is primarily designed as a resource module for simulation of outfitting and block
191
assembly. Additional functions regarding the module’s utilization were developed Steinhauer, Hübler,
Wagner (2005).
Therefore a matrix was chosen to represent the production space (Fig. 5). The matrix fields, which are
scalable in size, provide a modelling method for the different levels of detail, so that the space module
can fulfil the requirements for all planning horizons. At the lowest level of detail one matrix field matches a building site of 20 m²; at a higher level one matrix field equals 0.1 m² or less. The objects, e.g.
blocks and modules, and areas are represented in this matrix by their planar projection, which is
approximately matching the real dimensions as far as the matrix field size allows. This structure
provides the basis for spatial oriented production simulation and is required for the modelling of
several production operations in shipbuilding.
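A minimal sketch of this matrix representation, assuming an invented field size and area layout: a boolean grid marks occupied fields, and an object occupies the fields under its rounded-up planar projection.

```python
import numpy as np

FIELD = 0.5                                   # metres per matrix field edge (assumed)
space = np.zeros((40, 80), dtype=bool)        # a 20 m x 40 m production area

def can_place(space, row, col, h_fields, w_fields):
    patch = space[row:row + h_fields, col:col + w_fields]
    return patch.shape == (h_fields, w_fields) and not patch.any()

def place(space, row, col, length_m, width_m):
    """Occupy the fields under the object's planar projection (rounded up)."""
    h = int(np.ceil(width_m / FIELD))
    w = int(np.ceil(length_m / FIELD))
    if not can_place(space, row, col, h, w):
        raise ValueError("fields already occupied or outside the area")
    space[row:row + h, col:col + w] = True

place(space, 2, 4, length_m=12.0, width_m=6.0)   # a 12 m x 6 m block section
print(space.sum() * FIELD**2, "m2 occupied")
```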
The approach of vector graphics to represent the voluminous aspects of space and objects like in 3D-
CAD software was excluded, because a very high level of detail in input data is required, which cannot be compiled in all phases of the shipbuilding process. However, storage and assembly operations
as well as transports need 3D-model data as a basis for simulation. Therefore stacking operations and
the height of building sites were integrated into the space module to allow stacking of objects on the
space and accordingly the simulation of 3-dimensional operations.
As mentioned before the space module is able to represent a layout plan of the production space with
all topological areas, which are actively or passively affecting the production process. Some of those
topological areas represent building sites, where sections and other products are placed for assembly,
outfitting and other production processes. These sites are essential elements for example in block
assembly. Beside those building sites the space module can also display the blocked zones in the
production space, such as walls, columns or lanes, which can normally not be utilised for assembly
and fabrication purposes.
5.2 Order control strategies
The order control strategies represent the methods for order sequencing and scheduling. Within the
space module the orders are represented by requests for production space. The terms for submitting a
request are based on the optional settings in the simulation module, such as submitting requests only
after a failed space allocation. Those requests can be sorted by specific priorities, e.g. delivery date or
part type, and are processed in this sequence. Additionally the type of request processing, as there are
batch and single processes, provides an adaptation to different levels of detail.
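Sketched in Python, with the request fields invented for illustration, the queue handling reduces to sorting open space requests by the chosen priority attribute and processing them in batch or single mode.

```python
from operator import itemgetter

requests = [
    {"part": "block_12", "delivery": "2006-06-10", "area_m2": 120},
    {"part": "block_07", "delivery": "2006-05-28", "area_m2": 90},
    {"part": "module_3", "delivery": "2006-06-10", "area_m2": 40},
]

def process_requests(requests, priority="delivery", batch=True):
    queue = sorted(requests, key=itemgetter(priority))
    if batch:                        # batch processing: work through the whole queue
        return [r["part"] for r in queue]
    return [queue[0]["part"]]        # single processing: one request per event

print(process_requests(requests))                      # delivery-date priority
print(process_requests(requests, priority="area_m2"))  # largest-first would use reverse=True
```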
By using the adequate allocation strategy the space utilisation can be optimised. Especially the least-
area-strategy is supposed to achieve a very high degree of space utilization. This strategy can be used,
if the space allocation has to be more efficient than normal allocation. The least-area-strategy uses a
basic control strategy and evaluates the results of the search algorithm to find the most space-saving
allocation. A major drawback of this allocation strategy is that it drastically reduces the system
performance and is affecting the processing time of the simulation model.
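The valuation used inside the least-area strategy is not detailed in the paper; the following sketch substitutes a simple space-saving score, rating every feasible position from the basic search by the bounding envelope of all occupied fields after placement and keeping the smallest.

```python
import numpy as np

def least_area_position(space, h, w):
    """Return the most space-saving feasible (row, col) position, or None.
    The envelope score is an assumed stand-in for the module's valuation."""
    best, best_score = None, None
    occ = np.argwhere(space)                     # already occupied fields
    for r in range(space.shape[0] - h + 1):
        for c in range(space.shape[1] - w + 1):
            if space[r:r + h, c:c + w].any():    # basic feasibility check
                continue
            corners = np.array([[r, c], [r + h - 1, c + w - 1]])
            pts = np.vstack([occ, corners]) if occ.size else corners
            span = (np.ptp(pts[:, 0]) + 1) * (np.ptp(pts[:, 1]) + 1)
            if best_score is None or span < best_score:
                best, best_score = (r, c), span
    return best

grid = np.zeros((10, 10), dtype=bool)
grid[0:2, 0:2] = True
print(least_area_position(grid, 2, 3))           # hugs the existing object
```

The exhaustive search over all positions also makes the performance penalty described above plausible.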
For the long-term production program planning, like in the sales planning for a new range of ships,
the quick estimation of capacity bottlenecks requires a rough-cut capacity load calculation of the
available production space. Therefore a calculation algorithm was implemented, which computes the
quantities of occupied and available space during the simulation run.
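A rough-cut load calculation of this kind only tracks quantities, not positions; a minimal sketch with invented weekly figures:

```python
# Rough-cut capacity load over a simulation run: only the occupied quantity of
# space per period is tracked. All figures below are illustrative.
TOTAL_M2 = 800.0
occupied_per_week = {24: 540.0, 25: 610.0, 26: 720.0}   # week -> occupied m2

for week, occ in occupied_per_week.items():
    load = occ / TOTAL_M2
    print(f"week {week}: {load:.0%} loaded, {TOTAL_M2 - occ:.0f} m2 available")
```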
Fig. 7: Transport routing and optimisation in a simulation model for block assembly
In order to achieve this objective it is planned to prepare application scenarios, which are classified by specific optimisation parameters. These optimisation parameters comprise the optimisation objectives, the control variables and the constraints:
- The optimisation objectives are the simulation results that are evaluated.
- The control variables are the parameters that can be varied for the specific application scenario.
- The constraints are the restricted parameters, set by the environmental conditions of the application scenario.
For example, in a simulation scenario for medium-term planning with the space module, the optimisation parameters would be space utilisation as the objective and the space allocation strategy as a control variable. The space topologies, such as building sites, as well as the level of detail would be the constraints. Based on the preparation of the application scenarios and the identification of optimisation parameters, a simulation-based optimisation system will be designed for use in the PPC of shipbuilding.
Fig. 8: Modelling for outfitting process simulation, using the example of an emergency lighting set
The outfitting operations in shipbuilding are mainly system assembly and integration processes, such as main engine and generator installations or piping for the installed engines. The furnishing on board the ship is also part of outfitting. To support the outfitting on board, the service technicians are given a construction plan containing information about geometries, assembly positions and sequences. This information is used in outfitting simulation for the layout modelling of the production space, as shown in Fig. 8.
The assembly positions are mapped as specified allocation positions or building sites for the appropriate parts. The outfitting operations require space for the proper execution of work, which is represented by safety margins around the assembly positions in the space simulation module. Those margins are used to separate the individual work operations from each other and to ensure that no interferences occur. As a result of the outfitting process simulation, the user receives information about the possible work load and the adherence to due dates, the best order sequence, and the resource utilisation. To ensure the best possible and current simulation results, the user has to add information about the production status, breaks in production and actual finishing dates to the simulation data.
The outfitting operations on board a ship are mainly structured and organised by the room segmentation of the decks. These rooms are defined as closed structures, as they are bounded by the steel structure. However, equipment is installed in stages and the service technicians have to work in several rooms, so a flow of personnel resources and material has to be planned and controlled in outfitting. Here the simulation model complexity would increase drastically if a whole deck, or the building site for the system to be installed, were represented by one space module. Additionally, such large space modules reduce the system performance.
Instead of such an extensive simulation model, each room is represented as a single space module. The complexity is thereby reduced through partial models, which are connected in the total material flow. This is comparable to distributed simulation (Fig. 9). Each single space module has its own functions and parameters and acts as an independent module in the simulation model, but the integrated controls, which can be used as orders and requests to the space module, affect its internal processes. This allows the simulation modeller to build up a network of single space modules representing a whole deck of the ship with all of its rooms for outfitting simulation. For systems passing through different rooms, the junctions have to be identified and modelled, i.e. where the material flow changes modules and how this affects the operation.
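One possible structure for such a network of room modules is indicated below; all names are illustrative and are not taken from the SIMBA toolkit:

```python
class RoomModule:
    """One space module per room; modules are linked through junctions
    so that material flow can pass from room to room."""

    def __init__(self, name):
        self.name = name
        self.junctions = {}                  # neighbour name -> module

    def connect(self, other):
        self.junctions[other.name] = other
        other.junctions[self.name] = self

    def receive(self, order):
        # each module applies its own internal controls and parameters
        return f"order '{order}' processed in {self.name}"

    def transfer(self, order, to_room):
        """Hand an order over at the junction to a neighbouring room."""
        return self.junctions[to_room].receive(order)

engine_room, workshop = RoomModule("engine room"), RoomModule("workshop")
engine_room.connect(workshop)
engine_room.transfer("pipe spool", "workshop")
```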
7. Summary and outlook
Simulation support in production planning and control is widespread across industries, because the certainty of planning can be drastically increased. The methodical development of a simulation module has to be carefully controlled, and the typological characteristics of the specific enterprise have to be taken into account.
The production space is an essential element in shipbuilding and therefore an important module in production simulation for this industry. The developed space module can represent different levels of detail for application in every planning phase. Additionally, the functionalities for distributed simulation modelling and for optimisation of the resource were integrated into this module. Considerably more potential is expected from the coupling of optimisation and simulation, so the next step in the development of simulation for the shipbuilding industry has to address this aspect.
Literature
CHONG, C.S.; SIVAKUMAR, A.I.; GAY, R. (2002), Design, development and application of an
object oriented simulation toolkit for real-time semiconductor manufacturing scheduling, IEEE, San
Diego
DAWOOD, N.; MARSINI, R. (2002), Visualisation of a stockyard layout simulator ”SimStock“ – a
case study in precast concrete products industry, Elsevier Science Publ., Amsterdam
LUCZAK, H.; EVERSHEIM, W. (1998), Produktionsplanung und –steuerung – Grundlagen,
Gestaltung und Konzepte, Springer, Berlin
NEDESS, C.; FRIEDEWALD, A.; HÜBLER, M. (2004), Simulation im Schiffbau -
Zukunftssicherung durch Planungsgenauigkeit, HANSA, Hamburg
SEEFELDT, M.; PEKRUL, S. (2005), Erfolgsfaktoren der Schiffbauindustrie – Ein Vergleich mit der
Bau- und Anlagenbauindustrie, hansebuch, Hamburg
STEINHAUER, D.; HÜBLER, M.; WAGNER, L. (2005), SIMBA: Entwicklung eines
Simulationsbausteinkastens für die Schiffausrüstung, Schlussbericht, Flensburg
GL.ShipLoad: An Integrated Load Generation Tool for FE Analysis
Christian Cabos, Germanischer Lloyd, [email protected]
Henner Eisen, Germanischer Lloyd, [email protected]
Matthias Krömer, Germanischer Lloyd, [email protected]
Abstract
With GL.ShipLoad, Germanischer Lloyd provides a user-friendly computer application for the
efficient generation of loads for global FE analyses of ship structures. The graphical user interface
facilitates the convenient application of ship and cargo masses to the FE model. Hydrostatic and
hydrodynamic computations are integrated into the program. GL.ShipLoad supports the generation
of loads from first principles (realistic inertia and wave loads for user supplied wave parameters),
but the program also aids in the selection of relevant wave situations for the global strength
assessment based on bending moments and shear forces according to GL’s rules. The result is a small
number of balanced load cases that are sufficient for the dimensioning of the hull structure.
1. Introduction
The reliable computation of loads is crucial for an accurate global FE analysis of a ship. In its
“Guideline for the global strength analysis for container vessels” (Germanischer Lloyd (2006)),
Germanischer Lloyd uses the design wave approach (see e.g. Folso and Rizzuto (2003)) to find those load
combinations which are most relevant for the dimensioning of the structure. In contrast to the loading
approaches in the common structural rules for bulkers and tankers (IACS (2006)), the hydromechanic
pressure and the ship accelerations are here taken from first principle hydrodynamic computations for
regular waves. As an aid in applying the loading procedure, the software GL.ShipLoad has been
developed by Germanischer Lloyd.
Loads on the structure result from acceleration of masses (inertial loads) and from external loads
(mainly pressures). GL.ShipLoad provides support for modeling the mass distribution of ship and
cargo as well as for computing static and hydrodynamic pressures due to waves, and for the
combination of both types of loads into balanced quasi-static load cases.
For finding the most relevant regular waves, GL.ShipLoad analyzes a large number of wave
situations. With an easy to use mechanism for defining selection criteria (e.g. “maximum total vertical
bending moment” or “maximum wave torsional moment”), the user can specify which waves shall be
chosen for the global strength analysis. By choosing the loads as specified in the guideline
(Germanischer Lloyd (2006)), the rule envelope curves for sectional moments and forces are
approximated. For any chosen load case, the longitudinal distribution of sectional forces and moments
is immediately displayed.
The GL.ShipLoad program can easily be applied in a typical design environment, as e.g. at a
shipyard, since it only requires a global FE model of the ship as input, and its output consists of nodal
forces which can be applied in any standard FE program. In the implementation, nodal loads have
been given preference over surface loads, since for global analysis they yield sufficient accuracy and
their application is straightforward in any FE code.
The following main steps of a typical program run will be described in detail in this paper:
5. Computation of reference wave amplitudes from prescribed (rule-)bending moments
6. Specification of the scan range of wave parameters
7. Computation of pressures/section loads for this scan range
8. Selection of load cases (manually or automatically by section load extrema)
9. Generation of balanced nodal load cases
Since – even with good software support – the definition of a realistic mass distribution of the ship for
different loading conditions is a laborious task, GL.ShipLoad has been designed such that all mass
definitions can also be used for a finite element vibration analysis at a later stage.
From the software engineering point of view, the aims in developing GL.ShipLoad were:
• Convenience (clear layout, re-use of data from other programs, copy and paste)
• Objectivity (reproducible results independent of the person at work and according to the GL
guideline for global FE analysis)
• Reliability (assessment of results by graphical feedback)
• Efficiency (easy to use and fast to apply)
In particular, these goals are met by the following features of the program:
• Any user inputs into the software are stored in an XML file. In this way, data input is
transparent to other developers and GL.ShipLoad can be integrated into other environments.
• For the internal storage of hydrodynamic results, a data model has been developed, which is
independent of the hydrodynamic method. The corresponding files use HDF5 (NCSA (2005))
as an efficient binary exchange format. Because of this approach, the current strip method can
easily be replaced by more detailed approaches at a later stage.
• Using copy and paste, data can be exchanged with e.g. spreadsheet programs, and any prior
inputs to Germanischer Lloyd’s scantling software POSEIDON (e.g. container input) can be
reused.
• If desired, the global strength analysis can immediately be performed with the integrated FE
solver GLFRAME.
• Users acquainted with POSEIDON will quickly get accustomed to GL.ShipLoad because of
the familiar look and feel.
2. General remarks
Apart from the major tasks of defining masses and selecting hydrodynamic load cases, some more
general input is required. Most prominently, input and output files must be specified. GL.ShipLoad
deals with files for reading information (FE model, hull description), writing results (loads), storing
user input data, and for communicating with external programs (hydrodynamics, export for e.g.
NASTRAN).
GL.ShipLoad operates on the basis of nodes, elements, and materials as used in any standard FE
program. This data is loaded from a finite element model in Germanischer Lloyd’s BMF file format.
Converters between NASTRAN and ANSYS formats and BMF are available. The resulting nodal
loads (representing inertial loads that result from acceleration of the mass distribution, and static and
dynamic pressures) are either appended to the FE model file or are directly output as ANSYS or
NASTRAN nodal loads.
Fig. 1: The graphical user interface is divided into the areas “tree” (on the left), “output” (at the
bottom), and “workspace”. User input windows are opened in the workspace by clicking on the tree
items (additional symbols for actions that are specific to the active window may appear in the
toolbar). Some windows have a “preview” for graphical feedback of the user input in the white input
fields. (The figure above shows a section of the FE model and the current container bay – a tool tip
displays information about the object at the mouse position.) The items of the tree are arranged in
such a way that, by proceeding from the top to the bottom, the user is guided through all required
steps from the input of the principal dimensions to the generation of FE loads. A progress bar
indicates the progress of longer computations. Information, warning, and error messages are written
to the output area as a persistent log of the program run.
Apart from the detailed geometrical information contained in the FE model (Fig. 1), some principal dimensions have to be entered (mainly for the computation of bending moment rule values). A frame table can be specified that allows the addressing of longitudinal positions by frame number rather than by coordinate value.
Normally, hatch covers are not explicitly modeled by finite elements (as they must not contribute to
the overall stiffness of the model). Hence, hatch cover definitions have to be entered in GL.ShipLoad
as they are required for the load application of deck containers.
For the computation of trim and for the conversion of hydrodynamic pressures to nodal loads, the
elements of the FE model that represent the shell have to be specified. This can be done automatically
by specifying the height up to which the FE model is watertight (in this case, starting with the
bottom-most element, all elements below this height that are connected via common edges are
selected), or one or more element groups of the FE model can be specified. In either case, a procedure
that ensures consistent orientation of the elements is invoked automatically.
Additionally, a shell representation of (open) polygons at cross-sections of the hull is required for
hydrodynamic computations; these polygons can be imported from a (e.g. NAPA-generated) XML
file or can be derived from the FE model. From these polygons, linear and non-linear shell
representations for strip method and non-linear pressure extrapolation are generated by the program.
2.1. Load groups
GL.ShipLoad uses an efficient concept for defining nodal loads. Any load case, which is later applied
to the finite element model, is a linear combination of so called load groups. Since in particular the
inertial loads of the ship are always a linear combination of the same six load groups (corresponding
to three translational and three rotational accelerations of the rigid ship), this approach leads to a
much more efficient storage of the loads in the case of many different wave situations. The array of
factors connecting load groups to load cases is also output by the program; an example is shown in Fig. 2.
The following load groups are created by GL.ShipLoad for each loading condition of the ship:
The combination of the first three types of load groups (with factor 1.0) results in the balanced (no residual forces or moments) hydrostatic load case (load groups 1, 2, and 9 in Fig. 2). In order to obtain balanced hydrodynamic load cases, the factors for the unit load groups are computed from the condition that no residual forces and moments shall result from the linear combination with the hydrodynamic load group; the factors are then equivalent to the rigid body accelerations.
All mass definitions entered into GL.ShipLoad can be used not only for load generation for the quasi-
static analysis, but can later be exported as masses for a dynamic analysis. This feature will be
available in a future release.
3. Mass distribution
Typical components of a mass distribution (e.g. of a container vessel) are: steel weight, equipment
and accommodation (resulting in the light ship weight), bunkering, water ballast, and cargo. While
some components differ for different loading conditions (e.g. bunkering for departure and arrival
conditions), some loading conditions share the same components (e.g. light ship weight is the same
for all loading conditions). For this reason, GL.ShipLoad supports the assembly of basic mass items
(e.g. a single storage tank) to assembled mass items (e.g. departure bunkering) in order to facilitate
the convenient access to and re-use of the “building blocks” of typical loading conditions.
Fig. 3: Making use of assembled mass items for definition of reusable “building blocks” of a typical
container vessel loading condition.
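The composite structure sketched in Fig. 3 can be illustrated as follows; this is a minimal sketch with class names and mass values of our own choosing, not GL.ShipLoad internals:

```python
class BasicMass:
    """A single 'building block', e.g. one storage tank."""
    def __init__(self, name, mass_t):
        self.name, self.mass_t = name, mass_t
    def total(self):
        return self.mass_t

class AssembledMass:
    """Combines basic and other assembled mass items, each weighted by
    a factor (e.g. a fill rate), into a reusable building block."""
    def __init__(self, name, parts):
        self.name, self.parts = name, parts   # parts: list of (item, factor)
    def total(self):
        return sum(factor * item.total() for item, factor in self.parts)

light_ship = AssembledMass("light ship", [(BasicMass("steel", 8200.0), 1.0),
                                          (BasicMass("equipment", 950.0), 1.0)])
departure  = AssembledMass("departure",  [(light_ship, 1.0),
                                          (BasicMass("HFO tank 1", 310.0), 0.9)])
print(departure.total())    # 9429.0 t (hypothetical figures)
```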
In GL.ShipLoad, both basic and assembled mass items are represented internally by (sparse) mass matrices. A mass matrix relates nodal loads to nodal accelerations (via f = M·a). Nodal accelerations are derived from the computed hydrodynamic rigid body accelerations (translational rigid body acceleration is directly applied to all nodes, whereas rotational rigid body acceleration is converted to translational nodal accelerations that depend on the distance from the axis of rotation). Generally, all mass matrices couple the nodes to which they are connected. Nodal masses (encountered for element group masses and box masses, see below) are represented by diagonal mass matrices. Container mass matrices actually couple the bearing nodes in the FE model. A physically correct representation of tank masses (e.g. loads perpendicular to tank walls) would couple all wet nodes within the tank and would require a more complex computation. Since this coupling has only a minor effect on the global strength examination, static tank masses are excluded from the mass matrix approach, and dynamic tank masses are treated as nodal masses.
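As an illustration of the relation f = M·a described above, the following sketch converts a rigid-body acceleration into nodal accelerations and loads; it assumes translational degrees of freedom only and neglects centripetal terms, and the names are ours:

```python
import numpy as np

def inertial_nodal_loads(M, nodes, a_trans, alpha, centre):
    """Inertial loads f = M*a for a mass matrix M (3 dofs per node).

    The translational rigid-body acceleration a_trans is applied to all
    nodes directly; the rotational acceleration alpha is converted to
    translational nodal accelerations alpha x r, with r the lever arm
    from the rotation centre."""
    a = np.empty((len(nodes), 3))
    for k, x in enumerate(nodes):
        a[k] = a_trans + np.cross(alpha, x - centre)
    return M @ a.reshape(-1)

# Toy example: pure heave acceleration on a two-node model with
# hypothetical nodal masses of 5 t each (diagonal mass matrix)
nodes = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
M = np.eye(6) * 5.0
f = inertial_nodal_loads(M, nodes, np.array([0.0, 0.0, 2.0]),
                         np.zeros(3), np.zeros(3))   # kN for t and m/s^2
```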
Structural masses are represented by element group mass items. They are computed from the element geometry and the associated material density without further user input, but can be scaled by factors in the assembled mass item (see below). Typically, element group mass items are also used to model, e.g., outfitting. At the load generation step, nodal loads are generated from element group mass items (as element loads are not supported by GL.ShipLoad).
Fig. 4: Superstructure, mass box, nodal masses. Note that “weak” nodes are excluded from the mass
distribution and mass per node depends on nodal density.
3.3. Tanks
GL.ShipLoad aids in applying tank loads by automatically identifying (topologically) closed regions
in the FE model. These closed regions are also called closed cells in the following.
Tanks are defined by (optionally composite) boxes (analogous to mass boxes), either as surrounding
boxes or as enclosed boxes. In the case of a surrounding box, the combination of all closed cells that
lie completely inside the box constitutes the tank; in the case of an enclosed box, the smallest closed
region that completely encloses the box constitutes the tank.
Often, the volume of a tank found in this way differs somewhat from its designated volume (as e.g. specified by the loading manual). The reason is that the FE model normally does not perfectly represent the actual ship geometry. Therefore, if the designated tank volume is prescribed by the user, the computed tank volume is scaled accordingly in the computation of nodal masses/pressures.
In the program, the tank geometry is defined independently of its actual fill level and fluid density. These latter two parameters are defined when referring to the tanks in the description of an assembled mass item. This has the significant advantage that the same tank can be used for different loading conditions by referring to the same tank geometry but specifying different fill levels and fluid densities.
The actual mass distribution within a tank is computed by finding the position of its free surface in the still water floating position of the ship. The resulting hydrostatic pressure is correctly transferred to the FE model by forces that are perpendicular to the tank walls and increase linearly with the distance from the free surface. The dynamic tank pressure (resulting from the ship's acceleration minus gravity) is transferred to the model as nodal masses (see the discussion of mass matrices above). Here, the mass box approach as described above is used, but with a further limitation to nodes at tank wall elements below (or directly above) the tank's free surface.
Fig. 5: Tank definition by boxes. The left hand figure shows the definitions of the boxes for a ship. On
the right hand side, the tanks (closed cells in the FE model) which have been identified automatically
by the program from this input are shown.
3.4. Container
Container mass items represents (hold or deck) container bays. At this point in the program, only the
container arrangement is specified. The average mass per container and the vertical center of gravity
are specified later, during input of the assembled mass items.
First, a container bay is defined by its longitudinal center of gravity and the aft and fore positions at
which horizontal loads should be applied to the structure. GL.ShipLoad supports (ISO) standard
containers of 20’ and 40’ length, which are denoted by their bay id (odd for 20’, even for 40’
container). Secondly, container stacks are defined by their lateral center of gravity, their vertical
support, and their vertical extent in terms of upper and lower tier id. Deck containers are identified by
a lower tier id greater than or equal to 82. Stacks that are present on portside and starboard can be
marked “symmetric” and need to be entered only once.
While the general procedure for the transfer of container masses to the ship structure is the same for hold and deck containers, the involved nodal degrees of freedom differ between hold and deck containers. They will also differ between strength and vibration analysis. For hold containers in strength analysis, all lateral loads are applied to the aft and fore transversal bulkheads, independent of the still water floating position and the hydrodynamic acceleration. In a future program version, attributes “guided aft” and “guided fore” will determine whether lateral loads for 20’ containers are applied to longitudinal bulkheads at the gap between 20’ containers. Longitudinal loads are applied to the fore transversal bulkhead.
For deck containers, hatch cover definitions control to which structural nodes loads are applied.
Vertical loads are applied at the aft and fore edges of the hatches at the lateral positions of the corners
of the container stack. Horizontal loads are applied at the stopper positions. Loads resulting from
containers that completely or partially overlap the hatch cover are applied to the nearest node on deck,
as their support is generally not explicitly modeled in the FE model.
When structural nodes have been identified according to the aforementioned criteria, the stiffness
matrix of an auxiliary beam model connecting the container’s center of gravity and the structural
nodes (all degrees of freedom) is assembled. By a first condensation, nodal degrees of freedom are
released according to the designated degrees of freedom as described above. By a second
condensation, the container mass matrix is computed, for which multiplication with an acceleration
vector results in the same total forces and moments for the center of gravity specified.
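GL.ShipLoad derives the container mass matrix by the two condensations of an auxiliary beam model described above. As a simpler illustration of the stated end property (multiplication with an acceleration vector reproduces the total force and moment at the specified center of gravity), a rigid-distribution variant can be sketched as follows; this is our simplification, not the program's algorithm:

```python
import numpy as np

def rigid_map(nodes, cog):
    """T maps a rigid-body motion at the CoG (3 translations, 3 small
    rotations) to the translational dofs of the bearing nodes."""
    T = np.zeros((3 * len(nodes), 6))
    for k, x in enumerate(nodes):
        r = x - cog
        T[3*k:3*k+3, 0:3] = np.eye(3)
        T[3*k:3*k+3, 3:6] = np.array([[0.0,   r[2], -r[1]],
                                      [-r[2], 0.0,   r[0]],
                                      [r[1], -r[0],  0.0]])   # u = theta x r
    return T

def container_mass_matrix(nodes, cog, m, J):
    """Nodal mass matrix reproducing total force/moment at the CoG."""
    Mcg = np.zeros((6, 6))
    Mcg[:3, :3] = m * np.eye(3)
    Mcg[3:, 3:] = J
    T = rigid_map(nodes, cog)        # needs >= 3 non-collinear nodes
    Tplus = np.linalg.pinv(T)        # recovers CoG motion from nodal motion
    return Tplus.T @ Mcg @ Tplus

nodes = np.array([[0., 0., 0.], [2.4, 0., 0.], [2.4, 2.4, 0.], [0., 2.4, 0.]])
Mc = container_mass_matrix(nodes, cog=np.array([1.2, 1.2, 1.3]),
                           m=20.0, J=np.diag([15.0, 25.0, 25.0]))
```

Since T⁺T = I, the total force and moment recovered at the CoG equal Mcg times the CoG acceleration, which is the property stated in the text.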
The export of container mass matrices for a vibration analysis will be possible in a future version of GL.ShipLoad (in this case, both vertical and horizontal loads will be applied to structural nodes at the
bottom of the hold container stacks). These mass matrices represent the condensed mass effect of the
container, applied at the structure nodes, but with correct center of gravity. The stiffness of the
structure is not changed in this approach.
Fig. 6: An assembled mass item is used to combine basic mass items and other assembled mass items
into a mass distribution. Here, predefined assembled masses “light ship”, “bunkering”, “cargo”
(specifying factors), and some water ballast tanks (specifying fill rate and density) are combined into
a loading condition that will be used for the computation of hydrodynamic load cases.
4. Hydrostatics
For the correct application of hydrostatic pressure and for the determination of trim and heeling angle,
the floating position in still water must be found. In the case of GL.ShipLoad, static trim is computed
by a Newton iteration of draught, trim angle, and heel angle until hydrostatic equilibrium is achieved,
i.e. until the buoyancy forces and moments balance the gravity forces and moments of the mass
distribution. Buoyancy forces are computed by integration of hydrostatic pressure over the hull
described by the shell elements, gravity forces are obtained by multiplication of the mass distribution
with the current gravity vector (in ship coordinates). The Jacobian matrix, which is required for the
Newton iteration, is computed numerically by finite differences.
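A minimal sketch of this iteration, assuming a residual function that encapsulates the pressure integration and weight summation described above (the names are ours):

```python
import numpy as np

def still_water_position(residual, x0, h=1e-4, tol=1e-6, max_iter=20):
    """Newton iteration for x = (draught, trim angle, heel angle).

    residual(x) must return the three out-of-balance quantities
    (vertical force, trim moment, heel moment), i.e. buoyancy minus
    weight for the floating position x. The Jacobian is approximated
    by finite differences, as in the program.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((3, 3))
        for j in range(3):
            dx = np.zeros(3)
            dx[j] = h
            J[:, j] = (residual(x + dx) - r) / h
        x = x - np.linalg.solve(J, r)
    return x
```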
Besides the output of the draught at the aft and fore perpendiculars and of the heel angle, the output of section loads in graphical and tabular form aids in assessing the mass distribution; if needed, the mass distribution can be adjusted and the static trim re-computed. Furthermore, the static load groups “buoyancy”, “weight (w/o tanks)”, and “tanks” are generated, as well as the unit load groups for masses and tanks.
On the basis of the computed trim, the hull description for the linear hydrodynamic computation is
generated by clipping the polygon representation of the hull at the water plane, selecting frames, and
distributing points as required by the strip method. An additional hull description with refined frame
distance that extends above the water plane is generated for the non-linear pressure extrapolation.
5. Hydrodynamics
Using the “linear” hull description (clipped at the waterline) from hydrostatics, hydrodynamic
potentials are computed by a strip method (see e.g. Bertram (2000)).
The strip method is a frequency-domain method for computing hydrodynamic potentials,
accelerations, and in this case also pressures for regular (harmonic) waves by dividing the hull into
cross sections (strips) and thus reducing the three-dimensional computation to a series of two-
dimensional computations. The strip theory is valid for slender bodies like ships. A further
simplification results from treating the wave height as infinitely small. Thus all boundary conditions
can be applied with linear dependence on the wave height. Obviously, some inaccuracies result from
such simplifications, but due to their superior computational efficiency and because of the availability
of well-validated software, strip methods are still a standard tool for seakeeping computations. In GL.ShipLoad, the computation of hydrodynamic potentials and ultimately of hydrodynamic pressures is performed by such a strip method. It is modularized in such a way that it can easily be complemented by a more elaborate method at a later stage.
From the hydrodynamic potentials and the global mass data of the selected mass distribution (total
mass, center of gravity, tensor of inertia), hydrodynamic pressures can be computed for arbitrary
wave parameter combinations such as wave height, wave length, wave direction, phase angle, and
ship’s speed. To account for finite wave height, a non-linear correction is applied: pressures are
extrapolated above the waterline using the extended hull description (including hull form above still
water line). Thereby, the hydrodynamic pressure is adjusted to the real wave contour. These non-
linear load effects are significant, e.g. due to the characteristic hull form of container ships
(pronounced bow and stern flare). The load magnitude including non-linear effects can differ
considerably from the linear response.
6. Selection of load cases
The definition of loads is one of the most important steps in a global strength analysis for a ship.
Several design sea states with different wave heights, lengths and headings have to be investigated
systematically for the application of loads in a realistic way. The focus of GL.ShipLoad is the
application of loads according to the GL guideline for global strength analyses. The procedure can
only be summarized here; see Germanischer Lloyd (2006) for more detail.
For every loading condition, the hydrodynamic pressure and ship motions are calculated for different
heading angles using strip theory. First, the pressure distribution is determined according to linear
analysis below the still water line. Since the ship motions are based on the results of the linear
analysis, the imbalance of forces due to the non-linear correction of pressures is then compensated by
adjusting the ship accelerations. Inertia forces of the ship and hydrodynamic pressure are then in
equilibrium. Using this procedure, numerous wave situations are systematically analyzed with
varying wave lengths, wave crest positions and headings, taking the hull shape fully into account.
From the precomputed wave situations, load cases have to be chosen which cover the vertical and
horizontal wave bending and the torsional moments according to the GL Rules I 1-1, Sec.5. For every
loading condition of the ship, approximately 20 load cases are finally selected for the finite element
analysis.
In the program, the user can in principle select the waves required for the strength analysis manually by specifying the height, length, direction, and phase angle of the wave, and the speed of the ship. In this case, the user may directly proceed with the generation of nodal loads, see below.
GL.ShipLoad also facilitates the automatic selection of wave parameters based on section load
extrema as described above. Obviously, the wave height must be excluded from the parameters that
are varied in the search for section load extrema, as there is no upper bound on the hydrodynamic
forces if the wave height is allowed to become arbitrarily large. Instead, reference wave amplitudes
are derived from hogging and sagging wave bending moments (which can be computed according to
GL’s rules by the program from the principal dimensions). The actually applied wave height will be a
function of the reference wave amplitudes, wave length, direction, and phase angle.
Due to the non-linear relation between wave height and bending moment, the reference waves have to be found iteratively. The algorithm (demonstrated for a prescribed bending moment M) is:
1. Initialize the amplitude A to some arbitrary value
2. Find (for this amplitude) the wave parameters for which M takes on its maximum M_max
3. Scale the amplitude according to A ← A·M/M_max
4. Repeat 2. and 3. until convergence is achieved
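In code, this algorithm might read as follows; a sketch, in which max_moment_for stands for the scan over wave length, direction and phase angle described in the text:

```python
def reference_amplitude(M_rule, max_moment_for, A=1.0, tol=1e-3, max_iter=10):
    """Iterative search for the reference wave amplitude.

    max_moment_for(A) returns the largest bending moment found in the
    wave parameter scan at amplitude A; A is rescaled until that
    maximum matches the prescribed rule moment M_rule."""
    for _ in range(max_iter):
        M_max = max_moment_for(A)
        A_new = A * M_rule / M_max
        if abs(A_new - A) <= tol * A:
            return A_new
        A = A_new
    return A
```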
The computation of section loads (and later, the generation of FE load cases) requires balanced loads, i.e. the combination of inertial loads and hydrodynamic pressures shall not result in any residual forces or moments. This may not be the case if the inertial loads are computed from the mass distribution using the accelerations resulting from hydrodynamics (e.g. because of the different shell representations). For this reason, factors for the inertia unit load groups are computed such that no residual forces or moments result from a linear combination with the hydrodynamic pressures; these factors are then used as “effective” acceleration components.
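Since the residuals depend linearly on the six factors, the balancing reduces to one 6x6 linear solve per wave situation; a sketch in our notation:

```python
import numpy as np

def balance_factors(unit_residuals, hydro_residual):
    """Factors for the six inertia unit load groups.

    unit_residuals: 6x6 array; column i holds the residual global
    force/moment vector (Fx, Fy, Fz, Mx, My, Mz) of unit load group i.
    hydro_residual: residual vector of the hydrodynamic pressure loads.
    The combined case unit_residuals @ c + hydro_residual = 0 is then
    balanced; c are the 'effective' acceleration components."""
    return np.linalg.solve(unit_residuals, -np.asarray(hydro_residual))
```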
Waves are selected by user-defined section load extrema (optionally including static section loads), such as “maximum vertical wave bending moment” or “maximum total torsional moment”. Plots of section loads and hydrodynamic pressures assist in adjusting the criteria for the automatic selection or the prescribed wave parameters for “fixed” entries. Usually, approximately 20 relevant waves are selected, each defining a finite element load case.
Fig. 7: Section loads, pressures and displacements (result of the FE Analysis) for the maximum
vertical bending moment and the maximum torsion load cases.
For all selected waves, hydrodynamic pressures are converted to nodal loads acting on the shell of the
FE model. These loads are combined with the unit load groups resulting from (dynamic) mass
acceleration (again, the factors are computed such that no residual forces and moments result) and the
static load groups (weight and buoyancy) into balanced load cases suitable for the FE analysis.
Hydrodynamic pressures (acting on the hydrodynamic shell representation) are converted to nodal
loads (acting on the FE shell representation) in a way that the total forces and moments remain
identical. A major problem is posed by the hydrodynamic shell description (strips) being generally
much coarser than the FE shell description, such that simple interpolation of the pressure only to the
nearest FE nodes leads to large localized forces (and in consequence to large displacements, rendering
the results unusable for a detailed local analysis). For this reason, a more elaborate interpolation
routine that distributes pressures homogeneously to FE nodes is provided with GL.ShipLoad.
Nodal loads can be appended to the FE model (in the BMF file format). They are then stored in the
form of load groups and load factors. Alternatively, nodal loads can be output directly to ASCII files
suitable for processing in ANSYS or NASTRAN.
8. Conclusions
The application of loads to a ship in regular waves is an important task in a global strength analysis
with the Finite Element Method. It is fundamental for an application of the design wave approach.
GL.ShipLoad integrates the necessary steps for this task such that starting from the global FE model,
all necessary input for the generation of pressure and inertial loads can be given in one environment.
Particular care has been given to an efficient and convenient input of the mass distribution for
several loading conditions. This includes cargo, outfitting, and tanks. The optional direct output of FE
masses (instead of inertial loads) supports the later dynamic analysis of the ship.
Due to the use of an efficient and proven hydrodynamic method, the loads for hundreds of different
wave situations can be generated in an interactive session. Non-linear correction is applied to obtain
realistic pressures also above the still water line. Using section forces and moments as criteria, the
most relevant waves can be selected either manually or automatically. In particular, GL.ShipLoad
facilitates the application of the GL Guidelines for Strength Analyses of Container Ship Structures.
Acknowledgements
Partial support of this work by the German Federal Ministry of Education and Research is gratefully
acknowledged.
References
BERTRAM, V. (2000), Practical Ship Hydrodynamics, Butterworth-Heinemann, Oxford
FOLSO, L., RIZZUTO, E. (2003), Equivalent Waves for Sea Loads on Ship Structures, OMAE 2003
GERMANISCHER LLOYD (2006), Rules for Classification and Construction, Chapter V-1-1,
Guidelines for Strength Analyses of Container Ship Structures, Hamburg
IACS (2006), Common Structural Rules for Bulk Carriers,
http://www.iacs.org.uk/csr/bulk_carriers/index.html
IACS (2006), Common Structural Rules for Double Hull Oil Tankers,
http://www.iacs.org.uk/csr/double_hull_oil_tankers/index.html
NCSA (2005), HDF5 Home Page, http://hdf.ncsa.uiuc.edu/HDF5
Global Ship Vibration Analysis
Razvan Ionas, PhD Student at University of Galati/Romania, [email protected]
Ionel Chirica, Prof. Dr.-Ing., University of Galati/Romania
Abstract
This paper presents some aspects of the global vibration analysis of a ship. The structural modelling was analysed taking into consideration the real dynamic characteristics. Typical large substructures, such as the aft part of the ship, the deckhouse and the double bottom, are coupled in a way that they cannot be considered in isolation. Primary structural components are represented in the model by shell elements. The first 30 natural frequencies are determined. Using measurements of the pressure pulse amplitudes acting on the ship's shell due to a weakly cavitating propeller, the resulting excitation force amplitude is calculated. The excitation frequencies are taken as equal to the first and second harmonics of the blade frequency. The amplitude response (displacements and accelerations) of the structure at certain points is calculated.
1. Introduction
In shipbuilding practice, with regard to the effect of vibrations on human beings, it should be noted that existing standards are aimed solely at ensuring comfort and well-being. Since the periodic excitation forces of the propulsion plant, especially of the propeller, are subject to a certain degree of variation, the vibration values, too, show a corresponding variance. Even if the limits of human exposure to vibrations are not exceeded in the accommodation area of a ship, vibration problems can nevertheless occur in other areas in which these limit values do not apply. Under considerable vibration of the ship, resonance may occur, so that considerable dynamic magnification of structures is possible and the risk of damage resulting from inadequate fatigue strength is then particularly high.
The material, structural details (stress concentrations), vibration mode, welding processes applied, production methods employed and environment (corrosive media) are the factors that influence fatigue strength. Due to these factors, the bandwidth for the possible occurrence of cracks is large. Because low-cost building and operation increasingly influence the design of a ship, vibration problems occur more frequently. The following design trends have contributed to this: lightweight construction and, therefore, low values of stiffness and mass (low impedance); arrangement of living and working quarters in the vicinity of the propeller and main engine to optimise stowage space or to achieve high service speed; small tip clearance of the propeller to increase efficiency by having a large propeller diameter; use of fuel-efficient slow-running main engines.
On the other hand, the consistent application of labour legislation rules and the higher demand for living comfort underline the need to minimise the vibration level.
The simplest way to avoid vibrations is to prevent resonance conditions. This procedure is successful as long as natural frequencies and excitation frequencies can be regarded as independent of environmental conditions. In questions of ship technology, this prerequisite frequently remains unfulfilled. In a global vibration analysis of the ship it has to be considered that typical large substructures, such as the aft part of the ship, the deckhouse and the double bottom, are coupled in a way that they cannot be treated in isolation. From today's point of view, classical approximation formulas, simple beam models and other simplified methods for determining the natural bending frequencies of a ship's hull are in many cases no longer adequate. Thus, FE analyses using 3D models, in which all structural details of the hull can be considered, have become the appropriate and complete computation tool.
2. Ship hull structure natural vibrations calculation
A barge is analysed to determine the dynamic behaviour of its structure. Natural frequencies and forced vibrations due to the pressure pulse amplitudes acting on the ship's shell, induced by a weakly cavitating propeller, are determined.
In spite of the fact that in a global vibration analysis it is not necessary to model the middle and forward parts of the ship with a high level of detail, almost all structural details were considered in our analysis.
In the analysis process, several stages regarding the level of structural detail were considered, as well as a number of stages regarding the extension of the modelling along the ship length. The natural frequencies differ from stage to stage. Finally, a complete and detailed structural model of the hull resulted. Primary structural components are represented in the model by shell elements. Large web frames of the decks and wall girders are also modelled by shell elements. The thickness and inertia characteristics were determined from the condition of obtaining the same natural frequencies for both models: the real panel and the simple plate panel.
Since vibrations may occur due to the hydrodynamic interaction between propeller and rudder, the rudder structure and its stock were also modelled.
The main engine was modelled by shell elements with thickness and mass density chosen so as to obtain the same weight as the real main engine. The shafts were modelled with beam elements.
The vibration calculation was performed with the software package COSMOS/M.
To compute the natural frequencies, the subspace iteration method was chosen for the starting vectors. For the inertia matrix, the lumped mass formulation was preferred. Table 1 shows the first 30 elastic body natural frequencies.
Table 1: The first 30 elastic body natural frequencies
Mode No.  Frequency [Hz]    Mode No.  Frequency [Hz]
 1   3.47     16  14.28
 2   4.33     17  15.97
 3   5.94     18  16.63
 4   7.43     19  17.04
 5   8.94     20  17.87
 6   9.23     21  18.04
 7  10.04     22  18.25
 8  10.27     23  18.55
 9  10.41     24  18.93
10  10.64     25  20.24
11  10.76     26  23.24
12  12.07     27  23.67
13  12.86     28  24.25
14  13.23     29  24.68
15  13.79     30  25.65
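For illustration, the same generalized eigenvalue problem K·φ = ω²·M·φ can be solved with standard sparse solvers; the sketch below uses SciPy's shift-invert Lanczos in place of the subspace iteration, which is our substitution and not the COSMOS/M implementation:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def natural_frequencies(K, M, n_modes=30):
    """Lowest natural frequencies of K*phi = omega^2 * M*phi in Hz.

    K: sparse stiffness matrix, M: lumped (diagonal) mass matrix.
    Shift-invert Lanczos returns the eigenvalues closest to sigma;
    a small nonzero shift avoids factorising a singular K when the
    model has rigid-body modes."""
    vals, vecs = eigsh(K, k=n_modes, M=M, sigma=1e-2, which='LM')
    return np.sqrt(np.abs(vals)) / (2.0 * np.pi), vecs
```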
Excitation forces induced by the propeller are transmitted into the ship via the shaft line and in the form of pressure pulses acting on the ship's shell.
For the present ship, the main forced vibration is produced by the propeller; that is, the excitation forces are introduced into the ship's structure by the pressure pulses acting on the ship's shell.
Pressure fluctuations acting on the shell are the result of several physical causes:
- the displacement effect (thickness effect) of the rotating propeller. This effect is independent of the wake field, and for the propeller of a merchant ship its contribution to the overall pressure amplitude is about 10 to 30%;
- a portion resulting from or induced by the pressure difference between the back and the face of the blade. This effect, too, occurs independently of the wake field and contributes up to about 10% to the overall pressure amplitude;
- the displacement effect of the fluctuating cavitation layer that typically forms when the propeller blade is moving through the wake peak in the region of the outer radii.
Fig. 3: Second vertical global vibration mode (f=7.43 Hz)
Pressure pulses on the shell are also caused by the induction and displacement effect of the propeller tip vortex and by the collapse of the individual cavity bubbles. Whereas the former process mainly has an effect in the frequency range corresponding to the higher harmonics of the propeller blade, the latter phenomenon mainly influences the excitation characteristics in the noise frequency range.
From the above-mentioned contributions to the overall pressure amplitude, it can be concluded that high excitation forces can be expected only in the case of a cavitating propeller.
Using the measurements of the pressure pulse amplitudes acting on the ship's shell due to the weakly cavitating propeller, the resulting excitation force amplitude is calculated.
The excitation frequencies are equal to the blade frequencies. For the vibration analysis, the first and second harmonics of the blade are considered.
The measurements of the pressure pulses were taken on the ship's shell in an area above the propeller. Since the variation of the induced pressure above the propeller is known, supplementary points were imposed for determining the resulting force, making use of the symmetry and of the zero pressure value at the sides of the area.
After integration, for the loaded ship, the resulting force amplitude is F = 19.704 kN.
Taking into account that the propeller has 4 blades and the rate of revolution is 178 rev/min, the excitation frequencies are equal to the first and second harmonics of the blade, that is f1 = 11.87 Hz and f2 = 23.74 Hz.
The vibration modes of interest for the forced vibration analysis are those in the range of the blade harmonic frequencies (11.87 Hz and 23.74 Hz).
For this study, only the excitation force induced by the propeller is considered, since the main engine excitation is damped by special dampers. Moreover, the measurements of both the pressure pulses and the amplitude response are in accordance with the calculations.
The calculation of forced vibrations is performed separately, only for the relevant orders, with the force induced by the propeller considered as the excitation source. The position of the exciting force, the excitation frequencies and the locations of the points in the accommodation area are taken into consideration in accordance with the measurements.
The excitation force amplitude is determined from the pressure pulse amplitudes measured at 6 points located on the shell above the propeller. The point where this force acts is located in the centre line above the propeller (Fig. 4). The points for the response determination (accelerations and displacements) are on the accommodation decks, as shown in Fig. 4 (noted as P1, P2 and P3).
Since the natural frequencies vary for different loading conditions, and also because a particular revolution rate may occur in service, investigations were carried out over a large frequency range (0-30 Hz, as seen in Fig. 5). Modal damping coefficients for the natural frequencies, such as 0.001 for the first natural vertical vibration mode and 0.00095 for the second one, are taken into account.
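A modal-superposition sketch of such a forced-response calculation, assuming mass-normalised mode shapes and the damping model above; this is an illustration of the method, not the COSMOS/M solver:

```python
import numpy as np

def harmonic_response(freqs_hz, modes, F, zeta, f_exc_hz):
    """Steady-state response amplitudes by modal superposition.

    freqs_hz: natural frequencies, modes: mass-normalised mode shapes
    (one column per mode), F: nodal force amplitude vector, zeta:
    modal damping coefficients (e.g. 0.001 / 0.00095 as above)."""
    w  = 2 * np.pi * np.asarray(freqs_hz)
    we = 2 * np.pi * f_exc_hz
    q  = (modes.T @ F) / (w**2 - we**2 + 2j * np.asarray(zeta) * w * we)
    return modes @ q              # complex nodal displacement amplitudes

# Blade-passing harmonics for 4 blades at 178 rev/min:
f1 = 4 * 178 / 60.0               # 11.87 Hz
f2 = 2 * f1                       # 23.73 Hz (23.74 Hz in the text)
```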
According to the ISO 6954 standard, overall frequency-weighted r.m.s. values can be derived from the amplitudes calculated for the individual excitation orders.
Fig. 5 shows the vertical vibration displacement spectra for FEM node 23633 (point P3 in Fig. 4) in the ranges of the first and second harmonics of the blade.
As can be seen, the characteristic curve of the first-order excitation differs greatly from that of the second one.
One principal advantage of this kind of diagram is that trends in the amplitude level for varying natural and excitation frequencies are illustrated. Fig. 5 also shows the acceleration response spectra for FEM node 23633.
As can be seen, no significant maxima occur around the first and second blade harmonics.
If these harmonics were located in the area of a significant maximum, resonance would be possible, so that considerable dynamic magnification of structures could occur and the risk of damage resulting from inadequate fatigue strength would then be particularly high.
Fig. 5: Acceleration spectra (in dB) in node 23633 for vertical dynamic vibration
Even so, local maxima are present (as shown in Fig. 5) and certain vibrations occur in the accommodation spaces. These vibrations exceed the limits of human exposure stated in the standards and in the rules of the ship classification registers.
4. Remarks
The calculation of forced vibrations of ships requires a considerable amount of numerical effort. The remarks show how questions regarding ship vibrations can be dealt with comprehensively from a contractual, theoretical and experimental point of view.
Main hull, superstructure and local vibrations of ship structures are considered permissible if the root-mean-square (r.m.s.) values of vibration velocity or vibration acceleration do not exceed the values (levels) stated in [3] for each of three mutually perpendicular directions along the ship axes: vertical, horizontal-transverse and horizontal-longitudinal (main vibration); for the direction normal to the structure plane (plate members, panels, grillages and their girders and stiffeners); or for the direction corresponding to the lowest bending rigidity for isolated girders and beam members (local vibration).
In many cases, the effect of the draught on the pressure pulses at propeller blade frequency is pronounced.
In many cases, an acceptable vibration level can be achieved in two ways: by a new propeller concept or by modifying the structure in certain areas.
The new propeller concept means installing a new propeller in a nozzle, to decrease the pressure pulse amplitudes, or changing the number of propeller blades to shift the resonance points.
The structural modification is performed in selected areas, so as to obtain the maximum benefit with minimum changes (shifting the natural frequencies of the local structure).
The choice between these methods is the designer's responsibility. The challenge here is to find a reasonable compromise between cost and benefit.
References
1. International Standard ISO 6954 (1984), Mechanical vibration and shock – Guidelines for the overall evaluation of vibration in merchant ships.
2. GL Technology (2001), Ship Vibration, Information from Germanischer Lloyd Group, Issue No. 5.
3. RRS – Russian Register of Shipping (1990), Rules for the Classification and Construction of Sea-going Ships.
4. MARIN, Report No. 19831-1-TM.
Combined analysis methods used to investigate
the steering capabilities of a river pusher
Abstract
Just recovering after the dramatic fall of Danube traffic caused by the Serbian war, the river fleet of the Romanian operators is facing a new challenge. The new European regulations regarding the manoeuvring capabilities of inland waterway vessels, as well as the new rules for pollution prevention and noise reduction, require drastic measures to be taken by the ship owners.
It seems that, at least for the moment, the most efficient way of solving the matter is to modernise the existing river pushers, in our case by replacing and/or upgrading the propulsion and steering systems. From an engineering point of view, the most important task of this project was to evaluate and optimise the manoeuvring characteristics in order to comply with the requirements. This was done using combined methods of hydrodynamics, finite element analysis and hydraulic system calculation.
This paper presents these methods, used for the investigation and optimisation of the manoeuvring and steering capabilities of a 30 m river pusher of 2x1600 HP.
1. Introduction
The upcoming integration into the European Union, combined with the hard competition between the fleet owners, has led to the modernisation of more than 40 river pushers which, taking into account the different configurations, adds up to 5 different design projects. Perhaps the most representative is the 30 m river pusher, type 809. The ship is equipped with 2 nozzle propellers and 4 “Balabal” system rudders, simultaneously activated by 4 hydraulic cylinders through an equalisation bar system.
As a result, all the onboard equipment has been changed, replacing the original 20-year-old engines with state-of-the-art Caterpillar engines and increasing the installed power from 2x1200 HP to 2x1675 HP. An analysis of the strength of the mechanical elements needs to be carried out in accordance with the new increased power.
The kinematics of the rudders allow a slewing angle from –90° to +90°, taking into consideration that between 45° and 90° the main engines work at reduced power and the steering effect of the rudders decreases, producing a stopping effect.
The ship is designed and constructed under the Romanian Naval Authority for the class of Inland Waterway Vessels.
The steering system should comply with the following Conventions and Rules:
2.1 Germanischer Lloyd, Inland Waterway Vessels, Rules and Guidelines 2004
2.2 Reglement de Visite des Bateaux du Rhin (issued by Commission Centrale pour la Navigation du
Rhin), Ch.5…7.
2.3 Annex 1 of the Romanian Ministry of Transports Order No.595/2003
Rudder and rudder stock dimensions are shown in the figure below. All rudders and rudder stocks are of the same size.
Fig. 1: Rudder and rudder stock dimensions (d0 = 180 mm, d'0 = 170 mm, d1 = 190/215 mm, d2 = 220 mm, d3 = 190 mm, r1 = 0.448 m, r2 = 0.485 m, h = 1.08 m, Ac = 2.84 m²)
The checking of the dimensions has been done in two different ways: first according to the Romanian Classification Society rules and secondly by direct calculation.
- Axial (bending) section modulus for diameter d2: Wd2 = WPd2/2 = 1045 cm³
3.2.2 Hydrodynamic force calculation, induced by the propeller jet acting on a rudder at a 45° angle
The calculation is carried out according to Gofman (1988):
Pn1 = ρ ⋅ (VC²/2) ⋅ cn ⋅ Ac , where: (3)
ρ = 1000 kg/m³, water density;
k = 0.78;
CT = 2⋅T / (ρ ⋅ VA² ⋅ AEL) = 2⋅160000 / (1000 ⋅ 2.5² ⋅ 2.98) = 17.18 (6)
Resulting:
VtC = 2.5 ⋅ √(1 + 17.18) = 10.6 m/s (7)
and, with k as above:
VC = k ⋅ VtC = 0.78 ⋅ 10.6 = 8.268 m/s (8)
Also:
cn = cy ⋅ cos 45° + cx ⋅ sin 45° = 1.05 ⋅ 0.707 + 0.65 ⋅ 0.707 = 1.2 (9)
Resulting:
Pn1 = 1000 ⋅ (8.268²/2) ⋅ 1.2 ⋅ 2.84 = 116485 N = 116.485 kN (10)
Mt1 = 116.485 ⋅ (0.76 – 0.485) = 116.485 ⋅ 0.275 = 32.0 kN⋅m = 3200 kN⋅cm (11)
3.2.4 Checking of the rudder stock upper minimum diameter d'0 for torsion
τt,d'0 = Mt1 / WPd'0 = 3200 / 964 = 3.32 kN/cm² < τa (12)
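The chain of formulas (3)-(12) can be retraced numerically as follows; the lever arm 0.76 m is the centre-of-pressure coordinate implied by Eq. (11), and the small deviations in the comments come from the intermediate rounding used in the text:

```python
import math

# Input values from the text (SI units)
rho  = 1000.0      # water density, kg/m^3
T    = 160000.0    # propeller thrust, N
V_A  = 2.5         # advance speed, m/s
A_EL = 2.98        # jet area as used in eq. (6), m^2
k    = 0.78
A_c  = 2.84        # rudder area, m^2

C_T  = 2 * T / (rho * V_A**2 * A_EL)   # 17.18, eq. (6)
V_tC = V_A * math.sqrt(1 + C_T)        # 10.66 m/s (10.6 in the text), eq. (7)
V_C  = k * V_tC                        # ~8.3 m/s (8.268 in the text), eq. (8)

c_n  = 1.05 * math.cos(math.radians(45)) + 0.65 * math.sin(math.radians(45))
P_n1 = rho * V_C**2 / 2 * c_n * A_c    # ~117 kN (116.485 kN in the text)
M_t1 = P_n1 * (0.76 - 0.485)           # ~32 kN*m, eq. (11)

tau  = (M_t1 / 1000) * 100 / 964       # ~3.3 kN/cm^2 (3.32 in the text)
```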
3.2.5 Checking of the rudder stock maximum lower diameter d2 for bending and torsion
τt,d2 = Mt1 / WPd2 = 3200 / 2090 = 2.28 kN/cm² < τa (14)
3.2.6 Conclusions
The actual diameters of the rudder stocks (existing on board) are satisfactory and do not need modification.
- The total force necessary to be achieved by the four double-action cylinders is: FT4 = min. 80.2 tf.
- Slewing time = 20 s (from –45° to +45°)
4. Transmission Elements
Input data for the calculation of the transmission elements is the torsional moment in the rudder stock as calculated previously. The calculation of the forces and moments in the transmission elements has been done using a 2D beam finite element model that simulates the articulated transmission system at a 45° rudder position. The calculation of the articulated arms has been done using 3D SHELL3T finite elements with 6 degrees of freedom per node.
Although the rudders have a maximum rotation angle of 90°, the maximum stresses have been assumed to appear at a 45° angle, due to the fact that between 45° and 90° the engines do not work at full power and the steering capabilities of the ship are reduced by drag.
In conclusion, the calculations have been carried out considering the torsional moment in the rudder stock at a 45° rudder angle.
4.1.1 Model
SAP90 has been used for the model.
Fig. 3 shows the model used to establish the forces in the system articulations. The model comprises beams (numbers in rectangles) and nodes (numbers without rectangles).
The geometrical characteristics of the beams correspond to the mean cross-section of the articulated elements.
4.1.2 Boundary conditions
The nodes have three degrees of freedom (in-plane translations and rotation normal to the plane), except for the fixed articulations (nodes 7, 14, 21, 28), the type 1 arms (nodes 4, 11, 18, 25) and the hydraulic cylinders (nodes 1, 8, 15, 22). Zero bending moment conditions are also imposed at the ends of the bars.
4.1.3 Loads
The model has been loaded with 4 moments corresponding to the rudder stock torsional moments.
Fig. 3: 2D beam finite element model of the articulated transmission system (beam and node numbering)
4.1.4 Results
Table I gives the force and moment values for each beam in the model.
Table II gives the reactions in the fixed articulations, where R = √(Fx² + Fy²).
Table I
Table II
Node Fx Fy Mz R
[daN] [daN] [daN.cm] [daN]
1 22430 477 0 22435
4 -27820 -1795 0 27878
7 11640 1319 -313600 11714
9 -12350 -5666 0 13588
12 6618 5666 -313600 8712
13 21150 -574 0 21158
16 -19230 -1319 0 19275
19 11640 1319 -313600 11714
21 -20710 -5091 0 21327
24 6618 5666 -313600 8712
Calculations for all the elements in the transmission have been made in order to establish the maximum stresses and to compare them with the allowable values.
4.1.5 Conclusions
Except for the hydraulic cylinders, all the existing elements are of sufficient strength. The friction existing in the articulations has not been considered in the calculation.
4.2.1 Model
The model was originally drawn in AutoCAD from photographs and a full-size paper model of the original arms. After numerous checks to ensure the accuracy of the shape, it was exported to COSMOSM and a 3D SHELL3T model was produced.
4.2.2 Boundary conditions
Although the model is 3D, the boundary conditions are imposed as for a 2D model, as shown in Fig. 5.
4.2.3 Loads
For loading the model, the reactions calculated in Section 4.1 have been used.
4.2.4 Results
While for the type 2 arm the stresses are within allowable limits, for the type 1 arm they greatly exceed these limits.
An efficient and economical solution was needed to lower the stresses in the arms. A bracket has been placed in the stress concentration area. Welding the bracket on is an easy and cheap construction task.
Fig. 7: Type 2 Arm original shape
4.2.5 Conclusions
Although only one arm type needed modifications, for safety reasons brackets have been added to both arm types, leading to a significant decrease in the stresses.
5. Trial Program
References
Computer Support for Hull Condition Monitoring with PEGASUS
David Jaramillo, Germanischer Lloyd, Hamburg/Germany, [email protected]
Christian Cabos, Germanischer Lloyd, Hamburg/Germany, [email protected]
Abstract
Corrosion is one of the major issues for the structural condition of a vessel during its service life. In
case of tankers and bulk carriers this aspect is even more crucial due to the exposure of the structure
not only to salt water, but also to other abrasive substances in the cargo spaces. In order to monitor
the condition of structural components, condition assessments are conducted within the scope of class
surveys, statutory surveys and ship owner's surveys in regular time intervals. Information to be
recorded consists of thickness measurements and other findings affecting structural strength like
cracks, coating and anode condition. Today, thickness measurements are typically recorded manually
on ship drawings or tables, i.e. the recording and handling of such information is dominated by paper
work, manual copying of data, and spreadsheets. The amount of data and increasing requirements
with respect to condition assessment demand efficient computer support.
Currently, there is no standard for the storage of thickness measurement data. In this paper the
development of a new computer tool is presented, which supports the thickness measurement process
from planning through recording to visualization. The PEGASUS system makes use of a neutral data
format for hull condition monitoring data, which has been developed for this purpose in the EU
funded research project CAS. The foremost purpose of this data format is the easy association of
survey findings with their location on the ship.
1. Introduction
The condition of the steel structure of ships is subject to requirements of classification societies as
stipulated in the rules for classification on the one hand, and to international requirements controlled by statutory regulations specified in several IMO resolutions (e.g. A.744(18) and MEPC.94(46)) on the other hand. Also from the business and operational perspective, the condition
of the structure of any marine vessel (ship or any other floating unit) is a major concern for the
owning/managing company during the whole service life. Since dry-docking stays represent an
interruption of the service of the vessel, they must be kept to a minimum. Furthermore, for ship types
most vulnerable with respect to corrosion (i.e. tankers and bulk carriers), special requirements from
the cargo owner, vetting procedures and additional inspections such as CAP (Condition Assessment
Programme), are common practice.
Usually, structural defects on ships are identified by means of visual inspections (marine surveys) and
measurements using special instruments (e.g. crack detection, tightness, plate thickness, etc.). These
inspections are carried out by surveyors of the classification societies, by specialized firms and
sometimes by crew staff. Structural defects have often been classified into the following categories,
e.g. in Weydling et al. (2003):
- Material deformation (buckling)
- Material Rupture (cracks, breach)
- Material Degradation (Corrosion, Abrasion)
While the first two categories are relatively easy to localize (except for cracks), quantify, and
characterize, in the third case, especially when considering all possible types of corrosion (general
corrosion, pitting corrosion, grooving corrosion, etc.), the characteristics of the defect are more
difficult to describe, collect and report.
Corrosion is one of the most common types of deterioration of metal structures. It cannot be fully avoided in the maritime business, since most marine vessels are made of steel and are more or less exposed to water or aggressive substances during their service life. The focus of this paper is on the monitoring of corrosion as an important mechanism to keep some control over the process and to prevent, for instance, structural failure or collapse resulting in major damage to life and the environment.
2. Thickness Measurements (TM) for Corrosion Monitoring
Corrosion monitoring in maritime business is today typically performed through ultrasonic thickness
measurements (UTM) carried out by qualified operators using specialized measurement equipment.
Procedures for UTM are well established and mostly governed by requirements of the individual
Classification Societies. The International Association of Classification Societies (IACS) has
introduced so called Unified Requirements (UR) and Procedural Requirements (PR), covering
explicitly the execution of UTM as part of the classification survey procedure, IACS (2004) and
IACS (2006). The individual requirements are available for common ship types and include details on
the scope and locations of measurements as well as recommendations for the reporting format (so
called IACS TM tables, see Fig. 1).
Fig. 1: Example of an IACS TM table (ship 'Chess', class identity no. 30999, report no. RMAB 849): for each plate of a strake (here the 8th strake from the keel strake, upper bilge strake) the original thickness and maximum allowable diminution are listed together with the gauged forward and aft readings, port and starboard, and the resulting mean diminutions
Considering the above mentioned aspects, the logical step for achieving an improvement in
processing time and data quality is the provision of adequate IT support in terms of tools for efficient
data input, assessment and exchange, and correspondingly tailored electronic data formats. In recent years this has been the subject of research and development activities at Germanischer Lloyd in
close cooperation with industrial and academic partners. An example of such activities is the EU
Research Project CAS, in which Germanischer Lloyd in cooperation with two other IACS society
members (Bureau Veritas and Russian Register of Shipping) and other partners is developing a
neutral Data Model for collecting, transporting and storing TM data as described in the following
section. Building on this Data Model, Germanischer Lloyd has implemented the software tool
PEGASUS, for TM data collection, visualisation and reporting. This tool is introduced in later
sections.
Typically, the CAD Drawings and the Excel tables represent today’s maximum degree of IT support
for the TM procedure. There is no standard electronic format for the TM data itself. Results of a
process analysis, Jaramillo et al. (2006), show clearly that a standardized format to be used by all
participants in the process in combination with adequate software tools would solve many of the
aforementioned problems.
Consequently, the Hull Condition Model (HCM) data format developed in the CAS project aims
specifically at improving the current TM process, Jaramillo et al. (2005). Furthermore, HCM covers
other aspects of the Hull¹ Condition Monitoring and Assessment (HCMA) process beyond the scope
of the Thickness Measurements, such as pitting corrosion, coating condition, buckling and cracks.
HCM is based on XML technology and contains the data constructs which are necessary to transport
information about the structural condition of a ship. The Data Model will be proposed to IACS for
standardisation with the aim to reach wide acceptance and hence easy data exchange between the
parties involved in the TM process.
HCM is a data model focussing on the service phase of the vessel. The foremost purpose of HCM is
the easy association of survey findings with their location on the ship. Serving mainly this goal, the
complexity and detail of the model is kept to a minimum.
For that reason, HCM uses simplified geometry in contrast to complex, topology-based structural
definitions used for other purposes in shipbuilding, mainly during the design phase (strength
calculation, manufacturing preparation, etc.). In HCM, simplified geometry is used
- for the graphical representation of the structural parts for both data collection and visualisation
purposes and
- for mapping measurement results to a more complex analysis model.
The level of accuracy of the geometry representation in HCM is comparable to the one found in
sketches and drawings prepared for the same purpose in the current process. The focus of the model is
on the identifiable shape of each individual structural part and its position in the vessel. In contrast,
the relationships between the parts and their accurate shape are less important in this case. In
particular, gaps or overlaps between adjacent plates in the model pose no problem as long as it is
possible for the user to identify the displayed part and associate it with what is seen on the steel
drawings or in the real ship.
The association between measured value and the corresponding structural part can be established by
the user either by means of the position of the measurement point (transported in the HCM file) or by
other linking mechanisms such as a naming scheme (i.e. a globally unique identifier) for each
structural part. As soon as a measurement point has been entered, within the data model it uniquely
references a plate and its local position on the plate.
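The HCM schema itself is not reproduced in this paper. The following minimal Python sketch only illustrates this association principle; all element and attribute names are hypothetical stand-ins, not the actual HCM format (the ship name and class number are taken from the example of Fig. 1):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified HCM-like structure; the element and attribute
# names are illustrative only, not the actual CAS/HCM schema.
root = ET.Element("hullConditionModel", ship="Chess", classNo="30999")
ET.SubElement(root, "plate", id="OS-J14A", strake="J14A",
              asBuiltThickness="35.5")          # [mm]
# A measurement point references a plate by its unique identifier and a
# local (u, v) position on it, so each reading is uniquely located.
point = ET.SubElement(root, "measurementPoint", plateRef="OS-J14A",
                      u="0.25", v="0.60")
ET.SubElement(point, "gauging", value="35.4", unit="mm", side="P")

print(ET.tostring(root, encoding="unicode"))
```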
¹ The term "Hull" is used here to refer to all structural aspects of a vessel, in contrast e.g. to machinery or parts of the electrical installation.
3. TM Process with PEGASUS
PEGASUS is a software tool developed at Germanischer Lloyd to support the TM process. The tool is
intended to be used by TM firm inspectors, by GL surveyors on site and by GL Hull Condition
experts at the Head Office in Hamburg. When designing PEGASUS, special attention has been given
to achieve fast and easy input and visualisation of TM data. Fig. 2 shows a screenshot of PEGASUS
in action.
The concept of the new TM process using PEGASUS is shown in Fig. 3. Starting from an available
POSEIDON structure model, the initial HCM file, containing the simplified geometry representation
of the structure parts, will be generated by means of a corresponding data interface. At this stage, the
HCM file contains general information about the ship, the frame table, information about each
structural part to be measured (plates and stiffeners), the compartment definitions and a list of
structure members. This initial HCM file (pre-survey status) is prepared at Germanischer Lloyd and
sent to the corresponding TM firm.
Fig. 3: The new TM process: an initial HCM file (XML) is generated from the POSEIDON model at GL and exchanged with the TM firm, where PEGASUS couples an ultrasonic gauging device with tabular and graphical displays; the post-survey HCM file and the TM report flow back to GL, and the results can be provided to the ship owner, e.g. via fleet-online and the Pegasus Viewer
A TM configuration consists of a set of measurable structure parts, which are arranged according to a
specific measurement task. For instance, if the task is to measure the whole outer shell plating, the
common arrangement for data collection and reporting is strake based (based on the shell expansion
drawing), but if the task consists of the measurement of a "belt", then a cross section based
arrangement of the structure parts at the specific frame position is more convenient.
The PEGASUS screenshot in Fig. 2 shows typical strake based and cross section based TM
configurations. Data Collection can be achieved in both the tabular and the graphical view.
Furthermore, the gauging values are visible independently of the configuration; e.g. readings taken in
cross section view are visible in the corresponding position in the strake view and vice-versa.
3.2 Data Collection
PEGASUS has been designed to provide support in different user scenarios depending on aspects like
the different way of working of TM firms (e.g. number of operators) and the available equipment
(simple or sophisticated gauging devices, data loggers, etc). Data collection is achieved by entering
the measured values directly in the tables or in the graphical representation. Both views are
interconnected and can be hidden or shown as required. The data can be entered in PEGASUS
directly on site or later in the office. In the latter case, graphical views of the measured areas can be
printed out and taken onboard for data collection. This approach corresponds to the current way of
working, with the advantage, that once the data is entered in PEGASUS further processing is much
easier than today with respect to reporting, visualisation, and assessment.
Additionally, a direct connection to a UTM device is supported. Depending on the device used, a one- or two-directional data exchange is possible. In the first case the measured values are transmitted to PEGASUS and assigned to the corresponding position. In the second case a complete measurement plan can be sent to the UTM device. The inspector goes onboard with the programmed UTM device, in which the measurement plan can be displayed in tabular form, takes the measurements and sends all the results back to PEGASUS. Fig. 5 shows a TM configuration in PEGASUS and the corresponding view on the display of a Krautkraemer DMS-2 UTM device.
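The device interfaces themselves are proprietary, so the following sketch only illustrates the principle of the two-directional exchange: the measurement plan is numbered for the device, and the returned readings are mapped back to their structural locations by that number. The data structures and names are assumptions for illustration:

```python
# Illustrative sketch of the two-directional exchange described above; the
# actual device interface (e.g. of a DMS-2) is proprietary, so the data
# structures and function names here are hypothetical.
plan = [  # measurement plan: device number -> structural location
    {"no": 1, "plate": "J14A", "pos": "fwd-P"},
    {"no": 2, "plate": "J14A", "pos": "fwd-S"},
]

def assign_readings(plan, readings):
    """Map readings returned by the UTM device, keyed by the plan
    number, back to their structural locations."""
    by_no = {p["no"]: p for p in plan}
    return [{**by_no[no], "thickness_mm": t} for no, t in readings]

# Readings come back as (plan number, gauged thickness in mm) pairs:
print(assign_readings(plan, [(1, 35.4), (2, 35.6)]))
```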
3.3 TM reporting
After the data collection phase, the HCM file containing the measurement values is available (post
survey status). At this stage the TM reports can be generated in PEGASUS according to the IACS
requirements. To facilitate integration with word processing software, the Rich Text Format (RTF)
has been chosen for this purpose (see Fig. 6). In fact, automatic generation of IACS compliant TM
reports represents one of the major improvements in terms of time saving with respect to the old
process. PEGASUS provides flexibility in the composition of the TM report by means of a reporting
wizard. Individual reports for each configuration (e.g. for daily reports) or a global report containing
all available TM results can be generated.
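One reason RTF integrates well with word processing software is that it is a plain-text format which can be emitted directly. Below is a minimal sketch of writing gauging results as RTF; it illustrates the format choice only and is not the PEGASUS reporting wizard:

```python
# Minimal sketch of emitting TM results as RTF (a plain-text format); the
# table content is a placeholder, and this is not the reporting wizard.
rows = [("J14A", 35.5, 35.4), ("J13", 35.5, 35.4)]  # plate, org, gauged [mm]

lines = [r"{\rtf1\ansi", r"\b Plate   Org [mm]   Gauged [mm]\b0\par"]
for plate, org, gauged in rows:
    lines.append(rf"{plate}   {org}   {gauged}\par")
lines.append("}")

with open("tm_report.rtf", "w") as f:
    f.write("\n".join(lines))
```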
Fig. 6: Automatically generated TM Report
Measurement points are displayed in different ways. The displayed information can be configured,
e.g. showing the id of the point, the measured value, or an assigned numbering for the UTM device
configuration.
As the result of a TM campaign is contained in an HCM file, which is a neutral data format, the
information about the corrosion status of the vessel can be provided to the ship owner via online
services (e.g. GL's fleet-online). An HCM Viewer (e.g. a special version of PEGASUS) will be
provided as a tool for the Ship Owner to adequately visualize the data.
A new software tool supporting the thickness measurement process has been developed by
Germanischer Lloyd. The target users of PEGASUS are TM firms and GL staff (Surveyors and
Experts) for data collection and data assessment, respectively. Ship owners will be able to view
results of thickness measurements in 2D and 3D visualizations with a special viewing program.
Instead of preparing sketches and drawings after taking the measurements, a structural model of the
ship will be prepared before the inspection. A derived HCM model can then be used to plan the
measurements. By directly associating gaugings to the model during inspection using PEGASUS, the
complete results of the measurement campaign can be available directly afterwards. Thereby,
inspection results can be presented earlier to the ship owner and potential sources of error are reduced
because of significantly less manual interaction during measurement and reporting.
Acknowledgements
The HCM Data Model is being developed in cooperation with the members of the CAS consortium
(Bureau Veritas, Russian Register of Shipping, Materiaal Metingen, SENER, Intertanko, Lisnave,
Instituto Superior Tecnico, Total and Cybernetix). This work is supported by the European
Commission in the Sixth Framework Programme. The joint work and the funding provided by the
European Commission are gratefully acknowledged.
The data interfaces to and from UTM devices have been developed in cooperation with GE Inspection
Technologies (formerly Krautkraemer).
References
BRUCE et al. (2003), Inspection and Monitoring, in: Proceedings of the 15th International Ship and
Offshore Structures Congress, Mansour, A.E., Ertekin, R.C., eds., 2003, Vol. 2, pp. 37-69.
IACS (2004), PR 19 "Procedural Requirement for Thickness Measurements", Revision 3, June 2004,
http://www.iacs.org.uk/preqs/PR19R3.pdf
IACS (2006), UR Z "Requirements concerning Survey and Certification", Revision 12, 2006,
http://www.iacs.org.uk/ureqs/URZ.pdf
IACS1 (2004), Guideline 77 "Guidelines for the Surveyor on how to Control the Thickness
Measurement Process", Revision 1, July 2004, http://www.iacs.org.uk/_pdf/Rec77.pdf
IACS2 (2004), PR 23 "Procedures for Reporting Information on the Approval of Thickness
Measurement Firms", Revision 1, December 2004, http://www.iacs.org.uk/preqs/PR23R1.pdf
IMO (2001), Resolution MEPC.94(46) "Condition Assessment Scheme", 2001.
IMO (2003), Resolution MEPC.111(50), Amendments to regulation 13G, addition of new regulation
13H, 2003.
JARAMILLO D., CABOS C., RENARD P. (2005), Efficient Data Management for Hull Condition
Assessment, International Conference on Computer Application in Shipbuilding ICCAS 2005, Pusan,
Korea, Sept. 2005.
JARAMILLO D., MIKELIS N., CARTAXO A., ROUTISSEAU L., MOERLAND P. (2006), CAS
Deliverable D-1-2-1 "Business Process Analysis and User Requirements", Draft version 0.7, March
2006.
JARAMILLO D., CABOS C., RENARD P. (2006), Efficient Data Management for Hull Condition
Assessment, International Journal of CAD/CAM Vol.6, No. 1, 2006
WEYDLING C., KREBBER K. (2003), Erfassung und Verarbeitung von Dickenmessungen,
Abschlussbericht Teilprojekt WIPS II B3, GL-Report BFC 2003.282, 2003.
Ship domain in navigational situation assessment in an open sea area
Abstract
Shipboard navigational systems in use nowadays enable the collection of detailed data on vessels
encountered. Therefore, the data can be used in an analysis and assessment of a navigational
situation. The assessment provides a basis for making a decision on the type and scope of a
manoeuvre to be performed. The ship domain as a criterion for navigational situation assessment has
been more frequently used recently. This domain can be determined by artificial intelligence methods.
These methods make it possible to utilize the knowledge and experience of navigators. This article presents the results of ship domain determination for various encounter situations in an open sea area. Certain ship parameters have been taken into consideration in the research.
1. Introduction
Safe steering of a ship calls for continuous identification and assessment of a navigational situation.
The assessment is based on some defined criteria which classify a given situation into a specific category (safe situation, dangerous situation). One natural criterion of navigational safety
is the distance of a ship to other objects recognized as dangerous to navigation. In an open sea area
these are generally other ships or submerged or surface navigational obstructions, such as a wreck,
shallow water, drilling rig etc. Minimum distances from these objects for all heading angles of the
ship determine an area around it that should be clear of other objects. This area is called ship domain.
While determining a ship domain, navigators refer it to other objects. Therefore, it is natural that both
the geometrical dimensions of an encountered object as well as its velocity vector are taken into
consideration. Methods for the determination of a ship domain are statistical, analytical, or those employing artificial intelligence tools. The latter are capable of acquiring and using the declarative
(descriptive) knowledge of navigators concerning situation assessment.
The process of identifying the domain shape and size can be carried out automatically, by using
available information on present parameters of the encountered objects. The sources of such
information are, among others, AIS (Automatic Identification System), LRIT (Long Range
Identification and Tracking), VTS (Vessel Traffic Services) or hydrographic databases.
According to the definitions of ship domain as formulated by Fujii [2], Goodwin [3] and Coldwell [1],
by using this criterion we can assess a navigational situation classifying it as either a safe or
dangerous situation. In practice, certain flexibility is allowed in this clear-cut classification, concerning the entry of objects into the ship domain area as well as the maintaining of a 'slightly' larger
area than the one determined by ship domain. Therefore, it seems justified to build domains the shape
and size of which would be dependent upon the navigational safety level assumed by the navigator.
The safety level can be described in linguistic terms (safe situation, unsafe, dangerous, very dangerous
or others) or by numerical values in an adopted range, e.g. <0, 1>. This type of domain is referred to
as the ship fuzzy domain [4].
This article will present a method for the determination of a ship domain and its use in a navigational
situation assessment.
Safe navigation means safe manoeuvring and is strictly connected with the process of navigational
situation identification and assessment. The result of the assessment will affect taking proper
decisions and actions.
2.1. The closest point of approach and the guard distance
The distance criterion is commonly used in navigational practice in such equipment as radars and anti-collision systems. The distance used is, in fact, the closest point of approach (CPA). The navigator sets the limit CPA, a boundary value of the distance to other encountered ships that should be maintained. When another ship comes closer than the limit CPA, the situation is interpreted as dangerous and necessitates action to avoid it. Other distances, as defined in the COLREGs, are also used. It is recommended to search on long radar ranges to make sure there is no risk of collision. That is why the navigator often sets guard zones for those heading angles at which navigational situations change dynamically (high values of other vessels' relative speeds). Consequently, the guard distance dg determines a certain boundary from which all objects are monitored and from which their approach increases the danger (Fig. 1).
The navigator identifies and assesses navigational situations (ship encounters) in the distance interval $\langle d_{CPA}; d_g \rangle$. All situations in the distance interval $(0; d_{CPA})$ are critical ones, while objects at distances greater than $d_g$ are merely monitored.
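The limit-CPA test rests on the standard closest-point-of-approach computation from relative kinematics. A minimal sketch, independent of any particular radar or anti-collision system:

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach from relative kinematics.

    Standard formulas: with relative position r and relative velocity v,
    TCPA = -(r . v) / |v|^2 and DCPA = |r + v * TCPA|.
    Positions in Nm, velocities in kn -> TCPA in hours, DCPA in Nm.
    """
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                      # no relative motion
        return math.hypot(rx, ry), 0.0
    tcpa = -(rx * vx + ry * vy) / v2
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return dcpa, tcpa

# Target 4 Nm ahead on a reciprocal course: DCPA = 0 Nm, TCPA = 0.1 h
print(cpa((0, 0), (0, 20), (0, 4), (0, -20)))
```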
The method adopted for the determination of ship domain boundary is essential for both its shape and
size as well as domain interpretation.
Methods and tools of fuzzy sets theory have been engaged for the purpose. These tools allow the degree of membership of each distance d to a danger (another ship) in the set of dangerous distances to be defined (Fig. 2). Thus we obtain a fuzzy set A of ordered pairs:

$A = \{(d, \mu_A(d)) \mid d \in X\}$    (3)

where
$d \in R_+$ – distance from a danger (another ship)
$\mu_A(d): X \rightarrow [0, 1]$ – function of membership in the set 'dangerous distance'
Fig. 1: Area of the navigational observation
Fig. 2: Function of membership to the set 'dangerous distance' μA(d)
According to the definition of a fuzzy set, the ship fuzzy domain DSF on the heading angle ∠Ki is described by the membership function $\mu_{DSF_{K_i}}$. The navigational safety level γ in a situation when the other object is on this heading angle at a distance $d_{K_i}$ can then be expressed by the formula (Fig. 3):

$\gamma = \mu_{DSF_{K_i}}(d_{K_i})$    (5)
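A minimal sketch of formulas (3) and (5), assuming for illustration a piecewise-linear membership function with arbitrary breakpoints; the actual functions are derived from the knowledge and experience of navigators:

```python
def mu_dangerous(d, d_safe=2.0, d_critical=0.5):
    """Illustrative piecewise-linear membership in the fuzzy set
    'dangerous distance': 1 below d_critical, 0 beyond d_safe.
    The breakpoints are assumed values, not research results."""
    if d <= d_critical:
        return 1.0
    if d >= d_safe:
        return 0.0
    return (d_safe - d) / (d_safe - d_critical)

# Safety level gamma on a given heading angle, Eq. (5):
# gamma = mu_DSF_Ki(d_Ki); here 1 = very dangerous, 0 = very safe.
for d in (0.3, 1.0, 3.0):
    print(f"d = {d} Nm -> gamma = {mu_dangerous(d):.2f}")
```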
Fig. 3: Fuzzy domain; domain boundaries for various levels of navigational safety γ (γ ∈ ⟨0, 1⟩);
γ = 0 – very safe situation; γ = 1 – very dangerous situation (collision)
3. Research
Expert research has been carried out in which ship encounter situations were assessed under conditions of good visibility. The research was aimed at the determination of safe distances for a variety of situations in the open sea.
The participants were navigators – ship captains and watch officers with diversified sea service backgrounds. The research method was a questionnaire. The navigators were asked to determine safe distances for various encounters of two ships. The following parameters were considered:
- ship size,
- ships' courses,
- heading angle of the encountered ship.
Three ship lengths were examined: 100 m, 200 m and 300 m. The assessed situations were classed into nine groups by ship length combinations. The research gathered 10,686 fact sets, which made it possible to include the above parameters in the navigators' assessment.
Fig. 4: Function of density of the safe distance (normal distribution) for encounters of own and target
ships with lengths: a) 100 and 100 [m]; b) 100 and 300 [m]; c) 300 and 300 [m].
Examining the curves of the distributions, we can see that navigators agree more closely on the value of the safe distance to another ship when the ships are smaller: the smaller the ships, the more their estimated mutual safe distances are in accordance (the smaller the scatter). The respective accuracies for the results presented above range from h100 = 0.53 (100 m long ships) to h300 = 0.33 for the longest ships. Navigators must be aware of the fact that in extreme cases the differences between the safe distances as fixed on the two ships may amount to a few nautical miles. Such a situation may lead to misinterpretation of mutual manoeuvres and, consequently, to a wrong assessment of a navigational situation.
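The curves of Fig. 4 are normal densities fitted to the questionnaire answers. A minimal sketch of such a fit with made-up answer sets (the study's 10,686 fact sets are not reproduced here); a smaller standard deviation corresponds to the closer agreement observed for the shorter ships:

```python
import statistics

# Hypothetical questionnaire answers (safe distances in Nm) for one
# encounter configuration; placeholders, not the study's data.
answers_100m = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 1.1]
answers_300m = [2.0, 3.5, 1.8, 2.8, 4.0, 2.2, 3.0]

for name, xs in (("100 m ships", answers_100m), ("300 m ships", answers_300m)):
    mu, sigma = statistics.mean(xs), statistics.stdev(xs)
    # A smaller sigma means the navigators agree more closely on the
    # safe distance, as observed for the shorter ships.
    print(f"{name}: mean = {mu:.2f} Nm, sigma = {sigma:.2f} Nm")
```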
3.2 Assessment of a navigational situation including momentary values of target ship’s course
The research also focused on the effect of the target ship's course on the safe distance between ships. From the facts gathered, the density functions of the safe distance were determined for encounters of ships of various lengths. The distances were assessed in relation to the heading angle at which the target ship was observed.
The largest differences in the assessment of safe distance were found for the longest ships, when the
target ship is abeam to starboard, steering the relative course 270°, (Figs. 5 and 6).
Fig. 5: Function of density of the safe distance (normal distribution) for own and target ship
encounter situations; own ship on course 000°; target ship on heading angle 045°, course 315°: a)
own and target ships are 100 [m] long; b) own ship is 100 [m] long, target ship is 300 [m] long; c)
own and target ships are 300 [m] long.
Fig. 6: Function of density of the safe distance (normal distribution) for own and target ships; own
ship on course 000°; target ship on heading angle 045°; for various courses of target ship; both ships
are 300 [m] in length
The navigator assessing a navigational situation often uses general (averaged) assessment criteria.
These enable a rough and fast evaluation. In many cases such an assessment is sufficient for unequivocal identification. In case of doubt, or when continuous assessment of the situation is needed (action of the ship having the right of way), specific criteria will have to be applied. These criteria will take into
account momentary values of navigational situation parameters. The curves shown above present the
safe distance determined from the current course of the target ship and its location relative to own
ship. This is a specific criterion that can be developed for each navigational situation in an open sea
area.
3.3 Navigational situation assessment for target ships on various heading angles
Another stage of the research was concerned with an analysis of safe distance to other ships, taking
into account heading angles on which a given encounter takes place. The full variety of target ships
courses was examined. It was found that there was a visible difference in the scatter of safe distance
values (Fig. 7). The differences are much larger for those heading angles at which the dynamics of the navigational situation change is high (high values of relative speeds). From the point of view of
navigational safety this is not a good situation. The estimated safe distance should satisfy the
expectations of the target ship as well. Then the assessment of a navigational situation and following
manoeuvres will be understandable for both parties involved. The navigator has to be aware of the
differences in determining the safe distance to the ships. The knowledge of the potential scatter will
only increase the safety of marine navigation.
Fig. 7: Function of density of the safe distance (normal distribution) for encounters of own and target
ships being 300[m] long; own ship is on course 0°; target ship is on selected heading angles.
From the performed research and calculations, the boundaries of the examined ships' domains were determined. The use of a criterion such as the ship domain for navigational situation assessment allows the examined ship encounter situation to be unequivocally assigned to one of two groups: safe or dangerous.
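A minimal sketch of this two-level criterion, assuming the domain boundary is tabulated per heading angle; the boundary values below are placeholders, not the determined domains:

```python
# Sketch of the two-level (safe / dangerous) domain criterion: compare
# the distance to the target with the domain boundary on its heading
# angle. The boundary values below are assumed, not the study's data.
boundary = {0: 3.5, 45: 3.0, 90: 2.0, 135: 1.5, 180: 1.2,
            225: 1.5, 270: 2.5, 315: 3.2}   # [Nm] per heading angle

def assess(heading_angle_deg, distance_nm):
    # Use the nearest tabulated heading angle (simplified lookup).
    key = min(boundary, key=lambda a: abs(a - heading_angle_deg % 360))
    return "dangerous" if distance_nm < boundary[key] else "safe"

print(assess(45, 2.0))   # inside the domain  -> dangerous
print(assess(45, 4.0))   # outside the domain -> safe
```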
Fig. 8: A domain of a ship 300[m] long during an encounter with a ship of the same length
Fig. 9 presents momentary domains, accounting for the courses of both ships and the heading angle on the target ship. In addition, the areas accounting for the value of the standard deviation of the determined domain boundary are marked.
Fig. 9: Momentary domain for encounters of a ship on the heading angle 045°, course 315°: a) ships are 100 [m] long; b) ships are 300 [m] long.
Fig. 10: Ship fuzzy domain: a) encounter of ships 300 metres long;
b) encounter of ships 100 metres long.
Fig. 11: Momentary fuzzy domain in an encounter situation; target ship on course 315°
a) ships are 100[m] long; b) ships are 300[m] long.
Summary
The use of ship domain criterion makes it possible to assess a navigational situation in a manner
readable for the navigator, reflecting his/her idea of a safe area around the ship. The application of the
declarative knowledge of navigators in the process of domain determination is an essential issue. The acquisition and representation of navigators' knowledge can be achieved through expert research, which in turn can be done by statistical methods or by means of methods and tools of artificial intelligence. The domain criterion allows a navigational situation to be assessed on a two-level scale, while the fuzzy domain enables taking into account various levels of safety. This makes it
possible to follow the changes: increase or decrease in the safety level, which is significant for the
navigator’s decision making. Both ship domain and fuzzy domain may be used for supporting
decisions to be made by the navigator in the process of situation assessment and the planning and
optimization of ship movement trajectory. Taking into account a larger number of parameters
describing a navigational situation, as is the case in momentary domains, allows assessing a
navigational situation with higher accuracy, which is significant for manoeuvre planning, including
the prediction of target ship movement.
Bibliography
1. COLDWELL T.G. (1983), Marine traffic behaviour in restricted waters, Journal of Navigation, No. 36.
2. FUJII Y., TANAKA K. (1971), Traffic capacity, Journal of Navigation, No. 24.
3. GOODWIN E.M. (1975), A statistical study of ship domain, Journal of Navigation, No. 28.
4. PIETRZYKOWSKI Z. (2001), The analysis of a ship fuzzy domain in a restricted area, IFAC Conference Computer Applications in Marine Systems CAMS'2001, Elsevier Science Ltd.
5. PIETRZYKOWSKI Z., URIASZ J. (2004), The ship's domain in a deep-sea area, 3rd International Conference on Computer and IT Applications in the Maritime Industry, COMPIT'04, Siguenza.
6. PIETRZYKOWSKI Z., URIASZ J. (2005), Methods and criteria of navigational situation assessment in an open sea area, 4th International Conference on Computer and IT Applications in the Maritime Industry, COMPIT'05, Hamburg.
Abstract
Over the past few years it has become apparent that fundamental application problems can be experienced with the current probabilistic damage stability rules. Added to the fact that the legislation will be adapted, as well as more broadly applied, in the new harmonized MSC.80 rules, it may be time for a novel application approach. Such an approach, based on numerical integration of the probability functions, is presented in this paper, along with a dedicated computer program.
1 Introduction
Since 1992, probabilistic damage stability regulations for dry cargo ships have been in effect. Although a lot of practical experience has been gained since that time, ship designers and shipyards may still find these regulations irksome, for which there can be a number of reasons. These can be the intrinsic technical problems of the regulations, a subject that will be discussed in the next section, or the fact that a designer loses touch with the profession. And even after the damage stability properties have been established properly, the approval procedure may slow down and confuse matters further, because there appears to be room for interpretation differences, although the legislative documentation aims at a uniform application. If classification societies or shipping inspections interpret differently from the designer, significant differences can occur between issued and verification calculations. This may be amplified by the fact that a designer may try to squeeze the calculation to its limits by means of an extensive optimization procedure, while the assessment body does not always have the opportunity to cope. All these issues can lead to mutual misunderstanding or irritation.
In order to improve the situation, an ad hoc discussion forum between the administration,
classification societies, yards and designers has been organized in the Netherlands. Also some
elementary test cases were designed and evaluated. It was striking to see how easily errors can
be made, even with simple cases. Not only human errors were noted, but also systematic or
software errors. This emphasized the necessity of a detailed and open communication between
all parties involved, a conclusion which was endorsed by all participants.
1. Only one damage per compartment is taken into account. A compartment with multiple
branches must be split virtually into different parts.
2. The method relies on a regular compartment layout, which means a configuration where
all transverse bulkheads extend over the entire hull, and where longitudinal bulkheads
and decks extend from one transverse bulkhead to another. However, in practice these
restrictions are often violated, so the layout will become non-regular; examples are sketched
in Figs. 1.a and 1.b. An irregular layout cannot be processed without fictitious subdivision
in order to make it virtually regular.
Figure 1: Examples of non-regular compartment lay-out: a. A complete side tank; b. Schematic example; c. Warped bulkhead
3. The method is designed for vessels with strictly vertical bulkheads and horizontal decks.
Compartments bounded by one or more warped bulkheads, e.g. as sketched in Fig. 1.c,
are not foreseen.
4. The formulae of SOLAS 1992 may, especially with long and narrow side-compartments,
intrinsically lead to negative probabilities. This remark does not apply to SOLAS 2009,
where a revised set of equations is used.
5. The probability of damage to two or more adjacent compartments involves the subtraction
of the probabilities of the smaller damages. In the forward and aft regions of a ship, where
the waterlines are narrowing, this may lead to negative probabilities.
6. According to the explanatory notes of SOLAS 1992, IMO (1991), the maximum penetration at side compartments shall not exceed twice the minimum penetration. This constraint can limit the penetration depth very severely. In Fig. 2.a the case is sketched where only compartment 1 is damaged. The evident penetration is according to the angled line, with the penetration depth indicated by b. However, in this case b1 = 0 and b2 > 2b1, so the penetration limitation rule is violated. The only solution to comply with this rule is to set b2 also to zero, which results in the penetration depth b as depicted in Fig. 2.b. With a hollow waterline, b would even become negative, which leads to a probability of damage of zero for this rather realistic damage case.
In the regulations of SOLAS 2009 the penetration limitation rule has been slightly changed; the criterion is now that the mean penetration shall not exceed twice the minimum penetration. With b > 2b1 this rule is clearly violated in Fig. 2.a. What would be an appropriate penetration depends upon the interpretation of the rule. If the mean penetration is taken as the absolute value of b, and the minimum penetration is only measured at the extremities of the damage case, a penetration as indicated by the dashed line in Fig. 2.b would result. If the signed value of b is used, no penetration whatsoever can exist which complies with the rule.
We consider this whole subject of penetration limitation rather unfortunate. It may have
its purpose, but in the present form it is confusing and unrealistic.
7. For the determination of the probability of damage a number of crisp damage boundaries
are applied. These may appear to be evident at first sight, but will be less appropriate
when the actual compartment boundaries are not so crisp. For instance, the factor x1 in
SOLAS 1992 is defined as ‘the distance from the aft terminal of Ls to the foremost portion
of the aft end of the compartment being considered’. But take a damage to compartment
D in Fig. 1.b, what is here the aft end of the compartment, and the applicable foremost
portion of it? Nobody can explain this on a normative basis (which might be the reason
that the explanatory notes IMO (1991) contain predominantly examples instead of explanations).
The whole concept of crisp boundaries is introduced by the modeling of stochastic events, where the common approach is to 1) record the events, 2) construct a histogram on the basis of these recordings, 3) normalize this histogram and approximate it with a Probability Density Function (PDF), and 4) integrate this PDF in order to obtain a Cumulative Distribution Function (CDF). This CDF can be evaluated conveniently to predict the occurrence of events below or above a certain magnitude. This approach should also be used in the context of probabilistic damage stability for ships; it is in the fourth step, the integration, that the crisp boundaries are introduced, since after all the CDF is the definite integral of the PDF.
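The four-step approach above can be mirrored directly in a numerical setting. A minimal sketch with synthetic 'recorded' damage lengths, in which the CDF is obtained by cumulative numerical integration of the normalized histogram rather than from a closed-form PDF:

```python
import numpy as np

rng = np.random.default_rng(0)
events = rng.weibull(1.5, 2000) * 0.1     # step 1: synthetic recordings

# Steps 2-3: histogram, normalized so that it integrates to one (a PDF).
density, edges = np.histogram(events, bins=50, density=True)
width = np.diff(edges)

# Step 4: cumulative integration of the PDF yields the CDF.
cdf = np.cumsum(density * width)
print(cdf[-1])   # 1.0: the CDF reaches unity at the largest event
```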
Fig. 2: The penetration limitation rule: a) the evident penetration b, with b1 = 0 and b2 > 2b1; b) the penetration b resulting from setting b2 to zero
1. A zone, i.e. a portion of the vessel between two longitudinal boundaries (e.g. transverse bulkheads). The use of the zonal concept forces the subdivision model into regularity, thus avoiding certain pitfalls as described in Section 2. The zonal model is artificial; it is an abstraction of the actual subdivision, and as such will produce a less accurate result. It is funny to see that the zonal concept has become rather popular, although it is not even mentioned in SOLAS 1992 (it is mentioned, however, in the explanatory notes IMO (1991)). In SOLAS 2009 the terms zone and compartment are entangled, but the zone is not defined at all.
2. A compartment. This is the most obvious choice, for it corresponds to the actual subdivi-
sion and it matches the terminology of the regulations.
3. A sub-compartment. Splitting a compartment, e.g. as indicated by the dotted lines, creates two entities which can both be damaged. Another example is presented in Fig. 3, where the assumption that each compartment is affected by a single damage does not hold for compartment 1. A further division of this compartment, into entities called sub-compartments, for instance along the dotted line, will make it affected by two damages: B-C and D-E. Of course, for the determination of the probability of survival, the compartment is always taken as a whole.
Figure 3: Example of a compartment (1) affected by two damages, B-C and D-E
4. None. If the PDF's are not a priori integrated, the whole usage of crisp boundaries disappears. Consequently there is no need for any atomic portion concept. Instead, the PDF's are integrated numerically, as proposed in Koelman (2005). This numerical integration method, for which the algorithm is depicted in Fig. 4, takes into account the true compartment shape, including possible niches, irregularities and warped or even curved boundaries. The application of numerical integration to the subject of probabilistic damage stability can be compared with the developments in the area of structural strength. Initially, analytically determined standard solutions for the deflection of beams were utilized, but for more complex structures the division into very small, but Finite Elements proved to be more flexible.
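To make the idea concrete, here is a minimal sketch of such a numerical integration (in spirit only; the actual algorithm is the one of Koelman (2005), depicted in Fig. 4): a joint density of two dimensionless damage parameters is integrated with a midpoint rule over the region that floods a given compartment. The uniform density and the rectangular flooding test are placeholders chosen so the result can be checked by hand:

```python
import numpy as np

def damage_probability(pdf, floods, n=400):
    """Midpoint-rule integration of a joint density pdf(y, z) over the
    region where floods(y, z) is True; y, z are dimensionless damage
    parameters on [0, 1]. Both pdf and floods are placeholders here."""
    h = 1.0 / n
    c = (np.arange(n) + 0.5) * h          # cell midpoints
    y, z = np.meshgrid(c, c, indexing="ij")
    mask = floods(y, z)
    return np.sum(pdf(y, z) * mask) * h * h

# Placeholder density (uniform) and a compartment flooded by damages
# with 0.3 <= y <= 0.5 and penetration z >= 0.1:
p = damage_probability(lambda y, z: np.ones_like(y),
                       lambda y, z: (y >= 0.3) & (y <= 0.5) & (z >= 0.1))
print(p)   # 0.2 * 0.9 = 0.18
```

Replacing the uniform density by the regulatory PDF, or by a spline fitted through observed data, changes nothing in the integration itself, which is exactly the flexibility argued for below.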
Apart from the practical advantages of the numerical integration method, it may also offer a benefit for the derivation of the statistical model. It is common practice that stochastic events are approximated by PDF's from simple functions, because the analytical integration process is eased that way (see Pawlowski (2004) for an in-depth discussion). With numerical integration this reason can be discarded, further allowing the use of PDF's which approximate the observed events more accurately. What's more, the events do not necessarily have to be modeled by one of the standard statistical functions. For instance, a spline or any other smooth function through the data points would also do. An explorative investigation into the modeling of the damage length, based on data published in Lützen (2001), is presented in Table 1. From this table it can be concluded that there is room for increased accuracy, which can be exploited by the numerical integration method. The author recognizes the fact that this aspect is only theoretical, because the accuracy of the PDF modeling is at this moment not an object of practical concern, but the point remains.
A distinction is made between 'SOLAS 1992' and 'Reconstructed SOLAS 1992'. The reason lies in the way the longitudinal and transverse subdivisions, as expressed in the factors p and r, are treated. The PDF of the product of the two can be obtained by differentiation of the equations of SOLAS (2004), and is plotted in Fig. 5. However, according to the prescriptions of the regulation, p and r must be determined separately. For multi-damage cases this leads in effect to a product pr as plotted in Fig. 6. One would expect the reduction factor r to lie in the interval [0, 1], but instead the resulting r-values appear to lie between −∞ and ∞. The reason for this phenomenon is that in some cases pr ≠ 0 while at the same time p ≈ 0, so that r = pr/p has a very large positive or negative value. With such large values of |r| it is useless to draw r as a function of the dimensionless damage length y and dimensionless penetration z, so in Fig. 6 r is represented by a color distribution which is painted on the graph of pr, while, in order to avoid extreme values, |r| is limited to 1000. Readers who consider the wild character of Fig. 6 surprising can verify our findings with the stand-alone 188-line Pascal computer program solaspdf.pas, which is available on the Internet¹.
Due to this anomaly, the results as obtained by numerical integration of the theoretical PDF's differ from those acquired with the regulatory CDF's. However, by means of reverse engineering another PDF was derived, which is different from the theoretical one but gives numerical results in line with a conventional calculation. This alternative PDF is not available in closed form; instead, tables of probabilities, similar to the output of solaspdf.pas, are used directly. Summarized, the 'SOLAS 1992' method is theoretically in line with SOLAS (2004) but gives deviant results, while the 'Reconstructed SOLAS 1992' method is theoretically nonsense but provides the user with results which are compatible with a conventional calculation. This whole question does not arise with SOLAS 2009, because its foundation is much more solid, treating p and r combined.
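The blow-up of r = pr/p is easy to reproduce with a numerical caricature; the values below are arbitrary and are not taken from the SOLAS formulae:

```python
# Numerical caricature of the described anomaly: when the separately
# determined p approaches zero while pr does not, r = pr / p explodes.
# The values are arbitrary, not taken from the SOLAS equations.
for p, pr in [(0.1, 0.05), (1e-3, 0.05), (1e-6, 0.05), (-1e-6, 0.05)]:
    print(f"p = {p:>8}: r = pr/p = {pr / p:.3g}")
```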
¹ Solaspdf.pas can be obtained from www.sarc.nl, 'Download' section, login as non-registered user, go to the probability of damage subdirectory.
Table 1: Standard errors of a few approximation models for the PDF of the damage length
4.2 Calculation switches and general functionality
The new computer program was designed for maximum flexibility. That implies that as many choices as feasible are not pre-programmed, but offered to the user, at least as long as that particular subject is applicable to the chosen calculation method. Possible switches are:
• Five supported regulations: A.265, SOLAS 2009, dr-67 (for hopper dredgers with reduced
freeboard), SOLAS 1992 and reconstructed SOLAS 1992.
• Choice between the global and local penetration rule for multi-compartment damages (see
Koelman and Pinkster (2003), Sub-section 4.3).
• Choice between two penetration limitation rules (as discussed in Sub-section 2.6): b1 and b2 both < 2 min(b1, b2) (according to SOLAS 1992) and bmean < 2 min(b1, b2) (according to SOLAS 2009). Furthermore, four application scenarios for the penetration limitation rule: a) do not apply the rule, b) apply the rule, except for damages which extend to the centerline, c) apply the rule, except for damages with an inner boundary parallel to the centerline, and d) apply the rule in all cases.
• Whether or not to set a, the contribution of each damage case to the attained subdivision
index, at zero if it happens to be negative.
• The determination of the critical VCG (at a selected draft) so that the attained subdivision
index A will equal the required subdivision index R.
• Generation of damage cases (taking into account possible cross-flooding by pipes or ducts),
and automatic determination of damage boundaries (for the compartment-based and sub-
compartment-based methods). Besides being generated, damage cases can also be
defined or modified manually (see Fig. 7 for an example of the input window).
• Output of intermediate results to text file, for human reading, and to spreadsheet for
further analysis.
• For higher processing speed, several tasks which can be performed simultaneously are distributed over multiple processors or processor cores, if available. In this way the processing speed is doubled on computers equipped with the latest hyperthreading or dual-core technology.
4.3 Results
The presentation of the results depends on the applied regulations, and particularly on the chosen calculation method. In particular, attention is focussed on the aggregated probability of damage, ∑prv, which should theoretically approach or equal unity, and which can be used as a measure of completeness. A ∑prv of less than 1 indicates an incomplete calculation, while a value greater than 1 is a sign of overcompleteness, although both phenomena may also be the numerical expression of the problems of Section 2. Anyway, in this respect complete must be distinguished from accurate: for instance, a zonal calculation with a modest number of zones can be complete, because the whole vessel is covered, but still inaccurate, because the zonal division approximates the true subdivision only roughly.
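As a sketch, the completeness check itself amounts to little more than the following (the prv values are placeholders):

```python
# Sketch of the completeness measure: sum the probabilities prv of all
# generated damage cases. The values below are placeholders.
prv = [0.31, 0.22, 0.18, 0.15, 0.09, 0.04]
total = sum(prv)
status = ("incomplete" if total < 1.0
          else "overcomplete" if total > 1.0 else "complete")
print(f"sum(prv) = {total:.3f} -> {status} calculation")
```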
For the different calculation methods, the following observations can be made:
• Calculations according to the zonal method have a conventional output and format, see Fig. 8 for an example. In the past, the so-called probability triangle was occasionally requested to be plotted for each damage case. As shown in the example, this now belongs to the possibilities, although the author considers this rather useless. Due to a lack of experience with the zonal method we are not in the position to indicate values for ∑prv, although it may be expected that due to the regular subdivision it will approximate unity.
• The results with the compartment-based method are essentially the same as with the older PIAS module. For SOLAS 1992 calculations, ∑prv commonly lies in the range between 0.9 and 1.20. For a conventional cargo vessel, a couple of hundred damage cases are typically employed.
• The numerical integration method works as expected. With the reconstructed SOLAS 1992 'regulations' the results are more or less the same as for the compartment-based method, which is no surprise because these 'regulations' have been designed to mimic the conventional behavior of SOLAS 1992. For SOLAS 1992 and SOLAS 2009, with 0.998 < ∑prv < 1.000, the completeness is pretty high. Another phenomenon is the occurrence of zero-compartment damages. These are damage cases which can occur according to the system of regulations, but which lie outside the vessel's hull, e.g. the shaded areas in Fig. 9. Provided that the vessel complies with the survival criteria in intact condition (which will normally be the case), these damage cases contribute to the attained subdivision index. Finally, it can be mentioned that, although the basic algorithm is rather simple, if a small integration increment is applied, the number of integration steps can become so high that the idea that the calculations can be verified by a human must be abandoned. Therefore the program is equipped with the option to store all intermediate integration steps in a text file, which could be utilized subsequently by an automated verification system.
• For SOLAS 1992 the achieved A is proportional to the accuracy of the applied method: the zonal method ranks low, (sub-)compartment medium and numerical integration high. For reconstructed SOLAS 1992 and SOLAS 2009 the ranking from low
• For this particular vessel SOLAS 2009 is a more severe requirement than SOLAS 1992, or,
in other words, has a higher safety margin.
• The listed values are ‘raw’, that means that no ‘optimization’ is applied which could
increase A. With the (sub-)compartment method, our previous experience is that by e.g.
removing damage cases with a negative probability prv, the final A value can be increased
by a couple of percentage points.
• The number of damage cases for the numerical integration method is rather low. Indeed, the program reported that missing damage cases account for a combined prv of about 0.05. Increasing the number of damage cases might raise ∆I to some extent.
• One aspect that is not evident from this table is the effect of the penetration limitation rule of Sub-section 2.6. This rule is taken into account automatically with the (sub-)compartment method, while it is not applicable at all with the numerical integration method. With the zonal method, however, it is the user's responsibility to choose the zone boundaries in such a fashion that this rule is complied with. For the ship under consideration that is a laborious task, which was not completed. Consequently, the zonal calculation does not fulfill the penetration rule.
6 Conclusions
This paper has described the application of a numerical integration method for the determination of the probability of damage, and its application to the existing and forthcoming SOLAS rules. It has been shown that this method gives more stable results than conventional approaches, while a number of practical pitfalls are avoided. Preliminary application of the different calculation methods to both types of regulations suggests that for a smaller cargo vessel, the new rules are more stringent than the existing ones.
References
IMCO. Res. A.265: Regulations on Subdivision and Stability of Passenger Ships as an Equivalent
to part B of Chapter II of the International Convention for the Safety of Life at Sea. London,
1974.
IMO. 1991. "Resolution A.684: Explanatory notes to the SOLAS regulations on subdivision and damage stability of cargo ships of 100 metres in length and over."
Koelman, H.J. “Damage Stability Rules in Relation to Ship Design - Freedom is just another
word for nothing left to lose.” Proc. WEMT’95 (West European Conference on Marine Tech-
nology): Ship safety and protection of the environment from a technical point-of-view.. 1995,
45–56.
Koelman, H.J. “On the procedure for the determination of the probability of collision damage.”
International Shipbuilding Progress 52 (2005): 129–148.
Koelman, H.J. and J. Pinkster. “Rationalizing the practice of probabilistic damage stability
calculations.” International Shipbuilding Progress 50 (2003): 239–253.
Lützen, M. Ship Collision Damage. PhD thesis, 2001.
MSC. 2005. "MSC 80/24/Add.1: Report of the Maritime Safety Committee on its eightieth session."
NN. 2000 “Guidelines for the Construction and Operation of Dredgers Assigned Reduced
Freeboards; DR-67).”.
Pawlowski, M. Subdivision and damage stability of ships. Gdansk, Poland: Politechnika Gdańska,
2004.
SARC. PIAS: User manual of the Program for the Integral Approach of Shipdesign, 2005.
SOLAS. Consolidated text of the International Convention for the Safety of Life at Sea. London:
IMO, 2004.
van Dyck, P. "The changing face of naval architecture." The Motor Ship, October (2004): 60–61.
Modelling thruster-thruster interaction for preliminary design
consideration
Sylvia Reinders, Nevesbu, The Netherlands
Hugo Grimmelius, Delft University of Technology, The Netherlands
Do Ligtelijn, Wärtsilä Propulsion Netherlands BV, The Netherlands
Joost Moulijn, Wärtsilä Propulsion Netherlands BV, The Netherlands
Abstract
Prediction of manoeuvring capabilities of ships in an early design stage is often based on ‘rule of
thumb’. With new and alternative propulsion concepts these rules may not be applicable anymore. A
special area of interest is the interaction between two azimuthing thrusters when they operate in close
vicinity, as for instance is the case with ASD tugs. To support the concept selection a Concept
Exploration Model for Manoeuvrability (ManSim) was developed, intended to be used in the
preliminary design stage. This programme was developed as a sales support tool for Wärtsilä
Propulsion Netherlands B.V. It offers great flexibility in number and location of propulsors, rudders,
tunnel and azimuthing thrusters. It allows for both pre-defined (standard) and custom manoeuvres.
A key feature is the automatic generation of simulation models from the GUI, which are then available
for further inspection. This paper will give a general introduction to ManSim and describe its main
features. The modelling of thruster-thruster interaction will be described in some detail. Results from
the programme will be compared with measured data.
1. Introduction
When designing ships a choice about the means of propulsion has to be made in an early design stage.
In most cases decisions are based on arguments of desired speed, fuel economics and/or available
engine space. Decisions concerning the required manoeuvring capabilities are generally based on rules
of thumb, or previous experiences with a similar type of ship.
Only in special cases are the actual manoeuvring characteristics obtained during the design process,
either through computer simulations or later through model tests. These characteristics are then used
for evaluating whether or not the selected propulsion configuration provides the desired
manoeuvrability. The result of such a procedure is that the behaviour probably suffices but is not
optimised. However, optimisation of the manoeuvring capabilities can be justified economically if for
example a cruise ship making some 200 port calls a year needs less tugboat assistance due to better
manoeuvring capabilities.
In order to be able to be of more service to its customers in the early design stage Wärtsilä Propulsion
Netherlands B.V. (WPNL) together with the TU-Delft designed and developed a computer program
capable of predicting the influence of propulsion configurations on a ship’s manoeuvrability in the
concept design stage.
A computer program in Matlab/Simulink® is available now, capable of evaluating the changes in
manoeuvring behaviour due to changes in the propulsion configuration when only the ship main
particulars are known. This program uses a generic modelling technique that is described in this paper
for the case of a manoeuvring model. Most of the work was done as part of the MSc project of T. Dirix
(2002).
2. Degrees of freedom
If a co-ordinate system having positive directions as shown in Figure 1 is assumed, then it can be said
that for a body with six degrees of freedom, symmetric in the x-y plane and having principal axes of
inertia coinciding with the x- and z-axes, the following equations apply:
$X = m(\dot{u} + qw - rv)$
$Y = m(\dot{v} + ru - pw)$   (1)
$Z = m(\dot{w} + pv - qu)$

$K = I_{xx}\dot{p} - (I_{yy} - I_{zz})qr$
$M = I_{yy}\dot{q} - (I_{zz} - I_{xx})rp$   (2)
$N = I_{zz}\dot{r} - (I_{xx} - I_{yy})pq$
Fig. 1: Co-ordinate system with positive directions of the forces and moments (X, Y, Z, K, M, N), the velocities (u, v, w, p, q, r) and the unit vectors (i, j, k)
Since the main goal of the model is to simulate the manoeuvring behaviour of ships, it is evident that
the motions in the horizontal x-y plane need to be studied. That is, the forward and sideway
movements (surge and sway) and the rotation around the z-axis (yaw). If it is assumed that no waves
are present and that the mass of the ship does not change during the time involved in a manoeuvre, it
may be assumed that no forces other than the constant gravity and buoyancy forces are present in
the z-direction. Furthermore, if it is assumed that the distribution of mass does not change during the
manoeuvre, there will be no moments acting around the x- and y-axes.
As a result there will be no vertical movement (heave) and no rotation around the x- or y-axis (roll or pitch).
The degrees of freedom, and therefore the number of equations necessary to describe the movements,
can be reduced to three:
$X = m(\dot{u} - rv)$
$Y = m(\dot{v} + ru)$   (3)
$N = I_{zz}\dot{r}$
From this it follows that calculating the horizontal forces, X and Y, and the moment around the vertical
axis, N, suffices to make a comparison between the manoeuvring characteristics for various propulsion
configurations.
Note that the assumption of constant mass was already made in the derivation of equations (1) and (2).
Hirano and Takashina (1980) proved that for high-speed manoeuvres the influence of roll on the
turning circle of a ship cannot be neglected, especially for car-carriers and large container vessels. This
effect is neglected here because it is not the aim to make a highly accurate quantitative prediction of
the manoeuvring characteristics but rather to be able to predict the qualitative differences in these
characteristics for the same ship with different propulsion configurations.
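To make the role of eq. (3) concrete, the following minimal sketch (not the ManSim code; the force model passed in is a placeholder assumption) integrates the three remaining degrees of freedom with a forward-Euler scheme and tracks the resulting ship track:

```python
# Minimal sketch: forward-Euler integration of the three-degrees-of-freedom
# equations of motion, eq. (3). `forces(u, v, r)` returning (X, Y, N) is an
# assumed, user-supplied force model.
import math

def simulate(forces, m, Izz, u0=5.0, t_end=60.0, dt=0.01):
    """Integrate surge/sway/yaw and track the ship position."""
    u, v, r = u0, 0.0, 0.0          # body-fixed velocities [m/s, m/s, rad/s]
    x, y, psi = 0.0, 0.0, 0.0       # earth-fixed position and heading
    t = 0.0
    while t < t_end:
        X, Y, N = forces(u, v, r)
        # eq. (3): X = m(du/dt - r v), Y = m(dv/dt + r u), N = Izz dr/dt
        du = X / m + r * v
        dv = Y / m - r * u
        dr = N / Izz
        u, v, r = u + du * dt, v + dv * dt, r + dr * dt
        # transform body-fixed speeds to the earth-fixed track
        x += (u * math.cos(psi) - v * math.sin(psi)) * dt
        y += (u * math.sin(psi) + v * math.cos(psi)) * dt
        psi += r * dt
        t += dt
    return x, y, psi
```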
3. Introduction to ManSim
In a modular structure it is essential that all modules have the same number and type of inputs and
outputs in order to make them easily interchangeable. Therefore, first an inventory of all the necessary modules and their inputs and outputs is made. This is followed by an investigation of whether or not it is possible to reduce or manipulate the number and type of required inputs and outputs in such a way that they become the same for all modules involved.
reversed. From a generic model building point of view it is necessary to create modules that solely
depend on the same inputs, here the ship speeds in the horizontal plane, and have the same outputs,
here the forces X and Y and the moment N, so element modules can be easily 'connected' to the
equations of motion module. When, in the following sections, 'ship speed' is mentioned, the speed vector containing
u, v and r is meant. Similarly, 'force' indicates the vector containing the elements X, Y and N.
In the following section a simple diagram shows the principal input and output variables and
parameters for each module, the legend of which is shown in Figure 2.
Fig. 2: Legend of the module diagrams: each ELEMENT block has parameters entering from above and input and output variables on its left and right sides
The azimuth module requires additional data concerning the slipstreams of the surrounding thrusters.
Since the position of the surrounding thrusters does not change, these can be entered as parameters.
The actual slipstream position and trajectory do change, and are calculated on the basis of the actual
ship speed and azimuth thruster thrust.
It is now possible to compose a seemingly unlimited variety of models, a few of which are shown in
Figure 3. In the lower figure the thruster force feedback is not connected to the other azimuth thruster
by a connection line but via 'send' and 'receive' elements. This is done in order not to lose sight of
the overall structure of the model.
Fig. 3: Two examples of composed models: (top) a model with a prime mover-gearbox-propeller module, a rudder module and a tunnel thruster module, whose forces are summed in the equations of motion; (bottom) a model with three azimuth thruster modules, where the thruster force feedback between the thrusters is exchanged via 'send' and 'receive' elements
3.4 Implementation
The method of modelling presented here is implemented in Matlab/Simulink. Simulink allows the user
to create his own library containing user-defined simulation blocks. In this case every single module is
represented by such a user-defined block, enabling the user to use the drag-and-drop facilities
Simulink offers to build a model. The parameters necessary for every module can then be entered
through the block's interface.
However, in order to simplify the use of the program further, a Matlab routine was developed
performing the composition of the model automatically. The user now only has to enter the desired
number of each element and define the necessary parameters. The program then connects the proper
library blocks to create the Simulink model. Subsequently all the required parameters are loaded into
the model. Simulations required for the manoeuvres requested by the user are then performed
automatically. Results are presented either through standard plots available from a menu in the
program or by user-defined plots. The resulting Simulink model is still available to the user for
inspection and detailed evaluation.
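As an illustration of this generic modelling idea (hypothetical names, not the actual Matlab routine), the sketch below shows how modules with a uniform speed-in/force-out interface can be assembled from a plain list and summed:

```python
# Illustrative sketch of the automatic-composition idea: every module maps
# the same input (ship speed vector u, v, r) to the same output (force
# vector X, Y, N), so models can be assembled from a list of instances.
class Module:
    def forces(self, u, v, r):          # uniform interface for all elements
        raise NotImplementedError

class Rudder(Module):
    def __init__(self, area):
        self.area = area
    def forces(self, u, v, r):
        # placeholder physics, only to show the uniform interface
        return (0.0, -0.5 * self.area * u * v, 0.0)

def total_forces(modules, u, v, r):
    """Sum the force vectors of all connected element modules."""
    X = Y = N = 0.0
    for mod in modules:
        fx, fy, fn = mod.forces(u, v, r)
        X, Y, N = X + fx, Y + fy, N + fn
    return X, Y, N
```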
The equations used for every module reduce the number of parameters necessary to an absolute
minimum and to parameters that in general are easy to obtain. After all necessary parameters are
entered, a module instance is created and a new model can be composed by selecting the desired
module instances and entering some additional over-all model parameters. The parameters necessary
for such a module instance - for instance parameters of a specific engine, a hull shape, a propeller
etcetera - are stored in the database. This allows the user to select a specific instance of a module
without having to define the parameters for every simulation.
Figure 4 shows the initial screen where the user can define a hull shape. This hull shape is then stored
in the database for future use. Figure 5 shows the screen asking for the user input for a Diesel engine.
Similar windows exist for the definition of new propellers, superstructure and tunnel thrusters. Figure
6 shows several windows necessary for the composition of a new model.
The same procedure is followed with regard to the manoeuvres. Various standard manoeuvres such as
turning circles and Zigzag manoeuvres are stored in a database, enabling easy selection for the user.
User-defined manoeuvres can be added to the database for future use.
Fig. 5: Engine specification window
4. Thruster-thruster interaction
A special point of interest for the further development of ManSim was the implementation of
thruster-thruster interaction, Reinders (2005). This phenomenon is experienced when, for example, a
tug is equipped with two thrusters which during manoeuvring come into each other's
slipstream. At such a moment interaction occurs, which can influence the manoeuvring
behaviour of the tug.
4.1 Overview
As stated above, thruster-thruster interaction can influence the manoeuvring behaviour of the tug.
Therefore it is important to understand what happens during interaction. To determine what the
interaction effects will be, a simulation model of thruster-thruster interaction was made based on
Nienhuis (1992).
This model uses a description of the slipstream to determine the velocity at any position in that
slipstream. When another thruster is positioned in this slipstream, the inflow velocity of that thruster can
be determined.
The generated thrust depends on the inflow velocity. When the inflow velocity changes, the advance
ratio J changes, which means a change of the operating point of the thruster. The
advance ratio is:
$J = \frac{V}{n \cdot D}$   (4)
Through this operation point a thrust coefficient Kt is determined for the new J [-] and the generated
thrust T [N] can now be determined. The thrust coefficient is defined as:
$K_t = \frac{T}{\rho \cdot n^2 \cdot D^4}$   (5)
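The following sketch illustrates the operating-point shift expressed by eqs. (4) and (5); the Kt(J) table is purely illustrative and not the open-water curve of any real thruster:

```python
# Sketch of the operating-point shift of eqs. (4) and (5): a change of
# inflow velocity changes J and hence Kt and the generated thrust T.
import numpy as np

J_table  = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # assumed Kt(J) curve
Kt_table = np.array([0.45, 0.40, 0.33, 0.25, 0.15, 0.04])

def thrust(V_inflow, n, D, rho=1025.0):
    """Thrust [N] from inflow speed V [m/s], rate n [1/s], diameter D [m]."""
    J = V_inflow / (n * D)                      # eq. (4)
    Kt = np.interp(J, J_table, Kt_table)        # look up new operating point
    return Kt * rho * n**2 * D**4               # eq. (5)

# A higher inflow speed (e.g. inside another thruster's slipstream) gives a
# larger J, a smaller Kt and therefore a thrust reduction:
print(thrust(3.0, 10.0, 2.0), thrust(6.0, 10.0, 2.0))
```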
U_ax is the velocity at the centre line of the slipstream. This velocity is approximately zero at the propeller plane and increases gradually to the maximum velocity. U_ax will become equal to U_m when U_m is positioned at the centre line of the slipstream.
R_h1 is the position in the slipstream where the velocity is the average of the maximum velocity and the velocity at the jet centre line, $U_{Rh1} = 0.5 \cdot (U_{ax} + U_m)$. This position lies between the centre line of the slipstream and the position of the maximum velocity, R_m.
R_h2 is the position in the slipstream where the velocity is the average of the maximum velocity and the surrounding (ship) velocity, $U_{Rh2} = 0.5 \cdot (U_m + U_s)$.
In the figure below the variables used to describe the slipstream are visualized.
Fig. 7: Definition of the slipstream
The boundary of the slipstream can be expressed as a function of the variables mentioned above. The half radius of the slipstream is defined as:

$R_j = R_{h1} + R_{h2} - R_m$   (6)

At this radius the velocity is approximately the average of the maximum and the surrounding velocity. At the full radius of the slipstream ($2 \cdot R_j$) the velocity is assumed equal to the surrounding velocity. This can be seen as the boundary of the slipstream. The half and full radius of the slipstream are defined with respect to the centre line of the slipstream. In Figure 7 the half radius and the boundary of the slipstream can be seen.
According to the assumption that the slipstream behaves similarly to a turbulent jet [5], the velocity field behind a propeller can be divided into two zones:
1. The initial developing zone, where the velocity profile is influenced by the propeller geometry. The velocity profile has a double peak displaced from the centre line. Just after the propeller hub there is a dead water zone.
2. The fully developed zone, where the velocity profiles have a single peak at the shaft centre line.
Figure 7 shows the initial and fully developed zones. The initial zone extends from the propeller plane to the position where R_m becomes zero, or in other words where the velocity at the axis equals the maximum velocity. When R_m = 0, the maximum velocity is located at the centre line of the slipstream. The fully developed zone starts where the initial zone ends; in other words, in this zone R_m = 0. In Figure 8 the velocity profiles for the initial and fully developed zones are visualized.
$\hat{U} = e^{-0.694\,\hat{r}^2}$   (7)
Fig. 8: Velocity profiles in the slipstream
$\hat{U} = \left( \frac{1}{1 + (\sqrt{2} - 1)\,\hat{r}^2} \right)^2$   (8)
The non-dimensional velocity profile Û can be described as an exponential function or with the
equation defined by Schlichting.
The exponential velocity profile is used in ManSim. This velocity profile approaches zero faster and more closely, see Figure 9, which is preferred from a calculation point of view. To determine the velocity profile the non-dimensional coordinate r̂ is needed.
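A small sketch comparing the two profiles of eqs. (7) and (8) shows this behaviour: both give Û = 0.5 at r̂ = 1, but the exponential profile decays faster beyond it.

```python
# Comparison of the two non-dimensional velocity profiles, eqs. (7) and (8).
import math

def u_hat_exponential(r_hat):
    """Eq. (7): exponential velocity profile."""
    return math.exp(-0.694 * r_hat**2)

def u_hat_schlichting(r_hat):
    """Eq. (8): velocity profile according to Schlichting."""
    return (1.0 / (1.0 + (math.sqrt(2.0) - 1.0) * r_hat**2))**2

for r_hat in (0.0, 1.0, 2.0, 3.0):
    print(r_hat, u_hat_exponential(r_hat), u_hat_schlichting(r_hat))
```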
4.4 Overview
Assume there are two similar thrusters underneath a ship. The interaction between those thrusters
can be determined in the following order:
5. Manoeuvring models
Forces acting on the hull are generally calculated using hydrodynamic derivatives obtained through
extensive model testing. However, at the design stage in which the program is intended to be used
these tests have not yet been performed. Therefore a way of expressing the hydrodynamic derivatives
based on principal ship dimensions was sought, and found in Inoue (1981). Inoue (1981) also gives a
set of equations by which the actual calculations can be performed.
With seemingly valid arguments Kobayashi (1988) states that Inoue's equations do not hold for low
speeds and large drift angles, and he therefore proposes a different set of equations. However, since his
equations depend on hydrodynamic derivatives that are not readily available from principal ship
dimensions, Inoue's equations are used for all cases.
The straight-line resistance X(u) is derived from a simplified Holtrop and Mennen (1978)
approximation, requiring only principal ship dimensions. The ship's mass, m, and the hydrodynamic
masses, mx and my, are again approximated on the basis of the main particulars alone, as is the added
moment of inertia Jzz.
6. Results
As stated in section 4, the model for thruster interaction was based on Nienhuis (1992).
First the differences in the derivation of data between Nienhuis (1992) and ManSim will be
briefly described; after that the results will be presented.
6.2 Nienhuis data
Nienhuis (1992) calculates the thruster-thruster interaction with a CFD (Computational Fluid Dynamics) method. From the measurements made during the model tests, the equations used in ManSim are derived. In the model tests a barge is used to make measurements on the thruster-thruster interaction, with the following characteristics:
- The thrusters were fitted under a barge.
- The thrusters are placed behind each other.
- The barge was towed; it could only move in one direction (x-direction).
- The speed is the towing speed, with no relation to the engine speed.
- The rotational speed is given and equal for both thrusters.
6.3 Differences
As stated above, the methods used to calculate and simulate the thruster-thruster interaction in ManSim
and in Nienhuis (1992) are quite different. The main differences are:
1. Calculation method: algebraic versus CFD.
2. Simulations: self-propelled versus towed.
Fig. 11: Thruster-thruster interaction x/D = 3
During the simulation the angle of both thrusters was equal. The continuous lines are the calculations
of ManSim and the dotted lines are calculations of Nienhuis.
Figures 11 and 12 show that the estimation made with ManSim is comparable with the results obtained
by Nienhuis (1992). However, for increasing thruster angles there are small deviations. The thrust
reduction in ManSim is calculated too pessimistically: the inflow speed at this position is calculated too
high, which causes too large a thrust reduction.
7. Conclusion of thruster-thruster interaction calculations
The thruster-thruster interaction as a function of the thruster distance, Figure 10, shows that the thrust
reduction is calculated quite well for 3 <= x/D <= 8. For x/D = 2 the thrust reduction is calculated too
optimistically, which can be caused by an inflow speed that is too low. The thruster-thruster interaction as a
function of thruster angle shows good resemblance with the data of Nienhuis. Figure 11, x/D = 3,
shows that the calculation for zero thruster angle has the greatest deviation from Nienhuis;
at larger angles the resemblance is good. Figure 12, x/D = 6, shows a good resemblance
with the data of Nienhuis for thruster angles <= 10 deg. For larger thruster angles deviations start to
occur, but these deviations are not larger than 5%. The overall conclusion is that the
thruster-thruster interaction is calculated accurately enough when used for relative comparison of
different propulsion configuration concepts. However, for small x/D < 3 the results are too positive,
which might be an indication that too many effects are neglected in the interaction calculation. For
x/D = 6 the inflow speed is calculated too high, so the thrust reduction is predicted too pessimistically.
References
DIRIX, T. (2002), Renewed Concept Exploration Model for Manoeuvring, Master thesis, Delft
University of Technology, Mechanical Engineering, Marine Engineering Group, Report Number 02/09
HIRANO, M., TAKASHINA, J., (1980), A calculation of ship turning motion taking coupling effect into
consideration; Transactions of the West-Japanese Society of Naval Architects No. 59
HOLTROP, J., MENNEN, G.G. (1978): A statistical power prediction method; International
Shipbuilding Progress, Vol. 25 October 1978
INOUE, S., HIRANO, M., KIJIMA, K. (1981), Hydrodynamic derivatives on ship manoeuvring,
International Shipbuilding Progress, Vol. 28
KOBAYASHI, E. (1988): A simulation study on ship manoeuvrability at low speeds, Mitsubishi
Technical Bulletin No 180, Mitsubishi Heavy Industries LTD.
NIENHUIS, U. (1992): Analysis of thruster effectivity for dynamic positioning and low speed
manoeuvring; PhD dissertation Delft University of Technology, Delft.
REINDERS, S. (2005): Extending a concept exploration model for ship manoeuvring; Master thesis,
Delft University of Technology, Delft.
Automatic Piping System in Ship
Andi Asmara, Delft University of Technology, Delft - Merwede Shipyard, Hardinxveld-Giessendam
/ The Netherlands, [email protected],
Ubald Nienhuis, Delft University of Technology, Delft / The Netherlands,
[email protected]
Abstract
One of the most complicated and time-consuming processes in ship production is to determine the
optimum routing of piping. An automatic system to generate optimum collision-free routes for pipes is
presented in this paper. In the past, research has primarily focused on the use of only
deterministic or only nondeterministic optimization techniques to find the optimal route. In this paper,
a combination of deterministic and nondeterministic optimization techniques is proposed. The
strategy is to use a deterministic technique as a tool to find the optimum route while the tool's
parameters are chosen by the nondeterministic technique. Practical aspects, i.e. branching and
cost minimization, are included in the objective function to be optimized. The performance of this
novel approach is measured by its ability to accommodate and efficiently solve problems in real
ship applications.
1. Introduction
The design of the piping systems consumes a large part of the engineering effort for a modern ship.
Moreover, a ship is not a mass product like a car, which means that the specification is different for
each ship, and the design process has to be done for each different ship. Nowadays, pipe routing is
done manually by a pipe designer using CAD software; therefore the experience of the designers is
the dominant parameter in this process. In the design process, many decisions have to be made by
designers, e.g. which pipe should be routed first, and which one next. If these decisions can
be taken over by the automatic piping system, design man-hours can be reduced, design plans can be
standardized and the time-to-market of ships will be reduced.
In line with the previous works of Kang et al. (1999), the automatic piping system has the following
objectives:
1) to minimize user input and user decision,
2) to make the system easy to use,
3) to be used in real shipyard design process.
In section II, the main architecture of the automatic piping system that is used in this paper is
explained. The Interface Module of the automatic piping system is also described in more detail in
this section. Furthermore, this section discusses the Engine Module of the system; it describes its
components and how each component works. This section also contains the list of variables that
should be optimized by the Optimizer Module of the system which is explained in more detail here. In
Section III the test case results are shown and discussed. Section IV concludes the paper.
The Skeleton or Roadmap approach involves capturing the set of feasible motions (free space) in a
network of one-dimensional lines and conducting a graph search of this network. The Cell
Decomposition approach consists of decomposing the free space into cells and connecting the start
and goal configurations by a sequence of connected cells. In the Potential Field Method, a scalar
mathematical function is constructed with a minimum value at the goal configuration and a
maximum near the obstacles. The path from the start to the goal is determined by putting a small
marble at the start and following its movement. The Mathematical Programming approach deals with
expressing the path as a mathematical objective function and trying to minimize it while satisfying
constraints (obstacle avoidance). In real applications, more than one approach is sometimes used
together.
Research on pipe routing in the past few years has produced remarkable results with interesting
applications to packing problems and emphasis on employing novel approaches (usually heuristic
based) and unconventional optimization methods such as nondeterministic methods to improve design
productivity.
The automatic piping system consists of three main parts. The first part is the interface that creates the
link between this routing system and commercial CAD software that is used in the shipbuilding
process. With this module the user can easily generate the input data from the information that is
already available in CAD software, like the ship construction, the equipment data and position of
equipment. This module is called the Interface Module. The conversion of the generated pipe route
data to be exported to the CAD software is also done by this module.
The second part of this system is the pipe routing tool. This tool uses the Dijkstra algorithm to find the
shortest path of each pipe. This tool is called the Engine Module. The Engine Module is also capable
of decomposing the free space into cells of different sizes, where the cell size is defined by the third part of the
system, the Optimizer Module. The Optimizer Module also decides the order in which pipes should be
routed, which branch should be routed first and where it should connect, and which route
should be kept and which one should be discarded. A nondeterministic optimization method is applied
in this part of the system. The Optimizer Module uses one of the population-based evolutionary
algorithms, called Particle Swarm Optimization.
A brief background of the Dijkstra algorithm and Particle Swarm Optimization is provided in the
following subsections.
1.2.1 Dijkstra Algorithm
Dijkstra algorithm, named after its discoverer, Dutch computer scientist Edsger Dijkstra, is an
algorithm that solves the single-source shortest path problem for a directed graph with nonnegative
edge weights.
For example, if the vertices of the graph represent cities and edge weights represent driving distances
between pairs of cities connected by a direct road, Dijkstra algorithm can be used to find the shortest
route between two cities.
The input of the algorithm consists of a weighted directed graph G and a source vertex s in G. We will
denote V the set of all vertices in the graph G. Each edge of the graph is an ordered pair of vertices (u,
v) representing a connection from vertex u to vertex v. The set of all edges is denoted by E. Weights
of edges are given by a weight function w: E → [0, ∞]; therefore w(u, v) is the non-negative cost of
moving from vertex u to vertex v. The cost of an edge can be thought of as (a generalization of) the
distance between those two vertices. The cost of a path between two vertices is the sum of costs of the
edges in that path. For a given pair of vertices s and t in V, the algorithm finds the path from s to t
with lowest cost (i.e. the shortest path). It can also be used for finding costs of shortest paths from a
single vertex s to all other vertices in the graph.
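A textbook sketch of the algorithm (not the DelftPipe implementation) for a graph stored as an adjacency dictionary:

```python
# Textbook Dijkstra sketch: shortest-path costs from source s in a weighted
# directed graph with non-negative edge weights.
import heapq

def dijkstra(graph, s):
    """graph: {u: [(v, w), ...]} with w >= 0; returns cost of s->v for all v."""
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry, vertex already settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra({"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0)]}, "a"))
```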
The Automatic Piping System - DelftPipe is developed as a computer aided technique of generating a
collision free and efficient route for pipes in a ship. Fig. 1 shows the main architecture of the system.
The system has three main parts, the Interface Module, the Engine Module, and the Optimizer
Module. The system has the interface, namely the Interface Module, to the commercial CAD software
that is widely used in shipbuilding, thus it can be used without much effort to key-in the required data.
The result of DelftPipe can easily be exported to the CAD software. As it can be seen in Fig. 1, the
Interface Module gets the necessary data from the CAD software to be used by the Engine Module,
and it gives back the output of DelftPipe to be used by the CAD software.
The second part of DelftPipe is the Engine Module which uses the data from the interface and
calculates the route of all pipes. This is the part of the system that actually performs the routing. This
module uses the Dijkstra Algorithm to find the shortest path of each pipe. As implied by its name, the
Engine Module works as a black box, which is only blindly running according to the directions
provided by the Optimizer Module.
Fig. 1: Main architecture of the Automatic Piping System DelftPipe: the Optimizer Module supplies parameters to the Engine Module and receives the objective value back, while the Interface Module exchanges data between the Engine Module and the CAD software
All of the decision making is done by the Optimizer Module of the system. This particular
part of the system finds and decides what should be done by the Engine Module, e.g. which pipe
should be routed first. The main element of the Optimizer Module is the nondeterministic optimization
technique called Discrete Particle Swarm Optimization (DPSO), briefly described before.
[Figure: Data flow between the CAD software and DelftPipe — the Interface Module passes the construction, position of equipment, schema, environment and pipe connection data to the Engine Module (EM) and Optimizer Module (OM), and returns the resulting pipe route to the CAD software as an mdl file.]
As can be seen in the Engine Module procedure above, the result of this module can differ
depending on the sequence in which the pipes are routed. Moreover, there is a possibility that this
module fails to find a feasible solution for all pipes for some sequences of routing the pipes. Because
of this, the sequence in which the pipes are routed plays an important part.
The decision on the order of the pipes to be routed in step 4) is made by the Optimizer Module; the
branching is also optimized by that module. In step 4), during the generation of the cells, the space
that is already used by the previous pipes is recognized (see step 6), so that collisions between pipes
can be avoided.
The aspects of cell generation and branch handling are important to note and are briefly described
in the following two subsections.
The size of the cells in step 4) is approximately equal to the outside diameter of the pipe, and pipes of
almost the same size use the same cell size, e.g. pipes with diameters 50 and 65 mm use cell size 75 mm.
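A hypothetical helper reflecting this rule (only the 75 mm class is given in the text; the larger classes are assumptions for illustration):

```python
# Hypothetical mapping of pipe diameter to cell size: pipes of almost the
# same size share one cell size, e.g. diameters 50 and 65 both map to 75.
def cell_size(outer_diameter_mm, classes=(75, 150, 300, 600)):
    """Return the smallest standard cell size that fits the pipe."""
    for c in classes:
        if outer_diameter_mm <= c:
            return c
    return classes[-1]

print(cell_size(50), cell_size(65), cell_size(120))   # -> 75 75 150
```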
[Figures: examples of the cell decomposition with route-cost numbering of the cells (a, b), and of possible branch connection configurations (a-e).]
At least two decisions need to be made. The first one is to choose the order in which the connection
points are handled. The second decision refers to step 2) and involves the question whether the branch
should only appear in the main pipe or whether it can also be in a different (previously branched) pipe.
A third problem is the calculation of the diameter of the pipe after each branching configuration.
These problems can be manually defined by the user, or can be included as a task for the
Optimizer Module.
The following subsections describe PSO and DPSO in more detail, and indicate how they work as the
Optimizer Module in DelftPipe.
While wandering through the problem space, in the process updating its pbest and gbest locations, each
particle, at each time step, changes its velocity and its current position according to the classical update
rules of Kennedy and Eberhart (1995):

$v_i^{new} = v_i + c_1 r_1 (pbest_i - x_i) + c_2 r_2 (gbest - x_i)$   (1)

$x_i^{new} = x_i + v_i^{new}$   (2)

where $x_i$ and $v_i$ are the position and velocity of particle i, $c_1$ and $c_2$ are the acceleration constants, $r_1$ and $r_2$ are uniform random numbers in [0, 1], $pbest_i$ is the best position found so far by particle i and $gbest$ is the best position found so far by the swarm.
There are two types of boundary values: the velocity maximum Vmax and the position boundaries xmin and
xmax. If the velocity from eq. (1) exceeds Vmax, it is limited to Vmax. Similarly, if the position xi
in eq. (2) exceeds the position boundaries, the particle is restricted to lie on the boundary.
Vmax needs to be chosen wisely, as it influences the convergence of the search. If Vmax is too high,
particles might move too fast, passing a good solution. If Vmax is too small, the particles may not
explore the search space sufficiently. Early experience with particle swarm optimization led us to set
the acceleration constants c1 and c2 equal to 2.0 and Vmax equal to 20% of the dynamic range of the
variable.
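A generic sketch of one update step following eqs. (1) and (2) with the boundary rules above (the textbook algorithm, not the DelftPipe code); c1 = c2 = 2.0 and Vmax = 20% of the range, as quoted in the text:

```python
# Generic continuous-PSO update step, eqs. (1) and (2), with velocity
# clamping to Vmax and position clamping to the boundaries.
import random

def pso_step(x, v, pbest, gbest, xmin, xmax, c1=2.0, c2=2.0):
    vmax = 0.2 * (xmax - xmin)                 # 20% of the dynamic range
    for i in range(len(x)):
        v[i] += c1 * random.random() * (pbest[i] - x[i]) \
              + c2 * random.random() * (gbest[i] - x[i])
        v[i] = max(-vmax, min(vmax, v[i]))     # limit velocity to Vmax
        x[i] += v[i]
        x[i] = max(xmin, min(xmax, x[i]))      # restrict to the boundary
    return x, v
```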
$v_i^{new} = v_i + c_1 r_1 (pbest_i - x_i) + c_2 r_2 (gbest - x_i)$   (3)

$x_i^{new} = x_i + v_i^{new}$   (4)
As can be seen by comparing eq. (1) and eq. (3), there is no formal difference between classical PSO
and DPSO. However, slightly different rules are imposed on:
• the search space of position S = {si}
• position of a particle pi
• velocity of a particle vi
• subtraction (p, p) → velocity
• multiplication (constant, velocity) → velocity
• movement (position, velocity) → position.
The search space S is the finite set of all sequences of the pipes to be routed. The position pi is one of
the possible sequences. The interesting part is the velocity, since the meaning of movement of the
particle is not the same as in the classical PSO. In DPSO, the velocity is a list of
transpositions. For example, v = (2, 5) means that if this velocity is applied to a position pi = (0, 1,
2, 3, 4, 5), it generates a new position pi = (0, 1, 5, 3, 4, 2).
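A minimal sketch of applying such a transposition 'velocity' to a position, reproducing the example above:

```python
# A DPSO 'velocity' as a list of transpositions: each pair (i, j) swaps the
# entries at indices i and j of the position (a sequence of pipes).
def apply_velocity(position, velocity):
    p = list(position)
    for i, j in velocity:
        p[i], p[j] = p[j], p[i]
    return tuple(p)

print(apply_velocity((0, 1, 2, 3, 4, 5), [(2, 5)]))   # -> (0, 1, 5, 3, 4, 2)
```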
Another important aspect is to define the objective function. The objective function is a criterion for
the quality of the solution. In the Automatic Piping System, the objective function that is used is
generated by the Engine Module according to the parameters that are supplied by the Optimizer
Module.
Based on that information, the particles inside DPSO are transposed according to the rules described in
eqs. (3) and (4). That process is repeated until the target cost is achieved or until the maximum
allowable number of evaluations is reached. The process is also stopped if there is no more
improvement after a certain number of iterations.
Fig. 5: Automatic Piping System Result
The current research is motivated by the need to have smarter tools to assist the ship design process.
Moreover, the result of this research has to be implemented in the real ship design process. In this
paper, we have provided some insight into some of the steps taken in this research.
The objective function that is used incorporates the cost of the pipes and includes a large penalty for a
solution that is not feasible. The number of particles of the Discrete PSO is 16. The system
will run until the target value is achieved or until 1000 evaluations have been performed.
It should be remarked that for the combinatorial problem with 8 variables, there are 8! solutions. So
finding the optimum combination by performing 1000 objective evaluations is rather limiting in view of
the number of all possible solutions, which equals 40320.
3.2 Result
Fig. 5 shows the result of the Automatic Piping System. As can be seen, the proposed method is able
to find the pipe route solution. Each pipe is connected according to the pipe connection list, and there
is no collision between any of the pipes and no collision with any of the units.
This system is also capable of solving pipe routing with branches. As can be seen in Fig. 6, the
connection points are indeed connected to the pipe through the closest branch point. The route of a
pipe with a branch of course depends on the sequence in which the connection points are routed.
The sequence is defined by the Optimizer Module.
Fig. 6: Branch Solution
4. Conclusion
The Automatic Piping System - DelftPipe has been developed and proposed. This system uses both
the Cell Decomposition method and Mathematical Programming as its main approaches. The available
space is decomposed into cells and the algorithm connects the start connection point to the end point
through those cells. In terms of the optimization technique, DelftPipe combines deterministic and
nondeterministic optimization techniques to combine the advantages of both: it uses the speed of the
deterministic technique while it also uses the flexibility of the nondeterministic technique to find other
possible solutions.
DelftPipe also has the Interface Module to the commercial CAD software, so users can easily
use DelftPipe without re-entering much data. The output from DelftPipe is easily exported in the
form of an .mdl file, so it can be used directly inside the commercial CAD software. In terms of the
time needed to find the pipe route, DelftPipe finds the solution relatively fast.
Further ongoing developments focus, amongst others, on the practical aspects of the pipe routing, such
as calculating the change of the pipe size at a pipe branch, calculating the cost of a pipe as it passes
through a piece of the construction, and deciding which type of penetration should be used.
References
AHUJA, N, HUANG, Y (1991), Gross motion planning - a survey, J. Ship Production 24/3
AL-KAZEMI, B., MOHAN, C. (2002), Multi-phase discrete particle swarm optimization, Fourth
International Workshop on Frontiers in Evolutionary Algorithms
AURENHAMMER, F (1991), Voronoi diagrams - a survey of fundamental geometric data structure,
ACM Computing Survey 23/3
CLERC, M. (2004), Discrete particle swarm optimization, illustrated by the Travelling Salesman
Problem, New Optimization Techniques in Engineering
EBERHART, R.C., KENNEDY, J (1995), A new optimizer using particle swarm theory, IEEE
International Symposium on Micro Machine and Human Science 6th, Nagoya
EBERHART, R.C., SHI, Y (2001), Tracking and optimizing dynamic systems with particle swarms,
IEEE Congress on Evolutionary Computation, Seoul, Korea
FAN, H.Y., SHI, Y. (2001), Study of Vmax of the particle swarm optimization algorithm, the
Workshop on Particle Swarm Optimization, Indianapolis, IN: Purdue School of Engineering and
Technology
GAING, Z.L. (2004), A particle swarm optimization approach for optimum design of PID controller
in AVR system, IEEE Trans. Energy Conversion 19
ITO, T (1999), A genetic algorithm approach to piping route path planning, Journal of Intelligent
Manufacturing 10/1
JERALD, J., ASOKAN, P., PRABAHARAN, G., SARAVANAN, R. (2004), Scheduling optimisation
of flexible manufacturing systems using particle swarm optimisation algorithm, Int. Journal of
Advanced Manufacturing Technology
KANG, S-S, MYUNG, S, HAN, S-H (1999), A design expert system for auto-routing of ship pipes, J.
Ship Production 15/1
KENNEDY, J, EBERHART, R.C. (1995), Particle swarm optimization, IEEE Int. Conference on
Neural Networks IV, Piscataway, NJ
KENNEDY, J, EBERHART, R.C. (1997), A discrete binary version of the particle swarm algorithm,
Conf. on Systems, Man, and Cybernetics
KUO, C, WU, J, SHAW, H (1999), Collision avoidance schemes for orthogonal pipe routing, J. Ship
Production 15/4
NEWELL, R (1972), An interactive approach to pipe routing in process plants, Journal of
Information Processing
SALERNO, J (1997), Using the particle swarm optimization technique to train a recurrent neural
model, IEEE Int. Conference on Tools with Artificial Intelligence, Newport Beach, CA
STORCH, R, PARK, J-H (2002), Pipe-routing expert system, Int. Conf. on Computer Applications in
Shipbuilding
YOSHIDA, H, KAWATA, K., FUKUYAMA, Y., TAKAYAMA, S., NAKANISHI, Y. (2001), A
particle swarm optimization for reactive power and voltage control in electric power systems,
IEEE Congress on Evolutionary Computation, Seoul, Korea
Requirement management, traditional and a second generation scenario
Leo van Ruijven, Croon TBI techniek, Rotterdam/Netherlands, [email protected]
Ubald Nienhuis, Delft University of Technology, Delft/Netherlands [email protected]
Abstract
The subjects of this paper are the process of formulating requirements on the client side and the handling of these requirements on the contractor side (e.g. ship owner and yard).
In many projects, most discussions (and so time and money) relate to the way requirements are interpreted by the parties involved, e.g. client and contractor. The reason for this can be found in the fact that on the client side, traditional specifications are the result of a creative process of thinking in a non-structured way about the context, subjects, problems met in the past, wanted solutions, etc. Information provided by the client to the subcontractor is often redundant, implicit and dispersed over different documents; context information about parts is missing; and traceability of requirements, design data, product data, etc. is lacking.
Reasons on the contractor side can be found in the fact that most of the time there is no systematic approach to handling the requirements in such a way that for every discipline it is always clear what has to be done and delivered, and what the quality of both must be. In most cases consistent change management is also lacking; reasons can be found in the complexity caused by the problems mentioned earlier. Another reason can be found in the increasing number of requirements for systems and the corresponding demands.
In several cases, clients have started to specify more functionally, but mostly they still state requirements at a detailed level, frustrating the ones stated at a functional level.
First, the paper describes the traditional way of making specifications and the related problems with implicit requirements.
Second, the paper describes how to define explicit requirements. Explicit means, in the context of this paper, two things: requirements that unambiguously describe one aspect at a time, related to just one thing; and that the right requirements are created in the right stage of the project in a structured way.
The presented way to derive system requirements in a structured way is based on System Engineering as described in standards like IEEE 1220 and ISO 15288. These standards were originally developed in the context of defence and aerospace but are also very useful within the area of shipbuilding.
A method is presented to create requirements in such a way that they become unambiguous and that terms, objects, relations between objects, and characteristics of these relations and objects are defined once and can be used many times. This method is based on ISO 10303 and Gellish, a generic engineering language, and uses only a few specific and fundamental views on information and semantic relations between information elements within these views. The semantic relations within and between the different views together form a semantic network in which the relationships between the product data, the information and the knowledge about the product and the product life cycles have been made explicit.
This paper describes developments aimed at defining and following up requirements that cover, for example, electrical, mechanical and organizational aspects of a system in an explicit and clear way.
The work presented in this paper is part of ongoing PhD work by the author carried out at Delft University of Technology.
1. Introduction
Realizing systems is becoming more and more complex these days. The reason can be found in the increasing number of requirements for systems and the corresponding demands.
Systems themselves are also becoming more and more complex, requiring several disciplines to work together on several (hopefully) defined interface levels, with both sides of each interface fulfilling common requirements of the client.
Clients increasingly demand that contractors make their building process more transparent and accessible for the client, to be sure that the product ordered will be there on time and is compliant with the quality specified in the contract.
The methodology of System Engineering gives guidelines on how to structure the design processes, the project stages and the stage results. In the case of complex projects, the result is very data intensive, which requires an information system to handle this data in a structured and consistent way.
Several issues in the area of information systems make complexity in system engineering a problem. For example, requirement information provided is often redundant and dispersed over different documents; context information about parts is missing; requirements defined in a structured way are lacking. This means that requirement management starts with information management that must take care of structuring requirements in an unambiguous, consistent and explicit way.
System Engineering gives guidelines on how to do this. Basic processes of the System Engineering methodology are:
• Requirement analysis;
• Functional analysis;
• System synthesis.
These three processes are managed by e.g. requirement, interface and configuration management.
This paper describes aspects of the requirement management process, the creation and use of requirements, the difference between implicit and explicit requirements, and a more fundamental way to define and use requirements.
2. Why requirements
Working with requirements is a necessary means in the design process. First of all, requirements serve to agree about the aims of the project before take-off, with a minimum of hidden misunderstandings. Next, requirements are used to control the design process, and this is a recurring event. In the successive phases of the project, increasingly concrete, more detailed requirements are deduced from the previous requirements and the design that was based upon these preceding requirements.
Second, requirements arise during the design process when designers identify interfaces with other disciplines and/or systems, and when they choose solutions. Both interfaces and solutions impose limiting conditions which have to be specified, and so lead to requirements.
This way the requirements continue to set the trend in the development of the design.
By using requirements, it will be easier to continue the design along the same lines. Resistance against uncontrolled changes will increase; the decision process concerning requirements and the continued effect of accepted changes will gain in controllability.
At the start of a more or less sizeable project the number of requirements runs in the order of some dozens. Going through the project stages, the number will increase into the hundreds and thousands. Such an amount of requirements can no longer be controlled if they have not been made explicit electronically, classified and subsequently maintained during the life cycle.
In general six application purposes for requirements can be recognized:
1. Reach agreement in a dialogue between the principal and the service provider.
2. Control of the design process.
3. Deducing the lower rank requirements (more concrete demands).
4. Manage the scope.
5. Verification of the design.
6. Implement changes collectively.
Requirements are used to lay down the "Why" of the "What" to be realised, thus preventing the "Why" from disappearing from sight. If the "Why" remains in evidence, choices (A or B) and decisions (yes or no) can be justified and are traceable. In this view, drawings often form part of specifications. Drawings, however, contain the "Why" but do not state it explicitly.
Only requirements can solve the communication problem that arises in designing if participants no longer "spontaneously" have the same thoughts about the Why (the Through Which, or the For What Purpose) of solutions. Situations in which participants no longer automatically mean the same lead to increasing complexity. This does not necessarily mean only technical complexity; not-understanding and misunderstanding can also occur because of complexity originating in the large scope of the project. If fellow workers possess shared knowledge to a high degree and find themselves in a situation of immediate interactive contact, the "Why" need not be made markedly explicit; much can remain implicit. If this is not or no longer the case, if people do not know each other because they belong to unfamiliar departments of their company, or work for different companies, it becomes vital to work explicitly. This gets in the way most if a contract has to bind parties and it inevitably cannot, because well-wrought and up-to-date specifications are missing.
3. What is a requirement
A product is the result of a project, where a product can be physical or non-physical (e.g. a service). Normally, a product is predefined by quality-related requirements and quantity-related requirements in such a way that the product can meet its objectives. In general a product can be characterized by its geometry, capabilities and performance ("how well" the capabilities are fulfilled under various circumstances). So, in principle, all requirements can be traced back to one of these aspects.
Definition of a requirement in the context of this paper:
A requirement is a description of a desired aspect of a deliverable (this can be a physical
object or a process or activity).
A specification is a consistent, intentional collection of requirements which, all together, meets the objectives of a desired product or process. Most of the time the collection of requirements is recorded in a document, which is then called the specification. So, the concepts of <requirement> and <specification> are not synonymous.
Requirements can be described in an explicit and in an implicit way. Often the problem is that when a client is writing the specification, requirements are written down in an implicit way.
4. Creation of requirements
Several paragraphs later something is stated about required activities, e.g. validation of the same object or subject. This process is symbolized in Figure 1. This way of specifying a product leads to misunderstandings on the side of the contractor. Many failure costs can be related to this kind of dispersed and implicit requirements.
When the contractor analyzes the specification with the requirements, an information management system is required that has the capability to structure and transform the requirements into explicit statements about one aspect of an object or subject, and to relate each statement to the relevant objects and subjects. In this way it becomes possible to relate, in a consistent way, all aspects relevant for realizing the required system to all relevant requirements.
This leads to traceability of requirements and context information in relation to the realized product.
Fig. 1: Creation of requirements: on the client side, implicit information (viewpoints, experiences, terms, objectives, solutions) about the why, what, how, with what and when of a product; on the contractor side, explicit requirements relating objectives, geometry, building properties, materials, physical objects, events, processes, activities, organisation and context
Sometimes easily measurable product variables are used as a requirement, while the performance variables that relate to the actual client requirements are not used because they are harder to measure.
Plain inconsistencies between components/systems mentioned in different chapters exist; the authors involved apparently did not exchange such information or did not apply the latest updates. A more integral approach to requirements specification is to be preferred.
Sometimes the overarching requirements and/or design philosophy of both client and yard are not explicitly stated. Instead, all kinds of details are specified without giving the "why" of these details. Again this leads to "robotic behaviour" of designers, without considering whether this will lead to the best solution. At least it would be convenient if such philosophies or principles were mentioned.
In some cases specifications are a combination of hard client requirements and requirements that are derived from the pre-design of the yard as contractor and/or of suppliers/subcontractors. In that case it is not always clear which requirements are compulsory and which can be changed (for the reason of a better, more efficient design) without violating the contract between client and yard. So it must always be clear what the compulsory performance, functional and geometric requirements of the client are.
It may be clear that it would have great benefits if clients already delivered specifications in such a structured way. In the next example a specification of an anti-heeling system is given (originally described in "paragraph 3.18 of the contract"):
"An anti-heeling system will be installed which will limit the vessel to heel to less than 3 degrees when a load of 400 tonnes is suspended over the ship's side at a radius of 16,5 meter from the centre of the crane pedestal. The transfer rate of the anti-heeling system will relate to the crane slewing rate. Transfer of water between the heeling tanks will be by means of an electrically driven reversible axial impeller pump with a capacity of approximately 1000 m3/h. The ballast system will be utilized for pre-heeling prior to heavy lifting operations. The anti-heel system will be connected to the ballast system for filling / draining of the heeling tanks."
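As an illustration (not the authors' tooling), the implicit paragraph above could be split into explicit, single-aspect requirements, each tied to one object and traceable back to the contract:

```python
# Illustrative decomposition of the implicit contract paragraph into
# explicit requirements: one object, one aspect, one value per statement.
from dataclasses import dataclass

@dataclass
class Requirement:
    obj: str          # the object or process the requirement applies to
    aspect: str       # the single aspect being constrained
    value: str        # the required value, with unit
    source: str       # traceability back to the contract

SRC = "contract par. 3.18"
requirements = [
    Requirement("vessel", "max. angle of heel during 400 t lift at 16.5 m radius", "3 deg", SRC),
    Requirement("anti-heeling system", "transfer rate", "related to crane slewing rate", SRC),
    Requirement("transfer pump", "type", "electrically driven reversible axial impeller", SRC),
    Requirement("transfer pump", "capacity", "approx. 1000 m3/h", SRC),
    Requirement("anti-heeling system", "interface", "connected to ballast system for filling/draining", SRC),
]
```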
During the process of converting a specification to this level, the people involved must continuously ask themselves whether they understand the question or the solution given by the client. Poor qualification, inconsistency and missing information are detected and can be presented to the client for clarification. By doing this before starting the design, a lot of discussion and rework can be prevented.
This means more work at the beginning of a project (a longer preparation time), but leads to much less work and lower costs at the end.
4.4 A structured way of creating requirements
In projects, clients want to achieve one or more objectives, which normally will be related to one or more business processes of the client. (Business) processes have one or more characteristics which must be constrained or conditioned to meet one or more objectives of the client.
Fig. 2: A structured way to design and/or specify a product, based on System Engineering: starting from customer needs, the objectives, processes, process activities, requirements and environment are analysed and defined; the requirement analysis process (ISO 15288) identifies the functions that enable the process activities and defines an integral functional architecture including functional interfaces (analysis loop); the architectural design process (ISO 15288) transforms the functional architecture towards a physical structure including physical interfaces (synthesis loop); all supported by interface, requirement, verification, configuration and information management
As an example we look at the creation of the requirements of an anti-heeling system, based on the (business) process of laying pipe at the bottom of the sea. The approach of this example is based on the design process of the System Engineering methodology, shown in Figure 2.
One of the conditions needed to perform this process is that the vessel stays horizontal and stable in all directions, given the context of the vessel. (Specifying the load suspending process will be needed to find a right solution for this.)
From the pipe laying process a demand can be stated to limit the angle of heel to a maximum of 3 degrees. So, this limitation of the angle of heel is a necessary condition for the pipe laying process. This condition has to be realized by a function. There are several solutions to reach this condition. For example, by making the vessel so broad that the suspended weight is relatively small compared with the supporting power of the vessel (caused by the upward force of the water).
Another solution can be to introduce an anti-heeling system based on compensating the change of the weight balance by moving weight in the opposite direction (this is the principle). A rational choice for the weight medium is water (a choice has to be made as to which kind of water: seawater, fresh water or a combination of these).
In that case it has to be specified what the conditions are for using the stabilizing system and what the conditions are for using the anti-heeling system. (Requiring a specific system for this purpose can be the next level of requirements.)
Basically there will be a need for an accumulator tank system on both sides of the vessel and a transport system consisting of a piping system and a pump system to move water from one tank system to the tank system on the other side via the transport system (containing the basically needed intersection valves). This configuration can be defined by a system interface for filling the tank systems (compound port), an electrical system interface for powering the pump system (electrical power port) and an information system interface for receiving information from the crane system (information port). The system itself can be defined by the subsystems, the internal ports and the connections between these ports (defined at this level by type and direction), as sketched below. At this level a choice also has to be made between a dedicated control system and use of the central vessel control system. In the latter case, the information port will have a more complex flow.
(The basic system configuration can be the next level of requirements; it introduces an interface requirement with the stabilizing system.)
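A minimal sketch (all names hypothetical) of such a configuration as a data model of subsystems, typed ports and directed connections:

```python
# Hypothetical data model for the system configuration described above:
# subsystems with typed, directed ports and connections between them.
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    kind: str            # e.g. 'compound', 'electrical power', 'information'
    direction: str       # 'in', 'out' or 'bidirectional'

@dataclass
class Subsystem:
    name: str
    ports: list = field(default_factory=list)

@dataclass
class Connection:
    source: str          # 'subsystem.port'
    target: str

anti_heel_system = [
    Subsystem("tank system PS", [Port("fill", "compound", "in")]),
    Subsystem("tank system SB", [Port("fill", "compound", "in")]),
    Subsystem("pump system", [Port("power", "electrical power", "in"),
                              Port("crane data", "information", "in")]),
]
connections = [Connection("pump system.power", "vessel.electrical system")]
```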
Based on the system configuration, the specified load suspending process and the vessel characteristics, the anti-heeling system can be dimensioned in terms of accumulating capacity and the needed flow through the transport system. This will lead to more specific port characteristics and the position of the tank systems within the vessel.
(These dimensions can be the next level of requirements; they introduce interface requirements for the space system and the electrical system.)
Based on the afore-mentioned and possible client standards for materials, there is enough information to start realizing the subsystems and port connections.
In this stage secondary system parts are defined, looking, for example, at the maintenance processes. So, additional valves are added to the configuration.
Choices are made for the realization of the port connections by defining connection assemblies, which are the whole of all the needed connection materials for that specific port connection.
[Figure: Requirement analysis and architectural design process. Implicit context and document requirements from the client contract specification are identified and allocated to process, capability, system and interface requirements; the process, functional and system decompositions are defined and a solution/technology is chosen to fulfil each capability (requirement analysis, iterative with architectural design). The preliminary, critical and detail design of the (sub)systems results in software requirements (CSCI: computer software configuration item), hardware requirements (HWCI: hardware configuration item), database, interface, validation and commissioning requirements, and in the solution description and realization. Results: System/Subsystem Specification (SSS), System/Subsystem Design Description (SSDD), Software Requirement Specification (SRS), HWCI Design Specification (HDS) and Interface Requirement Specification (IRS). Both stages are verified in reviews with the client (SRR and PDR).]
Within the context of the verification process it is important that client and contractor agree about the severity of a requirement, the verification method and the acceptance criteria for compliance with the requirement.
When analyzing the requirements, other verification characteristics must be defined as well, such as the project stage in which verification will be accomplished, who will be responsible for verification and what the risk of non-compliance is. Both the severity and the weighted risk of non-compliance are inputs to the risk management process, which can lead to a control action on the design process. A verification of a requirement can be performed in the context of the quality and/or risk management processes of the contractor or those of the client. A model that represents the above-mentioned aspects is given in Fig. 5.
[Fig. 5: Verification aspects attached to each classified requirement (capability, context, interface): verification method, procedure, design activity, responsible person and acceptance criteria; severity and weight feed the risk management process, leading to a resulting risk and a review with the client.]
5.3 Requirements in relation to Failure Mode, Effects and Criticality Analysis (FMECA)
Requirements also have an important role in the FMECA and RAM analyses, because a specification should give information about the required reliability, availability and maintainability of the requested product. Within a FMECA, potential problems in systems and processes are identified and related to the affected processes of the client. Identified risks are analyzed and fed back to the design of the system, and possibly to other processes, to reach an acceptable risk level. The characteristics involved in the FMECA are mostly the same as those used in a RAM analysis to determine the level of these characteristics and to verify whether the capabilities fulfil the requirements. This process is illustrated in Fig. 6.
[Figure omitted: for each relevant requirement identified per (sub)system, the FMECA records the failure cause, failure effect, detection mechanism, criticality, Risk Priority Number and fall-back strategy, related to the affected project and operational processes of client and contractor; severity and the risk profile report feed the risk management processes, and the RAMS analysis and risk report lead to a RAMS-related maintenance plan.]
Fig.6: Relations and aspects within the FMECA and RAMS analysis process
[Figure omitted: life cycle stage definition. A life cycle stage is described by a purpose and by a process; a process is the whole of activities, each specified by that process. The stages with their purposes and processes:
• As-required – specifying the project result – specify process
• As-proposed – proposal, bid – requirement analysis process
• As-designed – specifying the design, ready to materialize – design process
• As-realized – registration of the materialized design – purchase and construction process
• As-maintained – registration of changes in operation – maintenance process]
[Figure omitted: information levels illustrated with a chilled water unit. A business concept template holds the generic definition of a chilled water unit; through development and re-use it becomes a class template describing a specific chilled water unit (by the yard, for building nr. xxx), which in turn is developed into an instantiated class template describing a unique chilled water unit (by the supplier, for building nr. xxx).]
Some examples of decompositions and breakdown structures are (see also figure 9):
• Work breakdown
• System decomposition
• Functional decomposition
• Geometry decomposition
• Organizational breakdown
• Process decomposition
By focusing on just one area of interest per decomposition, the reason to create an element within the decomposition can be based on criteria directly related to that area of interest, without being influenced by other areas of interest or domains. As an example, the decomposition into work packages can be done purely from a discipline or domain point of view, while the system decomposition can be done from a physical point of view.
Reasons for using an integrated information management system within projects
Fig. 9: Definition of a project by decomposing the project into many project view structures
Another reason to define project view spaces is related to the definition of a system as a set of related elements that transforms material, energy and/or information. A requirement almost always says something about one or more of the following (so it must be possible to organize the storage of these kinds of “things”):
• the material, energy or information that has to be transformed
• the technology to use for this transformation (electrical, hydraulic, software, hardware, etc.)
• the capabilities of the system (functions, safety, availability, etc.)
• the process that will be active within the system, or the process that leads to the realization of the system
[Figure omitted: taxonomy of the “things” requirements refer to: material, energy and information; technologies (mechanical, hardware, software); processes (design process, production process) and process activities; systems (technical, organizational, administrative) and system elements; capabilities (function, availability, maintainability, reliability, security, safety); and geometry.]
A special capability is the function of something. It is difficult to give a definition of a function. In general a function is described by a verb followed by a noun, e.g. move water (pump), give time information (clock), give light (lamp). The function of a thing is the specific objective or the expected use of it to fulfil the need of the user of that thing. A function of a thing captures the reason for the existence of that specific thing. In principle, a function can be realized by more than one solution.
This means that if these three aspects are placed along three orthogonal axes, every piece of information related to a project can be placed in this 3-dimensional space. This is presented in Fig. 11. The same principle can be used to define and organize requirements: every possible requirement can be placed in this 3-dimensional space. By creating an information management system that supports such a three-dimensional information space, requirements can be created and managed in a fundamental way.
All decompositions defined within a project can be placed and maintained within such an information management system. Within the system, elements of the various decompositions can be related to each other when relevant, so that there will be “a semantic web” covering the information related to these decompositions.
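As a minimal sketch of this idea (hypothetical Python; the class, the axis vocabularies and the example values are ours, chosen for illustration only), every information item carries a coordinate on each of the three axes, and semantic relations link items across decompositions:

from dataclasses import dataclass

@dataclass
class InformationItem:
    project_view: str       # project view axis, e.g. "physical object", "function"
    life_cycle_stage: str   # life-cycle axis, e.g. "as-required", "as-designed"
    information_level: str  # information-level axis, e.g. "business concept", "class"
    content: str

unit = InformationItem("physical object", "as-designed", "class",
                       "chilled water unit for building nr. xxx")
func = InformationItem("function", "as-required", "business concept",
                       "move chilled water")

# A semantic relation between elements of two decompositions:
relations = [(func, "is fulfilled by", unit)]
for left, rel, right in relations:
    print(left.content, "--", rel, "->", right.content)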
[Figure omitted: the three orthogonal axes are the project view axis (physical objects, activities, functions, objectives), the life-cycle axis and the information-level axis (down to the individual product level), with semantic relations linking the elements of the product knowledge model.]
Fig.11: The resulting product knowledge model to capture all relevant information within a project
Implementation of such a framework can be done by using ISO standard 15926 part 7, the Gellish table format or the Web Ontology Language (OWL). In 2005, none of these had yet been implemented in a commercial information system. From 2006 onward, several initiatives will be started to realize demonstration projects (also at ISO level).
Conclusion
Within projects, requirements are very important for communication between the involved parties and for bringing the design from requirements analysis via detailed design to the realization of the product. Requirements are the focal points in the design, verification and validation processes and in the FMECA and RAMS analysis processes. Therefore it is important that clients deliver clear and unambiguous specifications for the products they want.
The traditional way of making specifications is often a cause of discussions and claims between the involved parties. These can be reduced by analyzing the specification, making implicit or unclear requirements explicit, and discussing the explicitly formulated requirements with the client before starting the design.
The presented way to model requirements in the “Gellish way”, by breaking up each requirement into facts, each consisting of a short left-hand term, a semantic relation and a short right-hand term, is very labour-intensive and demands specific, high-level skills, but it makes requirements very explicit. A more practical way is to capture and isolate sentences from the specification that each describe just one aspect of an object (seen as an information presentation object), relate each sentence first to an information object (the requirement itself, with a unique identifier), and then relate this information object to the relevant objects (e.g. physical objects, processes, activities, life forms, events, etc.). It then becomes possible to capture and integrate all the information around the earlier mentioned design processes in one environment.
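A minimal sketch of this more practical approach (hypothetical Python; the class and relation names are invented for illustration, not taken from Gellish or from the paper): each isolated specification sentence becomes an information object with a unique identifier, which is then related to the relevant objects.

import itertools

class InformationModel:
    """Toy store for requirements as uniquely identified information
    objects, linked to other objects by named semantic relations."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.objects = {}    # id -> (kind, description)
        self.facts = []      # (left id, relation name, right id)

    def add_object(self, kind, description):
        oid = next(self._ids)
        self.objects[oid] = (kind, description)
        return oid

    def relate(self, left_id, relation, right_id):
        # Each fact is a left-hand term, a semantic relation and a
        # right-hand term, in the spirit of the Gellish approach.
        self.facts.append((left_id, relation, right_id))

model = InformationModel()
req = model.add_object("requirement",
                       "The pump system shall be powered from the main "
                       "electrical system.")   # one isolated sentence
pump = model.add_object("physical object", "pump system")
model.relate(req, "is a requirement for", pump)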
Modelling requirements in this way requires a very flexible and rich information system that recognizes information levels, life cycle stages and the various project view spaces (physical object space, activity space, life form space, function space, etc.).
It would be a major step if specifications were delivered in this way. The contractor would then quickly have a clear picture of the wanted product, and misunderstandings would be largely eliminated. By using branch-specific type models, clients and contractors would be able to reuse their knowledge effectively. In the end this would lead to better and less expensive products.
References
Uncertainty Analysis Applied to Feedforward Neural Networks
David E. Hess, Naval Surface Warfare Center, MD, USA, [email protected]
Robert F. Roddy, Naval Surface Warfare Center, MD, USA, [email protected]
William E. Faller, Applied Simulation Technologies, FL, USA, [email protected]
Abstract
The investigation into the manner in which uncertainty is quantified for a feed forward neural
network has led to several interesting aspects of the problem, three of which will be discussed here.
First, the uncertainty present in the input vector propagates through a trained network into the output
vector, and this uncertainty is determined using the matrix of partial derivatives defining the change
in each output with respect to the inputs. These quantities are derived, and the approach is
demonstrated using examples from a FFNN trained to predict the 4-quadrant thrust and torque
performance for a series of propellers. Second, because the partial derivative information conveys
the relative sensitivity of a given output to each of the inputs, it can be used as a tool to determine the relevance of each of the inputs to the output prediction using two different approaches. Examples from
a FFNN trained to predict propeller side forces will be used to illustrate the methods. Finally, the
degree to which training data may be reproduced by a trained network depends upon decisions made
during development. One of those is the partitioning of the data set into training data and validation
data. By varying the data contained in each set and retraining, one obtains a number of potential
solutions. The variability in these solutions provides a measure of the fossilized bias error in the
network with respect to this development decision. An example will be provided.
Nomenclature
a, b scaling coefficients
b1, b2, b Bias arrays for 1st & 2nd hidden and output layers
CT* Propeller thrust coefficient, $C_T^* = T \big/ \left\{ (\rho/2)\left[V_a^2 + (0.7\pi n D)^2\right](\pi D^2/4) \right\}$
CQ* Propeller torque coefficient, $C_Q^* = Q \big/ \left\{ (\rho/2)\left[V_a^2 + (0.7\pi n D)^2\right](\pi D^3/4) \right\}$
D Propeller diameter
EAR Propeller expanded area ratio, EAR = expanded area of blades disk area
i, j, k, l Indices for counting nodes in input, 1st & 2nd hidden and output layers
n Propeller revolutions per second
N, N1, N2, No Number of nodes in input, 1st & 2nd hidden and output layers
P Propeller pitch, distance advanced in one revolution without slip
Q Propeller torque
s Subscript denotes quantities scaled to input and output ranges of nonlinearity
T Propeller thrust
Ux, Uy General uncertainty specification with bias and precision contributions
v1, v2, v Input to activation function in 1st & 2nd hidden and output layers
Va Propeller advance velocity
w1, w2, w Weight arrays connecting previous layer to 1st & 2nd hidden and output layers
x, y Input and output vectors
y1, y2 Output from nonlinearities in 1st & 2nd hidden layers, and input to next layer
Z Number of propeller blades
β Propeller advance angle, $\beta = \tan^{-1}\left(V_a / (0.7\pi n D)\right)$
ρ Fluid density
1. Introduction
The Manoeuvring and Control Division of the Naval Surface Warfare Center (NSWC) along with
Applied Simulation Technologies have been developing and applying feed forward neural networks
(FFNN) to problems of naval interest. This has prompted a need for a better understanding of
uncertainty associated with network predictions. The investigation into the manner in which
uncertainty is quantified for a feed forward neural network has led to several interesting aspects of the
problem, three of which will be discussed here. The first topic considers the propagation of
uncertainty associated with the input vector through a trained network into the output vector. This
uncertainty is determined using the matrix of partial derivatives defining the change in each output
with respect to the inputs.
When performing a general uncertainty analysis for a dependent variable, y, which is a function of N
variables xi
$y = y(x_1, x_2, \ldots, x_N),$ (1)
the uncertainty in the result, $U_y$, is a function of the uncertainties in each of the $x_i$, denoted by $U_{x_i}$,
$U_y = \left[ \left( \frac{\partial y}{\partial x_1} U_{x_1} \right)^2 + \left( \frac{\partial y}{\partial x_2} U_{x_2} \right)^2 + \cdots + \left( \frac{\partial y}{\partial x_N} U_{x_N} \right)^2 \right]^{1/2}.$ (2)
Equation 2 describes how uncertainty in each of the inputs, where $U_{x_i}$ is a total uncertainty composed of bias and precision uncertainties, propagates through the functional relationship, eq. 1, into an output.
output. To apply eq. 2 for the determination of uncertainty in neural network outputs, one must
derive the matrix of partial derivatives from the equations relating the outputs to the inputs. The
resulting equations for the partial derivatives depend upon the structure of the network and type of
activation functions that are used. The functional form is given by
$\frac{\partial y_l}{\partial x_i} = f_l(x_i, \mathrm{biases}, \mathrm{weights}),$ (3)
where the partial derivative of the lth output with respect to the ith input depends upon the input vector
and the values of weights and biases in the trained network. The next section defines the feedforward
equations which express eq. 1, and then derives eq. 3 for the FFNN structure commonly employed at
NSWC; namely, input layer, two hidden layers and output layer, all fully connected and with zero-to-
one sigmoid activation functions. The equations are easily implemented in the feedforward
subroutine, and the propagation of uncertainty from the inputs into the outputs is easily quantified.
The approach is demonstrated in Section 3 using examples from a FFNN trained to predict the 4-
quadrant thrust and torque performance for a series of propellers, as described in Roddy, et al. (2006).
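As a sketch of eq. 2 in code (Python/NumPy is our choice of language here; the matrix of partial derivatives is assumed to be available, e.g. from the expressions derived in the next section, and the numbers are illustrative only):

import numpy as np

def propagate_uncertainty(jacobian, u_x):
    # Eq. 2: jacobian[l, i] = dy_l/dx_i evaluated at the current input
    # vector; u_x[i] = total (bias + precision) uncertainty of input i.
    # Returns u_y[l], the propagated uncertainty of each output.
    return np.sqrt(((jacobian * u_x) ** 2).sum(axis=1))

jac = np.array([[0.5, -1.2, 0.1],    # 2 outputs, 3 inputs
                [0.0,  0.3, 2.0]])
u_x = np.array([0.02, 0.02, 0.2])
print(propagate_uncertainty(jac, u_x))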
The partial derivatives given in Eq. 3 have another interesting application. When developing FFNN
models of physical behaviour, a nontrivial part of the development process is often the determination
of the set of inputs that define and appropriately pose the problem to the network. For all but the
simplest problems, a typical procedure is to define an extensive set of inputs, perform the training,
then examine the importance of each input to the trained network. The latter step is typically referred
to as a lesion analysis. The results of the lesion analysis allow one to refine the set of inputs by
discarding those unimportant for the determination of the outputs. To implement such an analysis,
one characterizes the baseline performance of the network with all inputs using appropriate error
measures. Then, one at a time, an input is zeroed (lesioned), and the error measures quantify the
degree to which the output prediction grows worse (or sometimes improves). An alternative
approach, referred to as the white noise method, is to add a specified amount of white noise
sequentially to each input and quantify the impact on the outputs. The changes in the error after
altering each input, relative to the baseline case, are then sorted in descending order from largest
change to smallest change. Those inputs at the bottom of the list then have the least impact on the
solution. The FFNN designer can then delete those inputs that fall below some empirically
determined threshold, retrain the network, and often obtain improved output predictions.
Unfortunately, the filtering that is carried out using lesioning and then using the white noise approach
typically leads to different sorted input lists. Our experience at NSWC has shown that the lesioning
approach is consistently more successful at identifying unimportant inputs.
One might expect the matrix of partial derivatives identified earlier to shed some light on this issue
because they serve as sensitivity coefficients, in that they specify the sensitivity of a given output to a
change in an input. This is indeed the case. One can average the partial derivatives over all of the
test data, then rank them for a particular output from largest to smallest. The smallest derivatives at
the bottom of the list identify inputs that will likely have little role in the prediction of that output.
When comparing the sorted input list to those obtained using the lesioning and white noise
approaches, one finds that this method produces a list that is nearly identical to that obtained from the
white noise approach. This was initially a surprise. The key issue, however, is the change in the
output that results from a change in a given input. By consulting the simple equation
$\Delta y_l = \frac{\partial y_l}{\partial x_i}\,\Delta x_i,$ (4)
one sees that a change in an output Δyl due to an input change Δxi depends not only on the size of
the partial derivative, but also on the size of the input change itself. Thus, a second approach
presented itself. Average the derivatives as well as the magnitudes of each of the inputs over the test
data. Then, for a given output, rank the products of an averaged derivative and an averaged input
change from largest to smallest. When comparing the sorted input list to those obtained using the
lesioning and white noise approaches, one finds that this method produces a list that is nearly
identical to that obtained from the lesion analysis. The two partial derivative analyses described here
clarify the differences between the lesioning and white noise methods and put each on a firmer
foundation. Examples from a FFNN trained to predict propeller side forces will be used in Section 4
to illustrate the methods.
A third aspect of the uncertainty problem is an examination of the fossilized bias error, a term coined
by Moffat (1988), that exists within the trained network and which results from decisions made
during development. For example, the degree to which the training data may be reproduced by the
trained network will depend upon how effectively the back propagation algorithm can refine the
weight set. This, in turn, depends upon chosen parameters such as learning rate and momentum. It
depends upon the choice of activation function. It may well depend upon the initial random weight
distribution. Certainly, it will depend upon the choices made when partitioning the test data into data
to be used for training and data to be set aside for testing the network. Whatever predictions result
from this network will not be perfect. Yet, once the development decisions are made and the network
is trained, then, no matter how many times the trained network is executed, it will always produce the
same bias error in the outputs for a given input vector. Hence, whatever remaining prediction errors
may exist in the trained network get fossilized into a bias error. The general problem can prove to be
difficult to quantify.
This paper focuses on one particular development decision: partitioning the test data into training data
and validation data. Experience gives us some guidance with the initial selection. This choice is
made, the network is trained, and the predictions are compared with the measured data. Then, the
process is repeated with the training data chosen at random from the overall set of test data. This
approach has been automated using the ICE program by Faller (2005), and the sequence is repeated a
desired number of times, 10 say. The result is a collection of 11 sets of weights and biases
representing 11 trained neural networks. The test data is then input into each of the 11 networks, a
prediction is obtained from each, and then a mean and standard deviation is computed for the 11
predictions. The mean solution, computed for every point in the test data, is typically better than any
of the 11 particular solutions. Furthermore, the standard deviation characterizes the level of
variability among the 11 solutions when the training files are chosen at random. Examples taken
from Roddy, et al. (2006) will be used to illustrate the approach in Section 5. We now turn to the
description of the feed forward equations and the derivation of the partial derivatives.
2. Feed forward Equations and Sensitivity Derivatives
Figure 1 shows a feed forward network with an input layer, two hidden layers and an output layer.
Each layer is fully connected to the preceding layer, and nodes in the hidden and output layers use
zero-to-one sigmoid nonlinearities. The right side of the figure shows how the input vector and the
bias travel along weighted links from the input layer to the jth node in the first hidden layer. Products
of the inputs and the weights are summed with the bias to provide the input to the activation function,
and the output from the nonlinearity is also shown. The process is similar for other nodes in each of
the other layers; the minor changes in the notation are documented in the nomenclature.
The equations describing the transformation of the input vector into the output vector are given
below. The notation used considers i = 1, …, N input nodes, j = 1, …, N1 and k = 1, …, N2 nodes in
the two hidden layers and l = 1,…, No output nodes.
Input layer to first hidden layer:
$v1_j = b1_j + \sum_{i=1}^{N} w1_{ij}\, x_i, \qquad y1_j = \frac{1}{1 + e^{-v1_j}} \qquad (j = 1, \ldots, N1).$ (5)
First hidden layer to second hidden layer:
$v2_k = b2_k + \sum_{j=1}^{N1} w2_{jk}\, y1_j, \qquad y2_k = \frac{1}{1 + e^{-v2_k}} \qquad (k = 1, \ldots, N2).$ (6)
Second hidden layer to output layer:
$v_l = b_l + \sum_{k=1}^{N2} w_{kl}\, y2_k, \qquad y_l = \frac{1}{1 + e^{-v_l}} \qquad (l = 1, \ldots, No).$ (7)
Note that the biases in the output layer, bl , are typically set to zero (not used) for the networks in
routine use here at NSWC. Substituting eq. 5 into eq. 6 and then eq. 6 into eq. 7 will yield a set of
equations relating $y_l$ to $x_i$ of the form of eq. 1. The set of partial derivatives that are required are $\partial y_l / \partial x_i$ and will now be derived step by step using repeated application of the Chain Rule.
The change in the output of the jth node in the first hidden layer with respect to the inputs is found as:
Input layer to first hidden layer:
$\frac{\partial y1_j}{\partial v1_j} = \frac{e^{-v1_j}}{\left(1 + e^{-v1_j}\right)^2} = y1_j\,(1 - y1_j), \qquad \frac{\partial v1_j}{\partial x_i} = w1_{ij}, \qquad \therefore\; \frac{\partial y1_j}{\partial x_i} = \frac{\partial y1_j}{\partial v1_j}\,\frac{\partial v1_j}{\partial x_i} = w1_{ij}\, y1_j\,(1 - y1_j).$ (8)
The change in the output of the kth node in the second hidden layer with respect to the nodes in the
first hidden layer is given by:
First hidden layer to second hidden layer:
$\frac{\partial y2_k}{\partial v2_k} = y2_k\,(1 - y2_k), \qquad \frac{\partial v2_k}{\partial y1_j} = w2_{jk}, \qquad \therefore\; \frac{\partial y2_k}{\partial y1_j} = w2_{jk}\, y2_k\,(1 - y2_k).$ (9)
The change in the output of the lth node in the output layer with respect to the nodes in the second
hidden layer is determined from:
Second hidden layer to output layer:
$\frac{\partial y_l}{\partial v_l} = y_l\,(1 - y_l), \qquad \frac{\partial v_l}{\partial y2_k} = w_{kl}, \qquad \therefore\; \frac{\partial y_l}{\partial y2_k} = w_{kl}\, y_l\,(1 - y_l).$ (10)
Now, we need to apply the results from eqs. 8-10 to obtain $\partial y_l / \partial x_i$. Working backward, we first want to determine $\partial y_l / \partial y1_j$, which is the change in the output of the lth node in the output layer with respect to the nodes in the first hidden layer. By referring to eqs. 7 and 6, we find that
$y_l = f_l\left(y2_1, y2_2, \ldots, y2_{N2}\right) \quad \text{and} \quad y2_k = g_k\left(y1_1, y1_2, \ldots, y1_{N1}\right).$ (11)
Application of the chain rule for the case of multiple independent variables requires a sum, giving
$\frac{\partial y_l}{\partial y1_j} = \sum_{k=1}^{N2} \frac{\partial y_l}{\partial y2_k}\,\frac{\partial y2_k}{\partial y1_j} = \sum_{k=1}^{N2} \left[ w_{kl}\, y_l\,(1 - y_l) \right] \left[ w2_{jk}\, y2_k\,(1 - y2_k) \right],$ (12)
where the second line follows from substituting the results of eqs. 10 and 9. Finally, we can compute $\partial y_l / \partial x_i$, the rate of change of the lth node in the output vector with respect to the nodes in the input vector, as follows:
$\frac{\partial y_l}{\partial x_i} = \sum_{j=1}^{N1} \frac{\partial y_l}{\partial y1_j}\,\frac{\partial y1_j}{\partial x_i} = \sum_{j=1}^{N1} \sum_{k=1}^{N2} \left[ w1_{ij}\, y1_j\,(1 - y1_j) \right] \left[ w2_{jk}\, y2_k\,(1 - y2_k) \right] \left[ w_{kl}\, y_l\,(1 - y_l) \right].$ (13)
The analytic derivatives can be verified numerically with a second-order central difference; for the first output and first input,
$\frac{\partial y_1}{\partial x_1} \approx \frac{y_1(x_1 + \Delta x_1) - y_1(x_1 - \Delta x_1)}{2\,\Delta x_1} + O\!\left(\Delta x_1^2\right).$ (16)
By keeping Δx1 a very small value, this approximation will compare well to the exact value computed
by eq. 13.
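A compact sketch of these computations (our own Python/NumPy rendering of the stated architecture, not the NSWC implementation): the feedforward pass of eqs. 5-7, the analytic Jacobian of eq. 13, and the central-difference check of eq. 16 with random weights.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def feedforward(x, w1, b1, w2, b2, w, b):
    # Eqs. 5-7: two fully connected hidden layers with 0-1 sigmoids.
    # Shapes: w1 (N, N1), w2 (N1, N2), w (N2, No); biases match widths.
    y1 = sigmoid(b1 + x @ w1)
    y2 = sigmoid(b2 + y1 @ w2)
    return sigmoid(b + y2 @ w), y1, y2

def jacobian(x, w1, b1, w2, b2, w, b):
    # Eq. 13, assembled from the factors of eqs. 8-10.
    y, y1, y2 = feedforward(x, w1, b1, w2, b2, w, b)
    d1 = w1 * (y1 * (1 - y1))   # (N, N1):  dy1_j/dx_i
    d2 = w2 * (y2 * (1 - y2))   # (N1, N2): dy2_k/dy1_j
    d3 = w * (y * (1 - y))      # (N2, No): dy_l/dy2_k
    return (d1 @ d2 @ d3).T     # (No, N):  dy_l/dx_i

rng = np.random.default_rng(0)
N, N1, N2, No = 4, 6, 5, 2
params = (rng.normal(size=(N, N1)), rng.normal(size=N1),
          rng.normal(size=(N1, N2)), rng.normal(size=N2),
          rng.normal(size=(N2, No)), np.zeros(No))  # output biases zero
x = rng.normal(size=N)
jac = jacobian(x, *params)

dx = 1e-6                        # eq. 16: central-difference check
e0 = np.eye(N)[0]
fd = (feedforward(x + dx * e0, *params)[0][0]
      - feedforward(x - dx * e0, *params)[0][0]) / (2 * dx)
print(jac[0, 0], fd)             # the two values should agree closely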
To conclude this section, an additional comment regarding scaling is in order. Prior to training the
network, the inputs and outputs that comprise the training data are scaled, using a linear
transformation, to a portion of the input domain [−0.95,0.95] and output range [0.05,0.95] of the
sigmoid activation function. Thus, scaled inputs, xs , and scaled outputs, ys , are computed from
$x_{s_i} = a_i\, x_i + b_i \quad \text{and} \quad y_{s_l} = a_l\, y_l + b_l,$ (17)
where a and b are coefficients that are different for each input and output. To be precise, these scaled
quantities are used in eqs. 3-17, and the s subscript is understood and dropped to ease the notation.
Accordingly, the uncertainties must be scaled as well. A summary of the entire procedure follows.
The user specifies general uncertainty levels for the inputs, $U_{x_i}$. Scaled uncertainties are obtained by applying eq. 2 to the scaling equation for the inputs, eq. 17, to give
$U_{s x_i} = a_i\, U_{x_i}.$ (18)
The scaled input uncertainties are then used with the derivatives, computed from eq. 13, in eq. 2 to produce scaled general uncertainty levels for the outputs, $U_{s y_l}$. (Note that these are scaled derivatives because they represent changes in scaled outputs with respect to changes in scaled inputs.) The output uncertainties must then be unscaled. First, the scaling equation for the outputs, eq. 17, is solved for the unscaled outputs, and then eq. 2 is applied to the rearranged equation. The unscaled output uncertainties are then obtained from
$U_{y_l} = \frac{U_{s y_l}}{a_l}.$ (19)
If one wishes to examine the derivatives directly and desires that they represent the slopes of the
unscaled predictions, then one must unscale the derivatives using
$\frac{\partial y_l}{\partial x_i} = \frac{\partial y_{s_l}}{\partial x_{s_i}}\,\frac{a_i}{a_l}.$ (20)
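The scaling bookkeeping of eqs. 18-20 then reduces to a few lines (a sketch, assuming the per-input coefficients a_i and per-output coefficients a_l of eq. 17 are stored as NumPy arrays):

import numpy as np

def output_uncertainty(jac_scaled, u_x, a_in, a_out):
    # Eqs. 18, 2 and 19: scale the input uncertainties, propagate them
    # through the scaled Jacobian, then unscale the output uncertainties.
    u_sx = a_in * u_x                                       # eq. 18
    u_sy = np.sqrt(((jac_scaled * u_sx) ** 2).sum(axis=1))  # eq. 2
    return u_sy / a_out                                     # eq. 19

def unscale_jacobian(jac_scaled, a_in, a_out):
    # Eq. 20: dy_l/dx_i = (dys_l/dxs_i) * a_i / a_l.
    return jac_scaled * a_in[np.newaxis, :] / a_out[:, np.newaxis]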
In the next section, we provide some examples of these calculations using FFNNs trained to predict
propeller thrust and torque.
Roddy, et al. (2006) describe the development of feed forward neural network (FFNN) predictions of
four-quadrant thrust and torque behaviour for the Wageningen B-Screw Series of propellers. The
purpose of the work was twofold: to create a prediction tool that accurately recovered measured data
for those propellers in the series for which measured data was available, and to further provide
reasonable four-quadrant thrust and torque predictions for the remaining propellers for which no
measured data was available. The training data expressed thrust and torque coefficients, CT* and
10CQ*, as functions of pitch to diameter ratio, P/D, expanded area ratio, EAR, number of blades, Z,
and advance angle, β . The experimental data set, while substantial, did not cover the entire range of
each of these variables for each propeller in the series. Past attempts using other methods to fit or
interpolate within the existing experimental data to estimate the performance for the entire series were
unsatisfactory. The FFNN approach initially used two FFNNs, each taking as inputs: P/D, EAR, Z
and β , and predicting CT* and 10 CQ* as the single output, respectively. Then, to further increase
prediction quality, the [0, 360] range of β was subdivided into four regions, and a separate FFNN
was used for each of the regions for a total of eight FFNNs. The networks were trained using a subset
of the experimental data with the remainder being used to validate the performance of the trained
networks. The results showed excellent agreement with the existing data.
Fig. 2 shows a typical example of FFNN predictions of CT* and −10CQ* plotted versus β for a family of curves of varying P/D and for constant values of Z = 5 and EAR = 0.65 for the prediction of the B5-65 propeller series. (The B-Series propellers are defined as Bm-nn, where m = Z and nn = EAR·100.) No experimental data is available for this propeller series, and these predictions show the utility of the FFNN approach to make reasonable predictions of the 4-quadrant behaviour. The quadrant definitions for β are: [0, 90] – Ahead, [90, 180] – Crashback, [180, 270] – Backing, and [270, 360] – Crashahead. In particular, n = 0 when β = 90° and β = 270°. Fig. 2 shows that in regions centred about these values, there is considerable variation in 10CQ*, and to a lesser degree in CT*, with respect to β and P/D. Other results provided by Roddy, et al. (2006) demonstrate that
substantial variation with respect to EAR occurs in these regions as well. The need to capture this
variation with sufficient quality was the motivation for using separate networks in regions centred
about these values.
[Fig. 2: FFNN predictions of CT* and −10CQ* versus β for the B5-65 series, P/D = 0.40–1.40.]
Fig. 3: Four quadrant predictions for B5-65 with unscaled uncertainty bounds.
To compute uncertainty levels for CT* and 10 CQ* , one must first specify general uncertainties for each
of the inputs. These uncertainties may be a function of the input values, or they may be held constant
over the entire range. Conservative constant values were chosen to be
$U_{P/D} = 0.02, \quad U_{EAR} = 0.02, \quad U_Z = 0.0, \quad U_\beta = 0.2°.$ (21)
Represented as percentages, these choices are 1.4–5% for P/D = 0.4–1.4 and 1.9–5% for EAR = 0.4–1.05. The resulting output uncertainties vary as a function of the values of the input vector, and they are plotted as bounds about the thrust and torque values in Fig. 3 for a B5-65 propeller with P/D = 1.4. For this case, the uncertainty in CT* varies from 0.0015 to 0.046 with an average value of 0.015 and a standard deviation of 0.012. The uncertainty in −10CQ* varies from 0.011 to 0.149 with an average value of 0.032 and a standard deviation of 0.025.
Fig. 3 shows the magnitude of the unscaled output uncertainties relative to the output magnitudes
themselves; for the input uncertainty levels assumed in eq. 21, the output levels are in most cases
quite small. To better illustrate the variation of the output uncertainties in the four quadrants, $U_{C_T}$ and $-U_{10C_Q}$ have been plotted as a function of β for a family of propellers of varying P/D and for constant values of Z = 5 and EAR = 0.65 in Fig. 4.
[Fig. 4: Scaled output uncertainties $U_{C_T}$ and $-U_{10C_Q}$ versus β for Z = 5, EAR = 0.65, P/D = 0.4–1.4.]
(Note that the uncertainties in Fig. 4 have been left in scaled form so that comparisons can be made to
subsequent Figs. 5-7 presented later.) This figure reinforces the previous conclusion that the output
uncertainties are small except for small regions centred about β = 90° and β = 270° where n = 0 .
In these regions the uncertainty values become much larger. Shown on this larger scale, one can see
that the curves become discontinuous at the boundaries of the four subdivisions of β . As each
boundary is crossed, a different feed forward network is in use with a different set of weights and
biases. In accordance with eq. 14, this alters the computed derivatives, which in turn changes the
output uncertainties.
Because the input uncertainty levels were chosen to be constant values as the inputs vary, the
variation in the output uncertainties essentially represents the underlying behaviour of the derivatives.
An examination of the derivatives themselves would be useful for determining the prime contributors
to the uncertainty peaks that appear in Fig. 4. Therefore, the magnitudes of the products
$U_{P/D}\,\frac{\partial C_T}{\partial (P/D)}, \qquad U_{EAR}\,\frac{\partial C_T}{\partial EAR}, \qquad \text{and} \qquad U_\beta\,\frac{\partial C_T}{\partial \beta},$ (22)
along with those for 10 CQ* were examined in each of the four quadrants. Note that these are scaled
quantities (with the s subscript dropped for convenience) because these are the ones used to determine
the output uncertainties as discussed following eq. 18. The uncertainty-derivative products with
respect to P/D are given in Fig. 5, those with respect to EAR are shown in Fig. 6, and those with
respect to β are found in Fig. 7. As expected, the uncertainty-derivative products are generally small
quantities over most of the range of β and show the largest variation in the regions where the
uncertainty peaks were found. Surprisingly, products with respect to EAR are the overall dominant
contribution and are about three times larger than those with respect to P/D, and about six times
larger than those with respect to β . In the region centred about β = 270° , all three products
contribute with the EAR product the dominant one. For the region centred about β = 90° , the β
contribution is absent, the P/D product plays almost no role and the EAR product is the strong
contribution. Important to note in this discussion is the fact that the input uncertainties are scaled, as
are the derivatives. This fact certainly influences the relative magnitudes of the uncertainty-
derivative products. Nevertheless, the above statements as well as the conclusions below are
certainly valid when using the FFNN model to make thrust and torque coefficient predictions.
Summarizing, the conclusions to be drawn from this study are as follows. Roddy, et al. (2006) have
applied a technique using feed forward neural networks to make predictions of the behaviour of CT*
and 10CQ*, as functions of pitch to diameter ratio, P/D, expanded area ratio, EAR, number of blades,
Z, and advance angle, β . A general method has been described to compute the level of uncertainty
in the output predictions that arises from uncertainty associated with the input variables. In order to
minimize the uncertainty in these FFNN output predictions, the uncertainty in EAR is most
important and should be kept to a minimum. The uncertainty specification for P/D is less important and could be as much as three times larger than $U_{EAR}$ before it makes a contribution as large as that of
EAR. The uncertainty in β is least important and can be approximately 50 times larger than the
uncertainty in EAR before it begins to make a contribution as large. We turn now to another
interesting use of the derivatives: to eliminate useless inputs.
The preceding analysis has shown that the derivatives reveal quite a bit about the underlying nature of
the solution. One would think that since the derivatives represent the sensitivity of outputs to inputs,
they could be used to determine whether inputs are influencing the output in the trained network or
whether they are having no effect and can be discarded. This can be of great utility for more complex
problems where a nontrivial part of the development process is often the determination of the set of
inputs that define and appropriately pose the problem to the network. As stated earlier, a typical
procedure in these cases is to define an extensive set of inputs, perform the training, then examine the
importance of each input to the trained network. Two techniques typically used for this purpose are
the white noise and lesion analysis methods. These methods result in a quantitative ranking of the
inputs from most important to least important. The FFNN designer can then delete those inputs that
fall below some empirically determined threshold, retrain the network, and often obtain improved
output predictions. A difficulty that arises is that the sorted input lists computed by the two methods
are typically different. Which one should be used? We have found that the lesion analysis
consistently gives more reliable results. This section will show that by using the derivatives, one has
an additional method available; and furthermore, the derivatives can be used to explain the
differences obtained using the other two methods.
To illustrate the techniques, we will use feed forward neural networks, currently under development,
for the computation of propeller thrust and side force coefficients. The training data, acquired from
experiments here at NSWCCD, are a large and rich data set with many variables, and combinations of
variables, that can be used as potential inputs. Each network has an input layer, two hidden layers
and an output layer with one output. Each layer is fully connected to the preceding layer, and nodes
in the hidden and output layers use zero-to-one sigmoid nonlinearities.
[Fig. 5: Uncertainty-derivative products with respect to P/D, $U_{P/D}\,\partial C_T/\partial (P/D)$ and $-U_{P/D}\,\partial 10C_Q/\partial (P/D)$, versus β for P/D = 0.4–1.4.]
[Fig. 6: Uncertainty-derivative products with respect to EAR, $U_{EAR}\,\partial C_T/\partial EAR$ and $-U_{EAR}\,\partial 10C_Q/\partial EAR$, versus β.]
[Fig. 7: Uncertainty-derivative products with respect to β, $U_\beta\,\partial C_T/\partial \beta$ and $-U_\beta\,\partial 10C_Q/\partial \beta$, versus β.]
The network for the prediction of
thrust coefficient has 19 inputs, whereas the network for the prediction of lateral force coefficient has
112 inputs.
To determine the relative importance of these inputs, the white noise method (WN) can be used. This
method first characterizes the quality of the solution over the full set of test data (both training data
and validation data) with all inputs using appropriate error measures. We use the average angle
measure (AAM), developed at NSWCCD and defined in Roddy, et al. (2006), and a correlation
coefficient (R) for this purpose. This becomes the baseline to which the other results are compared.
The white noise method proceeds by adding a random number uniformly distributed in [-1,1] and
multiplied by a coefficient to the first scaled input. We typically use 25% white noise such that the
multiplying coefficient is 0.25. The error measures, averaged over all of the test data, are recorded
for the first input with the added white noise. If the first input is important, then AAM and R will
show substantial decreases from the baseline case indicating the poor quality of the solution with
noise added to the first input. Then, the process is repeated over all of the test data with the white
noise added to the second input, and so on. The 1 − AAM values for each of these cases are then
ranked from largest to smallest to yield the list of most important to least important inputs.
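A sketch of this procedure (hypothetical Python; predict stands for the trained network's feedforward function over the scaled test inputs and error_measure for a figure of merit such as AAM, neither of which is defined here):

import numpy as np

def white_noise_ranking(predict, error_measure, X, Y, noise=0.25, seed=0):
    # Add 25% uniform white noise to one scaled input at a time and
    # rank the inputs by the resulting degradation of the error measure.
    rng = np.random.default_rng(seed)
    baseline = error_measure(predict(X), Y)
    degradation = []
    for i in range(X.shape[1]):
        Xn = X.copy()
        Xn[:, i] += noise * rng.uniform(-1.0, 1.0, size=len(X))
        degradation.append(baseline - error_measure(predict(Xn), Y))
    # Largest degradation first = most important input first.
    return sorted(range(X.shape[1]), key=lambda i: -degradation[i])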
An alternative method (D1) averages, for each output, the magnitude of the partial derivatives of that
output with respect to each of the inputs over all of the test data, then ranks them for a particular
output from largest to smallest. The smallest derivatives at the bottom of the list identify inputs that
will likely have little role in the prediction of that output. To allow direct comparison of the two
techniques, the 1 − AAM values from the WN method and the averaged derivatives of the D1 method
were transformed to the range [0,1] using separate linear transformations. The range [0,1] now
represents relative importance with a value of one corresponding to most important and zero denoting
least important.
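The D1 ranking itself reduces to a few array operations (a sketch, assuming the Jacobians of eq. 13 have already been evaluated over all of the test data):

import numpy as np

def d1_ranking(jacobians, output=0):
    # jacobians: (n_samples, n_outputs, n_inputs) array of dy_l/dx_i.
    # Average the magnitudes over the samples, map the scores linearly
    # to relative importance in [0, 1] and sort, most important first.
    score = np.abs(jacobians).mean(axis=0)[output]
    rel = (score - score.min()) / (score.max() - score.min())
    order = np.argsort(-rel)
    return order, rel[order]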
The comparison of WN and D1 is shown in the left graph of Fig. 8 for the FFNN predicting thrust
coefficient. The abscissa denotes the input numbers, and the ranking was formed using the D1
method indicated by the light blue vertical bars in the graph, which are monotonically decreasing.
Input 15 is clearly least important and input 19 is, by far, most important to the output prediction.
[Fig. 8: Relative importance of the 19 inputs of the thrust-coefficient FFNN: left, derivative (D1) ranking compared with the white noise ranking (input order: 19, 12, 2, 10, 14, 16, 4, 18, 11, 5, 3, 7, 17, 13, 1, 6, 9, 8, 15); right, derivative-product (D2) ranking compared with the lesion ranking (input order: 2, 19, 16, 18, 17, 3, 10, 12, 4, 14, 8, 7, 9, 6, 11, 13, 15, 1, 5).]
A second method (D2) averages, for each output $y_l$, the magnitudes of each of the inputs $x_i$ and of each of the derivatives $\partial y_l / \partial x_i$ over all of the test data. Then, for a given output, it ranks the products of an averaged derivative and an averaged input from largest
to smallest. The comparison of L and D2 for the FFNN predicting thrust coefficient is shown in the
right graph of Fig. 8. The ranking was formed using the D2 method indicated by the light blue
vertical bars in the graph, which are monotonically decreasing. Again, some discrepancies between
the rankings provided by L and D2 can be seen (3 and 16 are swapped, for example) but both are very
close. On the other hand, by comparing the left and right graphs in Fig. 8, one sees that the ranking
provided by the two pairs of methods are quite different.
[Figure: Relative importance of the 112 inputs of the side-force FFNN: left, derivative ranking compared with the white noise ranking; right, derivative ranking compared with the lesion ranking.]
As discussed earlier, decisions made during the development of a FFNN will affect the degree to
which the training data may be reproduced by the network. A trained network with different learning
rate or momentum coefficients, different nonlinearities, different initial weight distributions or
different training sets will produce results which vary from networks using alternative choices. Once
these development decisions are made and the network is trained, then, no matter how many times the
trained network is executed, it will always produce the same bias error in the outputs for a given input
vector. The former errors that resulted from variations in network parameters get fossilized into a bias
error. Since little theory exists to aid the developer with these decisions, the choices are usually made
based upon long-term experience.
One of the decisions of particular relevance is the choice of data to use as the training data for the
network. Our approach here at NSWCCD is to typically use about 80% of the available data as
training data leaving 20% set aside for use to validate the trained network against data typical of, but
not included in, the training set. What data should be chosen to include in the training set? If the data
is acquired from an experiment, how much experimental data is required to lead to successful
simulation and prediction by a FFNN? These are difficult questions that will not be answered here.
Instead, we will focus on a random variation of the data chosen as training data and investigate the
variability that results in the output predictions.
As an example to illustrate the technique we use an early version of the FFNNs developed for the
prediction of thrust and torque coefficients for the B-screw series discussed in Roddy, et al. (2006).
These earlier versions did not subdivide the range of β and use separate networks in each
subdivision; instead, they employed a single network over the entire range. Each FFNN prediction is
a thrust coefficient or torque coefficient prediction using the following inputs:
P/D, EAR, Z, β, cos β, cos 5β, cos 10β, sin β, sin 5β and sin 10β. An example of the predictions of CT* and −10CQ* from these networks is presented in Fig. 10, plotted versus β for a family of two propellers with EAR = 0.4 and EAR = 1.0 and for constant values of Z = 4 and P/D = 1.0 for the
prediction of the B4-40 and B4-100 propeller series. The solid symbols in Fig. 10 represent the
training data. These solutions are not as precise as those for the networks finally adopted; for this
reason, they provide a good demonstration.
These FFNNs are implemented using software developed by Applied Simulation Technologies that
automates the development of feed forward networks. The code is named Intelligent Calculation of
Equations (ICE), and provides many development options that make ICE a powerful and versatile
tool. Details may be found in Faller (2005). One of those options is to vary the chosen training data;
the procedure is as follows. A baseline partition of the test data into training data and validation data
is made using heuristics determined from experience. With this initial choice made, the network is
trained, and the predictions are compared with the measured data. Then, the process is repeated a
desired number of times, 10 say, with the training data chosen at random from the overall set of test
data. The result is a collection of 11 sets (including the initial one) of weights and biases representing
11 trained neural networks. The test data is then input into each of the 11 networks, a prediction is
obtained from each, and then a mean and standard deviation is computed for the 11 predictions at
each value of the input vector. The mean solution, computed for every point in the test data, is
typically better than any of the 11 particular solutions. Furthermore, the standard deviation
characterizes the level of variability among the 11 solutions when the training files are chosen at
random.
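The procedure can be sketched as follows (hypothetical Python; train_network stands in for a complete ICE-style training run returning a predictor function, and is not defined here):

import numpy as np

def repartition_ensemble(train_network, X, Y, n_extra=10,
                         train_frac=0.8, seed=0):
    # Train one baseline network plus n_extra networks whose training
    # rows are drawn at random; return the per-point mean and standard
    # deviation of the predictions over all 1 + n_extra networks.
    rng = np.random.default_rng(seed)
    n_train = int(train_frac * len(X))
    predictors = [train_network(X[:n_train], Y[:n_train])]  # baseline split
    for _ in range(n_extra):
        idx = rng.choice(len(X), size=n_train, replace=False)
        predictors.append(train_network(X[idx], Y[idx]))
    preds = np.stack([p(X) for p in predictors])
    return preds.mean(axis=0), preds.std(axis=0, ddof=1)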
An example of this process is depicted in Fig. 11, where we are focusing on the CT* solutions to
simplify the plot. The results for the − 10 CQ* solutions are similar. Shown in bold lines are the
solutions averaged over the 11 trial solutions for the two propellers in Fig. 10; these match the CT*
curves provided in that figure. Three of the 11 trial solutions are shown as thin lines about the
average solution; only 3 trial solutions are shown to simplify the plot. The use of trigonometric
functions of β as inputs leads to some of the variability observed; yet, the inclusion of these inputs
was the primary factor allowing the use of a single FFNN to predict CT* over the entire range of β .
Shown beneath the CT* curves in Fig. 11 is a plot of the standard deviation about the average value of
the 11 trial solutions for the two propellers for the specified value of β, Z = 4, P/D = 1.0 and EAR = 0.4 or EAR = 1.0. These values are referred to the secondary axis on the right side of the
plot. When these standard deviations are further averaged over the range of β , they yield the
[Fig. 10: Averaged FFNN predictions of CT* and −10CQ* versus β for EAR = 0.4 and EAR = 1.0 (B4-40 and B4-100), Z = 4, P/D = 1.0; solid symbols denote training data.]
Fig. 11: Averaged and selected solutions for B4-40 and B4-100, P/D=1.0.
quantities in column 2 of the following table. To get a sense of the relative size of these deviations,
the absolute values of the CT* predictions were averaged over β to yield the numbers in column 3 of
the table. The percentages in the fourth column represent the size of the deviations (col. 2) relative to
the average absolute values (col. 3).
Table: Standard deviations and averages of absolute values for B4-40 and B4-100 propellers, P/D = 1.0
EAR | Std. Dev. | Avg. Abs. Val. | Std. Dev. (%) | Bias Index Mean (%)
0.4 | 0.046 | 0.310 | 14.75 | 9.91
1.0 | 0.045 | 0.486 | 9.31 | 6.25
The numbers in col. 2 represent the averaged change in the $C_T^*$ output with respect to one development change (choice of training data) while holding other development parameters constant.
In this sense, they are a measure of the bias error associated with using any of the 11 trial solutions.
To determine a measure of the bias error for the mean solution of the 11 trial solutions, we proceed as
follows. The t value from the Student’s t Distribution for 11 − 1 = 10 degrees of freedom is 2.228.
The bias index of the mean value is computed from
$B_M = \frac{t\, S_T}{\sqrt{N}},$ (23)
where BM is the bias index of the mean solution, ST is the standard deviation of the trial solutions
(col 2) and N is the number of trials. The use of the t value provides a 95% confidence estimate. The
corresponding percentages are given in col. 5. (Note that the numbers in col. 2 would have to be multiplied by t = 2.228, and the percentages in col. 4 updated accordingly, to provide similar 95% bias indices for the trial solutions.)
for discarding this early FFNN approach.
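Eq. 23 applied to the numbers in the table (a minimal check in Python; the t value is the one quoted in the text):

import math

def bias_index_of_mean(s_t, n_trials=11, t=2.228):
    # Eq. 23: B_M = t * S_T / sqrt(N), a 95% confidence bias index
    # for the mean of the trial solutions.
    return t * s_t / math.sqrt(n_trials)

for ear, s_t, avg in [(0.4, 0.046, 0.310), (1.0, 0.045, 0.486)]:
    bm = bias_index_of_mean(s_t)
    print(f"EAR={ear}: B_M={bm:.4f} ({100 * bm / avg:.2f}% of avg |CT*|)")
# Prints about 9.97% and 6.22%, reproducing col. 5 to within rounding.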
6. Summary
This paper has derived analytic expressions for the matrix of partial derivatives relating the outputs to
the inputs for the FFNN structure commonly employed at NSWC; namely, input layer, two hidden
layers and output layer, all fully connected and with zero-to-one sigmoid activation functions. These
partial derivatives were then used in a study of the propagation of input uncertainty into the output
predictions. An example from the prediction of thrust and torque coefficients for the B-screw series
was used. The analysis showed that to minimize the uncertainty in the FFNN output predictions, the
uncertainty in EAR should be kept to a minimum. The uncertainty specification for P/D was less important and could be as much as three times larger than $U_{EAR}$ before it makes a contribution as
large as that of EAR. The uncertainty in β was least important and can be approximately 50 times
larger than the uncertainty in EAR before it begins to make a contribution as large.
The next section of the paper showed that the derivatives may be used to determine the relative
importance of inputs, possibly allowing some to be discarded. Two different methods were
discussed, and each showed similar results to the white noise and lesion analysis approaches. The
two partial derivative analyses clarified the differences between the white noise and lesion approaches
and recommended the lesion method.
The final section discussed a means to measure the fossilized bias error in the output predictions of a
network resulting from variations in the choice of training data. The example chosen was an early
version of the FFNNs developed for the prediction of thrust and torque coefficients for the B-screw
series. The level of variability among the various trial solutions was characterized, and a 95% bias
index for the mean solution was computed and expressed as a percentage of the averaged absolute
value of the predictions.
Although specific examples drawn from FFNNs intended for propeller thrust and torque predictions
were repeatedly used to illustrate the methods, the approaches described are quite general and can be
applied to any FFNN with the architecture described. Modifications to the derivative expressions to
account for architecture differences should be straightforward. Taken together, the methods
demonstrate a unified approach for the characterization of various aspects of uncertainty and for the
pruning of network inputs.
Acknowledgments
This work is supported by the U.S. Office of Naval Research, and the program officer is Dr. Ronald
Joslin, Code 333. Dr. Patrick Purtell, Code 333 has also provided funding for neural network
research. Neural network efforts are also sponsored by the U.S. Office of Naval Research through the
Independent Applied Research program conducted at the Naval Surface Warfare Center, Carderock
Division. The program monitor is Dr. John H. Barkyoumb, Code 0021. The automated feedforward
neural network development code, Intelligent Calculation of Equations (ICE), was used to create the
neural networks used in this study. This code was developed by Applied Simulation Technologies
and is available for free by contacting the company via [email protected].
References
COLEMAN, H. and STEELE W. (1999), Experimentation and Uncertainty Analysis for Engineers,
Second ed., John Wiley and Sons, New York.
FALLER, W.E., (Dec 2005), “Intelligent Calculation of Equations (ICE) User Manual,” Applied
Simulation Technologies, Cocoa Beach, FL.
MOFFAT, R.J. (June 1985), “Using Uncertainty Analysis in the Planning of an Experiment,” J.
Fluids Engineering, Vol. 107, pp. 173-178.
MOFFAT, R.J. (Jan. 1988), “Describing the Uncertainties in Experimental Results,” Experimental
Thermal and Fluid Science, Vol. 1, pp. 3-17.
RODDY, R.F., HESS, D.E. and FALLER, W.E. (May 2006), “Neural Network Predictions of the 4-
Quadrant Wageningen B-Screw Series,” Fifth International Conference on Computer and IT
Applications in the Maritime Industries, Leiden, Netherlands.
Neural Network Predictions of the 4-Quadrant
Wageningen B-Screw Series
Robert F. Roddy, David Taylor Model Basin, NSWC/USA, [email protected]
David E. Hess, David Taylor Model Basin, NSWC /USA, [email protected]
William E. Faller, Applied Simulation Technologies/USA, [email protected]
Abstract
The Manoeuvring and Control Division at the David Taylor Model Basin, Naval Surface Warfare
Center (NSWC) along with Applied Simulation Technologies have been developing and applying
neural networks to problems of naval interest. This paper describes the development of feed forward
neural network (FFNN) predictions of four-quadrant thrust and torque behaviour for the Wageningen
B-Screw Series of propellers. The purpose of the work is twofold: to create a prediction tool that
accurately recovers measured data for those propellers in the series for which measured data is
available, and to further provide reasonable four-quadrant thrust and torque predictions for the
remaining propellers for which no measured data is available. Substantial results, varying each of the
inputs over the full operating range, will be presented which establish that these two goals have been
well attained.
1. Introduction
During preliminary ship design studies an estimate of the performance of the proposed propeller is
required. One of the most used subcavitating open-propeller series is the Wageningen B-Screw Series.
Experimental data on the series was first reported in 1937. As additional propellers were added to the
series, and new techniques were developed, additional reports on the series were issued from 1937 to
1984. The parameters that were varied in this series are: the number of propeller blades (Z); the
expanded area ratio of the propellers (EAR); and, the pitch-diameter ratio of the propellers (P/D). The
B-Series propellers are defined as Bm-nn, where m=Z and nn=EAR*100. Table 1 presents a summary
of the principal geometric characteristics of the propellers contained within the B-Screw Series while
Figure 1 shows the characteristics of the B4-Series of propellers, reproduced from Troost (1951). The
information contained in these reports is sufficient for performing preliminary powering estimates;
however, to conduct ship performance simulations, this information must be supplemented with four-
quadrant thrust and torque performance of the desired propeller. A very good summary of all of the
MARIN propeller series results is presented by Kuiper (1992).
[Table 1: Principal geometric characteristics of the B-Screw Series: the EAR values (between 0.30 and 1.05) available for each blade number Z = 2–7, with P/D ranges of 0.6–1.4 for Z = 2 and for B3-35, and 0.5–1.4 for the remainder.]
Fig. 1: Characteristics of the B4-Series of Propellers
[Fig. 2: Open-water characteristics: KT and η (solid lines) and 10KQ (dashed lines) versus J for P/D = 0.4–1.4.]
open water performance of the optimized propeller; estimating the off-design performance of the
optimized propeller; and, performing design trade-off studies. A sample plot of results using these
coefficients is shown in Figure 2.
Four-quadrant thrust and torque performance is normally presented using the β, CT*, CQ*
nomenclature. The correlation between this nomenclature and the traditional J, KT, and KQ definitions
used with open water data is shown in the equations below. The β, CT*, CQ* nomenclature is used for
4-quadrant data because: 1 - the thrust and torque curves are continuous; 2 - the curves are single
valued; and, 3 - the definitions are more consistent with other associated definitions such as propeller
Reynolds number.
$$J = \frac{V_a}{nD} = 0.7\pi \tan\!\left(\frac{\pi \beta}{180}\right)$$

$$K_T = \frac{T}{\rho n^2 D^4} = \frac{\pi}{8}\, C_T^* \left(J^2 + (0.7\pi)^2\right)$$

$$K_Q = \frac{Q}{\rho n^2 D^5} = \frac{\pi}{8}\, C_Q^* \left(J^2 + (0.7\pi)^2\right)$$

where

$$\beta = \arctan\frac{V_a}{0.7\pi n D} = \arctan\!\left(\frac{J}{0.7\pi}\right)$$

$$C_T^* = \frac{T}{\tfrac{1}{2}\rho \left(V_a^2 + (0.7\pi n D)^2\right) \tfrac{\pi}{4} D^2} = \frac{8 K_T}{\pi \left(J^2 + (0.7\pi)^2\right)}$$

$$C_Q^* = \frac{Q}{\tfrac{1}{2}\rho \left(V_a^2 + (0.7\pi n D)^2\right) \tfrac{\pi}{4} D^3} = \frac{8 K_Q}{\pi \left(J^2 + (0.7\pi)^2\right)}$$
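As a worked illustration of these relations, the following sketch (ours, in Python, not part of the original paper) implements the conversions; the KQ/CQ* pair converts identically, with D^5 and D^3 as reference quantities in place of D^4 and D^2.

```python
import numpy as np

def beta_deg(Va, n, D):
    """Hydrodynamic advance angle beta in degrees; atan2 resolves the
    full four-quadrant range from the signs of Va and n."""
    return np.degrees(np.arctan2(Va, 0.7 * np.pi * n * D)) % 360.0

def J_from_beta(beta):
    """Advance ratio J = 0.7*pi*tan(beta*pi/180); meaningful where the
    open-water J, KT, KQ description applies (|beta| < 90 deg)."""
    return 0.7 * np.pi * np.tan(np.radians(beta))

def CT_star_from_KT(KT, J):
    """CT* = 8 KT / (pi (J^2 + (0.7 pi)^2)); continuous, single-valued."""
    return 8.0 * KT / (np.pi * (J**2 + (0.7 * np.pi)**2))

def KT_from_CT_star(CT_star, J):
    """Inverse relation: KT = (pi/8) CT* (J^2 + (0.7 pi)^2)."""
    return (np.pi / 8.0) * CT_star * (J**2 + (0.7 * np.pi)**2)

# Round trip at J = 0.5, KT = 0.2 recovers the original KT:
J = 0.5
print(KT_from_CT_star(CT_star_from_KT(0.2, J), J))  # 0.2
```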
There is another set of definitions the reader should be aware of; namely, the definitions of the four quadrants differ between the nomenclatures. In the J, KT, and KQ nomenclature the quadrants have traditionally been defined to match the J, KT, and KQ coordinate system, but this is not consistent with the β quadrants. The different definitions are shown in Tables 2 and 3.
Table 2: Quadrant Definitions for the J, KT, and KQ Coordinate System
Quadrant   Mode         β range (deg)
1          Ahead        0 - 90
2          Crashback    90 - 180
3          Backing      180 - 270
4          Crashahead   270 - 360
Since the β nomenclature does not use the J, KT, and KQ nomenclature, it is not bound to the older
quadrant definitions defined by J, KT, and KQ. The quadrant definitions used with the β, CT*, CQ*
nomenclature follows the hydrodynamic angle-of-attack of the propeller blades. The β, CT*, CQ*
nomenclature has more consistency with propeller physics than the older quadrant definition used
with the J, KT, and KQ nomenclature.
MARIN (1984) presents harmonic analysis coefficients that enable the computation of the 4-quadrant thrust and torque performance for a subset of the B-Screw Series using the β, CT*, CQ* nomenclature. Table 4 presents the geometric propeller characteristics of this subset. An examination of Table 4 reveals that this subset of the B-Screw Series contains three parameter sweeps: 1 – a sweep across the range of P/D's for B4-70 propellers; 2 – a sweep across the range of EARs for a series of 4-bladed propellers with P/D=1.00; and, 3 – a sweep across the range of propeller blade numbers for a series of propellers with P/D=1.00 and EAR≈0.7. One sample plot of the results using these coefficients is shown in Figure 3.
Fig. 3: Four-Quadrant CT* and -10CQ* vs. β for the B4-70 Series (P/D = 0.5 to 1.4)
In the past both MARIN and NSWC have made efforts to fit and/or interpolate the data presented by
MARIN,(1984), so that performance estimates could be made across the entire B-Screw Series.
MARIN has reported less than satisfactory results with their efforts and the same is true for the efforts
at NSWC. However, with recent advances made in using feed forward neural networks, a successful
effort was made to train a neural network using a combination of the data obtained from Oosterveld
and Oossanen, (1972) and MARIN,(1984). This section describes the approach used and the results
obtained using FFNN to estimate the 4-quadrant thrust and torque performance of the B-Screw Series.
A FFNN is a computational technique for developing nonlinear equation systems that relate input
variables to output variables. In a feed forward network information travels from input nodes through
internal groupings of nodes (hidden layers) to the output nodes. A FFNN is distinguished from a
recursive neural network (RNN) by the fact that the latter employs feedback; namely, the information
stream issuing from the outputs is redirected to form additional inputs to the network. The additional
complexity of an RNN is required for the solution of difficult time-dependent problems such as the
simulation of the motion of a manoeuvring submarine Faller, et al.(1997), or surface ship Hess &
Faller,(2000).
Feed forward neural networks, on the other hand, are employed for a wide array of uses. Correctly
trained FFNNs offer two primary functions: first, they serve as an efficient means for accurately
recovering an experimental data set long after the experiment has concluded, and second, they have
the ability to predict data that was not measured but is similar to the training data.
The FFNNs used here are fully connected with two hidden layers and use 0 to 1 sigmoid activation
functions trained by back propagation. Each FFNN typically has a single output to maximize
prediction quality; therefore, problems with more than one dependent variable use multiple networks.
The available experimental or numerical data is partitioned into two sets: training data (80%) used to
train the network and adjust the weights via back propagation, and validation data (20%) used along
with the training data to test the performance of the trained network. Prediction quality is judged by
two error measures: the average angle measure (AAM) described below, and a correlation coefficient
(r). For both measures, a numerical value of one indicates perfect agreement between measured data
and predictions, whereas a value of zero denotes no agreement.
The Average Angle Measure was developed by the Maneuvering Certification Action Team at
NSWCCD in 1993-1994, Ammeen (1994). This metric was created in order to quantify (with a single
number) the accuracy of a predicted time series when compared with the actual measured time series.
The measure had to satisfy certain criteria; it had to be symmetric, linear, bounded, have low
sensitivity to noise and agree qualitatively with a visual comparison of the data. The definition is
given in Equation 1 for the jth output variable computed over a set of N points and is described below.
$$\mathrm{AAM}_j = 1 - \frac{4}{\pi}\left[\frac{\sum_{n=1}^{N} D_j(n)\,\alpha_j(n)}{\sum_{n=1}^{N} D_j(n)}\right],$$

$$\alpha_j(n) = \cos^{-1}\!\left[\frac{m_j(n) + p_j(n)}{\sqrt{2}\, D_j(n)}\right], \qquad (1)$$

$$D_j(n) = \sqrt{m_j^2(n) + p_j^2(n)},$$

where $m_j(n)$ is the measured value and $p_j(n)$ the predicted value at point n.
Given a predicted value, p, and an experimentally measured value, s, one can plot a point in p-s space
as shown in Fig. 4.
If the prediction is perfect, then the point will fall on a 45° line extended from the origin; the distance
from the origin will depend upon the magnitude of s. If p ≠ s , the point will fall on one side or the
other of the 45° line. If one extends a line from the origin such that it passes through this point, one
can consider the angle between this new line and the 45° line, measured from the 45° line. This angle
is a measure of the error of the prediction. To extend this error metric to a set of N points, one
computes the average angle of the set. A problem arises, however. When s is small and p is relatively
close to s, one may still obtain a comparatively large angle. On the other hand, when s is large and p is
relatively far from s, one may obtain a relatively small angle. To correct this, the averaging process is
weighted by the distance of each point from the origin. The statistic is then normalized to give a value
between –1 and 1. A value of 1 corresponds to perfect magnitude and phase correlation, -1 implies
perfect magnitude correlation but 180° out of phase and zero indicates no magnitude or phase
correlation. This metric is not perfect; it gives a questionable response for manoeuvres with flat
responses, predictions with small constant offsets and small magnitude signals. Nevertheless, it is in
most cases an excellent quantitative measure of agreement.
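A compact sketch of the AAM computation (our illustration of Eq. (1), not the NSWC code; the demonstration signals are arbitrary):

```python
import numpy as np

def average_angle_measure(m, p):
    """Average Angle Measure per Eq. (1): the angular error of each
    point (m, p) relative to the 45-degree line is averaged, weighted
    by the distance D of the point from the origin."""
    m = np.asarray(m, dtype=float)
    p = np.asarray(p, dtype=float)
    D = np.sqrt(m**2 + p**2)
    # cos(alpha) = (m + p) / (sqrt(2) D); clipped to guard round-off
    alpha = np.arccos(np.clip((m + p) / (np.sqrt(2) * D), -1.0, 1.0))
    return 1.0 - (4.0 / np.pi) * np.sum(D * alpha) / np.sum(D)

t = np.linspace(0.0, 2.0 * np.pi, 200)
m = 1.5 + 0.5 * np.sin(t)                  # measured series (kept positive)
print(average_angle_measure(m, m))         # perfect prediction -> 1.0
print(average_angle_measure(m, m + 0.1))   # small offset -> slightly below 1
```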
The FFNN is developed using executable code that automates the process of solving nonlinear
equation systems. The code, “Intelligent Calculation of Equations” (ICE), was developed by Applied
Simulation Technologies and is available without charge. The ICE routines work on two user defined
input data sets. One data set is comprised of independent variables (inputs) and dependent variables
(outputs). There can be as many as 10,000 input data points and the only restriction is that the data be
in ASCII columnar format. The second input data set specifies the options the user desires for the
training. These options make ICE a powerful and versatile tool.
2.1 Methodology
The first step in any neural network training is to prepare the input data. In the normal open-water
curve range, coefficients from Oosterveld and Oossanen, (1972), were used to create thrust and torque
data spaced at one degree β increments for all of the B-Series except the two-bladed propellers. Over
the entire four-quadrant range, coefficients from MARIN,(1984), were used to create this data, again at
one degree β increments. MARIN has stated that the open water results obtained with the coefficients
in Oosterveld and Oossanen, (1972), are substantially better than the results obtained in the same area
with the coefficients in MARIN,(1984). After discussions between MARIN and NSWC, it was agreed
that for successful results it would be necessary to use Oosterveld and Oossanen, (1972), to produce
open water results for the entire B-Screw Series. Then, for the propellers that have 4-quadrant data,
smoothly blend these results to match the open water results. An example of this blending for one of
the propellers in MARIN,(1984), is shown in Figure 5. In this figure the open-water curve data is
shown as solid lines and the 4-quadrant data is shown as small circles. The blended data is shown at
each end of the open-water data and is shown as small dashes. It can be seen that this blending
process smoothly blends the two data sets.

Fig. 5: Open Water Curve Blending Example (CT* and 10CQ* vs. β)
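The paper does not spell out the blending function itself; one simple possibility, shown purely as an illustrative sketch under our own assumptions, is a cosine taper between the open-water curve and the 4-quadrant curve over the overlap interval:

```python
import numpy as np

def cosine_blend(beta, f_ow, f_4q, b0, b1):
    """Blend from the open-water values f_ow (weight 1 at beta = b0)
    to the 4-quadrant values f_4q (weight 1 at beta = b1), with zero
    slope of the weight at both ends for a smooth transition."""
    w = np.clip((beta - b0) / (b1 - b0), 0.0, 1.0)
    s = 0.5 * (1.0 + np.cos(np.pi * w))   # 1 at b0 -> 0 at b1
    return s * f_ow + (1.0 - s) * f_4q

beta = np.arange(-30.0, 41.0, 1.0)   # one-degree increments, as in the paper
ct_ow = 0.40 - 0.010 * beta          # placeholder open-water CT* curve
ct_4q = 0.35 - 0.009 * beta          # placeholder 4-quadrant CT* curve
ct_blended = cosine_blend(beta, ct_ow, ct_4q, 25.0, 40.0)
```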
Fig. 6: Sample CT* Training Results over Entire β Range with SIN and COS Terms
The results shown in Figure 6 were promising enough to investigate whether the FFNN coefficients
would predict the rest of the B-Series well. The data trends indicated that data interpolation worked
well, but the predicted data in the extrapolated region were only marginally acceptable. Therefore, the
possibility of breaking the data into subsets was investigated. The data was subdivided into four
regions where each region was centered around either a bollard condition or a zero rpm condition. The
initial regions are defined in Table 5. Training runs were made with these four regions, and the results
were much more promising as can be seen in Figure 7.
Fig. 7: Sample CT* of Initial Training Results with the β Range Subdivided into Four Regions
The remainder of the ICE training runs were performed: 1 - with the data divided into four regions; and, 2 - with the results from Oosterveld and Oossanen, (1972), and MARIN, (1984), combined. For each of these regions, analyses were made to determine the detailed region boundaries and the ICE options that yielded the best results. Two key changes were made in the final ICE training runs: 1 - the overlap between the regions was increased; and, 2 - the boundaries at which data would be output were set at specific angles. These final regions are shown in Table 6 and Figure 8.
Fig. 8: Ranges for ICE Training (CT* and -10CQ* vs. β, with the overlapping Regions A to D indicated)
For all of the ICE training the standard ICE learning algorithm was used with ICE determining the
neural network (NN) architecture and adaptively removing unnecessary inputs. ICE was also specified
to determine a solution with a moderate amount of extrapolation. In the better-behaved Regions 'A' and 'C', ICE was specified to produce the single "best" solution, while in Regions 'B' and 'D' ICE was specified to produce 20 solutions and average these multiple solutions for a "final" solution.
For all the ICE training the error measures were typically: AAM > 0.99 and r > 0.99, which
correspond to excellent predictions.
The penalty incurred with using four FFNN’s is that there are, inevitably, small discontinuities at the
boundaries when moving from one FFNN prediction to another. Although these breaks are typically
small, two “matching polynomial” procedures were developed to smoothly fit a polynomial from one
prediction to the other, thereby ensuring continuity. One procedure matches the slopes and the other
matches both slope and curvature. The detailed derivation of these procedures is discussed in the
Appendix. For the four-quadrant predictions the first method was used as the default with an option to
use the second derivative method.
3. Discussion of Results
The results show excellent agreement with the existing data and provide a good means for estimating
4-quadrant performance for the entire B-Screw Series. There are reasonable trends when the results
are plotted as a family of propeller performance curves with the different members of the family
varying P/D, or EAR, or Z. Even the results near the edges of the box defining the input data (Table 1)
look reasonable but there is increased uncertainty in these results. Figure 9 shows how well the FFNN
recovers the existing data while Figures 10 and 11 show the predicted trends for EAR sweeps for a 3
and 6 bladed propeller series including the existing measured data for the only propeller in each
series. These plots are two excellent examples of how well the FFNNs can predict the performance
propellers that have not been tested. Examples of predicted trends using the FFNN are presented in
Figures 12 through 14. The plots in Figure 12 show P/D variations from 0.4 to 1.4 for three different
5-bladed propeller series with EARs = 0.40, 0.65, and 1.00. The plots in Figure 13 show blade number
variations from 3 to 7 for three different propeller series with P/D = 1.0 and EARs = 0.40, 0.65, and
1.00. Finally, the plots in Figure 14 show EAR variations from 0.4 to 1.0 for three different 5-bladed
propeller series with P/Ds = 0.6, 1.0, and 1.4. These three sets of plots show the consistency in the
FFNN predictions, and showcase the ability of the networks to make reasonable predictions for
propellers that have not been tested.
During the training there were two distinctly different solutions needed in different regions of the
data. In the two regions around the bollard conditions, Regions ‘A’ and ‘C’, the best training resulted
from standard ICE training runs with a single “best” solution. However, in the two regions around the
zero rpm conditions, Regions ‘B’ and ‘D’, the best training resulted from ICE training runs with 20
averaged solutions. The 20 independent solutions are obtained by varying which data points belong to
the training and validation sets. After the 20 solutions are obtained, they are averaged to provide a
single best solution.
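A minimal sketch of this multiple-solution strategy; the network library, architecture and sizes below are our assumptions standing in for ICE, whose internals are not published here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def averaged_solution(X, y, X_query, n_solutions=20, seed=0):
    """Train n_solutions networks on different 80/20 train/validation
    splits and average their predictions (the Region 'B'/'D' strategy;
    validation-based model checking is omitted for brevity)."""
    rng = np.random.RandomState(seed)
    predictions = []
    for _ in range(n_solutions):
        # Vary which data points belong to the training and validation sets
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=0.2, random_state=rng.randint(0, 2**31 - 1))
        # Fully connected net, two hidden layers, 0-1 sigmoid activations
        net = MLPRegressor(hidden_layer_sizes=(20, 20),
                           activation='logistic', max_iter=3000)
        net.fit(X_tr, y_tr)
        predictions.append(net.predict(X_query))
    return np.mean(predictions, axis=0)
```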
Fig. 9: Four Quadrant Prediction for B4-70 Series Showing Comparison with Measured Data;
Symbols = Measured Data, Solid Lines = Predictions
Fig. 10: Four Quadrant Prediction for B3-EAR (P/D=1.0) Series Showing Comparison with
Measured Data; Symbols = Measured Data, Solid Lines = Predictions
Fig. 11: Four Quadrant Prediction for B6-EAR (P/D=1.0) Series Showing Comparison with
Measured Data; Symbols = Measured Data, Solid Lines = Predictions
Fig. 12: 4-Quadrant Predictions for a B5-40, B5-65, and B5-100 Series
Fig. 13: 4-Quadrant Predictions for a BZ-40, BZ-65, and BZ-100 (P/D = 1.0) Series, Z = 3 to 7
Fig. 14: 4-Quadrant Predictions for B5-EAR (P/D=0.6, 1.0, 1.4) Series
4. Utilization of the FFNNs for the B-Screw Series
A computer program was written to use the four FFNN’s and the matching polynomial algorithm to
produce 4-quadrant predictions of a B-Series propeller, or family of propellers. Included with this
paper is software with three folders. The folder named “Bseries4Q” contains the executable program
and data files, written as part of this project, to predict the thrust and torque performance of propellers
within the range of the B-Screw Series. The folder named "4Q Samples" contains several Microsoft Excel files that can be used as templates for plotting the output. The folder named "Extra" contains
some programs that use the coefficients from Oosterveld and Oossanen, (1972), and coefficients of a
subset of the Hamilton Standard Air Screws, Hamilton Standard, (1963), to perform propeller
optimizations and off-design computations for several propeller types. Included in this folder are files
describing the input and output variables.
The results show excellent agreement with the existing data and provide a good means for estimating
4-quadrant performance for the entire B-Screw Series. Examination of the results show how well the
FFNNs can predict the performance for propellers that have not been tested. For the B-Screw Series
there are reasonable trends when the results are plotted as families of propeller performance curves
with the different members of the family varying P/D, or EAR, or Z. The programs, and files,
included herein allow for the easy determination of any propeller within the data sets.
Acknowledgements
The U.S. Office of Naval Research supports this work, and the program officer is Dr. Ronald Joslin,
Code 333. Dr. Patrick Purtell, Code 333 has also provided funding for neural network research.
Neural network efforts are also sponsored by the U.S. Office of Naval Research through the
Independent Applied Research program conducted at the Naval Surface Warfare Center, Carderock
Division. The program monitor is Dr. John H. Barkyoumb, Code 0021. The automated feedforward
neural network development code, Intelligent Calculation of Equations (ICE), was used to create the
neural networks used in this study. This code was developed by Applied Simulation Technologies and
is available for free by contacting the company via [email protected].
References
AMMEEN, E.S. (1994), Evaluation of Correlation Measures, Naval Surface Warfare Center Report
CRDKNSWC-HD-0406-01
FALLER, W.E., SMITH, W.E., AND HUANG, T.T. (1997), Applied Dynamic System Modeling: Six
Degree-Of-Freedom Simulation Of Forced Unsteady Maneuvers Using Recursive Neural Networks,
35th AIAA Aerospace Sciences Meeting, Paper 97-0336, pp. 1-46.
HAMILTON STANDARD (1963), Generalized Method of Propeller Performance Estimation,
Hamilton Standard Report PDB 6101, Revision A, 1963
HESS, D.E., FALLER, W.E. (2000), Simulation of Ship Maneuvers Using Recursive Neural
Networks, 23rd Symposium on Naval Hydrodynamics, Val de Reuil, France, September 17-22.
KUIPER, G (1992), The Wageningen Propeller Series, MARIN Publication No. 92-001
MARIN (1984), Vier-Kwadrant Vrijvarende-Schroef-Karakteristieken voor B-Serie Schroeven. Fourier-Reeks Ontwikkeling en Operationeel Gebruik, MARIN Report 60482-1-MS [Limited Availability].
OOSTERVELD, M.W.C., OOSSANEN, P. VAN (1972), Recent Developments in Marine Propeller
Hydrodynamics, International Jubilee Meeting 40th Anniversary of the Netherlands Ship Model Basin,
NSMB Publication No. 433
TROOST, L.(1951), Open Water Test Series with Modern Propeller Forms, Part 3: Two and Five
Bladed Propellers, Transactions North East Coast Institution of Engineers and Shipbuilders, Vol. 67.
Appendix

As described previously, separate feedforward neural networks (FFNN) were implemented in four
overlapping regions of the range of the advance angle, β, in order to maximize prediction quality. The
penalty, as already mentioned, is that there are discontinuities at the boundaries between the FFNN
predictions. An example is illustrated in Figure 15. Although these breaks are typically small, two
procedures were developed to smoothly fit a polynomial from one prediction to the other, thereby
ensuring continuity. One procedure matches the slopes and the other matches both slope and
curvature.
Fig. 15: Plot of Thrust Coefficient vs. Beta Showing Mismatched Predictions at One of the
Boundaries
Each FFNN prediction is a thrust or torque coefficient function of the form $C_T = C_T(P/D, EAR, Z, \beta)$ or $C_Q = C_Q(P/D, EAR, Z, \beta)$, where $P/D$ is the pitch-to-diameter ratio, $EAR$ is the expanded area ratio, $Z$ is the number of blades and $\beta$ is the advance angle. A given curve as shown in Figure 15 represents the variation $C_T = C_T(\beta)$ for fixed values of $P/D$, $EAR$ and $Z$. To simplify the notation, the FFNN prediction to the left of the boundary will be referred to as $g(x)$, the one on the right as $h(x)$, and the matching polynomial as $f(x)$.
Matching Slopes
The simpler procedure begins by defining a closed interval $[a, b]$ containing the boundary, as shown in Figure 15. This interval defines the domain of the matching polynomial. The polynomial will be required to match the function values, $g(a)$ and $h(b)$, as well as the slopes, $g'(a)$ and $h'(b)$, at the endpoints of the interval. To find a unique polynomial that will match these four conditions requires that it have four adjustable coefficients and be of the form

$$f(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3. \qquad (2)$$
The matching conditions then become

$$f(a) = c_0 + c_1 a + c_2 a^2 + c_3 a^3 = g(a) = A$$
$$f(b) = c_0 + c_1 b + c_2 b^2 + c_3 b^3 = h(b) = B \qquad (3)$$
$$f'(a) = c_1 + 2 c_2 a + 3 c_3 a^2 = g'(a) = C$$
$$f'(b) = c_1 + 2 c_2 b + 3 c_3 b^2 = h'(b) = D$$

where $g(a) = A$, $h(b) = B$, $g'(a) = C$ and $h'(b) = D$ are employed to ease the notation. Therefore, to uniquely determine the matching polynomial, one must provide six numbers: a, b, A, B, C and D.
To determine $g'(a)$ and $h'(b)$, backward and forward differences are used:

$$g'(a) = \frac{g(a) - g(a - \Delta x)}{\Delta x} \qquad\text{and}\qquad h'(b) = \frac{h(b + \Delta x) - h(b)}{\Delta x}. \qquad (4)$$
The determination of the coefficients of the matching polynomial requires the solution of the set of
simultaneous equations represented by Eqs.3. Written in matrix form, they are
$$\begin{bmatrix} 1 & a & a^2 & a^3 \\ 1 & b & b^2 & b^3 \\ 0 & 1 & 2a & 3a^2 \\ 0 & 1 & 2b & 3b^2 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} A \\ B \\ C \\ D \end{bmatrix}. \qquad (5)$$
A unique solution will exist as long as the coefficient matrix is not singular; that is, as long as the
determinant is nonzero. The determinant of the coefficient matrix is computed to be

$$\begin{vmatrix} 1 & a & a^2 & a^3 \\ 1 & b & b^2 & b^3 \\ 0 & 1 & 2a & 3a^2 \\ 0 & 1 & 2b & 3b^2 \end{vmatrix} = -(a - b)^4. \qquad (6)$$
Thus, a unique solution will exist as long as the endpoints of the interval do not coincide. The solution
is most effectively determined by using a simultaneous equations solver. However, by inverting the
matrix and carrying out the indicated multiplication
$$\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} 1 & a & a^2 & a^3 \\ 1 & b & b^2 & b^3 \\ 0 & 1 & 2a & 3a^2 \\ 0 & 1 & 2b & 3b^2 \end{bmatrix}^{-1} \begin{bmatrix} A \\ B \\ C \\ D \end{bmatrix}, \qquad (7)$$
the coefficients can then be explicitly determined in terms of the six provided numbers: a, b, A, B, C
and D given below.
$$c_0 = \left[a^3 (B - bD) - b^3 A - a^2 b (bC - bD + 3B) + a b^2 (bC + 3A)\right] / (a - b)^3$$
$$c_1 = \left[a^3 D + a^2 b (2C + D) - a b (bC + 2bD + 6A - 6B) - b^3 C\right] / (a - b)^3 \qquad (8)$$
$$c_2 = \left[-a^2 (C + 2D) - a b (C - D) + b^2 (2C + D) + (3a + 3b)(A - B)\right] / (a - b)^3$$
$$c_3 = \left[(a - b)(C + D) - 2(A - B)\right] / (a - b)^3$$
Having the coefficients explicitly determined in this manner is convenient to code in a subroutine, but
the implementation requires greater precision than solving for them indirectly using a simultaneous
equations solver. Nevertheless, using double precision, the computation is straightforward and the
result is given in Figure 16.
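As an alternative to coding the closed-form coefficients of Eq. (8), the 4x4 system of Eq. (5) can be solved numerically; the sketch below (ours, not the authors' subroutine) does this with a linear solver and the one-sided differences of Eq. (4):

```python
import numpy as np

def matching_cubic(a, b, A, B, C, D):
    """Coefficients c0..c3 of f(x) = c0 + c1 x + c2 x^2 + c3 x^3 that
    match values (A, B) and slopes (C, D) at x = a and x = b (Eq. 5)."""
    M = np.array([[1.0,   a, a**2,   a**3],
                  [1.0,   b, b**2,   b**3],
                  [0.0, 1.0,  2*a, 3*a**2],
                  [0.0, 1.0,  2*b, 3*b**2]])
    return np.linalg.solve(M, np.array([A, B, C, D], dtype=float))

# Illustration with placeholder predictions g (left) and h (right):
g = lambda x: np.sin(x)            # stands in for the left-hand FFNN
h = lambda x: np.sin(x) + 0.05     # right-hand FFNN with a small break
a, b, dx = 1.0, 1.2, 1e-3
C = (g(a) - g(a - dx)) / dx        # backward difference, Eq. (4)
D = (h(b + dx) - h(b)) / dx        # forward difference, Eq. (4)
c = matching_cubic(a, b, g(a), h(b), C, D)
f = lambda x: c[0] + c[1]*x + c[2]*x**2 + c[3]*x**3   # bridges [a, b]
```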
Matching Curvatures
A more visually appealing solution results when one requires that the matching polynomial satisfy not
only function values and slopes at the endpoints of the interval, but also second derivatives
(curvatures) as well. Specifically, the polynomial will be required to match the function values, g (a)
and h(b) , the slopes, g ′(a) and h′(b) , and the curvatures, g ′′(a) and h′′(b) , at the endpoints of the
interval. To find a unique polynomial that will match these six conditions requires that it have six
adjustable coefficients and be of the form
$$f(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + c_5 x^5. \qquad (9)$$
The matching conditions then become
$$f(a) = c_0 + c_1 a + c_2 a^2 + c_3 a^3 + c_4 a^4 + c_5 a^5 = g(a) = A$$
$$f(b) = c_0 + c_1 b + c_2 b^2 + c_3 b^3 + c_4 b^4 + c_5 b^5 = h(b) = B$$
$$f'(a) = c_1 + 2 c_2 a + 3 c_3 a^2 + 4 c_4 a^3 + 5 c_5 a^4 = g'(a) = C \qquad (10)$$
$$f'(b) = c_1 + 2 c_2 b + 3 c_3 b^2 + 4 c_4 b^3 + 5 c_5 b^4 = h'(b) = D$$
$$f''(a) = 2 c_2 + 6 c_3 a + 12 c_4 a^2 + 20 c_5 a^3 = g''(a) = E$$
$$f''(b) = 2 c_2 + 6 c_3 b + 12 c_4 b^2 + 20 c_5 b^3 = h''(b) = F$$

where $g(a) = A$, $h(b) = B$, $g'(a) = C$, $h'(b) = D$, $g''(a) = E$ and $h''(b) = F$ are employed to ease the notation. Therefore, to uniquely determine the matching polynomial, the user must provide eight numbers: a, b, A, B, C, D, E and F.
To determine $g'(a)$, $h'(b)$, $g''(a)$ and $h''(b)$, backward and forward differences are used:

$$g'(a) = \frac{g(a) - g(a - \Delta x)}{\Delta x}, \qquad h'(b) = \frac{h(b + \Delta x) - h(b)}{\Delta x}, \qquad (11)$$
$$g''(a) = \frac{g(a) - 2 g(a - \Delta x) + g(a - 2\Delta x)}{(\Delta x)^2}, \qquad h''(b) = \frac{h(b + 2\Delta x) - 2 h(b + \Delta x) + h(b)}{(\Delta x)^2}.$$
The simultaneous equations are represented in matrix form as
$$\begin{bmatrix} 1 & a & a^2 & a^3 & a^4 & a^5 \\ 1 & b & b^2 & b^3 & b^4 & b^5 \\ 0 & 1 & 2a & 3a^2 & 4a^3 & 5a^4 \\ 0 & 1 & 2b & 3b^2 & 4b^3 & 5b^4 \\ 0 & 0 & 2 & 6a & 12a^2 & 20a^3 \\ 0 & 0 & 2 & 6b & 12b^2 & 20b^3 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{bmatrix} = \begin{bmatrix} A \\ B \\ C \\ D \\ E \\ F \end{bmatrix}. \qquad (12)$$
Again, one sees that a unique solution will exist as long as the endpoints of the interval do not
coincide. Inverting the matrix and carrying out the indicated multiplication
$$\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{bmatrix} = \begin{bmatrix} 1 & a & a^2 & a^3 & a^4 & a^5 \\ 1 & b & b^2 & b^3 & b^4 & b^5 \\ 0 & 1 & 2a & 3a^2 & 4a^3 & 5a^4 \\ 0 & 1 & 2b & 3b^2 & 4b^3 & 5b^4 \\ 0 & 0 & 2 & 6a & 12a^2 & 20a^3 \\ 0 & 0 & 2 & 6b & 12b^2 & 20b^3 \end{bmatrix}^{-1} \begin{bmatrix} A \\ B \\ C \\ D \\ E \\ F \end{bmatrix}, \qquad (14)$$
allows the coefficients to be determined explicitly in terms of the eight provided numbers: a, b, A, B,
C, D, E and F given below.
$$c_0 = \big[a^5(b^2 F - 2bD + 2B) - a^4 b(b^2 E + 2b^2 F - 10bD + 10B) + a^3 b^2(2b^2 E + b^2 F + 8bC - 8bD + 20B) - a^2 b^4(bE + 10C) + 2ab^5 C - 2b^3 A(10a^2 - 5ab + b^2)\big] / \big[2(a - b)^5\big]$$
$$c_1 = \big[60a^2 b^2 A + 2a^5(D - bF) - 2b^5 C - a^4 b(10D - 3bE - bF) - 4a^3 b^2(bE - bF + 6C + 4D) - a^2 b^2(b^2 E + 3b^2 F - 16bC - 24bD + 60B) + 2ab^4(bE + 5C)\big] / \big[2(a - b)^5\big]$$
$$c_2 = \big[-60abA(a + b) + a^5 F + a^4 b(4F - 3E) + 4a^3 b(6C + 9D - 2bF) + 4a^2 b(2b^2 E + 3bC - 3bD + 15B) - ab^2(4b^2 E - 3b^2 F + 36bC + 24bD - 60B) - b^5 E\big] / \big[2(a - b)^5\big] \qquad (15)$$
$$c_3 = \big[20A(a^2 + 4ab + b^2) + a^4(E - 3F) + 4a^3(bE - 2C - 3D) - 4a^2(2b^2 E - 2b^2 F + 8bC + 7bD + 5B) - 4ab(b^2 F - 7bC - 8bD + 20B) + b^4(3E - F) + 4b^3(3C + 2D) - 20b^2 B\big] / \big[2(a - b)^5\big]$$
$$c_4 = \big[-30A(a + b) + a^3(3F - 2E) + a^2(bE - 4bF + 14C + 16D) + a(4b^2 E - b^2 F + 2bC - 2bD + 30B) - b(3b^2 E - 2b^2 F + 16bC + 14bD - 30B)\big] / \big[2(a - b)^5\big]$$
$$c_5 = \big[a^2(E - F) - 2a(bE - bF + 3C + 3D) + b^2(E - F) + 6b(C + D) + 12(A - B)\big] / \big[2(a - b)^5\big]$$
Again, using double precision, the computation is easily performed and the result is given below in
Figure 16.
Fig. 16: Plot of Thrust Coefficient vs. Beta Showing Matching Polynomials Computed Using Both
Methods
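The curvature-matching variant is obtained the same way by solving the 6x6 system of Eq. (12); again a sketch of ours rather than the authors' implementation:

```python
import numpy as np

def matching_quintic(a, b, A, B, C, D, E, F):
    """Coefficients c0..c5 of the quintic matching values (A, B),
    slopes (C, D) and curvatures (E, F) at x = a and x = b (Eq. 12)."""
    M = np.array([[1, a, a**2, a**3,    a**4,    a**5],
                  [1, b, b**2, b**3,    b**4,    b**5],
                  [0, 1, 2*a,  3*a**2,  4*a**3,  5*a**4],
                  [0, 1, 2*b,  3*b**2,  4*b**3,  5*b**4],
                  [0, 0, 2,    6*a,    12*a**2, 20*a**3],
                  [0, 0, 2,    6*b,    12*b**2, 20*b**3]], dtype=float)
    return np.linalg.solve(M, np.array([A, B, C, D, E, F], dtype=float))
```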
An Analytical Cost Assessment Module for the Detailed Design Stage1
Jean-David Caprace, Philippe Rigo, Renaud Warnotte,
ANAST, University of Liege, Liège/Belgium, [email protected], [email protected]
Sandrine Le Viol, Chantiers de l’Atlantique, Saint-Nazaire/France
Abstract
The main goal of the project is to implement a “real time” and automatic cost assessment model of
the ship hull construction, which integrates the design criteria and production parameters. The
presented method for short-term cost assessment promises to increase the productivity.
Nowadays, cost assessment is a key task of integrated ship design. The various methods to estimate the production cost differ in the required information (input data). The less information is needed, the earlier the method can be used in the design process; the more information is used, the better we can assess the differences between design alternatives. This means:
− at the basic design stage: validate the budget and give a reliable bidding price,
− at the detailed design stage: plan the deadlines and establish the production schedule,
− at the production scheduling stage: distribute the workload between the various production workshops and assess the productivity.
A first prototype of the cost assessment module has reached the validation stage at the Alstom Marine Saint-Nazaire shipyard (Chantiers de l’Atlantique) within the framework of the European project InterSHIP, which has partly financed the study. In the future, cost assessment will become increasingly important. It is proposed to assist the designers by a “real time” follow-up of the cost, from the earliest conceptual design stage up to the latest detailed design stage. The development of such a cost assessment tool requires considering simultaneously the design criteria and the production parameters. Designers will consequently be able to choose the least expensive options at each step of the design procedure (earlier is of course better).
1. Introduction
To avoid delocalization and to remain profitable, European shipyards decided to devote themselves only to ships with high added value, like passenger ships, or with high technology, like LNG carriers. To keep and increase their world market share in shipbuilding against Asian competitors, the European shipyards are obliged to increase their competitiveness significantly. Moreover, the nature of the European shipbuilding market prevents large production series; each ship is unique, and the installation of fully automatic processes remains complex.
Even if significant efforts have already been made by the shipbuilding industry to reduce the costs of each individual stage of ship construction, the European objectives in this domain must still be achieved: reduction of the design and manufacturing costs (25 to 30%) as well as of the production times (20 to 30%). Another important research field concerns the development of the best product by using multi-purpose optimization integrating design, quality, safety, environment and efficiency.
1 This paper results from part of the work performed in sub-project II.1 of InterSHIP, a European R&D project funded under the European Commission's Sixth Framework Programme for Research and Technological Development (Project n° TIP3-CT-2004-506127).
Since the main part of the construction cost relates to the production and since the producibility of a
ship is basically defined at the design stage, the most promising track of cost savings is to assess the production cost as soon as the construction options are fixed ("Design for production/Design to
cost").
The ability to assess ship construction costs is necessary for the commercial success of a shipyard:
- overestimating the cost will place the shipyard out of the competitive range, and
- underestimating the cost will result in a financial loss and possible bankruptcy.
To answer this need, we implemented a “real time” and automatic cost assessment model of the ship hull construction, which integrates the design criteria and the production parameters.
Indeed, progress in these fields has led to improved quality and accuracy, two essential preliminaries for an effective production. Today, shipyards use various software packages dedicated to this purpose, such as TRIBON, NAPA, CATIA and FORAN. These integrate not only tools for drawing smooth hull forms, structural details and piping, but also tools for structural analysis, stability calculation, resistance, propulsion and seakeeping. Moreover, such software is nowadays able to transfer the nomenclature parts to various production robot interfaces. Today, CATIA is even able, with DELMIA, to simulate the production virtually in order to increase the efficiency and the workshop productivity [2].
The objective of the present research consists in using nomenclature parts coming from CAD/CAM
software for cost assessment at the "detailed design" stage.
In order to compensate the cost increases or quality decreases due to the loss of flexibility for scantling modifications during the ship design (see Fig. 1), shipbuilding tries to apply the concurrent engineering concept rather than sequential engineering (see Fig. 1). The decisions of each stage are made by considering the constraints imposed by the other stages of the ship life cycle. Problems that were formerly only checked at the end of the project are now included in the design stage to reach a better solution. Each department no longer waits until the preceding one has finished, but has to consider that a decision can occur in the course of the project [8].
As illustrated in Fig. 2, one of the effects of concurrent engineering is to move the information curve upstream, because the effectiveness and the quality of the information on the ship are improved from the first stage of the project. This aspect is particularly strategic, as the design process has a cost varying from 5% to 15% of the total cost, and moreover decisions taken during this initial stage determine about 60 to 95% of the total cost [9].
The methodology developed within the present research framework will increase the accuracy of the knowledge relating to the ship by predicting the data needed to assess the cost before the full CAD/CAM model has been completed. Thus, the designer will have more information earlier and will be able to make the best decisions from the design stage on. The first errors of a project, which are the most expensive ones, could thus be avoided.
Fig. 1: Sequential versus concurrent engineering (needs definition, product design, checking, prototype test, modifications, production; in concurrent engineering, producibility, service, cost, performance and quality are considered from the design stage onwards)
The challenge consists in creating an assistance module that guides the designer during ship design. Moreover, this prototype is optimized for production, especially during the first design stages. Indeed, it is at this stage that most profits can be made, by avoiding errors in the design process that would otherwise be discovered only later. In fact, the modification costs grow exponentially during the progress of the project. In addition, once a certain stage of project maturity is reached, it is impossible to carry out modifications while keeping the same quality level.
(Figure: design freedom and flexibility decreasing as the project progresses)
“The cost assessment approach” presented in this paper complies with the following requirements.
2.1 Top-Down or Bottom-Up system
The basic idea of the project is to implement a real time and automatic cost assessment method of the
ship hull production that integrates all the design criteria and the production parameters.
The methods for estimating production cost are classified into [3]:
- Top-Down (macro, cost-down or historical) approaches (empirical, statistical and close-form
equations, ...), [4][5][6]
- Bottom-Up (micro, cost-up or engineering analysis) approaches (direct rational assessment) [7]
Despite their popularity and frequent references in the literature, top-down approaches have serious disadvantages, which are often overlooked or concealed:
- The approach uses only global information and is therefore not suitable to reflect local structural changes, nor to improve the producibility of structural details/parts.
- The approach is usually based on weight. Any changes that increase the weight will
automatically increase the estimated cost regardless of the real effect on cost. Extreme
lightweight designs may drastically increase the number of required hours, while large frame
spacing may increase weight, but decreases necessary man-hours. This is often not reflected in
such formulae!
- The approach is based on historical data, i.e. historical designs and historical production
methods. In view of revolutionary changes in production technology over the last decade, the
data and formulae may sometimes be 'prehistoric'. They do not consider the impacts of new
approaches in structural design and production technology.
- The approaches were often based on inaccurate data even at the time they were derived. Shipyards were traditionally poor sources of cost information, and the data were frequently skewed, reflecting pressures from first-line managers and other factors.
- The approach is not suitable for structural optimization, as there is no link between the cost and the design variables (scantlings).
We therefore chose a “bottom-up approach”. Indeed, the goal is to control the short-term cost and then to identify the main key factors of the cost. Subsequently, we generate the missing data to assess these factors, starting from the matured elements available at the basic design stage up to the final detailed engineering stage.
Our idea is to build a methodology allowing analytical cost assessment at the "detailed design" stage. The use of a cost module based on an analytical method is essential because a statistical analysis of the available data appeared questionable.
COST
→ SECTOR (e.g. Premanufacturing)
→ WORKSHOP (e.g. PPR)
→ STAGE (e.g. Welding robot)
→ OPERATION (NATURES) (e.g. Flat welding)

Table 1: List of components of the cost hierarchy
SHIP → BLOCK → PANEL → AS3 → AS4 → ITEMS
Fig.4: Tree structure of the ship
LabourCost = Q × Uc × S × Ac × Wc (1)
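To illustrate how Eq. (1) can be evaluated bottom-up over the tree of Fig. 4, the following sketch (ours) lumps S, Ac and Wc into a single multiplicative coefficient; all names and numbers are illustrative, not the shipyard's actual cost model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node of the ship tree (SHIP, BLOCK, PANEL, AS3, AS4, ITEMS)."""
    name: str
    quantity: float = 0.0   # Q: e.g. weld length or number of entities
    unit_cost: float = 0.0  # Uc: unitary cost of the operation
    coeff: float = 1.0      # stands in for the product S * Ac * Wc
    children: List["Node"] = field(default_factory=list)

def labour_cost(node: Node) -> float:
    """Eq. (1) at each node, summed bottom-up, so that the cost of any
    subassembly, from the smallest up to the whole ship, is available."""
    own = node.quantity * node.unit_cost * node.coeff
    return own + sum(labour_cost(child) for child in node.children)

panel = Node("PANEL", children=[
    Node("flat weld items", quantity=12.0, unit_cost=0.15, coeff=1.1),
    Node("profile items", quantity=3.0, unit_cost=0.40),
])
print(labour_cost(panel))   # 12*0.15*1.1 + 3*0.40 = 3.18
```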
3. Description of the cost assessment approach
The goal of this work is to establish a methodology and a tool to solve a very common problem in all
shipyards: the cost assessment problem.
(Figure: cost data base and cost-processing rules engine)
3.2 Three steps for cost assessment processing
(Figures: the sector / workshop / stage / operation levels of the cost hierarchy, and the module's user interface with numbered callouts referenced in the text below)
Nevertheless, the user can choose between two main options. The first one (4) determines whether the displayed cost takes into account the predicted efficiencies stored in the databases, in other words, whether the indicated time (TI) or the predicted time (TP) is displayed (the difference between TI and TP is the workshop efficiency coefficient). The second option (4) simply selects between displaying the sum of all the cost natures (by default) or only the cost natures selected by the user. Note that it is also possible to filter the data according to the user selection (only one cost nature, for example). Results can be exported to CSV files (5) to be read by Excel.
Consequently, our work focuses on cost analysis and on the productivity of the production system. In practice, after selecting a subset in the hierarchical ship structure (1 – highlighted selection in Fig. 8), the user presses the "refresh" button, and the cost structure calls the summation requests, which show the cost according to the selected options.
The cost tree structure (2 – Fig. 8) is shown with a percentage and a colour gradation, so that it is very easy for the user to distinguish where the biggest costs are. We also show the absolute production cost in hours (TI or RRO). Thus, expensive designs and lacks of productivity can be tracked down.
Thanks to the ship tree, and with the data that can be consulted (quantities (1), characteristics of the sub-assemblies (2), part outlines (3) – see Fig. 9), the user can identify the causes of an abnormal cost more easily and more quickly.
In addition to the cost shown in the vicinity of each node, at every click on a level of the tree diagram it is possible to visualise the quantities used for the cost determination in this node. Two types of quantities are described (1 – Fig. 9): the first relates to the number of entities, classed by type (plates (PT) or profiles (PP)) and by size (Bucket, Small, Large); the second corresponds to the measurement of welds, classed by type (butt or fillet) and by welding position (Flat, Horizontal, Overhead or Vertical).
A major innovation is that the cost can be calculated for any subassembly of the ship, from the smallest up to the largest. We can thus compare the costs of the various assemblies of the ship and find new innovative designs that reduce the production cost (design for production).
4. Conclusions and perspectives
The high complexity of the production of a ship, due to the interaction of a great number of different
disciplines (hull construction, electricity, fluids, interior fitting, propulsion, etc) requires firstly an
intensive design and secondly a detailed production planning where most of the tasks are carried out
in parallel. In order to obtain the best quality, the lowest price and the shortest manufacturing lead
time, it is necessary to increase the number of simultaneous tasks. The management of the information flow thus necessarily becomes more and more complex [10].
The current challenge for European shipbuilding is to use these large and varied information flows through the different stages (negotiation, design, production and maintenance) so that construction can be carried out more effectively. In addition, it is expected that the design may be fully optimized for production (design for production).
The objective of the presented method for short-term cost assessment is to increase the productivity:
• The use of the design data and the cost model will lead to a more accurate approach compared to
the existing cost model;
• Improve the planning of the deadlines and establish the production schedule - at the detailed
design stage;
• Improve the distribution of the workload between the various production workshops and assess
the productivity - at the stage of design for production;
• Use the “design for production” concept by integrating the information about the cost production
at each design stage;
• Analyzing the cost structure leads to a better knowledge of the most relevant and significant
individual costs, on which the shipyard has to concentrate to reduce the global cost.
Today, the first prototype is being tested at the ALSTOM shipyard (Chantiers de l’Atlantique) within the framework of the European project INTERSHIP, which has partly financed the study. Currently, the computation cannot yet be run in real time, but calculations can be made overnight, and the results are available the following day.
Nevertheless, to arrive at the "basic design" stage at a cost assessment model that already includes the design criteria and the production parameters, the prerequisite will be the development of a methodology to obtain all the information we do not have yet, like the sequence of the structural subassemblies, the subdivision of the ship into blocks, etc.
In order to achieve this goal, we plan to apply data mining tools to the results of the analytical model. This means using analysis modules such as histograms, correlation analyses, dendrograms, decision trees, neural networks, etc. These analyses will allow establishing, by a neural network algorithm, a predictive mathematical relation for the cost at the "basic design" stage on the basis of the most significant parameters of the ship.
4.2 Industrial applications
Technologically and economically, the prospects for industrial applications of this research are based
on the urgent need for some shipyards to develop for the European shipbuilding industry an integrated
multi-objective tool for a high accuracy production cost assessment, which guarantees:
• a centralized management of the yard cost structure including the unitary costs of a whole
shipyard;
• the simulation of updated production scenarios a few weeks before production;
• before investment, simulation and impact study of various improvements of the production
facilities;
• “real time” advice for the designer based on a production cost criterion (design for production);
• a measurement of the production efficiency with respect to the predicted cost;
• a reliable assessment of the final total cost of the ship at a long-term stage.
Acknowledgments
The authors thank University of Liege and “Chantiers de l’Atlantique”, ALSTOM Marine, Saint-
Nazaire, France for the collaboration within sub-project II.1 of INTERSHIP.
References
[1] SASAKI, Y. (2003), Application of factory simulation to the shipyard, COMPIT’03, Hamburg,
pp.362-376
[2] SHIN, J.G.; SOHN, S.J. (2000), Simulation-Based evaluation of productivity for the design of an
automated workshop in shipbuilding, J. Ship Production, pp.46-59
[3] BERTRAM, V.; CAPRACE, J.D.; RIGO, P.; MAISONNEUVE, J.J. (2005), Cost assessment in
ship production, The Naval Architect, March, pp.6-8
[4] ROSS, J.M.; HAZEN, G.S. (2002), Forging a real-time link between initial ship design and
estimated costs, ICCAS
[5] ROSS, J.M. (2004), A practical approach for ship construction cost estimating, COMPIT’04,
Siguenza, pp.98-110
[6] ROSS, J.M. (2005), Weight-based cost estimating during initial design, COMPIT’05, Hamburg,
pp.221-229
[7] ENNIS, K.J.; DOUGHERTY, J.J.; LAMB, T.; GREENWELL, C.R.; ZIMMERMANN, R.
(1998), Product-oriented design and construction cost model, J. Ship Production, pp.41-58
[8] BOCQUET, J.C. (1998), Ingénierie simultanée, conception intégrée, Conception de produits
mécaniques, Editions Hermes, pp.29-52
[9] SYAN, C.S.; MENNON, U. (1994), Concurrent engineering concepts, implementation and
practice, Chapman & Hall, London, UK
[10] HUGHES, O.F.; ET AL. (1994), Applied computer aided design, Report V.5 of Committee to
ISSC, Inst. Marine Dynamics, Newfoundland, Canada
Development of the USV Multi-Mission Surface Vehicle III
Jens Veers, Veers Elektronik+Meerestechnik GmbH, Kiel/Germany, [email protected]
Volker Bertram, ENSIETA, Brest/France, [email protected]
Abstract
Progress in autonomous air and land vehicles, as well as homeland security concerns for ports, has revived the development of boat-sized unmanned surface vehicles (marine robots) with more or less autonomy. Various such unmanned surface vessels have been developed, and a survey of activities worldwide and of general technical challenges is given before the development of the USV MMSV III at Veers is described in more detail. Since 1997, the German company Veers has been active in developing USVs. Initial work focused on the development of the USV “STIPS” for the German ministry of fishery in two stages. In early 2005, Veers presented the Multi-Mission Surface Vehicle III (MMSV III) or “SeeWiesel”. Sea trials of the Veers USV in February 2006 are reported.
1. Introduction
The ultimate in ship automation would be the unmanned ship. Unmanned ships have been envisioned for at least three decades now, Bertram (2003). Realistic discussions have focused on navy applications with limited autonomy. Port security became an issue after the September 11 attack on the USA, further fuelling the development of unmanned surface vessels (USVs) and envisioning diverse applications, Fig.1. The developments benefit from technology developed for related purposes, namely for remotely operated vehicles (ROVs) in offshore and oceanographic applications, unmanned airplanes (drones) and unmanned land vehicles.
Fig.1: Applications for USVs (suspicious diver search; ship-bottom search with ROV; target boat), source: Yamaha
While USVs date back at least to World War II, it is only in the 1990s that a large proliferation of projects appears. This is in part due to technological progress, but also driven by a paradigm shift of the US Navy with a much stronger focus on littoral warfare and anti-terrorism missions. Successful missions of USVs in the second Gulf war have increased interest within the US Navy in USVs, and several modern navies followed suit.
Potential USV missions could range from small torpedo-size data gatherers to large unmanned ships.
Carderock Laboratory has used the following grouping:
• Small (<1 t)
• Medium (< 100 t)
• Large (< 1000 t)
• Extra large (> 1000 t)
So far, all USVs have small or medium size. Most USVs are about the size of recreational watercraft,
i.e. 2 to 15 m long with displacements of 1.5 to 10 t. Some can operate at more than 35 knots in calm
water. Current discussions see the following technical and operational challenges to have USVs
widely accepted:
• Affordable over-the-horizon (OTH) communications to extend the range at which USVs can operate from a host ship or base; increased reliability of the remote communication
• Safe, reliable USV launch and recovery
• Greater USV autonomy / intelligence
• Increased reliability and survivability of the platform
• Improved operational experience
• Legal issues of unmanned vehicles (‘robot warfare’, ‘abandoned’ ships)
In addition to technical challenges, civil and navy regulatory authorities have yet to develop maritime
procedures and protocols that define how unmanned vessels operate and interact with other maritime
traffic.
Our survey builds on various printed sources and internet information, particularly on Portmann et al.
(2000), http://www.globalatlantic.com/unmanned.html, and a market overview prepared in 2003 by
Moiré Inc., www.moireinc.com/USVMarket.pdf.
World War II saw the first experimentation with USVs. Canadians developed the COMOX torpedo concept in 1944 as a pre-Normandy-invasion USV designed to lay smoke during the invasion, as a substitute for aircraft. COMOX was designated a torpedo because it could only be programmed to traverse a fixed course. Although COMOX was not deployed, a vehicle was constructed and a successful test was completed. Meanwhile, the US Navy developed and demonstrated several types of "Demolition Rocket Craft" intended for mine and obstacle clearance in the surf zone. The "Porcupine," "Bob-Sled," and "Woofus 120" were variants of converted landing craft that carried numbers of mine-clearing rockets in different configurations. Unmanned operation was part of the concept, although it is unknown which, if any, of these vehicles were demonstrated as USVs.
Post-war applications of USVs expanded, with the USN using drone boats to collect radioactive water samples after the atomic bomb blasts Able and Baker on Bikini Atoll in 1946. The 1950s-era US Navy Mine Defense Laboratory's project DRONE constructed and tested a remotely operated minesweeping boat in 1954. By the 1960s, the Navy was using target drone boats based on remote-controlled "aviation rescue" boats for missile firing practice, and the Ryan Firefish target drone boat was used for destroyer gunnery training. Similar to UAVs, target drone USV development and use has continued and evolved over the years. Today, the Navy operates a number of USVs as target drones, including the Mobile Ship Target (MST), the QST-33 and QST-35/35A SEPTAR Targets, and the High Speed Maneuverable Seaborne Target (HSMST), Fig.2.
Interest in USVs as minesweeping drones and for other dangerous missions continued to grow after the 1950s for obvious reasons, and further US Navy development included the small "Drone Boat" - a 15 ft USV for unmanned munitions deployment - that was quickly developed and deployed to the fleet as ten vehicle kits in 1965 during the Vietnam War. Larger Minesweeping Drone (MSD) USVs were also developed and deployed in Vietnam in the late 1960s. The value of unmanned minesweeping systems was recognized by a number of countries, and systems were developed and deployed. Examples include Denmark's STANFLEX, Germany's Troika Groups (one manned control ship operating three drones), Netherlands drones, the UK's RIM drones, Sweden's SAM II ACV drones, and Japan's SAM ACV drones operated from Hatsushima Class MCM ships.
By the 1990s, the Navy developed and tested more sophisticated USV mine sweeping systems, including the R/C DYADS, the MOSS, and finally ALISS, which demonstrated a remotely operated simultaneous acoustic and magnetic influence sweep capability. One mine-hunting USV now in development by the Navy is the Remote Mine-hunting System (RMS), Fig.3, an air-breathing submersible that tows mine-hunting sensors and is deployed and operated organically from surface combatants. RMS, a descendant of the Dolphin, an earlier Canadian remotely operated mine-hunting vehicle, can be considered one of the first examples of truly autonomous USVs.
Navy interest in USVs for reconnaissance and surveillance missions emerged in the late 1990s, with the development of the Autonomous Search and Hydrographic Vehicle (ASH), later called the Owl, and the Roboski. Navtec Inc developed in the late 1990s a USV for the Office of Naval Research (ONR) under the name Owl MK II, Fig.4. The Owl is a Jet Ski chassis equipped with a modified low-profile hull for increased stealth and payload capability. The Roboski, Fig.5, initially developed as the Shipboard Deployed Surface Target (SDST), a jet-ski type target for ship self-defence training, now also serves as a reconnaissance vehicle test-bed. Science Application International Corporation (SAIC) also offers a small USV for port security. This Unmanned Harbour Security Vehicle (UHSV) is an advanced version of the Owl MK II.
Fig.6: Unmanned Harbor Security Vehicle Fig.7: SSC San Diego USV
The robotics group at the US Space and Naval Warfare Systems Center in San Diego developed a USV test-bed as a versatile platform for rapid prototyping and testing of new concepts, Fig.7, Ebken et al. (2005), http://www.nosc.mil/robots/surface/usv/usv.html. The USV is based on the Bombardier SeaDoo Challenger 2000 powered by a Mercury 250-hp OptiMax fuel-injected V-6. According to their website, the military community has expressed strong interest in the use of USVs for a variety of roles, including force protection, surveillance, mine warfare, anti-submarine warfare, riverine operations, and special forces operations. This project focuses on developing and transitioning technologies that will enable the USV community to deliver systems with increased capability and autonomy. Much of the USV technology developed at SSC San Diego can be transitioned to other USVs with minimal effort. The team is part of the consortium for the development of the SPARTAN USV, Fig.8.
The SPARTAN was developed as an Advanced Concept Technology Demonstration (ACTD). The
French navy and the Singapore navy joined the SPARTAN program of the US Navy in 2003. The
SPARTAN USV is a modular, reconfigurable, multi-mission, high-speed, semi-autonomous USV ca-
pable of carrying payloads of 1.5 and 2.5 t for 7 m and 11 m craft, respectively, with approximately 8
hours of autonomy, Maguer et al. (2005). The SPARTAN USV saw active service during the US
military operations Iraqi Freedom and Enduring Freedom. Commercial production is expected for the
year 2007. The Freedom Sentinel, Fig.9, is a similar rigid hull inflatable 11 m test-bed operated by
NAVSEA Panama City.
The US Navy started several new USV programs in 2003. The ONR provided funding to the US Na-
val Facilities Engineering Support Center (NFESC) to develop the Sea Fox, Fig.10. The NFESC also
developed or studied the following, http://www.nfesc.navy.mil/amphib/teams/team5/default.asp:
• Roboraider - USV based on an inflatable platform, a low-cost USV for intelligence, surveil-
lance, and reconnaissance (ISR), Fig.11
• Roboski (see above)
• Small Weapons Attack Trainer (SWAT) - USV for training on 25mm, 50cal, M-60 and
Close-In Weapon System (CIWS) using a jet-ski type platform to tow an expendable target.
The system will use a COTS RF control link to command the rudder and throttle.
The Autonomous Undersea Vehicle Fest 2005 featured one USV, presented by Doug Freeman of Na-
val Surface Warfare Center Panama City, which served as carrier for an unmanned underwater vehi-
cle, the RDUST, Hoffman (2005). Radix Marine announced in 2003 plans to develop a USV ‘Odys-
sey’, Fig.12, but no further progress was reported. Two USVs built by Maritime Applied Physics Cor-
poration (MARP) were tested by the ONR in 2005, Kennedy (2005). The High-Speed Unmanned Sea
Surface Vessel is a 10 m hydrofoil with a top speed of more than 40 kn in heavy seas. The Tow Force
Unmanned Sea Surface Vessel reaches just above 20 kn, but can tow in excess of 1 t and carry close to 4 t of payload and fuel. The Technology Development & Research Institute, www.saiaa.com, also developed within the framework of an ONR programme two USVs apparently related in concept to the
MARP USVs, namely the USSV-HS hydrofoil, Fig.13, and the USSV-LS larger monohull, Fig.14. By
now, several concepts for stealthy USV sensor platforms like the Remotely Piloted Surface Vehicle
(RPSV), Fig.15, have been proposed and are under consideration by the surface fleet. One of the
most visible interests is in USVs that could serve as unmanned force multipliers for a number of litto-
ral combat missions. Besides the USA, several other countries employ and develop USVs.
In Japan, Yamaha developed two USVs, the Unmanned Marine Vehicle High-Speed UMV-H, Fig.16,
and the Unmanned Marine Vehicle Ocean type UMV-O, Fig.17, Enderle et al. (2004). The UMV-H is
a deep-V mass-produced hull, equipped with 90 kW water-jet propulsion to reach 40 kn. The
boat can be used either manned or unmanned. At a length of 4.44 m, the craft is small enough to be
loaded on a small cutter, but large enough to accommodate all necessary equipment and instruments
such as under-water cameras (ROV) and sonar equipment. The UMV-O is an ocean-going USV with
displacement hull. It is used primarily in applications involving monitoring of bio-geo-chemical,
physical parameters of the oceans and atmosphere that put the long-distance capabilities of the vehicle
to effective use. The first UMV-O “Kan-Chan” was delivered in 2003 to the Japan Science and Tech-
nology Agency.
The Canadian Barracuda, Fig.18, is an unmanned version of an 11 m rigid hull inflatable boat (RIB).
International Submarine Engineering Ltd (ISE), http://www.ise.bc.ca/USV.html, has been working on
USVs for 10 years. Their Tactical Controller (TC) Kit transforms a manned boat into a USV which
operates via a command link. The TC is a portable, modular, flexible, expandable package based on
ACE, ISE's proprietary open architecture control system software. Four USVs have been implemented
by ISE:
• The Dolphin MK II semi-submersible, Fig.19, http://www.ise.bc.ca/dolphin.html, initially developed for the Canadian Hydrographic Service, was delivered to the US Navy in 1985 and in 1988 for navy payloads and as remote mine-hunting vehicles.
• The Seal USV was a demonstrator to Canadian Department of National Defense (DND) for
Search and Rescue;
• SARPAL Autonomous Marine Vehicle (AMV) was developed for DND; and
• "The Machine" was a rapidly developed demonstrator for the US Military. ISE integrated the
TC Kit into an existing 8 m RIB supplied by ACB Boats of Bellingham, Washington.
In Israel, Rafael developed the Protector, an unmanned rigid hull inflatable boat. The Protector can conduct a wide spectrum of missions without exposing personnel and capital assets to unnecessary risk. The Republic of Singapore Navy has employed the Protector since 2005.
The Portuguese Dynamical Systems and Ocean Robotics laboratory, http://dsor.isr.ist.utl.pt, has de-
veloped several marine robotic vessels, including the DELFIM autonomous surface catamaran,
Fig.22, and the Caravela, Fig.23, an autonomous oceanographic vessel with a range of operation of at
least 700 nautical miles. The University of Plymouth has developed a catamaran USV ‘Springer’,
Fig.24, capable of operating in shallow water for measuring water quality and for environmental sur-
veys, Naeem et al. (2006), http://www.tech.plymouth.ac.uk/sme/springerusv/. The UK company H
Scientific (www.h-scientific.com) advertises a Remote Control Automatic System (RCAS), Fig.25, to
control unmanned surface vessels. The extent of autonomy and the actual applications are unknown.
Fig.26: Rodeur
Fig.27: Basil
In France, Sirehna developed an unmanned jet-ski under the name "Rodeur", Fig.26. In 2005, ACSA offered two USVs, Basil, Fig.27, and MiniVAMP (Virtually Anchored Multipurpose Platform), Fig.28,
a low-cost version of limited autonomy developed originally for remote surveys of offshore pipelines,
http://www.underwater-gps.com/dpbuoys/dpbuoys.htm. The SeaKeeper, Fig.29, of the French navy is
a semi-submersible similar to the Remote Mine-hunting System (RMS), also intended for mine-
hunting and port security applications. The drone Argonaute, Fig.30, is used by fire-fighters to chemi-
cally analyze (‘sniff’) the air near containers with unknown content.
Since 1997, the German company Veers has been active in developing USVs. Initial work focused on the development of the USV "STIPS" for the German ministry of fisheries, in two stages, Figs.31 and 32. In early 2005, Veers presented the Multi-Mission Surface Vehicle III (MMSV III) or "See-Wiesel", Fig.33. At present, this prototype is being further tested and developed within a European cooperation involving as partners Ensieta and Ifremer (France); Veers GmbH, DW Ship Consult, and ThyssenKrupp Marine Systems HDW (Germany); and TU Gdansk (Poland).
Further lines of development include the testing of diverse sensor equipment on the MMSV III, which
may include:
- night vision equipment
- nuclear sensors
- chemical sensors
- video transmission
- different telemetry and remote control systems
Acknowledgement
This article is based on research funded in part by funds of the CIP "INTERREG IIIC North Zone".
References
BERTRAM, V. (2003), Cyber-ships – Science Fiction and reality, 2nd Conf. Computer and IT Appli-
cations to the Maritime Industries (COMPIT), Hamburg, pp.336-349
COOPER, S.; NORTON, M. (2002), New Paradigms in Boat Design: An Exploration into Unmanned
Surface Vehicles, Int. Symp. Association for Unmanned Vehicles Systems
EBKEN, J.; BRUCH, M.; LUM, J. (2005), Applying UGV Technologies to Unmanned Surface Ves-
sels, SPIE Proc. 5804, Unmanned Ground Vehicle Technology VII, Orlando
ENDERLE, B.; YANAGIHARA, T.; SUEMORI, M.; IMAI, H.; SATO, A. (2004), Recent develop-
ments in a total unmanned integration system, AUVSI Unmanned Systems Conf., Anaheim
HOFFMAN, K. (2005), Engineers show off at AUV fest, Minewarnews, August
KENNEDY, H. (2005), No crews required: Unmanned vessels hit the waves, National Defense Magazine, October, http://www.nationaldefensemagazine.org/issues/2005/oct/sb-no_crews.htm
MAGUER, A.; GOURMELON, D.; ADATTE, M.; DABE, F. (2005), Flash and/or Flash-s dipping
sonars on Spartan unmanned surface vehicle (USV) : A new asset for littoral waters, Turkish Int.
Conf. Acoustics, Istanbul
NAEEM, W.; XU, T.; CHUDLEY, J.; SUTTON; R. (2006), Design of an unmanned surface vehicle
for environmental monitoring, World Maritime Technology Conf., London
N.N. (2001), SPARTAN Unmanned Surface Vehicle Extends the USW Battlespace-SPARTAN Con-
cept, Naval Forces, Special Issue, p.18
PORTMANN, H.H.; COOPER, S.L.; NORTON, M.R.; NEWBORN, D.A. (2000), Unmanned Sur-
face Vehicles, Past, present and future, http://www.globalatlantic.com/unmanned.html
Development of a Freely Available Strip Method for Seakeeping
Volker Bertram, ENSIETA, Brest/France, [email protected]
Bastiaan Veelo, NTNU, Trondheim/Norway, [email protected]
Heinrich Söding, TU Hamburg-Harburg, Hamburg/Germany, [email protected]
Kai Graf, FH Kiel, Kiel/Germany, [email protected]
Abstract
The linear strip method PDSTRIP is under development and will be available to the public in source
code (Fortran 90/95) and executable version, both free of charge. PDSTRIP computes ship motions
for mono hulls including sailing boats. The paper describes the motivation for the development, gives
a survey of the common commercial strip methods, and the reasoning behind the chosen software
tools and legal framework.
1. Introduction
ENSIETA, www.ensieta.fr, is a French university teaching naval architecture and offshore engineer-
ing. Research in one department focuses on the mechanics of marine structures, with sea keeping as
one of the areas of interest. In 2005, ENSIETA faced a situation that motivated the development of a
shared sea keeping analysis tool:
− Sea keeping formed part of the standard curriculum, as at most universities offering degrees in naval architecture and/or offshore engineering. Exercises supplemented lectures, but students rightfully felt that the examples were unrealistic in their simplicity of geometry or reduction of physics, e.g. having only one degree of freedom. Three hours dedicated to exercises with a commercial sea keeping program (3-d Green function method) were appreciated by the students, but not by the financial department: even with splitting the students into groups and having three students per terminal, the necessary number of licenses for this commercial program placed a heavy burden on ENSIETA, while at most one license was used for the remainder of the year. Hence the desire for no-cost/low-cost sea keeping analysis software which could be used in unlimited parallel installations, perhaps even given to students who had their own private computers.
− Funds for research cooperation between the University of Applied Engineering in Kiel (FH Kiel)
and ENSIETA were available and FH Kiel was interested in adding a sea keeping analysis tool to
its existing design software for competitive sailing yachts. Required computational response and
available computing resources made a linear strip method approach the obvious choice. As ex-
pected, there was no such method on the market for sailing boats with their own particular hydro-
dynamics (involving roll damping due to the sails, asymmetric hulls due to heel, and large lifting surfaces such as keels, daggerboards and rudders). Neither was a 'regular' strip method program available that was suitable for modification for the particular problem at hand. Commercial programs, as listed in Appendix 1, were only available in executable versions; published research approaches were only given in the form of papers, generally without sufficient details for re-programming, and even then such a development would have taken man-months even for very limited functionality.
The decision appeared obvious: use the research project to fund the development of a strip method code, incorporating the necessary functionality to support sailing boat sea keeping analysis, but also capable of handling most other ships. The resulting software should then also support the teaching of students in advanced applications. This requires multiple installations of a documented and user-friendly code. Fortunately, the partners involved in the project have so far avoided the commercialization of the academic field imposed in many countries of the world. From the beginning, the intention has been to make the research results, particularly the software, widely available.
2. Programming approach
The closest starting point to our intended goal was the Fortran 77 strip method program STRIP of
Prof. Söding of the Institut für Schiffbau in Hamburg (The IfS was subsequently integrated into the
TU Hamburg-Harburg). STRIP was freely given to whoever asked, but
• the source code was not written with the intention of being easy to understand and modify;
• the documentation was in German;
• the documentation was not consistently updated with the source code;
• the programming assumed a Linux/Unix operating system allowing easy assignment of input
and output files on the operating system level;
• STRIP covered only parts of the required functionality, in particular roll damping due to sails,
shallow water, and second-order forces (added resistance and drift force) were not considered.
The choice of the programming language is often an almost religious dispute between 'object-oriented programming' and 'procedural programming'. Rare attempts to bring the two fields together usually meet with little resonance, Abels (2005). The two main fields are:
• Procedural programming
Or the "Old school". Engineers in classical 'mechanical' subjects like naval architecture mostly prefer programming in Fortran, for various reasons:
- that is what they learned as students (and nobody likes learning new tricks)
- the problems at hand usually involve complicated mathematics (procedures) and simple
data structures
- the programming is more intuitive and simpler (or at least the majority perceives it so)
• Object-oriented programming
Or the "New school". Computer scientists and engineers in newer subjects like electronics often prefer programming in object-oriented languages like C++ or Java, for various reasons:
- that is what they learned as students (and nobody likes learning new tricks)
- the problems at hand usually involve simple mathematics and complex data structures
- it is easier to incorporate graphical user interfaces (GUI)
The dispute is by now almost classic, see Post (1983) on the art of programming some 20 years ago.
In our case, the programming experience in relevant languages of the different team partners was as
follows:
- Heinrich Söding: Fortran 66, Fortran 77, Fortran 90/95
- Volker Bertram: Fortran 66, Fortran 77, Fortran 90/95
- Kai Graf: Fortran 77, Fortran 90/95, C++
- Gerhard Thiart: Fortran 77, Fortran 90/95, Oberon, Java
- Bastiaan Veelo: C/C++, D, Pascal, Matlab
Fortran appeared to be the most common denominator, and as the algorithm as such was developed by Söding, making extensive use of older program segments, the choice was dictated. However, as the problem was procedural, Fortran in its most modern standard, namely Fortran 90/95, appeared to be a good choice anyway. This allowed us to incorporate most existing subroutines, reducing programming effort and error sources, while being sufficiently close to object-oriented languages to be 'acceptable' to the New School. GUI scripts for executable versions for Windows 95 to Windows XP were added for those who just want to use the program (students and industry).
The extension and modernization of the source code has still involved considerable effort, which is
not yet finished, although a workable executable version is already available. Appendix 3 shows ex-
amples of source code in the original version and the corresponding source code in the new version.
Improvements are due largely to self-explanatory variable names, 'cosmetics' (indentation), and use of the extended standard functions in Fortran 90/95 for vector and matrix operations.
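As a generic illustration of the kind of modernization involved (a sketch of our own, not an excerpt from the actual PDSTRIP source; all names are invented), a Fortran 77-style loop over cryptically named arrays such as

      DO 10 I=1,N
        F(I) = AM(I)*AC(I) + BD(I)
   10 CONTINUE

might become, in Fortran 90/95 with self-explanatory names and whole-array operations:

subroutine update_force(added_mass, acceleration, damping_force, total_force)
  implicit none
  real, intent(in)  :: added_mass(:), acceleration(:), damping_force(:)
  real, intent(out) :: total_force(:)
  ! one whole-array expression replaces the explicit DO loop
  total_force = added_mass*acceleration + damping_force
end subroutine update_force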
3. Legal framework

Since it is possible to produce an executable form of the software from its source code, we will consider the legal frame for the source code first. After that we may decide to treat the executable form differently.
We have already decided to share the source code with interested parties, and we want to grant people
the freedom to make changes to the code. The question now is how we are going to declare this free-
dom. An explicit declaration is necessary because without one, a copyright automatically applies to
the work by way of the Berne Convention of 1886; although in U.S. jurisdictions it is essentially im-
possible to sue for infringement of a copyright unless the copyright has been registered, Raymond and
Raymond (2002). Unfortunately, it is difficult to get advice on choosing a legal frame for the distribu-
tion of source code. Adding to the confusion, the two largest organizations that do offer help, the Free
Software Foundation (FSF) and the Open Source Initiative (OSI), partly disagree with each other in
ideology, definition of terms, and also legal interpretation. They represent two camps of developers
and users and it is hard not to choose sides. The FSF defines the term Free Software
(http://www.fsf.org/licensing/essays/free-sw.html), the OSI defines the trademark Open Source
(http://www.opensource.org/docs/definition.php). Unfortunately, these definitions are not equivalent,
i.e., some software that is considered Open Source by the OSI is not considered Free Software by the
FSF. Ironically, both organizations struggle with ambiguities in these terms, and both add that the matter is about liberty and not price or readable source code. A newer term, which has its origin in Romance languages and is less ambiguous, is Libre Software, and people who try to stay neutral sometimes use the acronym FLOSS to cover all of Free/Libre and Open Source Software. In considering a
legal frame, we see it as an advantage to comply with both definitions, so
1. we will favour a license that complies with both the definition of Free Software and the defini-
tion of Open Source.
Another difficulty in this business is the matter of license compatibility; in particular compatibility
with the General Public License (GPL, http://www.gnu.org/licenses/gpl.html), discussed in detail later
on. An efficient way of creating software is to assemble existing pieces of FLOSS, using them as
building blocks. This practice can be considered an important advantage of writing FLOSS, Veelo
(2005a,b). The tricky part is that when building blocks are combined, the license of the result has to
comply with the licenses of the parts. This is not always possible. Most notably, the GPL is notorious for its incompatibility with well over half of the other Free Software licenses, yet over half of all FLOSS is licensed under the GPL, Raymond and Raymond (2002). Our software will be most flexible in terms of using other building blocks and functioning as a building block itself when we choose a license that is compatible with the GPL. Most licenses of this type state many freedoms and few conditions. If we choose the GPL itself, then we reduce the set of usable building blocks considerably, as well as our software's value as a building block. If we choose a license that is not compatible with
the GPL, the possibilities in this regard are even more reduced. Therefore, from the perspective of
building with blocks
2. we will favour a GPL-compatible license over the GPL, which we will favour over a GPL-
incompatible license.
Then there is the question of derivative works and the legal frame for them. If source code is in the
public domain (which means that there is no copyright and no license is needed) or if it is released
under an all-permissive license, the software may be incorporated into a proprietary product, which
then invisibly builds on the Free Software component. It means that neither current nor future improvements to the Free Software component in this configuration are freely available any longer; in other words, this strand of the component has ceased to be Free Software. Hypothetically speaking, under a legal frame of this kind it is possible that you are offered a software product for sale without knowing (and without being able to know) that you contributed to it.
Many people have no problem with this; the more their contribution is used, the merrier it is for them.
However, there is another large group of people that think that this type of use is plain wrong. The
FSF is the organization that preaches this line of thinking. It believes that people should be free to use
their computer hardware in any way they like, and be free to legally share and evolve the software that
makes this possible. Since the freedom to make derivative works is central to the evolution of Free
Software, the FSF wants to perpetuate this freedom through all stages of evolution. This can be ac-
complished through the combination of two clauses in the license. The first clause requires any de-
rivative works to be issued under the same license as the original work, and the second clause states
that the license terminates in case one fails to comply with any of its conditions. A license of this kind
is generally denoted a copyleft license (as opposed to copyright) or a reciprocal license. It turns the act of proprietarization into a violation of copyright law, and an offender can be made to either share alike or remove the Free Software building block from its product. The GPL is the most widely used copyleft license and the flagship of the FSF. The GPL does not require the modifying party to publish the modified source code until it distributes the modified software in some form.
Unfortunately, copyleft licenses suffer from a vague definition of the term derivative work as applied
to software. This is because software can have the form of a software library, to which other software
can link and call services from. This other software practically builds upon the library without neces-
sarily changing the source code of the library. The FSF has taken the position that if a software prod-
uct links in any way to some library (statically or even dynamically) then the product has to be seen as
a derivative work of the library. This interpretation has never been confirmed by a court test, because
all disputes are being settled outside of court, in favour of the FSF interpretation, up to the time of this
writing. However, if a copyleft license is ever challenged in court, Raymond and Raymond (2002)
predict that courts will not follow the FSF interpretation and consider software B to be a derivative
work of software A only when the source code of B includes an identifiable portion of the source code
of A.
For the purpose of completeness, let us consider the licensing issue when writing a library. Should we use a copyleft license or a non-copyleft license? The FSF generally recommends the GPL for stand-alone programs as well as libraries, because they see copylefted libraries as an invitation to others to write Free Software, thereby contributing to the size of the global pool of Free Software. On the other hand, Raymond and Raymond (2002) recommend non-copyleft licenses for libraries, because too much confusion, uncertainty and dread is generated by concerns about whether particular kinds of run-time linkage might deprive a developer of rights he wishes to preserve. Alternatively, one can use the GPL plus a special statement giving blanket permission to link with non-free software. This is denoted a weak copyleft license, and it has the advantages of keeping all evolutions of the library itself Free Software, taking away the confusion about linkage, and being GPL-compatible.
Let us consider whether a copyleft license for our software can help promote the development of some
free related tools that we would like to see, namely pre-processors and post-processors to our strip
code. Our strip code is a stand-alone program, with a textual input and output. This means that
pre/post-processors and our strip code will communicate through text files and not linkage, so the
processors cannot in any case be regarded as derivative works of our code, and no copyleft license will be able to force them to be Free Software. It is possible to split our code into a library part and an
application part that links to the library, and release the library under the GPL in FSF style. Thus other
hydrodynamic analysis software could more tightly integrate with our code, at the cost of having to
adopt the GPL. However, it is unlikely that a proprietary software house will adopt the GPL because
of our strip code, as buying a license for a proprietary strip code is more attractive from a traditional
business point of view. Others, like ourselves, will not need the incentive of a strongly copylefted library to write Free Software.
Weighing strong copyleft licenses against weak copyleft licenses in our case, we conclude that a strong copyleft license gives no convincing advantage over a weak copyleft license but has the disadvantage of adding confusion and uncertainty on the issue of linkage. Therefore
3. we will favour a weak copyleft license over a strong copyleft license.
This leaves the categories of weak copyleft licenses and non-copyleft licenses. By using a weak copyleft license we would secure free access for everybody to all future extensions and improvements of our code, made by anyone. It would prevent proprietary evolution of our code, whether that is a good thing or not. By using a non-copyleft license, the extensions and improvements made by commercial players may not become freely available to everyone, not even to ourselves.
As a matter of fact, we are no Free Software fundamentalists, and we have no problem should our
code be evolved further behind closed doors. Besides, we are slightly concerned about the number of
different versions and strands or forks that would eventually appear after publishing the source code,
and about the administrative workload that will be required to synchronize these. The problem has
already manifested itself even before publishing the code, as even within our small team it appears to
be difficult to reach agreement on how things should be programmed. So it is likely that even within
the project we will have different versions of the same program. If a proprietary player wishes to
maintain a private fork, then it will be one less version for us to handle. Nonetheless, the fact that a
commercial player can maintain a private fork for its improvements does not mean that it will. Ray-
mond and Raymond (2002) state that “what keeps most projects open-source isn't the threat of lawsuit,
it's that taking open-source code closed is an expensive way to lose lots of money as your handful of
salaried in-house developers tries to keep up with a much larger pool of open-source contributors.” In
other words, commercial players may prefer to try to get their modification accepted into the main
branch, thereby contributing to Free Software, contrary to keeping it private and then having to syn-
chronize it all the time to get the improvements that are being made to the main branch. We also hope
that this mechanism will work to keep the number of forks down in general, besides the fact that hav-
ing a patch (code change) accepted into some Free Software can be as personally rewarding as having
an article accepted by some journal. To wrap up the question of copyleft licenses:
4. we will favour a non-copyleft license over a weak copyleft license.
This leaves the options of a) to donate to the public domain or b) to retain the copyright and publish
under a non-copyleft Open Source license. Although sentences like “This file is in the public domain”
are occasionally seen in the wild, according to Rosen (2004), p.74 “there is no mechanism for waiving
a copyright that merely subsists, and there is no accepted way to dedicate an original work of author-
ship to the public domain before the copyright term for that work expires. A license is the only recog-
nized way to authorize others to undertake the authors’ exclusive copyright rights.” The exception to
the rule is when the work was written by officers or employees of the U.S. Government as part of
their duty (17 U.S.C., 2003, §101 and §105), which is not the case here. Alas, the easy option is out.
This leaves us to find a GPL-compatible, non-copyleft FLOSS license. We are discouraged by all major counsels from writing our own license; there are plenty of licenses to choose from already. Yet the choice
is not straightforward. Raymond and Raymond (2002) recommend the Academic Free License (AFL,
http://www.opensource.org/licenses/afl-3.0.php) as the single best-practice non-copyleft license. It is
a modern license, meant to be enforceable under copyright law and contract law and thereby rather
verbose. It is recognized by both the OSI and FSF as a FLOSS license, but the FSF argues that it is
not compatible with the GPL, and that it has severe practical problems for the development of soft-
ware in an open way (http://www.fsf.org/licensing/licenses/index_html#OSLRant). This is surely not intentional and depends on the interpretation of the word "reasonable"; its author sees no reason why the AFL should be incompatible with any license, Rosen (2004), p.249. The AFL has several interesting features, in particular a patent action termination clause as a defensive measure; software patents are seen as a threat to the Free Software phenomenon (www.nosoftwarepatents.com).
We assume that most users that want to know whether a particular license is compatible with the GPL
will regard the FSF as authoritative, and therefore
5. we will not choose the AFL.
Let us return to the academic licenses that the AFL is supposed to substitute, like the BSD license and
the MIT license. These are much simpler licenses than the AFL and arguably have deficiencies re-
garding patents, liability, and warranty of copyright ownership, Raymond and Raymond (2002). Yet
they are very popular and they have a proven track record of successfully nurturing FLOSS projects.
The BSD license has its origin at the University of California, Berkeley, and stands for Berkeley Software Distribution. This license explicitly prevents the name of the licensor or contributors from
being used to endorse or promote products, the no-endorsement clause, Rosen (2004). This clause is
widely regarded as having no effect, since all it does is caution grantees that they are not being
granted rights that the law says they did not have to begin with, Raymond and Raymond (2002). Ver-
sions prior to 1999 also featured an advertising clause, which has been removed due to extensive pub-
lic criticism.
The MIT license has its origin at the Massachusetts Institute of Technology. It is also referred to as
the X11 license. This license is based on the BSD license, but lacks the advertisement and no-
endorsement clauses. It also states the granted rights more clearly and is easier to read than the BSD
license. The license itself consists of a single sentence which is meant to directly follow the copyright
notice:
“Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the "Software"), to deal in the Software without restric-
tion, including without limitation the rights to use, copy, modify, merge, publish, distribute,
sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or sub-
stantial portions of the Software.”
The license applies to the associated documentation as well. This fits well with our intentions of mak-
ing the algorithm and other theoretical documents as freely available as the software. There are no
requirements to make source code available when distributing executables, so students and others can
just share the software at will without complications.
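In practice, applying the license then amounts to no more than placing a copyright line and the permission notice at the top of each source file. A minimal sketch of such a header (holder and year are placeholders, not the actual PDSTRIP file header):

! Copyright (c) 2006 <copyright holders>
!
! Permission is hereby granted, free of charge, to any person obtaining a copy
! of this software and associated documentation files (the "Software"), ...
! [remainder of the MIT permission notice and the disclaimer of liability
! and warranty follow here]

program licensed_code_example
  implicit none
  print *, 'source file carrying the MIT license header'
end program licensed_code_example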
Rosen (2004) explains the permission to sublicense: "The fact that the MIT license is sublicensable is an advantage for anyone who wants to distribute copies or derivative works of MIT-licensed works. [...] Note that the license terms for a sublicense must be consistent with—not necessarily the same as—the original license terms. A sublicensor cannot sublicense more rights than have been granted by the original author. The sublicensors needn't use the identical words as in the earlier license they received, but they cannot override terms and conditions that are mandated by that license." The li-
cense is not more restrictive than the GPL (much to the contrary) and is therefore GPL-compatible,
which is a great advantage for the combination of FLOSS building blocks.
After the permission notice, the MIT license is completed with a disclaimer of liability and warranty,
which is a necessity to protect the licensor in U.S. jurisdictions.
We feel that the MIT license best resembles the intent of dedicating software to the public domain, if
that were legally possible. Therefore
6. we will publish PDSTRIP under the MIT license.
4. Description of PDSTRIP
PDSTRIP (for public-domain strip) computes the sea keeping of ships and sailing yachts according to
the strip method which was proposed originally by Korvin-Kroukowski and Jacobs (1957). Here,
however, the slightly different method of Söding (1969) is applied for motions, and the procedure of
Hachmann (1991) for the pressure. Responses in regular waves are given as transfer functions, i.e. as
ratio between the response amplitude and amplitude of the wave causing that response. For so-called
linear responses this ratio is independent of the wave amplitude. The program is mainly confined to
such linear responses; however, it takes into account some nonlinear effects. Responses in natural
seaways are given as significant amplitudes. These are defined as the average of the one-third largest
positive maxima of the response, neglecting the 2/3 smaller positive maxima. The significant ampli-
tude is twice the standard deviation (from the average value zero) of the response.
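For a linear response with transfer function H(ω) in a seaway with wave spectrum S_ζ(ω), this definition can be written out explicitly; the following standard spectral relations are added here for clarity and are not quoted from the program documentation:

$$S_r(\omega) = |H(\omega)|^2\, S_\zeta(\omega), \qquad m_0 = \int_0^\infty S_r(\omega)\,\mathrm{d}\omega, \qquad r_{1/3} = 2\sqrt{m_0}$$

where S_r is the response spectrum, m_0 the variance of the response, and r_{1/3} the significant amplitude.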
The program cannot deal with multi-hulls, air-cushioned vehicles or planing hulls. But PDSTRIP has
several special features making it suitable for practical applications to most ships and sailing yachts:
- Consideration of discontinuous sections (as found at forebodies with bulbous bows)
- Consideration of transom sterns
- Consideration of unsymmetrical cross sections (as in heeled ships)
- Consideration of shallow or deep water
- Consideration of fins (such as rudders, bilge keels, etc.)
- Consideration of active fins (such as roll stabilizers)
- Consideration of roll damping due to sails in sailing yachts. With some precautions the
method to take account of sails may also be applied to wind forces on superstructures of ships
without sails, or on deck containers etc.
- Consideration of suspended loads (as in crane ships)
The documentation has to address two groups of 'players':
1. Users form the largest group of 'players'. PDSTRIP users would include students at different
levels, as well as researchers and industry users. It is difficult to combine extended functional-
ity and user-friendliness. At present, we accept that the input is complicated if advanced features like rudders and sails are used. The ideal user (from a developer's perspective) is well educated in the underlying physics and thus also understands the limitations of the code, avoiding input that may be formally correct but will still produce results without physical meaning. The typical user in reality does not even like to read a user manual, let alone theoretical background requiring post-graduate education in naval architecture. As an
intermediate approach, we developed a user manual specifying the format of the input, but the program so far performs very few checks on the correctness of that input. We also prepared an overview of the physical model behind PDSTRIP, which a serious user is expected to read. For simple computations of responses of bare ship hulls in regular waves, the code is easy to use.
2. Programmers need not only access to the source code; they need to understand the source code completely in order to modify or extend it. 'Programmers' in our case would understand the hydrodynamics and extend the model. They need essentially an explanation of the
code structure and the algorithms behind the individual modules. We focused here on the hy-
drodynamics modules, namely
- a 2-d strip module to compute added mass, damping and exciting forces
- the global module giving 3-d values by integrating over all 2-d strips
- a module for converting responses in regular waves to significant responses in natural
seaways
- modules for appendages like rudders, bilge keels, sails
- a module for second-order forces
- a module for special treatment of surf-riding (very low encounter frequency)
Appendix 2 gives an idea of the depth of documentation supplied to understand the source
code. The source code follows the notation of this documentation closely.
First versions of the code were compiled with different compilers under Linux and Windows. The
resulting executable files have been tested with simple symmetric and non-symmetric test cases for plausibility, formal correctness (no program crash with the input data), and agreement with the original STRIP program, which has been widely applied and validated for 20 years now. A validation of the
code against recommended test cases of ITTC (International Towing Tank Conference) and/or other
test cases with model test data remains to be done.
The focus was on obtaining an executable version quickly to meet demand in research and teaching.
Further documentation and conversion of the source code to a more structured code in more homoge-
neous programming style intended for faster understanding by other programmers remain to be done.
At present, the input data for advanced options (like rudders) requires advanced knowledge in ship
hydrodynamics. This knowledge could be documented in external text documents, but could also be
incorporated in pre-processing programs. Such programs may be called (somewhat pretentiously) ‘ex-
pert systems’. Interactive graphical programs for grid generation, check and modification would fur-
ther facilitate analyses using PDSTRIP.
Similarly, the output is at present purely text based. Post-processing programs could automatically
create curves of transfer functions or even visual displays (snap-shots, videos, virtual reality models)
of the ship motions.
It would benefit the whole industry and academic community if such further extensions were likewise shared and published.
References
BERTRAM, V. (2000), Practical Ship Hydrodynamics, Butterworth-Heinemann, Oxford
FINN, P.J.; BECK, R.F.; TROESCH, A.W.; SHIN, Y.S. (2003), Nonlinear Impact Loading in an
Oblique Seaway, J. Offshore Mech. and Arctic Eng. (OMAE), Trans. ASME 125, pp.190-197
HACHMANN, D. (1991), Calculation of pressures on a ship's hull in waves, Ship Techn. Research
38, pp.111-133
KORVIN-KROUKOVSKI, B.V.; JACOBS, W.R. (1957), Pitching and heaving motions of a ship in
regular waves, SNAME Transactions
POST, E. (1983), Real Programmers don’t use Pascal, Datamation 29/7
http://www.pbm.com/~lindahl/real.programmers.html
RAYMOND, E.S.; RAYMOND, C.O. (2002), Licensing HOWTO [draft OSI working paper],
http://www.catb.org/~esr/Licensing-HOWTO.html
ROSEN, L. (2004), Open Source Licensing — Software Freedom and Intellectual Property Law,
Prentice Hall PTR, New Jersey, http://www.rosenlaw.com/oslbook.htm
SÖDING, H. (1969), Eine Modifikation der Streifenmethode, Schiffstechnik 16, pp.15-18
SÖDING, H. (1993), A method for accurate force calculation in potential flows, Ship Techn. Re-
search 40, pp. 176-186
VEELO, B.N. (2005a), The potential of free software for ship design, 4th Int. Conf. Computer and IT
Applications in the Maritime Industries (COMPIT), Hamburg, pp.399-418,
http://www.veelo.net/bastiaan/media/publications/veelo_compit05.pdf
VEELO, B.N. (2005b), Free software as an option for ship design, Ship Techn. Research 52, pp.172-
188, http://www.veelo.net/bastiaan/media/publications/VeeloSTR2005.pdf
Appendix 1: Survey of commercial strip method codes

This section features a short description of linear strip theory codes that we found. None of these can be used freely, and none is available in source code.
• NAPA SHS
The Naval Architectural Package (NAPA) has an add-on subsystem for general seakeeping,
called SHS, that uses the strip method. It is commercially distributed by Napa Ltd,
www.napa.fi.
• NSHIPMO
Finn et al. (2003) examine a time domain “blended” linear-nonlinear strip theory method for
the analysis of bottom and bow flare slamming. Their program, NSHIPMO, does not seem to
be regularly distributed yet.
• POWERSEA
POWERSEA is a time-domain motion simulator for planing hulls. It uses a low aspect ratio
strip theory to calculate the motions of variable deadrise planing boats in waves, in the verti-
cal plane only. It is commercially distributed by Ship Motion Associates,
www.shipmotion.com.
• Seakeeper
Seakeeper is a frequency-domain strip method, Couser (2000). It is commercially distributed
by Formation Design Systems, www.formsys.com, as part of their Maxsurf design and analy-
sis suite.
• SEAPEP / STRIP
SEAPEP is a program for the evaluation of seakeeping performance, developed and marketed
by FORCE Technology, www.force.dk. It can be used with three alternative CFD (computa-
tional fluid dynamics) codes, one of which is the FORCE Technology linear strip theory code
called STRIP.
• SEAWAY for Windows / OCTOPUS Seaway
SEAWAY is a frequency-domain ship motions PC program based on the linear strip theory to
calculate the wave-induced loads, motions, added resistance and internal loads for six degrees
of freedom of displacement ships and yachts, barges, semi-submersibles or catamarans, sail-
ing in regular and irregular waves. The program is suitable for deep water as well as for very
shallow water. Viscous roll damping, bilge keels, anti-roll tanks and linear springs can be
added. SEAWAY was originally developed at Delft University of Technology, and trans-
ferred to the consultancy AMARCON, www.amarcon.com, in 2003. Today it is commercially
distributed as a component in the OCTOPUS product suite, http://www.shipmotions.nl.
• SHIPMO
SHIPMO is a series of motion prediction codes, developed by Defence Research and Devel-
opment Canada – Atlantic (DRDC Atlantic, www.atlantic.drdc-rddc.gc.ca, known as DREA
until 2002). This code can be found in third-party commercial products.
• ShipmoPC
ShipmoPC is a strip theory-based, frequency domain seakeeping code capable of computing
the six degree-of-freedom motions of a monohull with forward speeds in regular as well as ir-
regular seas of arbitrary headings. Motion predictions are valid for Froude numbers ranging
from 0 to 0.4. It is based on the DREA/DRDC SHIPMO code, and commercially distributed
by BMT Fleet Technology Limited, www.fleetech.com.
• SMP95 / VisualSMP
VisualSMP is a suite of tools used in the prediction and analysis of a ship's seakeeping char-
acteristics. Included is SMP95, a frequency domain seakeeping program developed by the US
Navy, based on strip theory. VisualSMP adds a graphical pre- and post-processor, together
with tools to simulate and visualize the motion of the ship in a seaway. VisualSMP is distrib-
uted commercially by Proteus Engineering, www.proteusengineering.com, as part of their
FlagShip system.
Appendix 2: Sample of the theory documentation

This sample section describes the form and depth of the documentation behind the code, allowing ship hydrodynamicists to understand (and ultimately extend or modify) the theory that was transposed into Fortran coding. The theory of a 2-d strip boundary value problem can be found in chapter 7.4 of Bertram (2000). Other sections then give derivations of the boundary conditions referred to in the section repeated here.
The numerical solution follows a “patch method”, Söding (1993), Bertram (2000), which computes
the forces more accurately than a traditional panel method. The patch method approximates the poten-
tial as a superposition of point sources
$$\hat\phi(y,z) \;=\; \sum_{i=1}^{n} q_i\,\tfrac{1}{2}\ln\!\left[(y-y_i)^2 + (z-z_i)^2\right] \qquad (1)$$
where q_i are the source strengths of the n sources at the locations (y_i, z_i). This satisfies the Laplace equation everywhere except at the locations of the sources (y_i, z_i), which are therefore located within the section contour or above the line z = 0.
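A direct transcription of (1) into Fortran 90/95 might look as follows; this is a sketch with invented names, written here for real source strengths, whereas the program works with complex amplitudes:

function section_potential(y, z, q, ys, zs) result(phi)
  ! Superposition of 2-d point sources, Eq.(1):
  ! q(i) source strengths, (ys(i), zs(i)) source locations.
  implicit none
  real, intent(in) :: y, z               ! field point in the section plane
  real, intent(in) :: q(:), ys(:), zs(:)
  real :: phi
  integer :: i
  phi = 0.0
  do i = 1, size(q)
     ! 0.5*log(r**2) = log(r), the two-dimensional source potential
     phi = phi + q(i)*0.5*log((y - ys(i))**2 + (z - zs(i))**2)
  end do
end function section_potential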
The section contour is defined by given offset points. For each contour segment between adjacent off-
set points, one source is generated near to the midpoint between the two offset points, however shifted
from the midpoint to the interior of the section by 1/20 of the segment length. Along the average wa-
ter surface z = 0, grid points are generated automatically. Near the body, their spacing equals 1.5 times the offset point distance on the contour at the waterline. Farther to the sides, the distance in-
creases by a factor of 1.5 from one segment to the next, until a maximum distance of 1/12 of a wave-
length (of the waves generated by the body oscillations) is attained. Source points are again located
above the mid-points of each free-surface segment, here however at a distance of one segment length.
The number of free-surface grid points used is 55 for a symmetrical body of which only one half
needs to be discretized, and 2⋅55 for asymmetrical bodies where the water surface to both sides of the
section must be discretized.
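The spacing rule just described is a simple geometric progression with a cap; a hedged sketch of its generation (invented names, not the PDSTRIP routine):

subroutine free_surface_spacing(ds_waterline, wave_length, n, ds)
  ! Start at 1.5 times the waterline offset spacing, grow by a factor
  ! of 1.5 per segment, and cap at 1/12 of the radiated wavelength.
  implicit none
  real, intent(in)    :: ds_waterline  ! offset-point spacing at the waterline
  real, intent(in)    :: wave_length   ! wavelength of the radiated waves
  integer, intent(in) :: n             ! number of free-surface segments
  real, intent(out)   :: ds(n)         ! resulting segment lengths
  integer :: i
  ds(1) = 1.5*ds_waterline
  do i = 2, n
     ds(i) = min(1.5*ds(i-1), wave_length/12.0)
  end do
end subroutine free_surface_spacing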
Whereas in the panel method the boundary conditions are, usually, satisfied at a ‘collocation point’ in
the middle of each segment, in the patch method the integral of the boundary condition over each
segment has to be used. For the body boundary condition, this is simple: $\int \nabla\hat\phi\cdot\vec n\,\mathrm{d}s$, i.e. the flux in-
duced by a source at S through a segment between points A and B, is equal to the source strength
times the angle ASB divided by 2π. The total flux is the sum of fluxes due to all sources. This method
is used also for the second term in the free-surface condition which - after integration over a segment -
is also the flux through that segment. For the integral over the first term of the approximation
$$\int_A^B \hat\phi\,\mathrm{d}s \;\approx\; \tfrac{1}{2}\left\{\hat\phi(A + 0.316\,[B-A]) + \hat\phi(A + 0.684\,[B-A])\right\}\cdot|B-A| \qquad (2)$$
is used. The constants 0.316 and 0.684 were determined such that the integral is approximated cor-
rectly for a source which is located near to the midpoint of the segment AB, whereas for sources far-
ther off from the segment the errors are small anyway.
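For a scalar integrand, the two-point rule (2) is compact; a sketch with invented names (phi_at standing for any routine evaluating the potential at a point):

function segment_integral(ya, za, yb, zb, phi_at) result(val)
  ! Two-point integration rule of Eq.(2) over the segment AB,
  ! using the abscissae 0.316 and 0.684 discussed in the text.
  implicit none
  real, intent(in) :: ya, za, yb, zb
  interface
     real function phi_at(y, z)
       real, intent(in) :: y, z
     end function phi_at
  end interface
  real :: val, seg_len
  seg_len = sqrt((yb - ya)**2 + (zb - za)**2)
  val = 0.5*( phi_at(ya + 0.316*(yb - ya), za + 0.316*(zb - za)) &
            + phi_at(ya + 0.684*(yb - ya), za + 0.684*(zb - za)) )*seg_len
end function segment_integral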
The bottom condition is satisfied exactly by using mirror images of all sources below the bottom, at the point (y_i, 2H − z_i) for a source at (y_i, z_i). For deep water, the bottom condition is satisfied automati-
cally by the approach; however, for horizontal translation and for body rotation the accuracy is im-
proved by adding another source and specifying the additional condition that the sum of all source
strengths is zero. The location of the additional source is at y = 0 (above the body) at a distance above
the waterline of 1/2 the distance to the farthest free-surface grid point.
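The mirror-image construction is straightforward; a minimal sketch (invented names, following the image point (y_i, 2H − z_i) stated in the text):

subroutine add_bottom_images(n, ys, zs, q, h)
  ! For each of the n original sources at (ys(i), zs(i)) with strength q(i),
  ! append an image source of equal strength at (ys(i), 2H - zs(i)).
  implicit none
  integer, intent(in) :: n
  real, intent(inout) :: ys(2*n), zs(2*n), q(2*n)
  real, intent(in)    :: h             ! water depth
  integer :: i
  do i = 1, n
     ys(n+i) = ys(i)
     zs(n+i) = 2.0*h - zs(i)
     q(n+i)  = q(i)
  end do
end subroutine add_bottom_images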
Also the radiation condition is integrated over a panel between points A (nearer to the body) and B
(farther out). Using the approximation (2) again results in
$$\phi_B - \phi_A \;=\; \mp\frac{ik}{2}\left(\phi(A + 0.316\,[B-A]) + \phi(A + 0.684\,[B-A])\right)(y_B - y_A) \qquad (3)$$
from which follows
$$i(\phi_B - \phi_A) \;-\; \frac{k}{2}\,|y_B - y_A|\left(\phi(A + 0.316\,[B-A]) + \phi(A + 0.684\,[B-A])\right) \;=\; 0 \qquad (4)$$
This condition is applied in the outer range of the free surface, for asymmetrical bodies on both sides.
The details of satisfying the radiation condition are important for the accuracy of the method and for
the necessary length of the discretized part of the free surface. This length, on the other hand, influ-
ences the required computer time. Therefore a number of improvements were made in the treatment
of the radiation condition.
A complex wave number k = k_r + i k_i with negative k_i, when used e.g. in the radiation condition, gener-
ates waves with decreasing amplitude in the direction of wave propagation. A damping region at some
distance from the body will decrease the effects of truncating the infinitely long free surface; thus a
complex k with negative imaginary part will be advantageous in the outer range of the free surface. In
the program this is done implicitly: The imaginary part of k introduces into (4) a term
$$\frac{i k_i}{2}\,|y_B - y_A|\,\phi_{\mathrm{average}} \qquad (5)$$
Applying the y derivative of the radiation condition, i.e. $\phi_{yy} = -k^2\phi$, to the term (5) transforms it to
$$-\,\frac{i k_i}{2k^2}\,|y_B - y_A|\,\phi_{yy,\mathrm{average}} \qquad (6)$$
This term may be interpreted as a Taylor expansion of the first term in (4) for potentials $\phi$ shifted by $\delta y$ from the original points A and B:
$$i\left[\phi(y_B + \delta y) - \phi(y_A + \delta y)\right] \;\approx\; i\left[\phi(y_B) + \delta y\,\phi_y(y_B) - \phi(y_A) - \delta y\,\phi_y(y_A)\right] \qquad (7)$$
$$\approx\; i\left[\phi(y_B) - \phi(y_A)\right] + i\,\delta y\,(y_B - y_A)\,\phi_{yy,\mathrm{average}} \qquad (8)$$
However, in test computations there remained a small influence of the coordinate y_D where the radiation condition and the damping region started: the results oscillated, only slowly fading out for large y_D, with a wavelength of half the length of the waves radiated from the body. Therefore, for each body and frequency two cases are computed, with y_D values differing by 1/4 wavelength of the radiated waves. The results found in both cases are averaged. This removes the oscillations of the results over y_D
and results in high accuracy for a very broad range of frequencies with the selected moderate number
of free-surface panels: 55 on each side of the body, of which 25 (in one case) and 28 (in the other
case) apply the free-surface condition and the rest the radiation condition. The linear equation system
resulting from the boundary conditions is solved for the complex amplitudes of all source strengths.
The flow potential follows then from (1).
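The final solve is a standard dense complex linear system; purely as a generic illustration (not the actual PDSTRIP solver), it could for instance be delegated to the LAPACK routine cgesv:

subroutine solve_source_strengths(n, a, b, info)
  ! Solve A*q = b for the complex source-strength amplitudes:
  ! a holds the discretized boundary conditions, b the right-hand
  ! sides on entry and the source strengths on exit.
  implicit none
  integer, intent(in)    :: n
  complex, intent(inout) :: a(n,n)
  complex, intent(inout) :: b(n)
  integer, intent(out)   :: info
  integer :: ipiv(n)
  external cgesv                 ! LAPACK single-precision complex solver
  call cgesv(n, 1, a, n, ipiv, b, n, info)
end subroutine solve_source_strengths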
Appendix 3: Examples of original and new source code
read(5,*,END=400)H,T,HAUPTR,EXPON,SPUH
if(H.eq.0)stop
WRITE(8,'(/1x,1A80)')TEXT
WRITE(8,'(/
& '' Seegangsdaten: Kennzeichnende Hoehe '',F10.3/
& '' Periode T1 entspr. Schwerpunkt des Spektrums '',F10.3/
& '' Laufrichtung (0 von hinten, 90 Grad von Stb.)'',F10.3/
& '' Exponent in Winkelverteilung (meist 2 bis 4) '',F10.3/
& '' Spitzenueberhoehung (1 bei Pierson-M.-Sp.) '',F10.3)')
& H,T,HAUPTR,EXPON,SPUH
C Hauptrichtung des Seegangs muss zwischen 0 und 180Grad liegen. ABSCHAFFEN
C (Main direction of the seaway must lie between 0 and 180 deg. TO BE REMOVED)
C Vorbereitungsrechnungen fuer Integration
C (Preparatory calculations for the integration)
OMPEAK=(4.65+0.182*SPUH)/T
DOMSP=0.025*OMPEAK
DMUE=0.05
DO 20 IOM=1,NOM
OM(IOM)=SQRT(2*3.14159*G/RLA(IOM))
20 continue
DO 30 IART=1,NART+3*nb
DO IV=1,NV
AMPKEN(IART,IV)=0.
enddo
30 continue
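The corresponding new-version listing is not reproduced here. Purely to illustrate the kind of rewriting described in Section 2 (variable names and layout are our invention and do not match the actual PDSTRIP source, and the German texts have been translated), the seaway-input block above might read in Fortran 90/95 roughly as:

program seaway_input_sketch
  implicit none
  real :: sig_wave_height, period_t1, main_direction
  real :: spreading_exponent, peak_enhancement
  real :: omega_peak, d_omega, d_mu

  read(*,*) sig_wave_height, period_t1, main_direction, &
            spreading_exponent, peak_enhancement
  if (sig_wave_height == 0.0) stop

  write(*,'(a,f10.3)') ' Seaway data: significant wave height         ', sig_wave_height
  write(*,'(a,f10.3)') ' Period T1 (centroid of the spectrum)         ', period_t1
  write(*,'(a,f10.3)') ' Direction (0 from astern, 90 deg from stbd)  ', main_direction
  write(*,'(a,f10.3)') ' Exponent of spreading function (usually 2-4) ', spreading_exponent
  write(*,'(a,f10.3)') ' Peak enhancement (1 for Pierson-Moskowitz)   ', peak_enhancement

  ! preparatory values for the spectral integration
  omega_peak = (4.65 + 0.182*peak_enhancement)/period_t1
  d_omega    = 0.025*omega_peak
  d_mu       = 0.05
end program seaway_input_sketch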
Risk analysis as a base for the alternative method
for safety assessment of ships
Miroslaw Gerigk, Gdansk University of Technology, Gdansk/Poland, [email protected]
Abstract
The paper presents a few selected problems associated with an alternative method for safety
assessment of ships which is based on the performance-oriented risk-based analysis. Within the
method, the design process is combined with the risk analysis. The design process involves the application of a performance-oriented procedure. The risk analysis is based on the Formal Safety
Assessment FSA methodology. The main steps of the method include hazard identification, hazard assessment, scenario development, performance-oriented investigations, risk assessment and risk control. Safety is treated as one of the design objectives. The assessment of safety is based on the risk level. The risk level may be evaluated against the risk acceptance criteria using the risk analysis. The method may be implemented as a design-for-safety method (including safe operation) or a salvage-oriented method.
1. Introduction
The paper presents some information on modelling safety of ships in damaged conditions at the
preliminary design stage by using an alternative performance-oriented risk-based method. The present
regulations related to safety of damaged ships are included in SOLAS Chapter II-1 parts A, B and B-
1. Those regulations are prescriptive in character and are based on the semi-probabilistic and
probabilistic approaches. Applying the requirements of those regulations to certain types of ships, e.g. large passenger vessels, Ro-Ro vessels or car-carriers, may lead to an insufficient level of ship safety or impose unnecessary design restrictions. Instead of prescriptive regulations, IMO has decided to base its process of improving existing rules and making new ones on a safety assessment that satisfies stated objectives. One of these objectives, among the standard design objectives, is a sufficient level of safety. For this purpose IMO has recommended the application of the Formal Safety Assessment methodology published as MSC Circ. 1023, IMO (1997), IMO (2002 (a)).
The current method of assessment of safety of ships in damaged conditions is based on the
harmonized SOLAS Chapter II-1 parts A, B and B-1, IMO (2002 (b)), IMO (2004), IMO (2005). The
proposed alternative method is a kind of performance-oriented risk-based analysis incorporated in the
design process with reduction of risk embedded as a design objective. It should be underlined that this method can easily be adapted for the assessment of the safety of undamaged ships, as it very much depends on the problem (system) definition.
In this paper, the performance-oriented risk-based method of assessing the safety of ships, including the modelling, is only briefly discussed because of the limited space available. Some examples of safety assessment for two container ships using the proposed method are presented in the paper. The detailed
discussion regarding the method and modelling will be published by the Gdansk University of
Technology later this year.
2. Current method for safety assessment

The current method for safety assessment of ships in damaged conditions is based on the regulations included in SOLAS Chapter II-1 parts A, B and B-1. In the current methodology, the measure of safety of a ship in damaged conditions is the attained subdivision index "A". It is treated as the probability of surviving the flooding of any group of compartments.

$$A > R \qquad (1)$$
where A is the attained subdivision index and R the required subdivision index. A is calculated according to the formula
$$A = \sum_i p_i\, s_i \qquad (2)$$
where p_i is the probability of flooding the given group of compartments i, and s_i is the probability of surviving the flooding of that group.
The logical structure of the system for assessing the condition (1) according to the current SOLAS
methodology is presented in Fig. 1.
Fig. 1: Basic logical structure of the system for assessing condition (1) according to the current SOLAS requirements
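Condition (1) with the sum (2) is simple to evaluate once the damage cases have been enumerated; a minimal sketch (p and s assumed precomputed for each group of compartments; not taken from the actual calculation software):

function ship_complies(p, s, r) result(ok)
  ! SOLAS-type check of condition (1):
  ! p(i) - probability of flooding compartment group i,
  ! s(i) - probability of surviving that flooding,
  ! r    - required subdivision index R.
  implicit none
  real, intent(in) :: p(:), s(:), r
  logical :: ok
  real :: a
  a  = sum(p*s)      ! attained subdivision index A, Eq.(2)
  ok = (a > r)       ! condition (1)
end function ship_complies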
Both the indices A and R are calculated according to the well-known formulae accepted by IMO. Let us consider the survivability of an 1100 TEU container ship at the early stage of design.
The main data for the calculations are as follows, Gdynia Shipyard (1999-2005):
A graphical example following from the survivability analysis of this ship is presented in Fig. 2.
Fig. 2: General arrangement of internal spaces, example of the final stage of flooding a given group of compartments, and example of the "p_i" factor calculation for the 1100 TEU container ship, Gdynia Shipyard (1999-2005)
The calculation of the attained subdivision index A involves large-scale numerical computations and is time consuming. The final results of the probabilistic damage stability analysis for the given example are as follows:
From a designer's point of view, the question can arise whether the ship is indeed safe. The briefly presented prescriptive method has been the basis for creating new techniques for solving some design problems. The nature of these techniques is prescriptive as well. This concerns the procedures for optimization of the index A and optimization of the local safety indices. Good examples are presented in previous publications, Gerigk (2005 (a)), Gerigk (2005 (b)), Gerigk (2005 (c)), Gerigk (2005 (d)).
3. Risk-based design

Risk-based design is a formalized design methodology that systematically integrates risk analysis into the design process, with the prevention/reduction of risk embedded as a design objective alongside the standard design objectives, SSRC (2005). This methodology applies a holistic approach that links the
risk prevention/reduction measures to ship performance and cost by using relevant tools to address
ship design and operation. This is a radical shift from the current treatment of safety where safety is a
design constraint included within the rules and regulations. The risk-based design offers freedom to
the designer to choose and identify optimal solutions to meet safety targets. For the risk-based design
safety must be treated as a life cycle issue. The risk-based design in the maritime industry should
follow the well-established path of quantitative risk assessment used in other industries. The term
“risk based design” is also in common use in other industries. The following steps are needed to
identify the optimal design solution: set objectives, identify hazards and scenarios of accident,
371
determine the risk, identify measures and means of preventing and reducing risk; select designs that
meet objectives and select safety features and measures that are cost-effective, approve design
solutions or change the design aspects. This approach is briefly introduced in the logical structure of
the risk-based design system presented in Fig. 3.
Because of the limited space available, the performance-oriented and risk-based approaches applied within
the alternative method will be presented during the conference.
4. Alternative method
The modern approach to ship safety combines elements of the systems approach to safety with the
Formal Safety Assessment (FSA) methodology, IMO (2002 b). The major elements of the FSA
methodology are: hazard identification, risk analysis, risk control options, cost-benefit assessment
and recommendations for decision making.
Combining the above with the modern ship design spiral forms the basis for the performance-oriented,
risk-based formal method for safety assessment of ships considered here. By integrating systematic
risk analysis in the design process, with the prevention/reduction of risk embedded as a design objective
(alongside the standard design objectives), the risk-based design method is proposed as presented in Fig. 3.
The complete structure of the method has been published by Gerigk (2005 a), Gerigk (2005 b),
Gerigk (2005 c), Gerigk (2005 d).
Regarding risk assessment, research is ongoing to further incorporate risk assessment techniques into
the design procedure for the safety assessment of damaged ships. The following classes of methods are
used for risk assessment, ABS (2000): hazard identification methods, frequency assessment methods,
consequence assessment methods and risk evaluation methods. The current set of hazard/risk analysis
methods includes: preliminary hazard analysis (PHA), preliminary risk analysis (PRA),
what-if/checklist analysis, failure modes and effects analysis (FMEA), hazard and operability analysis
(HAZOP), fault tree analysis (FTA), event tree analysis (ETA), relative ranking, coarse risk analysis
(CRA), Pareto analysis, change analysis, common cause failure analysis (CCFA) and human error
analysis (HEA).
The following risk reduction principles and strategies have been adopted for the method, Grabowski
et al. (2000):
1. reducing the probability of an accident;
2. reducing (mitigating) the consequences of an accident.
A method for estimating ship safety in terms of survivability is introduced; it involves solving several
problems of naval architecture, ship hydromechanics and ship safety, and is novel to some extent. In
preparing the method for preliminary design purposes, both a global and a technical approach are used,
Barker et al. (2000). The global approach mainly concerns the development of the methodology: ship
and environment definition, hazard identification and hazard assessment, scenario development, risk
assessment, risk mitigation measures, hazard resolution and risk reduction, and decisions made on ship
safety. The technical approach concerns the logical structure of the design system and computational
model, the design requirements, criteria and constraints, a library of the required analytical and
numerical methods and a library of application methods. There are two approaches to risk management:
the bottom-up approach and the top-down approach. The top-down risk management methodology has
been applied here, as it is suitable for design for safety at the preliminary design stage. This approach
should work in an environment of performance-based standards and help design ships against the
hazards they will encounter during their operational life.
The key issue when using the proposed method is to model the risk contribution tree. The risk
associated with the different hazards and scenario developments is estimated according to the formula:

R = P × C (6)

where:
P - probability of occurrence of a given hazard;
C - consequences following the occurrence of the given hazard and scenario development, in terms of
fatalities, injuries, property losses and damage to the environment.
[Flowchart elements of the risk-based design system (Figs. 3-5): safety objectives; design requirements, criteria and constraints; risk acceptance criteria; cost/benefit constraints; results: evaluated risk levels]
For the complex safety assessment of ships in damaged conditions, the risk assessment model
presented in Fig. 5 has been adopted.
A good example of risk and safety assessment according to the proposed method is the design
analysis conducted for the container ship presented in Table I.
Table I: Basic data of the container ship used for the example risk assessment

1. Length between perpendiculars LBP 163.00 [m]
2. Subdivision length LS 174.95 [m]
3. Breadth B 26.50 [m]
4. Design draught df 9.00 [m]
5. Deadweight PN 22286.00 [DWT]
6. Service speed Vs 20.40 [kn]
7. Range R 12000.00 [Nm]
In Fig. 6 the distribution of the probability of occurrence Pi of a given hazard and scenario
development has been simulated using the Monte Carlo method. The spread of these values about the
curve of Pi values calculated using the IMO-based formulae shows the possible influence of
uncertainties.
The different scenarios are associated with the flooding of the given groups of compartments presented
in Fig. 7. The Ri, Pi and Ci values have been obtained using the Monte Carlo method, where the
influence of different heeling moments (water on deck, wind, cargo/passenger shift, etc.) was taken
into account.
Scenario  Flooded compartment(s)
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 1+2
11 2+3
12 3+4
13 4+5
14 5+6
15 6+7
16 7+8
17 8+9
18 1+2+3
19 2+3+4
20 3+4+5
21 4+5+6
22 5+6+7
23 6+7+8
24 7+8+9
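The table follows a simple adjacency pattern, so such damage cases can be enumerated programmatically. A minimal sketch, assuming nine watertight compartments as in the table:

```python
# Enumerate damage scenarios as runs of 1-3 adjacent compartments,
# reproducing the 24 cases in the table (compartments numbered 1-9).

N_COMPARTMENTS = 9
scenarios = []
for run_length in (1, 2, 3):
    for start in range(1, N_COMPARTMENTS - run_length + 2):
        scenarios.append(tuple(range(start, start + run_length)))

for i, group in enumerate(scenarios, start=1):
    print(i, "+".join(str(c) for c in group))
```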
The risk distribution Ri = Pi × Ci in terms of surviving the collision (not losing the ship) is
presented in Fig. 8.

Fig. 8: Risk distribution over the damage scenarios (risk value [-] plotted against scenario number)
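The Monte Carlo estimation of Ri = Pi × Ci described above can be sketched as follows. All distributions and base values are illustrative assumptions; the paper's actual input data are not reproduced here.

```python
import random

# Minimal Monte Carlo sketch of R_i = P_i * C_i per damage scenario.
# Distributions and parameters are illustrative assumptions only.

random.seed(1)
N_SAMPLES = 10_000

def sample_heeling_moment():
    # Combined effect of water on deck, wind, cargo/passenger shift etc.
    return random.gauss(mu=1.0, sigma=0.2)

def simulate_scenario(base_p, base_c):
    """Return the mean risk R = P * C under uncertain heeling moments."""
    total = 0.0
    for _ in range(N_SAMPLES):
        m = max(sample_heeling_moment(), 0.0)
        p = min(base_p * m, 1.0)   # occurrence probability, perturbed
        c = base_c * m             # consequences scale with the heel
        total += p * c
    return total / N_SAMPLES

# Illustrative base values for three of the scenarios:
for name, p, c in [("1", 0.02, 0.5), ("1+2", 0.01, 0.8), ("1+2+3", 0.005, 1.0)]:
    print(f"scenario {name}: R ≈ {simulate_scenario(p, c):.4f}")
```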
5. Challenges
Currently, a few problems are under consideration regarding the safety of ships in damaged
conditions that are associated with the existing prescriptive method included in SOLAS Chapter
II-1 parts A, B and B-1. The first problem concerns how to obtain the same required level of safety for
different types of ships. The second regards updating the statistical data for the pᵢ factor estimation.
The next problem, which can probably not be solved using the prescriptive approach, is the
calculation of the sᵢ factor according to a purely probabilistic concept. The new formula for the sᵢ factor
should include components reflecting the fact that there are several stages during the flooding
process, IMO (2002 b), IMO (2004), Dudziak et al. (2001), Santos et al. (2001), Santos et al. (2002),
STAB (2003): creation of damage (stage 1), transient heel and intermediate flooding (stage 2),
progressive flooding (stage 3) and the final stage (stage 4). During these stages, internal
and external impacts may appear: wind heeling moment, action of waves,
ballast/cargo shift, crowding of people, launching of life-saving appliances, etc.
6. Summary
An alternative performance-oriented risk-based method for the assessment of safety of damaged ships
has been briefly presented in this paper; details are omitted because of the limited space available. The
current work on the method concerns integrating the performance-oriented and risk-based analyses
into the system outlined in Fig. 3. The method is novel to some extent and is being published by the
Gdansk University of Technology.
The method uses a performance-oriented risk-based approach. Elements of the Safety Case and
Formal Safety Assessment methodologies are incorporated within it: hazard identification, scenario
development, ship hydromechanics analysis, risk estimation and risk control options are combined.
In this respect, the method is a risk-based design method, as it integrates systematic risk analysis in
the design process with the reduction of risk embedded as a design objective.
Acknowledgements
The author would like to express his sincere gratitude to the Ministry of Science and Education for
supporting the investigation of novel solutions for the assessment of ship safety, and to the Chair of
Naval Architecture, Faculty of Ocean Engineering and Ship Technology, Gdansk University of
Technology, for the scientific and research support.
References
ABS (2000); Guidance Notes on Risk Assessment Applications for the Marine and Offshore Oil and Gas Industries. American Bureau of Shipping, New York, June 2000.
BARKER C.F., ARM P.E., CAMPBELL C.B. (2000); Risk management in total system ship design. Naval Engineers Journal, July 2000.
DUDZIAK J., GRZYBOWSKI P. (2001); Computer simulations of ship behaviour in waves according to the research activities of the Ship Research and Shipbuilding Centre in Gdansk. Proceedings of the 1st Summer School "Safety at Sea", Chair of Ship Hydromechanics, Faculty of Ocean Engineering and Ship Technology, Gdansk University of Technology, Gdansk, 28-29 August 2001.
Gdynia Shipyard (1999-2005); Design data – container ship of 1100 TEU. Gdynia, 1999-2005.
GERIGK M. (2005 a); Formalna metoda oceny bezpieczeństwa statków w stanie uszkodzonym na podstawie analizy przyczyn i skutków wypadków (A formal method for the safety assessment of ships in damaged condition based on the analysis of the causes and consequences of accidents). XXXIII Zimowa Szkoła Niezawodności (33rd Winter School of Reliability), Sekcja Podstaw Eksploatacji Komitetu Budowy Maszyn Polskiej Akademii Nauk, Szczyrk 2005, ISBN 83-7204-421-X.
GERIGK M. (2005 b); Safety assessment of ships in critical conditions using a knowledge-based system for design and a neural network system. 4th International Conference on Computer and IT Applications in the Maritime Industries COMPIT'2005, Hamburg, 8-11 May 2005, ISBN 3-00-014981-3.
GERIGK M. (2005 c); A performance-oriented risk-based method for safety assessment of ships. 16th International Conference on Hydrodynamics in Ship Design, 3rd International Symposium on Ship Manoeuvring, Technical University of Gdańsk, Foundation for Safety of Navigation and Environment Protection, Gdańsk-Ostróda, 7-10 September 2005, ISBN 83-922935-0-9.
GERIGK M. (2005 d); Challenges of modern assessment of safety of ships in critical conditions. Proceedings of the 12th International Congress of the International Maritime Association of the Mediterranean IMAM 2005, Vol. 2, Lisboa, Portugal, 26-30 September 2005, Taylor & Francis / Balkema, London / Leiden / New York / Philadelphia / Singapore, ISBN (set): 0 415 39036 2.
GRABOWSKI M., MERRICK J.R.W., HARRALD J.R., MAZZUCHI T.A., DORP J.R. (2000); Risk modelling in distributed, large-scale systems. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, Vol. 30, No. 6, November 2000.
IMO (1997); Interim Guidelines for the Application of Formal Safety Assessment (FSA) to the IMO Rule-Making Process. MSC/Circ.829, MEPC/Circ.335, London, 17 November 1997.
IMO (2002 a); Guidelines for Formal Safety Assessment (FSA) for Use in the IMO Rule-Making Process. MSC/Circ.1023, MEPC/Circ.392, London, 5 April 2002.
IMO (2002 b); Development of Revised SOLAS Chapter II-1 Parts A, B and B-1. Investigations and proposed formulations for the factor "s": the probability of survival after flooding. SLF 45/3/3, London, 2002.
IMO (2004); Development of Revised SOLAS Chapter II-1 Parts A, B and B-1. Report of the SDS Working Group. SLF 47/WP.6/Add.1, London, 16 September 2004.
IMO (2005 a); Report of the Maritime Safety Committee on its Eightieth Session. MSC 80/24/Add.1, London, 2005.
IMO (2005 b); IMO web site: http://www.imo.org.
SANTOS T.A., SOARES C.G. (2001); Ro-Ro ship damage stability calculations using the pressure integration technique. International Shipbuilding Progress 48, No. 2, pp. 169-188.
SANTOS T.A., SOARES C.G. (2002); Probabilistic survivability assessment of damaged passenger Ro-Ro ships using Monte-Carlo simulation. International Shipbuilding Progress 49, No. 4, pp. 275-300.
STAB (2003); Proceedings of the 8th International Conference on the Stability of Ships and Ocean Vehicles (STAB 2003), Madrid, September 2003.
SSRC (2005); SSRC web site: http://www.ssrc.na-me.ac.uk.
ISO 17894 - Marine Programmable Electronic Systems and an alternative
approach to complying with Lloyd's Register Classification Requirements
Duncan Gould, Lloyd's Register, London/UK, [email protected]
Abstract
1. Introduction
1.1 Background
Following the paper Overview of the Assessment Process for Software within the Marine Sector
(Gould 2005 (1)), submitted and presented to COMPIT 2005, which covered the developing
understanding of the risks presented by shipboard software, Lloyd's Register has prepared this paper
outlining significant developments undertaken and foreseen in this area.
Classification societies assess Programmable Electronic Systems (PES) because the extent of
exploitation of software-based systems on board has made the safe and effective operation of
virtually all ships software-dependent to varying degrees. Fundamental ship operational
functions, including navigation, propulsion, manoeuvring, machinery automation, power management,
stability, emergency management and environmental performance, amongst others, are implemented
through PES on a wide variety of ships.
Furthermore, the general trend has been towards increased application of Commercial-Off-The-Shelf
(COTS) PES adapted for marine use, and the integration of systems from a wide range of suppliers,
for technical and cost reasons. On the technical side, PES have provided the ability to realise greater
system flexibility, the possibility of automating functions that previously would have required
operator involvement and the ability to realise previously unachievable functionality. On the cost side,
savings may be realised through reduced initial COTS costs and, potentially, reduced through-life
manning costs through the automation of functions.
However, the inherent increase in system complexity presented to stakeholders by PES is
compounded by the integration of functionality from different suppliers' systems as a result of current
procurement practices. Many shipyards and owners retain limited PES development capabilities,
with these responsibilities commonly devolved to system suppliers or, even further, through
requirements specifications at the contract stage, to PES and software sub-suppliers who may not be
familiar with the ship environment.
The potential benefits of increasingly complex integrated PES must be balanced against the additional
risks they present in the form of misunderstood system and integration requirements, the extent to
which software is relied upon to provide any particular service and potentially significant risks to
safety, as shown in the vignettes in the Appendix and the two following examples:
A passenger ship used a common generation pool of conventional diesel generators and common rail
diesel generators to supply the ship's electrical system and the electric propulsion motors. The
electrical system, managed by the automation system, was integrated with the propulsion remote
control system. To improve the ship's emissions during manoeuvring, when engine loads change
rapidly, a low emission mode was developed and implemented in the remote control system software
that requested the automation system to keep the conventional engine load at a fixed level whilst the
common rail engines adjusted to the load changes during manoeuvres.
However, with the propulsion system being the largest consumer, the existing conventional propulsion
limitation safeguard was also arranged in the automation system software to prevent reverse power
of the generating sets when regenerating propulsion power during braking, by fixing a limitation on the
propulsion torque. During manoeuvring in the low emission mode, the power limitation safeguard
was initiated because of the low load on the common rail engines, causing the propulsion motor to
remain stopped and not respond when the navigating officer requested increased power. Only
exceptional reaction by the ship's officers and crew prevented grounding and averted possible disaster.
The impact of introducing new functionality had not been considered sufficiently across the whole
system to identify this unintended consequence and prevent this failure.
The software in the integrated machinery automation system onboard a passenger ferry underwent
minor modifications by the system supplier at the request of the owner. During this modification,
unrelated parts of the software code were inadvertently deleted. The vessel re-entered service
following the modification and continued in normal operation until it suffered a black-out due to a
machinery failure. The expected automatic starting of standby machinery and re-starting of essential
machinery, implemented through the automation system, failed, and the ship's engineering officers
had to revert to manual control, causing delays in restoring the machinery to normal operational
conditions. On investigation by the manufacturer, it was discovered that the power restoration failure
was caused by the latent error. It subsequently emerged that the manufacturer had encountered
difficulties in determining the precise software configuration of the system prior to the modification
and the corresponding documentation.
Boxes 1 & 2: Example vignettes emphasising the emergent risks presented by the application of increasingly complex integrated PES.
To address the complexity presented by system integration, requirements for all classed integrated PES
were introduced to the Rules in 2003. Crucially, these requirements recognise the need for the
management of system integration to ensure systems will be safe and effective, and introduce
responsibilities for integration management, communication between stakeholders and consideration
of the effect of failures on the total ship system, moving beyond the traditional 'single failure' approach.
During a ship's operational life, it is subject to periodic surveys by the classification society to ensure
continuing conformance with Rule requirements. When modifications to the approved arrangements,
including software modifications, are proposed, and following incidents where PES may have been a
contributory factor to failures, the Rules require an assessment that incorporates consideration of the
approved software quality plan.
2. ATOMOS
The European Union (EU) funded the ATOMOS project (Advanced Technology to Optimize
Maritime Operational Safety) as an instrument to aid the implementation of the EU's Common
Transport Policy strategy of moving transport from road to sea, given the escalating congestion on the
railways and roads of Europe. The project addressed the improvements required in inter-modal service
integration, service frequency, user-friendliness, transparency, dependability, safety, efficiency and
competitiveness to increase the attractiveness of shipping. The project consortium partners included
universities, marine industry manufacturers, human factors professionals and regulators; Lloyd's
Register took a leading role in ATOMOS.
One development emerging from this project was a set of general principles for the development,
maintenance and use of PES in marine applications at all stages of the product life-cycle. These
principles provide a holistic methodology for all stakeholders to ensure that all elements of a marine
PES combine to provide a satisfactory system in the context of use. The principles are shown in
Table 1.
Table 1: General principles for the development and use of programmable electronic systems in
marine applications. (ISO 17894)
3. ISO 17894:2005
ISO 17894:2005 contains information for all parties involved in the specification, operation,
maintenance and assessment of such systems. The principles and guidance in the document are largely
based on requirements in national and international standards. The standard enlarges on the
ATOMOS-developed principles with recommended criteria and guidance for the development and use
of dependable PES for shipboard use. It does not contain normative references but draws on a number
of other standards.
Conformance to the principles clearly requires stakeholder involvement and recognition of human
factors throughout the system life-cycle and, when correctly applied, addresses the system risks
emerging from the levels of integration currently encountered in the marine industry.
Context of use
‘The users, goals, tasks, equipment (hardware, software and materials), and the physical and social
environments in which a product is used.’
The general principles recognise that PES operate within a broader total system including the
equipment under control, external systems, users, user tasks and the physical and operating
environment. Furthermore, it is these components as a whole that achieve the intended business goals
of the ship operator. The principles require PES to be designed as an "integrated whole", with
coordination between stakeholders, in order to achieve goals effectively and safely.
To be cost-effective and safe, PES development must consider its effect on the other elements of the
total system. ISO 17894:2005 presents a convenient method of summarising the elements of the total
system by using a 'context of use' statement for the PES, represented in Fig. 1 below. This framework
uses the context of use in risk assessment, design and testing to ensure a total system approach which
considers the users and their effect.
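As an illustration only (ISO 17894 prescribes the content of a context-of-use statement, not any particular data format), such a statement could be captured as a simple structured record. All field names and values below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical structured record for an ISO 17894-style "context of use"
# statement; the fields mirror the quoted definition above but their names
# and the example values are illustrative, not taken from the standard.

@dataclass
class ContextOfUse:
    users: list[str]              # e.g. navigating officers, engineers
    goals: list[str]              # business/operational goals served
    tasks: list[str]              # user tasks involving the PES
    equipment: list[str]          # hardware, software and materials
    physical_environment: str     # e.g. bridge, machinery space
    social_environment: str       # e.g. watchkeeping arrangements

propulsion_control = ContextOfUse(
    users=["navigating officer", "duty engineer"],
    goals=["safe manoeuvring", "low-emission operation"],
    tasks=["set demanded power", "respond to alarms"],
    equipment=["remote control system", "machinery automation system"],
    physical_environment="bridge and engine control room",
    social_environment="watchkeeping under normal manning",
)
```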
Within a specific context of use it is necessary to assess the level of risk associated with each
particular PES. This allows an appropriate development life-cycle to be defined and followed in order
to provide assurance that the PES will behave dependably; the level of treatment required in the ISO
17894 framework is commensurate with the PES risk. Typical examples of higher risk systems include
PES associated with propulsion, steering, navigation or communications, while lower risk PES may
include passenger information systems, the actual risk posed by a particular PES being dependent
upon its specific context of use and the hazards involved.
Fig. 1: Components of the total system and related standards, taken from ISO 17894:2005. (ISO 17894)
The assessment framework draws on systems engineering concepts and processes for the
management of risk in the definition of complex systems through decomposition into defined and
realisable components with specified interfaces implemented in hardware, software and people, i.e.
concepts and processes for the design and operation of a total system.
Applying a systems approach provides the benefits of good structured design, allocation of function,
definition of interfaces and needs for cohesion between system elements to give the most effective,
efficient and safe overall system and design process (or minimum risk) throughout the life of the PES.
Typically, the ISO 17894:2005 route to compliance will involve Lloyd's Register working in a
collaborative framework with the stakeholders to ensure the principles defined in the standard have
been fulfilled, and services will be available to:
• owners, in their required documentation of the context of use, the allocation of functions to PES and
the definition of user requirements;
• design authorities, required to provide documented evidence demonstrating compliance with the
design intent;
• shipyards, or the appointed system integrator, in their required application of the principles to their
own, and their suppliers', work; and
• COTS suppliers, or suppliers of relatively standard PES, in defining a generic context of use, as
permitted by Annex A of ISO 17894:2005, that is required to be reassessed for the specific ship or
application on board.
Note that these services would be provided on the basis of verifying conformance with the required
criteria provided to support the general principles in Clause 7 of ISO 17894:2005, and that all of the
principles are applicable to all PES through life, as clarified in Annex B.
However, at present, classification society involvement normally commences relatively late in the
system life-cycle, at the plan approval stage described in 1.4 above, at the end of the planning phase,
as indicated in Fig. 2 below. Any reworking required to comply with the Rules can incur significant
additional expense and time, and this situation is exacerbated the further the delivery phase progresses
and classification surveys are undertaken. Clearly, the same principles apply to the work required to
demonstrate conformance with ISO 17894:2005. For this reason, these services will be available to
stakeholders from the concept stage onwards, to help prevent unnecessary costs and delays and to
provide the best possible system for the envisaged context of use.
Fig. 2: Version of the "Vee" model of the system life-cycle, showing that interaction with the Classification Society only begins at the end of the planning phase. (1), adapted from (ISO 17894).
As stated, providing the ISO 17894:2005 alternative route as part of the classification process will
better identify the risks and ultimately lead to the installation of a system fit for its intended purpose
when the life-cycle, principles-based systems approach is correctly applied; identifying and mitigating
risks as they occur, before they become inherent in a design, requires application of the principles
from the outset.
Furthermore, stakeholders may achieve maximum benefit by working in a collaborative framework
during the development and use of marine PES, with a focus on verifying conformance to the ISO
17894:2005 principles and supporting criteria outside of the regulatory process. As a result, Lloyd's
Register now offers a consultancy service outside of classification, called Dependable Systems Review
(DSR), which aims to provide through-life assistance for clients.
The DSR service is arranged in a modular format following the life-cycle model described in ISO
17894:2005 and shown in Fig. 2, and broadly follows the process shown in Fig. 3. As indicated by the
explanatory intentions, examples and benefits on the right of that figure, the DSR service is offered in
partnership, working with ship owners, yards and suppliers to help achieve dependability of onboard
computer-based systems by implementing a total systems approach, i.e. people and technology
working together to carry out tasks effectively. DSR provides essential advice to support the key
phases of system development, with specialist technical advice throughout the system concept,
specification and design stages, helping to get it right first time. Dependable Systems Review helps
clients acquire and utilise dependable software-intensive systems, thereby lowering the through-life
business risk to all stakeholders.
[Fig. 3 shows the DSR process steps (Establish Context of Use, Identify Issues, Assess Risks) with explanatory intentions, examples and benefits: supporting the key phases of system development through the concept, specification and design phases; example issues including ambiguity of specification, context of use and stakeholder responsibility, unclear system integration and life-cycle activities definition, and PES dependability; and client support during the concept, specification and design phases, providing impartial, expert advice on demand to help resolve errors, omissions and ambiguities in client project deliverables and processes, with scope to suit clients.]
Fig. 3: Simplified model showing intention of the Lloyd’s Register Dependable Systems Review
service process.
4. Conclusions
The increasing use of PES in essential and safety-related applications on board ships, and the
complexity caused by the integration of technologies, has given rise to new risks: that systems will be
challenging to integrate; that they may fail due to latent errors, incompatibility or operator error; that
the system cannot be used effectively; that safety cannot be objectively verified; and, ultimately, that
systems will not satisfy customer business needs and cannot be supported through life. Lack of
awareness, or avoidance, of these issues is not acceptable in the face of these risks, and there is a
perceived need to continue the progression towards a total systems approach enabling people and
technology to work together to carry out tasks effectively.
In the face of these emergent issues, and after significant work in the development of the
marine-industry-specific International Standard ISO 17894:2005, Lloyd's Register will this year
introduce compliance with this standard as an acceptable alternative means of demonstrating PES
suitability for classification purposes, including a commitment to be available earlier in the
development phase of shipboard PES to ensure that the maximum benefit from this risk-based systems
approach is realised as effectively and efficiently as possible.
Furthermore, Lloyd’s Register will continue to provide and develop the Dependable Systems Review
product to help facilitate a holistic application of ISO 17894:2005 and systems engineering principles
to the development and use of shipboard PES.
Acknowledgements
This paper was completed in conjunction with ongoing work at Lloyd's Register. The views expressed
are those of the author and do not necessarily represent the policy of Lloyd's Register.
References
1. GOULD, D.A. (2005); Overview of the Assessment Process for Software within the Marine
Sector, COMPIT 2005 Conference Paper, http://www.ssi.tu-harburg.de/compit/papers.html
2. Rules and Regulations for the Classification of Ships July 2005, Lloyd’s Register, www.lr.org
3. Software Conformity Assessment System, Procedure SC94, Lloyd’s Register, www.lr.org
4. IEC 60092-504:2001, Electrical installations in ships - Part 504: Special features - Control and
instrumentation, www.iec.ch
5. [ISO 17894] (2005), Ships and marine technology – Computer applications – General principles
for the development and use of programmable electronic systems in marine applications,
www.iso.org
6. MESSER, A.C. and TWOMEY, B.J. (1997); Software Based Systems in the Marine Environment,
Lloyd’s Register Technical Association Paper 5
Appendix - Example vignettes (Messer and Twomey, 1997)
Propulsion Control
The ship was fitted with an automatic speed pilot at the owner’s request. Although not a classification item, the
speed pilot was connected to the propulsion control system. During trials, it was discovered that the speed pilot
was automatically disconnected by the propulsion system when the propulsion control lever reached a certain
position, without any audible or visual alarm to notify the navigating officer that the speed pilot was no longer
in control.
Upon investigation it was discovered that operating parameters of the speed pilot were incompatible with the
selected propulsion system operating mode. This could have resulted in the ship continuing on a dangerous
course without informing responsible personnel.
Parametric Cost Assessment of Concept Stage Designs
Marcus Bole, Graphics Research Corporation Ltd, Gosport, UK, [email protected]
Abstract
Deriving the cost of a vessel in the early design stages can be difficult. The design itself may only be
represented in a conceptual form, providing little concrete data from which a cost can be generated.
Aspects of the design may not yet be determined, leading to a great deal of uncertainty.
Consequently, there is little incentive to look into the costs of a design in anything more than an
indicative manner. Early evaluation of cost can be based on weight or space, depending on the type of
vessel, as the quantity of materials or level of outfitting can be determined with a greater level of
certainty.
Paramarine's early stage design environment is based on the Building Blocks methodology developed
by UCL. Combined with a parametrically defined structural definition, the complete design can be
deconstructed into materials, equipment and construction activities, allowing producibility to be
evaluated before reaching the initial design stage. In both areas of the software, searchable design
data is associated with semantic information (space, weight, type etc.) which can be audited to identify
items for cost evaluation. The time to perform a cost evaluation is reduced, as is the potential for
mistakes. However, the designer is still left having to assign cost values, a potentially laborious
process.
This paper discusses the challenge of automating the process of cost assignment by using the
semantic information associated with each item to determine how it is produced. By defining
"production processes", cost can be assigned by evaluating how much material and resource each
item requires. A (microscopic) cost evaluation can now be performed as early as concept design,
more easily than with many existing subjective rule-based approaches. Furthermore, the costing
model is based on concrete data which may be determined directly from production activities.
1. Introduction
The cost of a new vessel is often pushed to the fringes of development by those designing the
technical and engineering aspects of the product. However, it should not be forgotten that, as well as
producing a design which is balanced in terms of its engineering characteristics, the primary objective
is to make money for the (shipyard or design consultant) business and to deliver a product that, if
possible, represents value for money to the owner, leaving the door open for repeat orders in the
future.
2. Background
Capturing construction cost during the design of a new vessel is one of the most difficult parts of the
design process. The factors that cost depends on are always changing and only once the production
design is finalised is it possible to make a direct evaluation. However, the pressure to deliver a new
vessel on time and on budget means that construction must begin before detailed aspects of the design
are finalised.
Construction cost must be tracked during the design process to ensure that the project remains viable
for both yard and customer, particularly as late changes introduced into the design can have
considerable cost impacts. It is easy to imagine cost being established by accumulating the value of
the parts and labour required to construct the vessel. However, in the earlier stages of design there is
rarely enough resolution in the physical components of the vessel product model to establish cost to a
satisfactory level of certainty. Consequently, alternative approaches are used that establish cost by
comparing critical factors of a new design with those of previously delivered vessels.
The role of the cost engineer in the design process is to provide models which are capable of
establishing a cost value from the data available at the different design stages. Cost estimation is often
regarded as a mysterious art, as it is somewhat more of a statistical discipline than the other
engineering activities. Establishing a cost estimate at any stage of the design requires a high degree
of appreciation of the processes which occur in both the design and construction processes. Detailed
costing may require knowledge of how long certain construction processes take, for example joining
a stiffener to plate, taking into account size, material and welding technique, while costing for a
concept design will require, for example, knowledge of how the utilisation of different spaces of the
vessel impacts on cost. The cost engineer requires both a good database of historic information on
previous ships and good contacts with industrial partners to forecast how technical and financial
changes may impact on construction costs. Once this information is established, the cost engineer
uses expertise to identify the cost estimation models which correlate well with both the type of vessel
and the capabilities of the shipyard, and experience to enhance confidence in the result predicted by
the model. The importance of good cost estimation cannot be overstated, as it will be one of the main
factors upon which a customer bases the decision to move forward from design to construction.
Consequently, the competitiveness of a shipyard may be encapsulated in the cost engineer's database,
a good reason to limit the number of people who have access to the data; this factor may contribute
to the cost engineer's elusiveness.
This situation makes it difficult for naval architects and design engineers to have a full appreciation
of the effect of their decisions. The relationship between design and costing engineers is not always
close, because solutions with technical merit are not always the most cost-effective. However, it is not
necessary for the design engineers to have a complete knowledge of costing techniques: a basic
appreciation of the factors involved, and advice from the costing engineers, should be enough to
guide any decision process in the early stages of the design.
As ship design tools become more comprehensive, supporting different engineering disciplines
becomes ever more feasible. In the case of cost estimation, cost engineers are often provided with
hard-copy information from which data must be manually extracted to populate the costing model,
meaning that this process must be repeated for every design update to establish the current
construction costs. By integrating cost modelling techniques into ship design tools, it is no longer
necessary to manually translate data, and the process of cost estimation can become more automated.
The integrated ship design tool Paramarine, produced by Graphics Research Corporation Ltd (GRC),
is one of the few to provide an advanced ship and submarine early stage (concept) design module in
addition to the common range of design and analysis features associated with most ship design tools.
During the development of the early stage design module, many existing users of the Paramarine
system expressed interest in being able to produce an estimate of cost from the design data entered
into the tool. In 2004, GRC embarked on a research project, ITMC41, funded by the Shipbuilders and
Shiprepairers Association (SSA), to develop a construction evaluation module in partnership with
the Tribon system. The Design for Production (DfP) module remained an internal development until
2005, when funding assistance was obtained from the Price Forecasting Group (PFG) to complete the
module. As part of this development, two cost estimation techniques were developed to address
different resolutions in the level of detail of the vessel product model.
The characteristics of cost are very similar to those of weight. Early in the design process there is a
very low degree of confidence in the values of both cost and weight, as the resolution of the design is
too coarse. As weight embodies the amount of physical material in a vessel, it is often correlated with
costing information to produce an estimate. Other factors, such as main particulars and space
utilisation, can also be used as drivers of costing models. As mentioned in 2, the role of the cost
engineer is to identify parameters that drive the costing of different aspects of a vessel and use
previous experience to construct a costing model. Consequently, any design parameters or ship
characteristics which correlate well with cost may be used to drive a cost estimation model.
Cost estimation models for naval vessels are often more comprehensive than those used for
commercial ships in general, Ennis (1998). As well as business considerations, there are also political
and socio-economic factors behind the production of these types of vessels. Furthermore, the
complexity of their systems means that naval vessels may be three to five times more expensive than
commercial vessels of a similar size. Consequently, cost estimation models for naval vessels are
developed to capture many different factors and to be flexible to different approaches and data
resolutions, with the overall aim of having greater confidence in the results for the information
available.
Ross (2004) discusses several approaches which may be used to estimate the construction costs of a
vessel. The techniques may be classified into two groups, which naval cost estimation practice
describes as the Ship Work Breakdown Structure (SWBS) and the Product-Orientated Work
Breakdown Structure (PWBS).
4. Paramarine Concept Design Environment
Paramarine is an integrated ship and submarine design environment developed by embracing the full
capabilities of modern object-orientated software development. The tool features an object-orientated
design framework which allows the parametric connection of all aspects of both the product model
and the analysis elements. The system supports the analysis disciplines common to most ship design
tools, such as stability, powering and structural analysis, which, when combined with parametric
connectivity, allow designers to build up complex designs using all of the features of the solid
modelling kernel provided by the industry-standard Parasolid tool set. In addition, Paramarine
features several unique modules specifically orientated towards the development of concept designs,
where the role of the vessel may require the designer to explore innovative solutions.
As mentioned in 2, users had requested that Paramarine should be able to address the costing aspects
of concept design. The tool already provides the user with the ability to define their own functions
and calculations, so techniques based on simple parametric expressions can be included without any
additional development. Moreover, the Paramarine Early Stage Ship Design module already models
many design characteristics of a vessel, such as weight and buoyancy, and cost could be incorporated
in a similar fashion. However, it was also felt that the tool could support a much more advanced
approach to cost estimation, taking into account materials, equipment, systems and production effort.
As a result, the Design for Production module was developed as a way of analysing production with
respect to construction cost.
Fig. 1: Detailed Building Block model of a frigate (geometry). Fig. 2: Building Block details of the hull and propulsion.
The analysis process is mostly automated; the user only has to provide the definition of the build
strategy, Fig. 3, and resolve the intersections between plate subassemblies (i.e. which structure in an
intersection between a bulkhead and a deck should be continuous), Fig. 4.
Production parameters, Fig. 5, are then defined to specify how "continuous" materials such as plating,
stiffeners and service lines (pipe and cabling) may be supplied to the construction site. Coatings such
as paint, insulation etc. can be assigned to parts of the design and included in the production
calculations. For equipment and service line design definition, the analysis references the design
information from the early stage design module.
Fig. 5: Production parameters defining the size of actual plate, stiffener and service line parts. Fig. 6: Hierarchy of an individual production block.
Once the user has set up all the basic production information, a single calculation module is used to
analyse the entire definition. This is a very large calculation and can take around an hour for a very
detailed definition. The analysis module takes the vessel definition through a process very similar to
ship construction. For each sub-assembly, it combines the plating and stiffener definitions and then
subdivides them on the basis of the production parameters. The junctions between individual plates
and stiffeners are then identified, defining all the locations where there is a need for welding or
cut-outs. Service lines are analysed in a similar manner, taking into account whether sections such as
bends will be constructed in the yard or brought in as piece parts.
The results of the analysis are presented in a hierarchy based on the build strategy, Fig. 6. At the
lowest level, the sub-assembly brings together the individual plates, stiffeners and junctions of a panel
that will go to make up a production block. A production block captures all the sub-assemblies,
service lines and equipment associated with that part of the build strategy, together with all of the
junction information required to join the sub-assembly parts. Production blocks higher up in the
hierarchy capture the junctions between the blocks beneath them. The benefit of this approach is that
it is possible to capture differences in the manufacturing process, and hence cost, which may depend
on where the block is located within the build strategy, geographically within a shipyard, or across
multiple yards.
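As a minimal sketch of how cost can roll up through such a build-strategy hierarchy, each block can own the joining work between its children, so that location-dependent rates are applied per block. The class structure and values below are illustrative and do not represent Paramarine's internal design:

```python
# Illustrative roll-up of cost through a build-strategy hierarchy: each
# production block owns the junction work joining its children, so rates
# that depend on where the work is done can be applied per block.
# Names and values are hypothetical.

class ProductionBlock:
    def __init__(self, name, own_cost, children=()):
        self.name = name
        self.own_cost = own_cost      # sub-assemblies plus joining work at this level
        self.children = list(children)

    def total_cost(self):
        # Recursive roll-up: this block's work plus everything beneath it.
        return self.own_cost + sum(c.total_cost() for c in self.children)

envelope = ProductionBlock("production_envelope", own_cost=5_000, children=[
    ProductionBlock("block_11", own_cost=42_000),
    ProductionBlock("block_12", own_cost=38_500),
])
print(envelope.total_cost())  # -> 85500
```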
The number of parts generated by this analysis may be very large, Fig. 7. However, as this
information is generated parametrically, mostly from the structural definition, and audited
automatically, there is no particular overhead on the user. In fact, it is not usually necessary to review
the data in detail, because the parts produced by the calculation are of no interest on an individual
basis at the concept stage of design.
Fig. 7: Production information for a hold of a bulk carrier, for the structural definition alone: a) plating, b) stiffener lengths (squares define the end of a stiffener run), c) continuous junctions (welds between plates and stiffeners) and d) discrete junctions (welds between stiffeners and intercostals).
Both the Early Stage Design and Design for Production modules provide logical methods that design
and cost engineers can understand and use together to populate the calculations. Audit functions are
used to accumulate the characteristics of all items in the respective hierarchies and compare them
with the costing information. Items that do not have an associated cost definition are flagged as
infringements and listed, Fig. 8. This mechanism ensures that all items are incorporated in the costing
calculation unless explicitly excluded.
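A minimal sketch of this audit mechanism, with hypothetical item and cost-database structures:

```python
# Minimal sketch of the audit described above: every audited item must
# either match a cost definition or be explicitly excluded; anything
# else is flagged as an infringement. Structures are illustrative.

def audit_items(items, cost_database, excluded=frozenset()):
    infringements = []
    for item in items:
        if item["id"] in excluded:
            continue                            # explicitly excluded from costing
        if item["kind"] not in cost_database:
            infringements.append(item["id"])    # flagged and listed
    return infringements

items = [{"id": "P-1", "kind": "plate"},
         {"id": "J-7", "kind": "discrete junction"}]
print(audit_items(items, cost_database={"plate": ...}))  # -> ['J-7']
```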
However, the analysis produced by the Design for Production calculation may identify tens of
thousands of parts and, although it is not necessary to cost individual parts, the number of cost cases
may run into the thousands. For example, the frigate illustrated in Figs. 3 and 4 has a basic structural
definition with realistic scantlings, no equipment or systems, and produces 1664 individual cost case
scenarios. Populating a cost database with this quantity of information would take an incredible
amount of time. Each scenario, Fig. 9, defines the complete cost information for all the processes an
individual part may go through during construction, and it remains the job of the user to collate the
composition of this costing information. Consequently, the management of this information is
incredibly difficult, as a single construction process may affect several cost scenarios.
Fig. 9: Example cost scenarios for (a) plate (8 mm), (b) continuous junctions (4 mm flat plate to 4 mm flat plate), (c) discrete junctions (7 in 'T' profile stiffener through 5 mm plate).
In the early stages of design, the SWBS is a very effective approach, as it uses a few characteristic
design parameters to determine the cost of the vessel; but, as previously mentioned, it cannot capture
factors resulting from the introduction of new technology or from a shipyard having no previous
experience with a vessel of a particular type. The PWBS approach can capture this information, but it
would usually be inefficient to perform this type of cost assessment in the early design stages, as
there is not enough detail in the design and too much effort would be required to identify all the costs.
The Design for Production module is capable of performing such a cost assessment on an early stage
design because it can use a parametrically defined structural definition and automatically identifies
all the items requiring costing, from both the structural definition and the outfitting. However, the
detailed aspects can only be hidden up to the point where it becomes necessary to assign cost details
to each costing scenario.
Both the Early Stage Design and Design for Production modules aim to assist the designer as much as
possible by performing calculation and definition automatically as part of the design process.
Requiring the designer to assign a significant number of cost details, even from a standard database,
goes against the approach generally followed by Paramarine. Consequently, an improved means of
populating all these cost details had to be identified to allow the Design for Production module to be
used effectively in the concept design of ships and submarines.
A review of the cost scenarios identified by the module highlights that the majority of them are very
similar, particularly the items relating to structure, with only the material details being different.
Costs for each scenario could therefore be generated on the basis of the processes each goes through
during construction. Consequently, the procurement, forming and joining processes could each be
defined separately for a range of material sizes. The number of discrete processes is fairly low
compared with the number of cost scenarios, and the metrics relating to the amount of time and
resources each process requires should be known to the yard. By chaining processes together, the
details of each cost scenario can be determined by accumulating the amount of cost and resource
required by each process for the size of material or task involved. A database of yard processes could
be developed and updated as new technology is introduced. As each process represents something
realistic, it is not necessary for the user to determine the breakdown of costs for each cost scenario;
this is now performed automatically by the software.
To implement process-based costing, two key object definitions are required: one to define the
details of a process, and a production schema used to combine processes and select which cost items
the processes will be applied to.
6.1 Process Definition
Processes are generally fairly simple definitions, as they only have to capture the relationship between
the nominal size of a material item to be processed and the related amount of time or money it takes
to process it. There are three key parameters to each process:

metric: Defines what aspect of the material the process works on and is used to select the right cost or effort values from the process data. The options are thickness, length or area. For example, the amount of time it takes to weld or cut a unit length of plate is dependent on the thickness.

resource or cost type: Processes can be cost or resource based. Resource-based processes need to be associated with a resource type and are used to define the cost of utilisation. For example, a flame cutting process should be associated with a flame cutter resource: a worker requiring wages, tools and administration. Both resource and cost type are used to provide additional breakdown information in the production audit.

data: The data associated with each process defines how long it takes, or how much it costs, for the process to operate on a unit of material characterised by the chosen metric. The data can be entered in the form of a single scalar multiplier, Fig. 10a, a single-value lookup table, Fig. 10b, or a range-based lookup table, Fig. 10c.
The process definition database only needs to be populated once and updated when new techniques
are introduced, costing is updated or process durations are reassessed. The database can be used as a
resource within the design office and imported into a design when costs need to be assessed.
Processes are defined as generically as possible so that any type of cost or resource based operation
may be accommodated.
Fig. 10: Process data based on (a) a scalar multiplier, (b) a single-value lookup table, (c) a range-based lookup table.
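A minimal sketch of such a process definition is given below, using a hypothetical Process class; the range-based lookup mirrors the form of Fig. 10c and all rates are illustrative:

```python
import bisect

# Sketch of a production "process" as described above: a metric
# (thickness, length or area), a cost or resource type, and data given
# either as a scalar multiplier or a range-based lookup table.
# Class design and all figures are illustrative assumptions.

class Process:
    def __init__(self, name, metric, resource,
                 scalar=None, breakpoints=None, rates=None):
        self.name = name
        self.metric = metric            # "thickness" | "length" | "area"
        self.resource = resource        # e.g. "welder", or a cost type
        self.scalar = scalar            # rate per unit, if scalar-based
        self.breakpoints = breakpoints  # sorted metric values for range lookup
        self.rates = rates              # rate applying within each range

    def effort(self, metric_value, quantity):
        """Effort (hours or cost) = rate(metric_value) * quantity."""
        if self.scalar is not None:
            rate = self.scalar
        else:
            i = bisect.bisect_left(self.breakpoints, metric_value)
            rate = self.rates[min(i, len(self.rates) - 1)]
        return rate * quantity

# Welding time per metre depends on plate thickness (range-based lookup):
weld = Process("butt weld", metric="thickness", resource="welder",
               breakpoints=[6.0, 10.0, 15.0], rates=[0.20, 0.35, 0.50])
print(weld.effort(metric_value=8.0, quantity=12.0))  # about 4.2 hours for 12 m
```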
6.2 Production Schema Definition
Type                 | Schema Name            | Coverage
Material Processing  | Plate                  | Procurement and forming of plate material
Material Processing  | Stiffener              | Procurement and forming of stiffener sections
Material Processing  | Service Line           | Procurement and forming of cable lengths or pipe elements
Material Processing  | Coating                | Application of surface treatment (paint, insulation etc.)
Continuous Junctions | Continuous Junctions   | Joining plate to plate, stiffener to plate or stiffener web to flange
Discrete Junctions   | Stiffener Junctions    | Joining ends of stiffeners to plate or other stiffeners
Discrete Junctions   | Intercostal Junctions  | Cut-outs and junctions where stiffeners or service lines penetrate plate
Discrete Junctions   | Service Line Junctions | Joining ends of cable or piping runs
In order to assign process-based costing information to the cost scenarios, the following definition
information is required:

selection criteria: The selection criteria define the cost scenarios which should be addressed by the schema. Criteria select scenarios on the basis of the configuration, i.e. the type of joint (fillet, butt) or material shape (curved, flat), and the material specification.

costing metric: The costing metric of the production schema and the process metric need to be compatible, and an error is raised if an incompatible process is attached to a schema. For most schemas the costing metric is implicit: plate is measured in area, stiffeners are measured in length etc. However, for discrete junctions the metric is sometimes dependent on the junction configuration. The costing metric is used in cases where this information needs to be explicitly defined, allowing the right processes to be attached.

processes: References the processes which should be applied to the cost scenarios fulfilling the selection criteria.
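A minimal sketch of a production schema, with hypothetical class and attribute names, showing selection criteria, a costing metric and chained processes:

```python
# Sketch of a production schema: selection criteria pick which cost
# scenarios the schema addresses, and the attached processes are chained
# to build up each matching scenario's cost. All names and figures are
# illustrative assumptions, not Paramarine's actual object model.

class SimpleProcess:
    """Stand-in process: effort = rate(metric value) * quantity."""
    def __init__(self, name, rate_fn):
        self.name = name
        self.rate_fn = rate_fn

    def effort(self, metric_value, quantity):
        return self.rate_fn(metric_value) * quantity

class ProductionSchema:
    def __init__(self, name, criteria, metric, processes):
        self.name = name
        self.criteria = criteria      # attribute values a scenario must match
        self.metric = metric          # which scenario quantity drives the rate
        self.processes = processes    # chained processes applied to matches

    def matches(self, scenario):
        return all(scenario.get(k) == v for k, v in self.criteria.items())

    def cost(self, scenario):
        value = scenario[self.metric]
        qty = scenario["quantity"]
        return sum(p.effort(value, qty) for p in self.processes)

weld = SimpleProcess("fillet weld", lambda t: 0.05 * t)  # hours/m, by thickness
fillet_schema = ProductionSchema(
    "continuous junctions (fillet)",
    criteria={"kind": "junction", "joint": "fillet"},
    metric="thickness",
    processes=[weld],  # preparation, inspection etc. could be chained here
)

scenario = {"kind": "junction", "joint": "fillet",
            "thickness": 8.0, "quantity": 30.0}
if fillet_schema.matches(scenario):
    print(fillet_schema.cost(scenario))  # 0.05 * 8 * 30 = 12.0 hours
```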
Production schemas for material and continuous junctions are fairly straightforward to construct, as
all of the details required to make the process work are contained within the cost scenario. However,
discrete junctions are just points in space, associated with the junction configuration and the details
of the materials that intersect. The dilemma of identifying the right level of detail for concept design
is faced again, as there is not nearly enough information in the cost scenario to identify which
construction processes should be applied. Furthermore, a single object is used to assign cost details
for all discrete junctions, Fig. 9c, and a single generic production schema for all discrete junctions is
potentially too generic, making it very difficult for the user to work out intuitively how to generate
cost information based on the object's structure.
After several different approaches, the solution (which remains to be implemented in the software at
the time of writing, hence the lack of appropriate cost information for the discrete junctions in the
example in the next section) is to specialise, producing three separate production schema discrete
junction scenarios covering stiffener ends, intercostals and service lines. Specialising allows a
specific range of process tools to be provided for each type of junction and makes the assignment of
processes more intuitive. The intercostal junctions are perhaps the most complicated because of the
variety of processes that may need to be used, each relating to a different characteristic of the
intersecting stiffener or service line. A cut-out, for example, will be a process with a length equivalent
to the perimeter of the profile, while the weld length may only run along one side of the flange or
web. To allow this, separate costing metrics are required for each process.
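A small sketch of per-process costing metrics for an intercostal junction, with the cut-out costed on the profile perimeter and the weld on the flange length only; the geometry is simplified and all rates are illustrative:

```python
# Sketch of per-process costing metrics for a discrete (intercostal)
# junction: the cut-out is costed on the profile perimeter, the weld
# only on the attached flange length. Geometry is simplified and all
# rates are illustrative placeholders.

def intercostal_cost(web_depth, flange_width,
                     cut_rate=2.0,    # cost per metre of cut (illustrative)
                     weld_rate=5.0):  # cost per metre of weld (illustrative)
    perimeter = 2 * (web_depth + flange_width)  # simplified profile outline
    cut_cost = cut_rate * perimeter
    weld_cost = weld_rate * flange_width        # weld along one flange side
    return cut_cost + weld_cost

print(intercostal_cost(web_depth=0.18, flange_width=0.10))  # -> 1.62
```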
Fig. 11: Example production schema definitions: (a) procurement (attached processes are illustrated in Fig. 10a), (b) forming (selection criteria choose only curved plate or stiffeners), (c) continuous junctions (selection criteria distinguish between fillet and butt junctions so that different weld processes can be used).
To generate cost and resource information, the production calculation is audited, producing a large list of all the production specifications and junctions. Each item is passed through to the costing database, which performs a search to match the details of the specification or junction with a cost definition, Fig. 12. The details of the selected cost, parameterised by the relevant area or length of the specification or junction, are added to a second audit list.
The process of generating cost information using production processes needs to be transparent so that the origin of any costing data can be determined. To achieve this, the object which produces costing information creates its own cost definition database covering every specification and junction on which a production process operates, Fig. 13. It receives the audit list of production specifications and junctions and, for each item meeting the criteria of at least one process, produces a cost definition. The cost definition is then sent to each process, which assigns a separate cost element to the definition. Each cost element is visible beneath a definition, allowing the user to query the details of any cost item.
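The matching pass itself can be sketched as follows, using plain dictionaries instead of real schema objects; the item and schema fields are hypothetical stand-ins for the audit data described above.

def evaluate(audit_items, schemas):
    # audit_items: dicts with a "config" and measured quantities (area, length, ...).
    # schemas: dicts with "selection", "metric" and a list of (process name, rate).
    definitions = []
    for item in audit_items:
        for schema in schemas:
            if any(item["config"].get(k) != v for k, v in schema["selection"].items()):
                continue                            # selection criteria not met
            quantity = item[schema["metric"]]       # parameterise by area or length
            elements = [{"process": name, "cost": rate * quantity}
                        for name, rate in schema["processes"]]
            # One cost definition per item, with one visible element per process,
            # keeps the origin of every cost figure traceable.
            definitions.append({"item": item["name"], "elements": elements})
    return definitions

schemas = [{"selection": {"joint": "fillet"}, "metric": "length",
            "processes": [("fillet_weld", 3.5)]}]
items = [{"name": "J-001", "config": {"joint": "fillet"}, "length": 2.4}]
print(evaluate(items, schemas))   # one definition with one 'fillet_weld' element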
Fig. 12: A costing database produced manually.
Fig. 13: A costing database generated from processes. The elements of each map to the individual processes which produced the cost definition.
7. Example Calculation
To demonstrate this process on a full ship definition, a cost will be generated for the frigate example shown in Fig. 4. The costing will only cover materials and continuous junctions, as processes covering discrete junctions are still under development. Furthermore, the example contains no systems definition (cabling or piping). The basic information used to generate costs is as follows:
Production schemas are defined covering materials, forming and continuous junctions. To allow the calculation to function, discrete junctions are covered by a dummy process producing zero cost.
Fig. 13: Results of the production analysis: (a) the tree view structure containing the production breakdown hierarchy of all the parts and junctions; (b) the graphical view of the plates, stiffeners and junctions of the entirety of block 11.
A cost evaluation of the production analysis is performed by associating an audit object with the
production calculation and the cost database. For the example, the cost evaluation of this design in
terms of material and labour due to continuous junctions is shown in Table 3. Note that the labour
cost shown for the “production_envelope” includes not only the labour from each block, but also the
work required to join the blocks together. The audit is capable of presenting detailed results down to individual parts; this, however, makes for very large tables.
The audit is also capable of presenting a breakdown of resources in terms of the utilisation of each
resource at every stage in the construction. Table 4 shows a breakdown of work hours for the welder
and rolling machine setup in this example.
8. Conclusion
Cost is often a secondary consideration for engineers concentrating on delivering the technical aspects of a new design. Costing can only be addressed once the technical details have been resolved and it is possible to review the composition of the design. This two-stage process results in a degree of separation between the technical and costing departments working on the project and creates a situation where further design iterations may be needed. While these two engineering groups operate separately, there may be little opportunity to go through an optimisation process to improve cost.
Cost is seldom an issue addressed by integrated ship design tools. The calculations are generally simple and often remain within the domain of the costing engineers. However, there is a considerable amount of data within a ship product model which could be used for generating costing information if analysed and presented in a manner which costing engineers can utilise. The Paramarine Design for Production module aims to bring the technical and costing engineers closer together by providing a tool in which both groups can combine their skills and expertise. The module conducts a considerable amount of analysis on a simplified representation of the production information, reducing the manual effort required to identify all the cost elements of a design, and it reduces the amount of cost information that must be provided by using implied production processes to generate cost details for both the material and work elements of construction. When used in conjunction with the Early Stage Design module, users have the facilities to generate costing information directly from the contents of the module, as demonstrated in the example, or to use a ship-work-based approach based on the types of weights or spaces in the design.
Although the Design for Production module will only become part of the Paramarine release in the middle of 2006, it has already attracted a lot of interest. This would seem to reinforce the conclusion that support for detailed cost estimation is limited in many of the integrated design tools used for both ship and submarine design. GRC hopes to work closely with users of this new module to understand how cost analysis can be integrated into the design process and to allow the technical engineers to understand how they can produce a more cost-effective design.
References
ANDREWS, D.; PAWLING, R. (2002), SURFCON - A 21st Century Ship Design Tool, University College London
ENNIS, K.J.; DOUGHERTY, J.J.; LAMB, T.; GREENWELL, C.R.; ZIMMERMANN, R. (1998), Product-Oriented Design and Construction Cost Model, Journal of Ship Production, Vol. 14, February 1998
GAMMA, E.; HELM, R.; JOHNSON, R.; VLISSIDES, J. (1995), Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley
ROSS, J.M. (2004), A Practical Approach for Ship Construction Cost Estimating, COMPIT'04, Sigüenza, May 2004
SCHNEEKLUTH, H.; BERTRAM, V. (1998), Ship Design for Efficiency and Economy, Butterworth-Heinemann
Automation of the Ship Condition Assessment Process for
Accidents Prevention
Abstract
For oil tankers to be more environmentally friendly throughout their life cycle, IMO has set forth a condition assessment scheme for single hull tankers, which involves huge amounts of measurement information. Performing those inspections efficiently requires processing measurement information on a real-time basis, resulting in cost savings because fast assessment of the ship condition and decision-making can be done while the ship is still in dock for maintenance.
Measurement information consists of thickness measurements, visual assessment of coating, and crack detection. In the existing situation, because there is no standardization of data, the information is recorded manually on ship drawings or in tables, which are very difficult to handle. Measurement information takes a long time to report and to analyse, leading to some repairs being deferred to the next docking of the ship.
The system being developed in the European project “CAS”, in order to address these issues, is
applicable to any ship type and includes such innovative features as: development of a simplified and
flexible ship electronic model which can be refined to fit the needs of inspections, addition of
measurement information in this ship model, integration of robotics, easy handling of measurement
information using virtual reality, immediate worldwide access.
Systematic comparison and consistency checks of measurement campaigns will trigger electronic
alerts. Repair decisions and residual lifetime of the structure will be calculated with modern methods
of risk based maintenance modelling.
The purpose of the “CAS” project is to transform the workflow of hull condition measurements
(mostly thickness measurements, crack detection and coating condition) on board any vessel into a
fully electronic process.
The project is an EC-financed project with a three-year duration; it started on 1 February 2005. The budget is 3.2 M Euros.
The partners represent all actors involved in the thickness measurement process: classification society, thickness measurement company, software designer, repair shipyard, ship owner and charterer. A company specialized in robotics and automation, which will develop a robotic measurement system for the CAS project, completes the consortium.
[Screenshots: the Hull Condition Model (HCM) viewer, showing a structural element with its measurement tag (e.g. "Bulkhead fr 25, 12.0 mm"), links from the HCM to pictures (AutoCAD, etc.) for visualization, and the measurements input tool covering plate thicknesses, cracks and coating condition]
3.6 Probabilistic condition prediction
Probabilistic methods will be applied to predict the condition of the structure in the future: starting from the known condition of the ship's hull and using predictive models of hull degradation, the tools will predict the condition of the structure at selected points in time in the future.
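As a minimal illustration, the sketch below projects plate thickness forward under a linear corrosion model with an uncertain rate. The model form, the rates and the reported percentile are assumptions chosen for illustration, not the CAS project's degradation models.

import random

def predict_thickness(t0_mm, rate_mean, rate_sd, years, n=10000, seed=1):
    # Monte Carlo over an uncertain corrosion rate (mm/year), truncated at zero,
    # applied to a simple linear wastage model.
    random.seed(seed)
    thicknesses = sorted(t0_mm - max(0.0, random.gauss(rate_mean, rate_sd)) * years
                         for _ in range(n))
    return {"mean": sum(thicknesses) / n,        # expected future thickness
            "p5": thicknesses[int(0.05 * n)]}    # pessimistic 5th percentile

# e.g. a 12 mm plate corroding at 0.10 +/- 0.05 mm/year, 10 years ahead
print(predict_thickness(12.0, 0.10, 0.05, years=10))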
Fig. 5: The MHC robot working upside down on a ship's bottom.
Fig. 6: The MHC can also be used for underwater operations (here ship hull cleaning with a 700 bar pressure tool).
For the CAS project, this robot is equipped with a three-probe NDT system. The device delivers three A-scans of the measurements, but will also be able to deliver C-scans of the measured surface (in combination with the position log of the measured points).
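A C-scan essentially rasterises point readings onto the surface using the position log. A toy sketch of that combination follows; the cell size and the readings are invented.

def cscan_grid(readings, cell_m=0.05):
    # readings: iterable of (x_m, y_m, thickness_mm) from the probe's position log.
    grid = {}
    for x, y, t in readings:
        key = (int(x // cell_m), int(y // cell_m))
        # Keep the minimum thickness per cell: the conservative value for assessment.
        grid[key] = min(t, grid.get(key, float("inf")))
    return grid

log = [(0.01, 0.02, 11.8), (0.03, 0.02, 11.6), (0.12, 0.07, 9.9)]
print(cscan_grid(log))   # {(0, 0): 11.6, (2, 1): 9.9}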
Fig. 7: The MHC equipped with an ultrasonic measurement system in the front.
Fig. 8: The measurements are displayed on the screen of the control station of the MHC. The control station will be the link to the CAS system.
The complete process will be validated via a full scale demonstration on board a real ship in dry dock,
in a repair shipyard.
5. Dissemination of the standard
All this technical work is only useful if accompanied by the proper level of dissemination, so that the concept is used in real life.
The project includes a presentation of the concept to IACS (the International Association of
Classification Societies) in order to promote the standard and eventually to incorporate the HCM
measurement standard into the classification common rules.
6. Future exploitation
[Diagram: envisaged exploitation model, with freeware basic tools (visualiser, basic input tools) built around the HCM, payware class tools (alarms, etc.) and payware editor input tools, and actors including IACS, thickness measurement companies, class societies, the owner, the charterer and the shipyard]
References
JARAMILLO, D.; CABOS, C.; RENARD, P. (2005), Efficient Data Management for Hull Condition Assessment, 12th Int. Conf. on Computer Applications in Shipbuilding (ICCAS), Busan/Korea
Interactive 3D Environments for Ship Design Review and Simulation
Brian Sherwood Jones, Digital Design Studio, Glasgow School of Art, UK,
[email protected]
Martin Naef, Digital Design Studio, Glasgow School of Art, UK, [email protected]
Mairghread McLundie, Digital Design Studio, Glasgow School of Art, UK, [email protected]
Abstract
The effectiveness of the design review process at the early stage of novel and complex ship designs
has a strong influence on project success or failure. Managing complexity and covering a large
decision space impose heavy demands on the process. This paper presents a new approach to
supporting the review process using interactive, immersive 3D environments linked to simulation
models. The system, still under development, enables users who are non-expert in CAD to modify
design parameters in real-time using a virtual-reality-based interface and receive immediate feedback
from simulations and design rule checking systems. Review planning and post-analysis will be
supported through an integrated annotation and logging system. We describe the design rationale for
the system, some technical challenges, and how these will be addressed.
Introduction
European shipbuilding plans to build more complex ships to tighter timescales with lower costs, and to give fuller consideration to more aspects of design (Waterborne, 2005). This combination will place heavy demands on the design review process (Sherwood Jones & Anderson, 2005).
Trade-offs and optimisations will need to be made against multiple criteria within the context of a
review exercise. Offline resolution will be too slow. The computer model of the design, supported by
simulation models and stored data, will need to provide the design team with support for naturalistic
decision making. A high-quality visualisation of the design will be necessary to counter the lack of
familiarity with complex, innovative designs.
The move to risk-based design in an environment of Goal-Setting Regulation will require greater use
of demonstration and exploration of the design in a range of operational and emergency situations.
These examinations will need to be conducted in real time if programme constraints are to be met.
Building on work for the automotive sector (Anderson, 2002) the Digital Design Studio (DDS) has
started to develop an environment to support the early-stage review process for complex ship designs.
The environment is immersive, and allows direct interaction with the physical model in 3D; the
implications of changes to key parameters are shown as outputs from online simulation models.
The environment allows modifying key parts of a ship model, such as moving or resizing structural
parts, using an intuitive user interface based on tracked data gloves. The visual environment and
model is linked to a functional model that implements design rule checking and numerical simulation.
Design rule violations and simulation results are streamed back to and visualised in the immersive
environment, enabling the user to interactively assess design changes.
Future development will address both technical and process-orientated issues concerning the human
interaction with and around the model, including the development of suitable interface environments
tailored to review activities.
This paper describes the environment being developed and its proposed manner of use.
1. Background
In the naval sector, the slow ‘drumbeat’ of new designs places demands on skill retention, and
reviewers may not be as experienced as would be desirable, potentially requiring additional support
(Robb, 2002). The platforms are complex in terms of numbers of systems, the number of viewpoints
or emergent properties to be considered in a design review has increased, and the pass/fail criteria
have become more demanding. Many stakeholders do not have regular experience of 3D CAD, and
find it difficult to interrogate the design; anecdotal evidence suggests that customer specialists have
definite problems with current CAD technology. Naval platforms have an unfortunate history of cost
and time overruns, and design reviews have an important role to play in preventing such problems.
In the merchant ship sector, technology is changing rapidly, and it is not so simple to examine some
deck plans and assess whether or not the design will ‘work’ physically or functionally. European
shipbuilding has been set ambitious targets for the next few years (Waterborne, 2005). For the
platform, they include:
• Increase ship productivity by 30%
• Reduce ship energy consumption by 25%
• Improve safety and environmental performance.
For the design and build process, the targets are to:
• Reduce time to delivery by 20%
• Reduce design and build labour by 50%
• Take a lifecycle approach (increasing the number of emergent properties to be managed by
project stakeholders).
Individually, these targets are demanding. Collectively they are challenging. The current CAD design
approach and design review process will become a bottleneck. In addition, the move to goal-based
regulation will require that design reviews are used to demonstrate that the platform will ‘work’ rather
than demonstrate compliance with prescriptive requirements.
The offshore sector is somewhat more accustomed to goal-based regulation, but there is still much
work to be done in its effective application at the early stages of design. A new generation of
platforms is just emerging which will place considerable demands on the ability of design teams and
regulators to assure themselves that all aspects of the design will ‘work’.
“Bryson made the case that real-time exploration is a desirable capability for scientific
visualization and that IVR [Immersive Virtual Reality] greatly facilitates exploration capabilities.
In particular, a proper IVR environment provides a rich set of spatial and depth cues, and rapid
interaction cycles that enable probing volumes of data. Such rapid interaction cycles rely on
multimodal interaction techniques and the high performance of typical IVR systems. While any of
IVR’s input devices and interaction techniques could, in principle, work in a desktop system, IVR
seems to encourage more adventurous use of interaction devices and techniques.” (Van Dam et al,
2000)
3. Related work
4.1 Background
The AutoEval system was conceived as a digital means of allowing groups including senior
executives and designers to evaluate automotive designs in large scale, improving the quality of
design decisions and reducing the need for clay models, thus streamlining and reducing costs in what
is a largely digital design process. To be suitable for this purpose, the system had to have an intuitive
interface for non-frequent users who may have little familiarity with 3D modelling software; it had to
be capable of large scale, high quality 3D visualisation; and it had to be suitable for use by small
groups.
A ‘proof-of-concept’ 3D interface incorporating real-time visualisation, gesture interaction with
tactile feedback and 3D sound was developed, which explored a range of methods of interacting with
digital models displayed in 3D space (Anderson et al, 2001). The interface uses single hand gesture,
with tactile and 3D audio feedback, to interact with a workbench-sized stereoscopic 3D display.
4.2 Infrastructure
The system is based around a Fakespace Immersive Workbench. CrystalEyes shutter glasses allow the
user to see the model in 3D space. Unlike fully immersive systems using head-mounted displays, this
semi-immersive system brings the 3D model into the user’s space, allowing them to see themselves
and other members of the group as well as the model.
The user interacts with the system using a tracked CyberTouch™ sensored glove with vibro-tactile
stimulators on the fingers and palm. This glove can sense the bend and relative position of the fingers
and thumb, allowing interaction via gesture; combined with tracking it can sense the hand’s position
in space, allowing the user to explore and manipulate the digital model directly in 3D space.
3D spatialised audio provides sound ‘cues’ and feedback for different actions; sound moves in space
appropriate to the manipulation of the model.
The system was originally developed on the SGI platform; it has recently been ported to a high-end PC platform.
Fig. 1: A user moves their head ‘inside’ the model to inspect a viewpoint
4.4 Operations
Two main classes of operation were implemented in the original system: transformation and
evaluation.
Transformation operations include ‘translation’, ‘rotation’ and ‘scale’. Translation is achieved either
‘freehand’, simply by picking up the model and moving it, or along particular axes by selecting the
‘move’ menu option, and then the axis from the small 3D menu which appears. Rotation can be
achieved similarly either ‘freehand’, by picking up the model and rotating it in space through
movement of the hand and wrist, or relative to its origin by selecting the ‘rotation’ menu option, then
the axis from the sub-option; rotating the wrist rotates the model round that axis (Fig. 2).
To scale an object, the user selects the 'scale' option from the menu. A 3D sub-menu appears with 'scale cubes' corresponding to scale in the x, y, z axes or proportional (Fig. 3). The user 'grasps' the appropriate cube with thumb and forefinger to select the required scale option; once the cube is grasped, curling the remaining fingers into the hand will reduce the model size; stretching them out will increase the model size. When the grasp gesture is released, the model remains at that size, allowing repeated scale operations. While scaling up or down is in progress, the audio feedback increases and decreases in pitch respectively.
Fig. 2: Rotating the model; the 3D sub-menu can be seen to the right of the user's hand.
Fig. 3: A screen shot with the Scale option chosen, showing the 2D main menu and the 3D 'scale cube' sub-menu.
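One plausible per-frame mapping of the scale gesture described above to scale and audio pitch is sketched below; the thresholds and step size are invented, not the AutoEval values.

def scale_step(curl, grasped, step=0.02):
    # curl in [0, 1]: 0 = fingers stretched out, 1 = fully curled.
    # Returns (scale multiplier, pitch offset) for one frame while grasping.
    if not grasped:
        return 1.0, 0.0              # released: the model keeps its size
    if curl > 0.6:
        return 1.0 - step, -1.0      # curling in: shrink, audio pitch falls
    if curl < 0.4:
        return 1.0 + step, +1.0      # stretching out: grow, audio pitch rises
    return 1.0, 0.0                  # dead zone: no change

print(scale_step(curl=0.8, grasped=True))   # (0.98, -1.0)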
Audio feedback is used to indicate the activation of different operations; for example when a user has
selected the rotation option, once they grasp the model, the ‘rotation’ audio sequence is played until
they release the model. Different operations each have their unique audio ‘signature’.
Evaluation operations include a 3D guillotine tool, a 2D plane, the ability to select and remove parts
of the model, and a dynamic evaluation tool: ‘haptic lights’. The 3D guillotine tool is a ‘cutting plane’
which can be grasped like any other object and freely rotated to any angle by moving the wrist (Fig.
4). Moving it back and forth through the model allows internal details to become visible (Fig. 5).
Calibrated, it could also be used as a measuring tool allowing dimensions easily to be read off the
model, although this has not yet been implemented.
Fig. 4: The 3D guillotine tool in use.
Fig. 5: Screen shot of the 3D guillotine tool used vertically to view a cross-section through the model.
Fig. 6: Screen shot showing the 2D plane with schematic drawing
The 2D plane allows a schematic drawing, for example, to be moved back and forth through the model (Fig. 6). In the automotive context, this allows the 3D model to be checked against the 'package' data. In a shipbuilding context, it could be used in a similar way, e.g. to check the model against deck plans or mould lines. It can also be used as a quick check against any defined 2D criteria, e.g. for maintenance removal routes. The pick tool allows individual elements of the model to be selected, allowing parts to be removed for inspection; for example, removing the upper deck to reveal the structure beneath. A dynamic evaluation tool has also been implemented: 'haptic lights' which can be picked up and moved around and within the model. In an automotive context, light is used to check the integrity of surfaces; however, the principle of dynamic tools could be extended to include hydrodynamic flow, for example.
The DDS approach is intended to provide better support to naturalistic decision making (NDM) and to analytical interrogation. It is also intended to support the social interaction within a design team, and to promote the exploration of win-win solutions.
conceptual design review in this environment, and are clearly not fully understood at this stage of
development
process noticeably. Optimizing such tools for speed may be required as such performance
requirements were usually not part of the original design specifications.
More complex simulations (e.g. CFD calculations) that cannot deliver real-time updates have to be decoupled from the interactive system. The system visualizes the results from the last calculation, but indicates if input parameters have changed, rendering the previous results invalid. New calculations are triggered explicitly. Even though the systems are decoupled, a direct link remains available to keep the user interface integrated.
The integration of simulation tools also poses implementation challenges. Many such systems are designed as interactive applications and do not expose automation interfaces for remote control. Support and cooperation from the original tool developer will be required.
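A minimal sketch of such decoupling, with a stale flag invalidating the displayed result when parameters change; this is illustrative only, not the DDS implementation.

class DecoupledSimulation:
    def __init__(self, solver):
        self.solver = solver     # slow, non-real-time calculation (e.g. a CFD run)
        self.params = {}
        self.result = None
        self.stale = True        # no valid result yet

    def set_param(self, name, value):
        self.params[name] = value
        self.stale = True        # old result no longer matches the inputs

    def display(self):
        # The interface always shows the last result, flagged if out of date.
        return {"result": self.result, "stale": self.stale}

    def run(self):
        # Triggered explicitly by the user, never implicitly per interaction.
        self.result = self.solver(self.params)
        self.stale = False

sim = DecoupledSimulation(lambda p: sum(p.values()))   # stand-in for a real solver
sim.set_param("draught_m", 5.2)
print(sim.display())   # stale: parameters changed since the last run
sim.run()
print(sim.display())   # fresh result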
6. Conclusions
All parts of the marine sector are starting to use new technology at an accelerating rate, and regulatory
regimes are being adapted to allow this. The tools proposed here are intended for circumstances where
there is a significant risk that design errors will not be identified by traditional methods.
The use of VR to date has been largely driven by fashion, with little empirical investigation into its effectiveness. Previous research into 'collaborative immersive environments' or 'collaborative virtual environments' has concerned remote collaboration by single users, sometimes with avatars, in a shared virtual environment.
The DDS approach will contribute to knowledge on how immersive systems impact, both positively
and negatively, on the decision making processes of groups of users using a single immersive
environment. The early indications are that it offers considerable potential in improving the
effectiveness of the review of complex marine platforms.
Acknowledgements
The authors would like to acknowledge the support of our partner, Chris Ross, at QinetiQ, on this
project. We would like to thank our colleagues at the Digital Design Studio who originally developed
the AutoEval system, and Paul Anderson and Ian Johnston for their input.
References
ANDERSON, P. et al. (2001), Three Dimensional Human-Computer Interface, Patent application no. PCT/GB01/02144
ANDERSON, P. et al. (2002), The role of emerging visualisation technologies in delivering competitive market advantage, 2nd Int. Conf. on Total Vehicle Technology, IMechE
BUCCIARELLI, L. (1996), Designing Engineers, MIT Press, Cambridge
FULK, J.; STEINFIELD, C. (1990), Organizations and Communications Technology, Sage Publications, Newbury Park, CA
ISO/IEC 15288:2002 (2002), Systems engineering - System life cycle processes
KLEIN, G.A. et al. (1993), Decision Making in Action: Models and Methods, Ablex Publishing Corp., Norwood, NJ
MCGRATH, J.E.; HOLLINGSHEAD, A.B. (1994), Groups Interacting With Technology, Sage
ROBB, M. et al. (2002), Design of Efficient General Arrangements: a software aid to the decision making process, INEC 2002
SHERWOOD JONES, B.; ANDERSON, P. (2005), Diversity as a Determinant of System Complexity, 2nd Workshop on Complexity in Design and Engineering, Glasgow, 10-11 March
VAN DAM, A. et al. (2000), Immersive VR for Scientific Visualization: A Progress Report, IEEE Computer Graphics and Applications, Nov/Dec, pp.26 ff.
VINCENTI, W. (1990), What Engineers Know and How They Know It, Johns Hopkins Press, Baltimore
WATERBORNE TP (2005), Waterborne Transport & Operations - A Key Asset for Europe's Development and Future
WHYTE, J. (2001), Business Drivers For The Use Of Virtual Reality In The Construction Sector, AVR II and CONVR 2001 Conf., Chalmers, Gothenburg, Sweden, 4-5 October
WIEBE, E.N. et al. (1997), Organizational assessment of integrating CAD and Product Data Management Tools in the furniture industry, Furniture Manufacturing and Management Center Technical Report No. 1997-3, North Carolina State University, Raleigh, NC
Simulation of a maritime pre-fabrication process
Jeroen A.J. Kaarsemaker, Delft University of Technology, Delft/The Netherlands,
[email protected]
Ubald Nienhuis, Delft University of Technology, Delft/The Netherlands
Abstract
The shipbuilding industry, which continuously faces a dynamic and rather unstable globally competitive market, is under particular pressure regarding reliable and short delivery dates at relatively low prices. These factors are more important than ever. To meet this challenge, it is
important for shipyards and their co-operating partners (e.g., suppliers and subcontractors) to
achieve an optimal utilization of resources, to make a feasible planning, and to keep to this planning.
Planning, scheduling and coordinating (control) of internal processes and chained processes can be
improved significantly by computer simulation of dynamic production and logistic process models.
These models take into account all dependencies and details of the complex process and product.
This paper gives an introduction to the development of production simulation models and applies the
technology to control the internal production and logistic processes at the supply company Metalix
BV. The model was built on the basis of the discrete-event simulation package eM-Plant and the
universally applicable Simulation Toolkit for Shipbuilding (STS) of the Flensburg Shipyard Company.
On the one hand, with this simulation model the internal lay-out and production scenarios can be
tested, analyzed, and optimized. On the other hand, this model can also be used for research in the
area of supply chain simulation in ship production.
The paper starts with a discussion of both production simulation and the STS. Next, the paper briefly introduces the production process at Metalix BV. Third, the process parameters and product data required as input for the simulation model are described. Fourth, the main features of the model are described. Finally, several considerations with regard to the implementation of the newly developed simulation model in a real-life setting and in a supply chain setting are given.
1. Introduction
In recent decades, the Dutch maritime industry focused on product development and improvement. To cope with today's severe international competition, the Dutch maritime industry has to enhance its competitiveness not only through a high-quality product but also through further process improvement, leading to reliable and short delivery times and relatively low prices. Improvement of shipbuilding process control is needed to achieve this, but factors such as the number of production steps, the enormous number of parts and subassemblies, and the far-reaching interdependence with subcontractors make shipbuilding a very complex process which lies near the limit of, or even beyond, the human grasp.
Within the shipbuilding industry, simulation to control processes has been applied with success in the steel building area. The result of this experience is the universally applicable Simulation Toolkit for Shipbuilding of the Flensburg Shipyard Company (FSG), Steinhauer (2005). The Ship Production department of Delft University of Technology (DUT) included the simulation-in-ship-production theme in its research program, Alphen (2004), and has participated since spring 2004 in the Simulation Cooperation in the Maritime Industries (SimCoMar). The goals of this cooperation are the further development of the STS, knowledge exchange in applying simulation, and joint research. The current cooperation partners are DUT, FSG, Nordseewerke Emden, the Technical University of Hamburg-Harburg and the Center of Maritime Technologies, Steinhauer (2005).
2. Production simulation
2.1 Simulation
Planning, scheduling and coordinating (control) of internal processes and processes chained across
organizations and departments can be improved significantly by computer simulation of dynamic
production and logistic process models. This kind of simulation (production simulation, see Fig. 1) is
defined as the dynamic reproduction of a complex system (a complete production process) which cannot be described mathematically. From examining this model, which takes into account all dependencies and details of the complex process and product, conclusions (depending on the objective of the simulation study) can be drawn which are translatable to the real system.
This simulation technique does not comprise automatic optimization that yields an optimal solution via an algorithm; indeed, such an optimization would remove the manager from the loop and would need to be very robust to give meaningful answers. In the first instance, the simulation technique is a decision aid for the question "What happens when?" during planning or "What now?" during operation. Above all, simulation is a "mind strengthener": through interaction with a user (different scenarios can be modelled by varying system parameters), it can find the most useful, the most flexible or, in general, the "best" solution. An optimization is only possible by iterative model modifications, Košturiak (1995).
The application of simulation in production planning can fulfil different tasks at different decision levels within an enterprise, as also shown in Fig. 1. At the strategic planning level of a shipyard, the production program for a ship is defined based on the early design, Steinhauer (2005). During the tactical planning stage, simulation supports the production planning and control systems (what, where, when, who), Košturiak (1995). For a shipyard, tactical planning aims at optimizing the plan for the next weeks in certain production stations. In operative control at a shipyard, the foremen on the shop floor realize the plans and react to possible disturbances like lost material, production errors or machine breakdowns, Steinhauer (2005).
2.2 Simulation model
Making a simulation model of a production process can be split up in four phases (Fig. 2):
1. Analysis: introduction to the production processes and facilities of the company under study.
Collection of material flow diagrams, process parameters and dimensions of production facilities.
2. Data: collection of necessary product, process and project data.
3. Model: creation of the simulation model of the targeted production area and process. The use of standard tools from object libraries, like the STS, speeds up this process.
4. Validation and verification: comparison of the simulation model with the production process regarding the objective of the simulation project, to examine the correctness of the implementation and the correspondence of the model with the observed reality. Synchronizing the model with real life determines to a large extent the success of the simulation model. Obviously, this requires the extensive availability of real production data.
Upon completion of these four phases, implementation in the operational processes takes place
including interfacing with existing systems, introduction of the tool set, organizational embedding and
training of employees.
Input (Fig. 3) for a simulation model can be considered to comprise the system constraints, the information collected during the "Analysis" phase, and the data collected during the "Data" phase. The model should produce suitable output to enable "Validation and verification". As such, the model should stay as close as possible to the data formats normally employed in the manufacturing environment. Where this is insufficient, it is vital that improvements to the prevailing data management are introduced.
2.2.1 Input
By means of logical system constraints, considering the objective of the simulation model, certain homogeneous technological areas can be isolated from their environment for the purpose of modelling. This proper demarcation is important and should follow the rules of system modelling. The process description consists of a process scheme and a route scheme. A process scheme captures the outlined production process and elucidates all the different steps (storage, transport, waiting, actions, operations). A route scheme elucidates, for each production station, how material is supplied, how products are exported, and with which means of transport. The facility data are needed to model the production plant; they consist of the main parameters of the available resources: a plan of the plant, dimensions and process parameters of production facilities and, among other things, required personnel and means of transport. Generic methods are required to enable simulation of assembly processes; they are described by different assembly strategies, assembly sequences for every assembly type, and process time formulas. These should preferably be based on physical properties rather than on overall statistics from historical data. Modelling the static production plant is possible with the input data described so far (numbers 1 to 5 in the INPUT table in Fig. 3).
Product data and planning data are required to simulate production in the "static" simulation model of the production plant. A production planning is needed to start activities in the model, and via the personnel planning, numbers of personnel with a certain membership and qualifications are allocated to the respective facilities. Once the model is validated, the planning of production and personnel can be optimized via an iterative process. Product data are required to supply material with the right physical attributes in the model, to produce the product with the right physical attributes and to export it to its destination.
2.2.2 Modelling
As mentioned by Bernaert (2005), object-oriented discrete event simulation (DES) is especially useful for modelling production processes. The term DES classifies the way simulation time advances. In this type of simulation, the simulation executive (or event controller) orders events chronologically in an "event list"; at the specified time, the first event on the list is executed (i.e. the relevant model logic is carried out) and removed. Subsequently, the simulation clock "jumps" to the time of the next event in the "event list" (in contrast to continuous simulation). While the simulation is running, new events are generated and inserted at the appropriate point in the list. These events may be triggered by certain pre-conditions, in which case they are not scheduled but wait to be released for processing.
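A minimal sketch of such an event executive follows; it illustrates only the time-advance mechanism described here and is not eM-Plant code.

import heapq

class Executive:
    def __init__(self):
        self.events = []         # the chronologically ordered "event list"
        self.clock = 0.0

    def schedule(self, time, action):
        # id(action) breaks ties so actions themselves are never compared.
        heapq.heappush(self.events, (time, id(action), action))

    def run(self):
        while self.events:
            time, _, action = heapq.heappop(self.events)
            self.clock = time    # the clock "jumps" to the next event
            action(self)         # executing may schedule further events

def arrive(ex):
    print("t=%.1f: plate arrives" % ex.clock)
    ex.schedule(ex.clock + 4.0, lambda e: print("t=%.1f: cutting done" % e.clock))

ex = Executive()
ex.schedule(1.0, arrive)
ex.run()   # t=1.0: plate arrives / t=5.0: cutting done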
In object-oriented software, data and mechanisms are structured differently from traditional software. Everything related to a single entity (both data and event routines) is bundled together to form a class (e.g. a machine). Objects (e.g. machine1, machine2, etc.) of the machine class can then be created. In object-oriented simulation software, the functionality developed is part of a library, not a model. Therefore the functionality can be used to build many different models quickly, especially since it can be exchanged with other users, Ball (2001).
An object-oriented discrete event simulation model is built on the basis of dynamical and stationary
elements. Two other important elements are events and resources, Košturiak (1995).
Dynamical elements represent physical objects like material and means of transport, but also information. These elements move through the system in time and cause condition changes. They are characterized by attributes which can be changed from outside; they only carry information and execute no operations. Conversely, stationary elements (processors, buffers or control) execute operations. They come into contact with dynamical elements and are able to read and change their attributes. Stationary elements can activate or deactivate other elements within the system by sending messages to them. Events originate from the interaction between dynamical and stationary elements; in "event simulation", these events take care of the actual manipulation and evocation of condition changes. Resources, e.g. personnel and machines, are characterized by limited capacities, Košturiak (1995).
The model structure depends on the system constraints. If the simulation model contains several Homogeneous Technological Areas1 (HTA) grouped into an Integrated Technological Area2 (ITA), then it is sensible to apply a modular structure. A modular structure of extensive system models increases clarity and usability. In that case, each HTA is modelled as a partial model in its own network. This partial model is inserted into the universal ITA model network (with the highest hierarchy), which connects the different HTAs by means of a logistical network. The different modules (partial models) are modelled independently of each other and can be replaced and changed independently of the total model. In this way a very powerful environment is created which fits the complexity of a shipbuilding manufacturing process very well and which allows a gradually growing model of an ever larger part of shipbuilding operations, ultimately covering simulation of the engineering process as well, see e.g. Coenen (2006).
1 The processes regarded in an HTA are strongly interwoven and cannot be subdivided any further.
2 ITAs represent the various departments of a factory, e.g. the fabrication shop for steel parts manufacturing.
2.2.3 Output
Suitable output (Fig. 3) consists of tabular or graphical representations, e.g. bar charts and Gantt charts, which quickly give an insight into and an overview of the simulated production. Resource utilization ratios are particularly useful for bottleneck analyses. The possibility to compare the planning with the simulated production in a Gantt chart at different levels of detail gives an insight into and an overview of the performance of the total production facility. It shows which production orders are on time or delayed, and which action or operation caused that. The combination of resource performances, the comparison of planning with a simulated production realisation, and the possibility to trace every part in the simulated production ("Part statistics" and "Transportation table") enables searching for the reasons for delays and disturbances, which normally are not obvious because of all the dependencies in the process. From these, conclusions can be drawn regarding improvements to production planning and resource management.
3.1 Metalix BV
A simulation model is developed to control the internal production and logistic processes at the supply
company Metalix BV, a joint venture of IHC Holland Merwede BV and ODS BV. This supplier of
pre-manufactured metal components is situated at the IHC Holland shipyard location in Kinderdijk
(Fig. 4), close to Rotterdam.
Fig. 3: The IHC Holland shipyard location in Kinderdijk with a top view of the Metalix plant
The IHC Holland shipyard in Kinderdijk has, like many other shipyards, outsourced the processes of
steel preparation and steel parts manufacturing3. The steel stockyard and the fabrication shop are still
located on the same site, but under management of Metalix. Metalix supplies the steel parts for this
shipyard and for other companies. The pre-fabricated steel components are supplied in the form of
ordered packages for shipbuilding and construction. Pre-fabricated steel for the IHC-yard is directly
delivered to its panel line4 in the form of panel5-packages.
3 During the steel parts manufacturing process, all the steel components needed for panels and sections for the steel construction of hulls are made. This includes cut plate parts, bent shell plates, and cut and bent profiles.
4 Panel line: a production station developed to partly automate the production of panels.
5 Panel: a stiffened welded plate field, also called a surface section or 2D block.
Fig. 4: Route scheme (left) and the production process scheme (right)
The combination of a route scheme and a production process scheme in one figure (overlaid on each other) is required to comprehend the production process. Such a figure elucidates at which station/spot the events from the process scheme take place, or which route is used for transport.
Fig. 6: Useful simulation achievements for resource management
All manufactured steel parts are produced on the cutting machines. To increase throughput, the use of these resources should be optimized at all times. Plate delivery and part removal should be planned in such a way that the cutting portal is continuously operational. While one plate is being cut on a cutting support, the other cutting support in the same cutting portal should be loaded with a new plate, or parts should first be removed from that cutting support. Preventing the cutting portal from being brought to a standstill (waiting for an action or for steps in the cutting operation process) requires the evaluation of planning alternatives and facility alternatives. With the evaluation of planning alternatives it is, for example, possible to determine smart plate sequences. With the evaluation of facility alternatives it is possible to determine the influence of changing crane speeds, buffer dimensions or hall arrangements, or the influence of separating the marking and printing wagons from the cutting portal. These are all alternatives which could not easily be evaluated without the application of simulation.
Operation resources:
• Cutting machines: speeds, process times, dimensions portal and rails, number and dimensions of
wagons, personnel requirements, configuration (all different steps within the cutting operation
process are considered during simulation)
• Cutting supports: dimensions, process times
• Bending machines: working time, preparing time, personnel requirements
Logistic resources:
• Cranes: dimensions portal/bridge and rails, driving and lifting speeds, maximum weight,
personnel requirements
• Trucks: dimensions, driving speeds, maximum weight, personnel requirements
• Lift trucks: dimensions, driving speeds, maximum weight, personnel requirements
• Roller conveyors: dimensions, rolling speed, maximum weight
• Areas: dimensions, position and dimensions of zones
The personnel required to operate the resources are defined with a certain membership, qualification and quantity. A main parameter of the fixed assets is the required qualification of the operator(s). The membership of an employee depends on his/her multi-skilledness and determines whether he/she is only allowed to work at a particular location. The number of available employees with a certain membership and qualification(s) in a certain shift is defined in the personnel planning. If a resource needs an operator with a certain qualification, but no employee with that qualification and the right membership is planned in that particular shift, then the resource will not be activated.
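This activation rule can be written as a simple check; the staff and machine records below are invented for illustration.

def can_activate(resource, shift_staff):
    # resource needs an operator with a given qualification; each staff entry
    # carries a membership (where the employee may work) and qualifications.
    return any(resource["location"] in person["membership"]
               and resource["qualification"] in person["qualifications"]
               for person in shift_staff)

shift = [{"membership": {"hall_1"}, "qualifications": {"cutting"}}]
machine = {"location": "hall_1", "qualification": "cutting"}
print(can_activate(machine, shift))   # True: the machine is activated this shift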
Depending on the quality of the data, they need to be converted, sorted and possibly corrected; obviously, in a professional simulation environment this is all done automatically. In the simulation it is necessary to assign a unique name to every order, the Activity. This can, for example, be structured as "yard number + section number + subsection number". This unique name (Activity) is used throughout the simulation model to link plates and parts with the order data, see Fig. 8. By means of this Activity it is possible to find in the product database which parts belong to an Activity and which parts will be, or are, produced from which plate.
Fig. 8 gives a typical example of the information needed to create a simulation input file. If some information is not yet available, it can be generated, for example, on the basis of the parameters of Fig. 9. These parameters are based, e.g., on average values or formats from the product database.
Every order is noted in the production planning, but not all required product data are always available via the product database. This database is filled with data when a plate is nested, but nesting occurs only relatively shortly before actual production starts. The situation that the production of a certain order needs to be simulated before the order concerned is nested will therefore occur often. In that (or in other) cases, the lack of product data should be circumvented by generating product data with the help of parameters, see Fig. 9.
Fig. 8: Product data and personnel data required as simulation input
Fig. 9: Parameters for generating simulation input
The delivery list is a list of ordered plates; every plate has its own "Plate ID", and it is known for which order (Activity) every plate is ordered. The "Plate ID" is used to link a nesting file to a plate. The "Due date" is provided for every Activity; the delivery date is calculated with the parameter "Days between delivery date and due date".
The production planning is used to start the production of an Activity on the required production date. The "Due date" (according to the required planning, not necessarily achievable!) is provided for every Activity; the production date is calculated with the parameter "Days between production date and due date".
Via the machine planning it is known which plates will be cut on which cutting machine. The "Plate ID" identifies which parts, with which dimensions and with their own "Part ID", are cut from the corresponding plate. A code for the follow-up route in production is added to the Activity; during sorting and packaging, this code may be replaced by the customer-specific identification.
A year planning for personnel is generated from personnel, shift and calendar data. In the planning, the required personnel are defined with a certain membership, qualification and quantity, see Fig. 8.
5.2 Model
Based on the presented procedures and activities, a simulation model was built on the basis of the STS and of programming methods6 which start actions and control the elements in the model. A view of the structure of the Metalix model is shown in Fig. 10. A partial model of a production hall and a 3D view of the whole simulation model are presented in Fig. 11.
Fig. 11: Simulation model of Metalix in 3D with 2D representation of a partial model of a production
hall
5.3 Output
The STS automatically collects part statistics for all STS objects, as well as transportation tables. An example is shown in Fig. 12, which shows the single status statistics of an object in terms of waiting, working, blocked and maintained. Note that all graphs were obtained for a hypothetical production situation. On the basis of these utilization ratios, it is possible to obtain a quick and ordered insight into the influence of resource utilization on the production process, which is particularly useful for bottleneck analyses.
Fig. 12: Grouped resource utilization ratios and all single status statistics for an individual cutting
machine (hypothetical situation)
On the basis of the resource utilization ratios it is not possible to assess which order causes a delay, or which order is delivered too late in relation to the majority that is delivered on time. Without an in-depth comparison of the simulated production realisation with the planning, no conclusions can be drawn regarding the planning. Therefore, it is possible to create an MS-Project input file after every simulation run. The Gantt-chart representation is split up into four different outline levels; each next outline level goes deeper into detail with regard to the production of an order, as can be seen in Fig. 6. By smart sorting and usage of the different outline levels it is possible to quickly get an overview of and insight into the course of the planned production and its delays and disturbances.
6 A method queries information from an object and returns a value, or calculates a value and starts one or several actions that control the behavior of the object, eM-Plant 7.5 (2005).
The first outline level refers to the section level, indicated with order. The second outline level is the subsection level, indicated with activity. Outline level three represents the plate level, and the last outline level concerns the operation level. Every outline level shows when the production of the respective product (section, subsection, plate) or operation was started and finished, and when the order is ready for delivery. As a comparison, the planned production time for the order is indicated. This allows easy identification of orders or activities that are not executed according to the planning.
The combination of the resource utilization ratios, the MS-Project file and the possibility to trace every part in the simulated production via the part statistics and transportation table enables searching for cause and effect in the production realisation and identifying ways to improve it. These are usually far from obvious because of all the interdependencies in the process. From such analyses, conclusions can be drawn regarding improved production planning and resource management.
6. Concluding remarks
This paper describes the development of a simulation model for a steel pre-fabrication plant. It
includes virtually the whole production process of the company under study. This paper does not
describe the validation and verification process. However, from the comparison of the production planning with the output of the simulation, it was concluded that the simulation model is able to approximate reality without significant deviations from the planning.
The model is therefore applicable for operational control of the process and for testing alternative
scenarios and analyzing various facility lay-outs.
The study has shown that it is feasible to model a company like Metalix in the form of a simulation
tool. Preliminary experiences have indicated that the presented simulation techniques are useful for
the improvement of the competitive position of Metalix.
Because of the possibility to use the STS, the build-up time for the simulation model was drastically reduced. For someone who is used to working with the STS, the development of the presented simulation model of Metalix is a matter of weeks rather than months or years. How much time is needed for interfacing with the data systems of the particular company depends, of course, on the level of data management already available. This study proved that the toolkit is quite universally applicable.
In parallel to the work reported here, a simulation model of the panel line of IHC Holland in
Kinderdijk was developed. This is a unique situation which offers the opportunity to do research in
the area of multi-company supply chain simulation in ship production. Fine tuning of processes by
means of a shared planning which is based on federative simulation beyond the company boundaries
is a promising methodology to improve cooperation. This future scenario was considered during the development of both models, and it will, amongst others, be the subject of ongoing research at Delft University of Technology dedicated to Supply Chain Simulation in Ship Production. The goal is to examine the possibilities of simulating the total cooperative ship production process, leading to federative simulation across various organizational entities and thereby optimising the yard process including subcontractors. This will in turn result in an optimised integrated planning and material flow. For this research, special tools to enable federative simulation need to be adapted and implemented (the High Level Architecture (HLA) for Modelling and Simulation), which can be seen as an addition to the STS.
References
ALPHEN, H. VAN; GUYT, A.; NIENHUIS, U.; WAGT, J.C. VAN DER (2004), Virtual Manufacturing in shipbuilding processes, European Shipbuilding, Repair and Conversion - The Future, London, 2-3 November 2004
BALL, P. (2001), Introduction to Discrete Event Simulation, http://www.dmem.strath.ac.uk/~pball/simulation/simulate.html
BERNAERT, S.M.M. (2005), Simulation of production in a shipyard's machining centre, Proc. COMPIT 2005, Hamburg, pp.391-398
COENEN, J. (2006), A simulation-based toolkit for monitoring, evaluation, planning and control of engineering, Proc. 6th Int. Symp. on Tools and Methods of Competitive Engineering (TMCE), Ljubljana
eM-Plant 7.5 (2005)
KOŠTURIAK, J.; GREGOR, M. (1995), Simulation von Produktionssystemen, Wien
STEINHAUER, D. (2005), SAPP - Simulation Aided Production Planning at Flensburger, Proc. COMPIT 2005, Hamburg, pp.391-398
Lifecycle structural maintenance software
Richard P. Neilson, Vice President SafeShip Development, American Bureau of Shipping,
[email protected]
Satyajit Roychaudhury, Director Offshore Software Development, American Bureau of Shipping,
[email protected]
Christopher Serratella, Chief Engineer – SafeShip Project, American Bureau of Shipping,
[email protected]
Larry Benthall, Project Manager, American Bureau of Shipping, [email protected]
Abstract
The American Bureau of Shipping has developed a computer application for maintenance planning
for ship hull structures. A three dimensional structural model is used as the foundation for recording
the condition of the ship from inspections, and the model contains all the pertinent information
relevant to each element of the structure including scantlings, material grade, dimensions and date of
construction and repair. All attributes of maintenance significance can be entered into the model
including critical areas of concern for inspection, findings from inspections, damages, repairs,
gaugings, hull coating condition and pitting. This paper describes the application and, in particular, how it can be used as the basis for a targeted critical-area-based or risk-based inspection (RBI) program for a hull structure.
1. Introduction
After delivery of the vessel to the owner, the emphasis of a vessel information system shifts from the
shipyard's manufacturing-oriented PLM system to the owner's maintenance- and operation-oriented
system. The detailed information shifts from the static part-definition view required in the design
and construction phase towards the time-based condition and prediction emphasis required during the
operations phase. Much of an owner's perspective on lifecycle management fits naturally with the
data framework maintained by Classification Societies, since they also have a participatory role
throughout the vessel's lifecycle; their involvement during design and construction feeds naturally
into the assessment of the vessel's condition and its maintenance needs during operation.
As analysis techniques have become more sophisticated, ship and offshore structures have become
more complex and innovative. As a result, designs do not necessarily mimic those of their
predecessors, and certain aspects are unique in their configuration. In particular, naval vessel design
emphasizes minimum structural weight in order to maximize payload and speed. Compounding this
problem is the fact that many organizations differentiate between "Capital Expenditure" and
"Operational Expenditure" budgets. This can mean that design features which would improve a
vessel's operability, inspectability and maintainability are rejected at the procurement stage in the
interest of decreased initial costs; as a result, the traditional, prescriptive inspection and maintenance
methods are no longer the most effective. It is imperative that those elements of the vessel's structure
more prone to corrosion and fatigue be identified early in the design process and carried as known
"critical areas" during the operational phase.
In the marine world, companies operating in the offshore oil and gas sector with effective asset
integrity management programs for their platforms (i.e. Floating Production, Storage, and Offloading
vessels or FPSOs) strive to consider all aspects of operation and maintenance throughout the life-
cycle of that asset from design to decommissioning.
This paper describes an approach to developing a risk-based inspection plan for vessel structures and
presents a tool to aid in the management of that plan, an approach that has the potential to yield
significantly improved asset integrity management and cost savings.
While there remains a strong need for traditional rule-based and prescriptive approaches, marine
assets are becoming more complex, have a higher degree of novelty, and many aspects of their
designs are falling outside of traditional Class rules. Ever-expanding technologies often require the
abandonment of trusted methods, the stretching of boundaries, and the adoption of new unfamiliar
procedures. Risk and reliability based design, operation, and integrity management programs are all
becoming more commonplace in this environment.
In addition, the operator’s control of the integrity management of its assets must now reach far beyond
the minimum. Society now expects due diligence and proactive management from vessel operators.
Today’s organizations must also adapt to constant advances in technology while burdened with the
mandate to do more with less as budgets grow leaner by the day.
Companies with effective asset integrity management programs strive to consider these facts
throughout the life-cycle of that asset. The leading edge operators ensure that operability and
maintainability are considered from the initial concept and detailed design, through construction,
commissioning, onward through operation and potentially through a life extension program.
This application was developed by ABS to provide a tool for surveyors to report survey findings
in a more accurate and consistent manner, and for the use of owners and operators as they may desire.
The architecture of the system is currently a standalone application, with plans to expand to an
enterprise solution for ABS and for customers, as shown in Fig. 1a.
2. The Hull Maintenance Application
2.1 Benefits
• Hull Maintenance provides the owner/operator the tools to effectively track & manage the
structural integrity of their asset.
• Structure is modeled as built.
• History is recorded on the structure via gaugings, anodes, pitting, coating condition, damages
& repairs.
• Inspection Plans can be generated and tracked.
• Critical Areas can be defined and tracked for a Risk-Based Inspection (RBI) approach.
• Digital images/files can be linked to the inspection findings.
2.4 Navigation
The navigation within the application was designed with the aid and feedback of personnel
responsible for hull maintenance. Multiple options are provided for the user to navigate through the
model to access the various functions (a toy sketch of this drill-down follows the list):
• Whole vessel - The highest level is the 3D view of the vessel (Fig. 2), where the user is able to
view the compartment arrangement of the whole vessel.
• Compartment - The second level is the detailed view of a compartment showing all the structural
parts (Fig. 3). The user can hide these parts to get a better view of the internal arrangement.
• Structure - The third level is the 2D detailed view of the structural part (Fig. 4). This view
displays the plates, stiffeners and brackets that constitute the structural part. The user can select a
subpart and query its attributes.
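As a rough illustration of this three-level drill-down, the following sketch models the hierarchy as nested Python dictionaries. The compartment and part names are invented for the example; the real application navigates a 3D product model, not plain dictionaries.

# Minimal sketch of the three-level navigation hierarchy (hypothetical names)
vessel = {
    "No.2 cargo tank": {
        "FR73 web frame": ["plate P1", "stiffener HP200x10", "bracket B3"],
        "inner bottom": ["plate IB1", "stiffener HP240x11"],
    },
    "fore peak": {"collision bulkhead": ["plate CB1"]},
}

def drill_down(model, compartment=None, structure=None):
    """Return what the user sees at each navigation level."""
    if compartment is None:
        return list(model)                # level 1: whole-vessel view
    if structure is None:
        return list(model[compartment])   # level 2: compartment view
    return model[compartment][structure]  # level 3: 2D structural-part view

print(drill_down(vessel))
print(drill_down(vessel, "No.2 cargo tank"))
print(drill_down(vessel, "No.2 cargo tank", "FR73 web frame"))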
Fig. 2: View Whole Ship
2.5 Updating the Model with Condition of Vessel
The condition of the vessel is updated on layers associated with the inspection event type and
identification, and the date of the event.
In addition, the application can analyze the vessel based on its updated condition (a sketch of the
corrosion-trend idea follows the list):
• produce corrosion trends and life-prediction graphs for the subparts
• calculate the section modulus at any longitudinal section of the vessel based on the gauged
results.
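The corrosion-trend idea can be pictured as a least-squares fit of thickness gaugings over time, extrapolated to a renewal limit. The gauging format, the renewal limit and the numbers below are illustrative assumptions, not the application's actual algorithm or any Class renewal criterion.

from datetime import date

def corrosion_trend(gaugings, renewal_mm):
    """Fit a linear corrosion rate (mm/year) to thickness gaugings and
    predict the remaining years until the renewal limit is reached.
    gaugings: list of (date, measured thickness in mm) - hypothetical format."""
    t0 = gaugings[0][0]
    xs = [(d - t0).days / 365.25 for d, _ in gaugings]
    ys = [thk for _, thk in gaugings]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    # least-squares slope; a negative slope means diminishing thickness
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    rate = -slope  # wastage in mm/year
    years_left = (ys[-1] - renewal_mm) / rate if rate > 0 else float("inf")
    return rate, years_left

rate, life = corrosion_trend(
    [(date(2000, 5, 1), 12.0), (date(2003, 5, 1), 11.4), (date(2006, 5, 1), 10.9)],
    renewal_mm=9.4)
print(f"corrosion rate {rate:.2f} mm/yr, about {life:.1f} years to renewal limit")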
Digital images can be linked to the damage and a recommendation attached to it. Damages are tracked
in the system as “Open” (not repaired), “Partially Done” (partially repaired) or “Done” (completely
repaired).
In the above example (see Fig. 7), an area has been defined on the structural part and an associated
recommendation specified. Users of this application can view the list of recommendations (Open,
Partially Done or Done). This assists maintenance personnel in quickly seeing the status of
outstanding recommendations and the recommendation associated with each damage.
Repairs to a damage can be planned and executed in a graphical manner. The repair is defined by
drawing a polygon around the damaged area. Once this is defined, the system provides a material list
that can serve as guidance for the repair. Once the repair is carried out, the user can replace the
damaged area with the renewed structure and change the status to "Done".
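A minimal sketch of how a polygon drawn around a damaged area can drive a material estimate is given below: the shoelace formula yields the area, and steel density gives the weight of the insert plate. The function and its inputs are hypothetical; the real tool derives its material list from the 3D model.

def repair_material(polygon_m, thickness_mm, density_kg_m3=7850.0):
    """Estimate replacement plate material for a repair area outlined as a
    polygon (vertices in metres on the structural part) - illustrative only."""
    n = len(polygon_m)
    area_m2 = abs(sum(polygon_m[i][0] * polygon_m[(i + 1) % n][1]
                      - polygon_m[(i + 1) % n][0] * polygon_m[i][1]
                      for i in range(n))) / 2.0  # shoelace formula
    weight_kg = area_m2 * (thickness_mm / 1000.0) * density_kg_m3
    return area_m2, weight_kg

area, kg = repair_material([(0, 0), (2.4, 0), (2.4, 1.2), (0, 1.2)], thickness_mm=12.5)
print(f"insert plate approx. {area:.2f} m2, {kg:.0f} kg")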
The dialog box in Fig. 8 shows how the user can define the inspection plan for a compartment and
provide the inspection schedule and the scope of the inspection. A text box is provided for any
comments.
Once the inspections have been defined, the timeline (Fig. 9) provides a convenient way to track
them. The timeline gives an overview of the inspections that are due, overdue and done. Clicking on
a timeline bar navigates the user to the findings for that inspection, once it has been completed, and
to any attached digital images.
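The due/overdue/done bookkeeping behind such a timeline might look like the sketch below; the plan structure, names and dates are invented for illustration.

from datetime import date

def inspection_status(plan, today):
    """Classify planned inspections as done / overdue / due, as on the
    timeline view - a sketch with a hypothetical plan structure."""
    status = {}
    for name, item in plan.items():
        if item.get("completed"):
            status[name] = "done"
        elif item["due"] < today:
            status[name] = "overdue"
        else:
            status[name] = "due"
    return status

plan = {
    "No.2 wing tank close-up survey": {"due": date(2006, 3, 1), "completed": True},
    "Fore peak UT gauging": {"due": date(2006, 1, 15)},
    "No.4 ballast tank coating survey": {"due": date(2006, 9, 1)},
}
print(inspection_status(plan, today=date(2006, 6, 1)))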
Fig. 10: Inspection Findings
2.7 Coatings
Extensive features related to coatings are provided. An overview of the features is listed below:
• Coating Assessment - Coating Type & Specs: Allows the user to enter/edit manufacturers,
coating types and specifications for the coating of a tank/compartment.
• Compartment Coating Assessment: The user can enter the total area (which can also be
calculated automatically), area of breakdown, area needing repair and overall condition,
schedule the coating repair and provide comments.
• Coating Assessment - Repair Procedure: The application automatically brings up the coating
specification (surface preparation and coating types) and the user enters the coating
application method, blast material, and whether scaffolding is needed.
• Coating Assessment Report: A Coating Assessment report complete with specification and
repair procedure can be generated.
2.8 Anodes
Anodes can be attached to the structural parts, condition recorded and reported upon.
3. Risk-Based Inspection
The marine and offshore industries are drawing upon the lead set by other industries, such as the
nuclear and aircraft industries, in the application of risk-based approaches for design and in-service
inspection. These approaches are now moving into the upstream sector of the oil and gas industry
and, to a lesser extent, the shipping industry.
The goal of a risk-based inspection study is to ensure that resources (inspection manpower and
costs) are directed to where they offer the most probable benefit in risk reduction.
Of particular interest has been the application of risk-based inspection (RBI) techniques, in which
experience-based data related to corrosion, corrosion protection and fatigue performance, together
with a better understanding of these degradation mechanisms, are applied to set inspection
frequencies and scopes. The implementation of these risk- and reliability-based inspection techniques
in the development of a plan provides an alternative to prescriptive, time-based inspection planning.
Degradation models and input from subsequent inspections are used to forecast the condition of the
structure. The risk-based method includes aspects of condition-based methods, using trending
techniques to estimate likelihood, but it also factors in an estimate of the consequences of the
structure's degradation and potential failure, enabling the program resources to be optimized and
focused on inspecting those items which have a greater risk weight overall.
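In its simplest form, this weighting amounts to ranking items by likelihood times consequence, as in the sketch below. The ordinal 1-5 scores and item names are illustrative assumptions, not a calibrated risk model.

def rank_by_risk(items):
    """Rank structural items for inspection by risk = likelihood x consequence
    (ordinal scores); highest risk first."""
    return sorted(items, key=lambda it: it["likelihood"] * it["consequence"],
                  reverse=True)

items = [
    {"name": "wing tank cross tie", "likelihood": 4, "consequence": 3},
    {"name": "deck-to-hull connection", "likelihood": 2, "consequence": 5},
    {"name": "inner bottom plating", "likelihood": 3, "consequence": 2},
]
for it in rank_by_risk(items):
    print(it["name"], it["likelihood"] * it["consequence"])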
RBI for hull structures is becoming a prevalent trend in the offshore oil and gas industry. Operators
see significant benefits in developing RBI plans that are tailored to their asset with regard to both
design and operation. By taking this approach, the inspections are more targeted and the operational
constraints better managed, resulting in a more optimized inspection program while maintaining the
same level of safety.
Note that for the risk-based approach, a major contributor is the foundation of experience from the
Class rules.
There are significant benefits in developing a plan that is tailored to a specific class or type of vessel
rather than following a rule based approach. The following provides a list of some key benefits from a
risk-based inspection plan.
• Asset-Specific Plan - The plan is tailored to the particular design and operational variables
such that resources are focused on the highest-risk components. This can influence inspection
frequency and compartment inspection sequencing (i.e., sampling among tanks). The
advantage is more focused inspections, for example a focus on fatigue where strength-related
failures are a much lower risk due to the loading. The plan can also incorporate overall
business requirements, such as storage requirements or compartment downtime limitations.
• Demonstrable Basis for the Inspection Plan - An RBI plan provides a rational basis for the
extent and methods of inspections based on structural analysis and structural reliability. This
allows additional flexibility for inspection planning and execution as well as a better
understanding of which items are critical.
• For RBI, the data collected from the inspections is used to validate and update the degradation
models and determine if adjustments in future inspections are warranted. As a result, some
form of electronic data management tool is typically required to store and trend data.
3.1 Methodology
The overall approach to developing the RBI plan uses structural reliability methods to determine the
inspection intervals, based on the environmental loading applied to strength considerations of the
hull girder as well as of stiffened and un-stiffened plate panels. By tracing the time-varying
reliability index of these structural components, the risk-based inspection intervals can be
determined. This methodology has recently been implemented to provide the foundation for several
risk-based inspection (RBI) plans for floating oil production units located offshore West Africa.
The plan development not only includes structural analysis results but also historical data, tank service
condition data, condition summary, qualitative leak risk assessment and information on all other
external structures that may affect the hull inspection.
The process starts with the structural analyses, consisting of both strength and fatigue assessments.
The analyses provide global stress and fatigue results in the “as-is” condition as well as local models
of critical areas to further refine the assessment. The results of these assessments allow the
identification of specific critical areas of the structure that are more prone to high stress levels or
fatigue damage, so that they can then be targeted in the inspection program.
The "degradation models" comprise calibrated limit state equations for the various failure modes;
used within the reliability analysis, they draw upon the vessel's past history, the structural analyses
and a qualitative risk analysis. The degradation models enable forecasting and time-varying
reliability methods to be used in determining acceptable inspection intervals as well as the most
appropriate inspection methods for those intervals. These methods allow the results from degradation
model predictions, structural/fatigue analyses and other factors to be assessed and compared, using
probabilistic techniques, against pre-defined reliability targets.
The reliability targets are driven by risk and the potential consequences identified as part of the
qualitative risk assessment. The results from the degradation modeling and reliability analysis are
inspection intervals for a component or system that will allow that component or system to maintain
an acceptable level of reliability. These models and analyses are updateable so that the most recent
information is used when determining the reliability level for both strength and fatigue.
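To make the idea concrete, the sketch below traces a mean-value reliability index beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) for a corroding plate under constant load and reports the first year it falls below a target. The limit state, statistics and numbers are deliberately simplified illustrations, not the calibrated limit state equations used in the actual studies.

from math import sqrt

def inspection_due_year(beta_target, thk0_mm, rate_mm_yr,
                        sy_mean=315.0, sy_cov=0.08,  # yield stress (MPa), COV
                        s0_mean=150.0, s_cov=0.15,   # initial net-section stress
                        horizon=25):
    """First year the annual reliability index of a corroding plate falls
    below the target, for the toy limit state g = R - S."""
    for year in range(horizon + 1):
        thk = thk0_mm - rate_mm_yr * year
        mu_s = s0_mean * thk0_mm / thk       # stress grows as thickness is lost
        sig_r, sig_s = sy_cov * sy_mean, s_cov * mu_s
        beta = (sy_mean - mu_s) / sqrt(sig_r ** 2 + sig_s ** 2)
        if beta < beta_target:
            return year
    return None  # target not reached within the horizon

print(inspection_due_year(beta_target=3.9, thk0_mm=14.0, rate_mm_yr=0.15))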
The qualitative risk assessment identifies the potential consequences related to hull structural damage.
The risk assessment is used to highlight and account for other factors that may impact hull integrity
not necessarily covered by the strength and fatigue analyses or the reliability analysis (such as leak
potential from pitting damage, coating breakdown, etc.). The results from this assessment are used to
adjust the individual component target reliabilities up or down on a risk basis, which in turn
influences the required inspection intervals. Furthermore, the input from operations personnel and the
risk results generated during the exercise provide a forum to identify key or critical inspection
locations as well as to understand potential consequences (i.e., impact on operations) related to the
structural integrity of the hull (see Figs. 12 and 13).
Fig. 12: Annual reliability index of an analyzed panel over time (0-16 years) at 1x, 2x and 3x the
estimated corrosion rate, compared against the annual target for Consequence II
Fig. 13: Fatigue-life safety index over time (0-25 years) for structural details 1-13 (P/S), showing the
effect of no inspection, visual inspection and MPI on the same connection, against the Consequence
III target
The final stage of the process is the development of a forward-looking, risk-based structural
inspection plan for the asset. The key to this part of the RBI is the development of a general rule set
for combining the results of the qualitative risk ranking, the structural analysis and the degradation
and reliability models. A systematic approach is used which takes the strength and fatigue reliability
as the primary basis (i.e., starting point) for setting the intervals and then draws upon other data, such
as sampling inspections, critical inspection points, outstanding issues and general Class requirements,
to adjust the inspection intervals.
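A toy version of such a rule set, starting from the reliability-based interval and adjusting it by the qualitative risk ranking and a Class cap, could look like the following; the adjustment factors and the 5-year cap are assumptions for illustration only.

def inspection_interval(base_years, risk_rank, class_max_years=5.0):
    """Adjust a reliability-based inspection interval by qualitative risk
    ranking, capped by a general Class maximum - illustrative factors."""
    factor = {"low": 1.25, "medium": 1.0, "high": 0.5}[risk_rank]
    return min(base_years * factor, class_max_years)

print(inspection_interval(base_years=6.0, risk_rank="high"))  # 3.0 years
print(inspection_interval(base_years=6.0, risk_rank="low"))   # 5.0 years (Class cap)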
3.2 RBI Study Results
The conclusions drawn from the study include an optimized inspection schedule for the hull structure
as well as the definition of various areas requiring increased scrutiny, hereafter defined as "critical
areas". These critical areas are made up of a series of key locations that have been deemed to require
monitoring above and beyond the typical Class requirements of visual examination and UT gauging.
They comprise strength-critical (yielding and buckling), damage-prone and fatigue-sensitive areas
within the hull. Typical examples of critical areas found in the study are summarized in Table I.
The process of developing the RBI program involves determining an optimized schedule of
inspections for the various hull structural components, taking into account the various degradation
mechanisms at play in tandem with the critical areas identified within the study.
Table I: Typical examples of critical areas found in the study

Applicable Location: Wing Tanks
Description: Cracks detected in cross ties of 2nd frame aft of OTB
Impact: Fatigue reliability
Actions & Responsibilities: The typical inspection sampling is 4 similar connections (i.e., inspections
of connections in the same tank and/or other tanks under similar loading conditions). Up-sampling is
required if a crack is found as part of the inspection of the 4 connections, in order to obtain an
acceptable reliability level.

Applicable Location: 2S, 4P, 4S and Fore Peak
Description: Severe localized wastage in way of critical connections (i.e., crane pedestal, stair tower,
turret arms and strut) and deck-to-hull connections
Impact: Safety
Actions & Responsibilities: If severe wastage is observed, technical support must be informed of the
condition and provided with thickness measurements and diagrams showing the affected region. This
type of deterioration will be addressed on a case-by-case basis to determine the type of mitigation
required.

Applicable Location: Wing Tanks
Description: Localized buckling of LBH local plate paneling (i.e., around regions which have been
reinforced)
Impact: Strength reliability
Actions & Responsibilities: This has been deemed a strength-critical area per the reliability
calculations. Close visual inspection for signs of buckling is required during scheduled wing tank
inspections.
4. The Process Using the ABS Hull Maintenance Application for RBI
Once the results of the RBI study are known, the program results are then entered into the ABS
SafeShip Hull Maintenance (HM) software tool which can then be used to manage the data for the
program. This tool has the ability to create an inspection plan for the unit, utilizing critical area
inspection scopes, and can track a variety of degradations in graphical form such as corrosion levels,
anode depletion, coating failure, and other various structural related damages.
The HM program can be used as a complete inspection management tool. The package enables
inspection work packets to be created for general inspections as well as for the critical areas,
including non-destructive testing inspections, such as eddy-current and magnetic particle inspection,
as well as thickness measurements and close visual inspections. The program can be used for both
risk-based and prescriptive (i.e., rule-based) plans.
Critical areas as defined by the RBI study are then created within the HM database for monitoring by
the integrity team on the asset. See Fig. 30 for an example.
Fig. 30: Example critical area inspection record - Mobil Producing Nigeria gauging locations and
CVI inspection report, with thickness gauging patterns T1-T4 on the turret strut plate forward of the
ring stiffeners (no thickness gaugings taken at T1)
Conclusion
Responsible fleet managers are turning more frequently to a structural maintenance approach based
upon maintenance history, structural analysis and risk analysis. A system for the rational development
of a structural maintenance program based on risk principles utilizing a graphic display tool has the
following advantages:
• Easily understood visual images
• Permanent record of complete structural history
• Potential for trending of structural failures against a fleet of vessels
• Application of inspection resources based on historical maintenance needs and consequence
of failure
• Potential cost savings
Requirements on Software Tools to Support an Efficient, High-Quality
Assembly Process
Hans-Günther Mütze, AVEVA GmbH, Hamburg/Germany, [email protected]
Abstract
It is a well-known fact that the quality and the efficiency of the assembly process have a significant
impact on the cost level of a shipbuilding project. This paper suggests that shipbuilding support
software can make the assembly process more efficient, provided that it is well developed in the
following three main areas:
• Very early definition of a dynamic break-down structure, supported by relevant analysis
tools.
• Efficient modelling of topologically connected structural members, enabling an automatic
extraction of accurate part definitions, including details such as edge preparation and
compensation for thermal distortion.
• Support of the assembly process by automatically produced documentation in combination
with physical marking on parts providing a 3D-lockdown of part positions within the
assembly.
The underlying Product Information Model should support a gradual build-up of continuously
refined data, in order to enable early estimates as well as highly accurate information from the
analysis of the final model.
The goal should be to leave practically no room for assembly mistakes, to minimize the need for
excess material and to eliminate manual fitting and rework. An additional benefit of meeting these
requirements is high quality in the parts manufacturing work as well, all leading to shorter lead times
and reduced overall cost.
1. Introduction
A shipbuilding project has a lifecycle ranging from concept studies during the initial design phase to
the commissioning and delivery of the finished ship. This is a complex process, often carried out
under difficult commercial constraints and time pressure. Therefore, one of the most important
success factors is the shipbuilder's ability to plan its resources in terms of facilities, material and
labour.
All parts of a ship must be assembled into one product through many stages of assembly. The
ambition is to manage both the planning and the physical activities efficiently by carrying out
assembly operations at as early a stage as possible. The planning of the assembly process requires
extensive support to organize the design data into production assemblies. Furthermore, the design
data have to be created in such a way that the parts can be manufactured efficiently, and the
manufactured parts should be ready for easy assembly.
Fig. 2: Assembly of parts to subassemblies and of subassemblies to assemblies,
example from Tribon M3
2. Modelling
During the last 10 to 20 years, shipyards have gradually increased their use of 3D models for the
steel structure as well as for outfit items like equipment, pipe systems, ventilation, cabling etc.
Today's approach is to have the complete ship as a 3D model for calculations, coordination, planning,
manufacturing and assembly of parts.
Fig. 3: From initial design to basic design phase, examples from Tribon M3
One of the first decisions to be taken is how the ship is to be split into main blocks. Available tools
must be able to quickly define alternative block boundaries and to calculate the weights based on the
early 3D hull steel model. The weights are critical since they have to match the maximum lifting and
transport capacity of the yard’s facilities.
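The kind of check described here can be sketched as follows: parts are binned into blocks by the longitudinal positions of the block seams, and each block weight is compared with the crane capacity. The part records, seam positions and capacity below are invented; a real system would take them from the 3D steel model and yard data.

import bisect

def block_weights(parts, seams_x, crane_limit_t):
    """Sum part weights per block, with blocks delimited by longitudinal
    seam positions, and flag blocks exceeding the lifting capacity.
    parts: [(x_centre_m, weight_t), ...]; seams_x: ascending positions."""
    blocks = [0.0] * (len(seams_x) + 1)
    for x, w in parts:
        blocks[bisect.bisect(seams_x, x)] += w
    for i, w in enumerate(blocks):
        flag = "OK" if w <= crane_limit_t else "EXCEEDS CRANE CAPACITY"
        print(f"block {i}: {w:.1f} t  {flag}")
    return blocks

block_weights([(5.0, 120.0), (18.0, 310.0), (22.0, 95.0), (40.0, 260.0)],
              seams_x=[12.0, 30.0], crane_limit_t=400.0)

Moving a seam (e.g. from 30.0 m to 20.0 m) and re-running immediately shows the new block weights, which is the dynamic behaviour asked for above.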
Fig. 4: Production Block Weight Calculation based on Design Block Steel (M3 example)
At this stage, early estimates of surface areas (for painting) and weld lengths can also be obtained,
both being important parameters in the planning process. As a result of these activities, a Build
Structure is defined. It specifies how the ship is subdivided into assemblies and subassemblies. It also
defines how these units are assembled and built using the available hull steel line capacity.
Once the first block division is decided, the steel model is subdivided into the production blocks by
the block seams. In order to find the optimal block division, the positions of the block seams should
be dynamic: the block division can then be defined early and later adapted to new requirements, e.g.
from production planning. Any such changes must naturally also affect the production panels and the
assemblies in which these parts are collected.
To save time and reduce costs, different disciplines work in parallel during the shipbuilding process.
Typically, detail designers work in parallel with the structural designers as well as with the assembly
and production planners. All of them prepare parts of the information that is needed in the end to
manufacture the parts and assemble them together to form the real ship. During the whole process,
the model must be viewable from at least two perspectives: one for the modelling of blocks and
panels, and another for assembly, based on assembly structures. It should be possible to create
production material, weight calculations and drawings from both views.
2.2 Efficient Modelling and automatic part creation for manufacture and assembly
In the detail design phase, the production panels first defined in the basic design phase are refined
with more details like endcuts on profiles to create the final parts to be manufactured and assembled.
Also the production material is defined here.
If a panel was not defined during the basic design phase, the geometry of the parts should be defined
with its topological connections and dependencies, to keep the definition consistent under
modifications. Coordinates should be used only where this is not otherwise possible. When, for
example, the frame below is to be created, the first definition is its mould plane, e.g. FR73. The
boundary of this panel should not be defined via coordinates; instead, the surrounding objects should
be used: here the inner bottom, the left longitudinal girder, the outer contour and the right
longitudinal girder.
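The following toy classes illustrate the principle of topological definition: the panel stores references to its bounding objects rather than copied coordinates, so a change to a neighbour propagates automatically. This is only a sketch of the concept, not Tribon's data model.

class Plane:
    """A named reference plane; pos = position along its own normal."""
    def __init__(self, name, pos):
        self.name, self.pos = name, pos

class Panel:
    """Panel bounded by references to surrounding objects, not coordinates."""
    def __init__(self, mould_plane, boundaries):
        self.mould_plane = mould_plane
        self.boundaries = boundaries  # side name -> referenced object

    def extent(self):
        # boundary geometry is re-evaluated from the referenced objects
        return {side: obj.pos for side, obj in self.boundaries.items()}

inner_bottom = Plane("inner bottom", 1.6)
left_girder, right_girder = Plane("LG-P", -5.4), Plane("LG-S", 5.4)
frame = Panel(Plane("FR73", 58.4),
              {"lower": inner_bottom, "left": left_girder, "right": right_girder})
print(frame.extent())
left_girder.pos = -5.0   # design change: the girder is moved
print(frame.extent())    # the panel boundary follows automatically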
Fig. 6: Concurrent Basic and Detail Design and Assembly Planning in Tribon M3 - design
blocks/panels (basic and detail designers' views) connected to production blocks/assemblies
(production view) via persistent links
To maintain a consistent model, additional steel panels, as well as additional details on already
defined ones, should be defined with topological connections and dependencies. For example, the
stiffeners on the frame should be defined as connected to the stiffeners on the deck above and the
shell below; a cutout depends in its position and size on the position and size of the intersecting
stiffener; a notch is related to a shell seam; brackets connect two stiffeners according to rules;
stiffeners are placed on the fore or aft side of the plate; etc.
All the design work described above defines the shapes of parts as they will be in the real ship.
However, for production and manufacturing reasons, a full model must also contain production-
related attributes influencing the shape of parts. Such attributes include bevelling, shrinkage, excess
etc. Only then can an automatic part production process create the parts with correct production sizes.
It should be possible to define bevels at plate edges, holes, profile traces and profile ends. Bevels
should be definable by type (including angle, nose height, gap, and symbols in drawings and
nestings), either manually or fully automatically by the system to avoid user errors. To reduce the
amount of welding material and time, it should also be possible to calculate a varying bevel that
automatically follows the contour.
The heat involved in welding parts together during the assembly process causes the parts to shrink.
The classical way to handle this is to add excess, which has to be cut away by hand after the assembly
process, a costly procedure in terms of time and money as well as edge quality. This leads to the
requirement to avoid excess and to let the software calculate the shrinkage based on shipyard
experience.
It should be possible to calculate shrinkage automatically for planar as well as curved parts, both as
overall shrinkage, e.g. when parts are connected to a plate, and as local shrinkage, when plates are
connected to each other. Thus the amount of excess can be reduced dramatically, which reduces
production time and increases quality.
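In the simplest case, shrinkage compensation scales the part geometry up so that the part welds out to its nominal size, as in the sketch below. The scale factors are placeholders; in practice they would be derived from shipyard experience per process, material and plate thickness.

def compensate_shrinkage(points, f_long=1.0006, f_trans=1.0009):
    """Scale a part's 2D contour by longitudinal/transverse shrinkage
    factors so it reaches nominal size after welding - factors assumed."""
    return [(x * f_long, y * f_trans) for x, y in points]

nominal = [(0.0, 0.0), (6000.0, 0.0), (6000.0, 2000.0), (0.0, 2000.0)]  # mm
print(compensate_shrinkage(nominal))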
All this contributes to make the assembly process as easy as possible.
Fig. 7: Typical Panel in the double bottom, example Tribon M3
Fig. 9: Example of an NC file for plate cutting
The parts generation process should generate the parts as defined in the modelling process, taking
bevel, shrinkage and excess into account, as well as bending allowance for knuckled parts. It should
also create profiles and plates with all relevant marking lines in their correct places, e.g. marking
lines for stiffeners extended by the calculated shrinkage, and the same for part ends.
It also helps considerably in the assembly process when parts can be placed directly on the marking
lines of the parts they connect to, with additional markings showing the exact position of the two
parts relative to each other. The parts generation process should calculate these automatically.
Marking triangles can "lock" parts in three dimensions on planar and curved panels. The triangles
should be positioned to allow for weld shrinkage, expected weld gaps etc.
The aims in removing the need for excess material are:
• Improving accuracy
• Improving productivity
• Reducing cycle times
3. Assembly Planning
The assembly process starts with the connection of parts into subassemblies, of subassemblies into
assemblies, etc. Assembly planning should make it possible to model the building structure exactly
as it is made in the workshop, independent of the modelling structure. The hull and all outfit steel
structures, as well as outfitting such as pipes, equipment, cableways and ventilation ducts, should be
collected into assemblies. It should also be possible to extract planning data and production material
such as drawings.
Fig. 11: Detailed Assembly Definition Procedure
It is obviously necessary to be able to define the assembly orientation in the software so that
automatic drawings and weld calculations can be based on that position. E.g. a frame with stiffeners
on the aft side will be produced in the "fore down" position in the workshop.
Fig. 13: Example of a frame in Tribon M3 Assembly Planning
4. Weld Planning
The calculation of the welds should be based on the assembly structure and take the orientation of the
assembly into account, which is normally different from the orientation of the parts in the ship.
Only relevant welds should be calculated, i.e. those where parts are welded together in this assembly
stage. Each subassembly needs to be treated as a single part. Exactly those welds should be shown for
this process, with length, weld thickness, weld position etc.
For each assembly, a weld planning list should be created that contains all relevant information, such
as weld length, position and weld size, needed to weld the parts together and to calculate the time
required to create the assembly.
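A minimal sketch of such a list and its time estimate is shown below; the weld records and the deposition rates by welding position are assumed values for illustration.

def weld_plan(welds, rate_mm_per_min):
    """Total weld length and welding-time estimate for one assembly.
    welds: hypothetical records with length, throat and welding position."""
    total_mm = sum(w["length_mm"] for w in welds)
    minutes = sum(w["length_mm"] / rate_mm_per_min[w["position"]] for w in welds)
    return total_mm, minutes

welds = [
    {"length_mm": 2400, "throat_mm": 4.0, "position": "downhand"},
    {"length_mm": 1200, "throat_mm": 4.0, "position": "vertical"},
]
length, minutes = weld_plan(welds, {"downhand": 300.0, "vertical": 120.0})
print(f"{length} mm of weld, about {minutes:.0f} min")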
A neutral file should also be exported for post-processing for welding robots.
All the activities described above should ensure that the person in the workshop who has to assemble
parts into a subassembly has an automatically generated assembly drawing with a part list, weight
and COG information, and a weld list with information about weld length, size and position. He
should also have the needed parts available on a pallet at the time he needs them. The plate parts, for
example, carry marking lines showing where to connect the profiles, with material side indication
and a locking marking (triangle) which can also be found on the profile. He only needs to place the
profile onto the marking line on the plate at the correct position, check that the triangles are in line,
and weld the parts together. This avoids many errors and increases the quality. Some examples of
such information from Tribon M3 are shown below.
Fig. 16: Examples of a plate part in Tribon M3
Fig. 18: Example of a flange part in Tribon M3
Conclusions
The explanations above show that an early definition of a dynamic break-down structure leads to
early planning and concurrent design activities, which can reduce costs and save time. Efficient detail
modelling can create all the information necessary to manufacture the parts and assemble them to
build up the ship.
Workshop personnel receive parts with all relevant markings, together with drawings and lists for the
assembly to be built, so that they "only" need to put the parts together according to the markings,
assembly drawings and lists, and weld them together according to the weld lists. Weight and COG
data are also available for transportation planning. The workshop people can view the model at the
assembly level (their level).
All this together makes the assembly process easier, shorter, cheaper and more accurate.
Index of Authors
6th International Conference on
Computer Applications and Information Technology in the Maritime Industries
COMPIT'07
Il Palazzone, Cortona/Italy, 23-25 April 2007
Advisory Committee:
Manuel Armada, IAI-CSIC, Spain
Volker Bertram, ENSIETA, France
Berend Bohlmann, FSG, Germany
Christian Cabos, GL, Germany
Chuck Calvano, ONR, USA
Emilio Campana, INSEAN, Italy
Carlos Gonzalez, SENER, Spain
Martha Grabowski, Le Moyne, USA
Yvonne Masakowski, NWC, USA
Ehsan Mesbahi, Univ. Newcastle, UK
Ubald Nienhuis, TU Delft, NL
Philippe Rigo, ANAST, Belgium
Marcos Salas, Univ. Austral, Chile
Bastiaan Veelo, NTNU, Norway
Venue: The conference will be held at Il Palazzone, a renaissance palace set in the famous
Tuscan landscape in Cortona, between Florence and Rome.
Accommodation is available at and near the conference venue.
Format: Papers to the above topics are invited and will be selected by a selection committee.
There will be hard-cover black+white proceedings and papers may have up to 15
pages. Papers will also be made available in pdf format.
Fees: 550 Euro participation fee (including all meals and conference dinner)
275 Euro for students incl. PhD students