Advances in Intelligent Robotics and Collaborative Automation
Series Editors
Tarek Sobh, University of Bridgeport, USA
André Veltman, PIAK and TU Eindhoven, The Netherlands
Gregory P. Bierals, Electrical Design Institute, USA

Editors
Richard Duro
Yuriy Kondratenko
River Publishers
Published 2015 by River Publishers
River Publishers
Alsbjergvej 10, 9260 Gistrup, Denmark
www.riverpublishers.com
Open Access
This book is distributed under the terms of the Creative Commons Attribution-Non-Commercial 4.0 International License (CC-BY-NC 4.0) (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, a link is provided to the Creative Commons license and any changes made are indicated. The images or other third-party material in this book are included in the work's Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work's Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt, or reproduce the material.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in
this book are believed to be true and accurate at the date of publication. Neither the publisher nor
the authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.
Printed on acid-free paper.
While every effort is made to provide dependable information, the publisher, authors, and editors
cannot be held responsible for any errors or omissions.
Preface

The chapters have been thought out to provide an easy-to-follow
introduction to the topics that are addressed, including the most relevant references, so that anyone interested in them can use these references as a starting point. At the same time, all of the chapters correspond to different aspects of work in progress being carried out in various laboratories throughout the world and, therefore, provide information on the state of the art of some of these topics.
The first part, “Robots”, includes three contributions:
“A Modular Architecture for Developing Robots for Industrial Applica-
tions”, by A. Faíña, F. Orjales, D. Souto, F. Bellas and R. J. Duro, considers
ways to make feasible the use of robots in many sectors characterized by
dynamic and unstructured environments. The authors propose a new approach,
based on modular robotics, to allow the fast deployment of robots to solve
specific tasks. In this approach, the authors start by defining the industrial
settings the architecture is aimed at and then extract the main features that
would be required from a modular robotic architecture to operate successfully
in this context. Finally, a particular heterogeneous modular robotic architecture
is designed from these requirements and a laboratory implementation of it is
built in order to test its capabilities and show its versatility using a set of
different configurations including manipulators, climbers and walkers.
S. Osadchy, V. Zozulya and A. Timoshenko, in “The Dynamic Char-
acteristics of a Manipulator with Parallel Kinematic Structure Based on
Experimental Data”, study two identification techniques which the authors
found most useful in examining the dynamic characteristics of a manipulator
with a parallel kinematic structure as an object of control. These techniques
emphasize a frequency domain approach. If all input/output signals of an
object can be measured, then the first of these techniques may be used
for identification. In the case when the disturbances cannot all be measured, the
second identification technique may be used.
In “An Autonomous Scale Ship Model for Parametric Rolling Towing
Tank Testing”, M. Míguez González, A. Deibe, F. Orjales, B. Priego and
F. López Peña analyze a special kind of robotic system model, in particular, a
self-propelled scale ship model for model testing, with the main characteristic
of not having any material link to a towing device to carry out the tests. This
model has been fully instrumented in order to acquire all the significant raw
data, process them onboard and communicate with an inshore station.
The second part “Control and Intelligence” includes four contributions:
In “Autonomous Knowledge Discovery Based on Artificial Curiosity Driven
Learning by Interaction”, K. Madani, D. M. Ramik and C. Sabourin investigate the development of a real-time intelligent system that allows a robot to discover its surrounding world and to learn new knowledge about it autonomously by semantically interacting with humans.
Yuriy Kondratenko
Richard Duro
1
A Modular Architecture for Developing Robots for Industrial Applications

Abstract
This chapter is concerned with proposing ways to make feasible the use of
robots in many sectors characterized by dynamic and unstructured environ-
ments. In particular, we are interested in addressing the problem through a new
approach, based on modular robotics, to allow the fast deployment of robots
to solve specific tasks. A series of authors have previously proposed modular
architectures, albeit mostly in laboratory settings. For this reason, their designs
were usually more focused on what could be built instead of what was
necessary for industrial operations. The approach presented here addresses the problem the other way around: we start by defining the industrial
settings the architecture is aimed at and then extract the main features that
would be required from a modular robotic architecture to operate successfully
in this context. Finally, a particular heterogeneous modular robotic architec-
ture is designed from these requirements and a laboratory implementa-
tion of it is built in order to test its capabilities and show its versatility
using a set of different configurations including manipulators, climbers and
walkers.
1.1 Introduction
There are several industrial sectors, such as shipyards or construction, where
the use of robots is still very low. These sectors are characterized by dynamic and unstructured work environments where the work is not carried out on a production line; rather, the workers have to move to the structures that are being built, and these structures change during the construction process. These are the main reasons for the low level of automation in these sectors. Despite this, there are some cases in which robot systems have
been considered in order to increase automation in these areas. However, they
were developed for very specific tasks, that is, as specialists. Some examples
are robots for operations such as grit-blasting [1, 2], welding [3], painting
[4, 5], installation of structures [6, 7] or inspection [8, 9]. Nevertheless, their
global impact on the sector is still low [10]. The main reason for this low
penetration is the high cost of the development of a robot for a specialized
task and the large number of different types of tasks that must be carried out
in these industries. In other words, it is not practical to have a large group of
expensive robots, each one of which will only be used for a particular task
and will be doing nothing the rest of the time.
In the last few years, in order to increase the level of automation in the
aforementioned environments, several approaches have been proposed based
on multi-component robotics systems as an alternative to the use of one robot
for each task [11–13]. These approaches seek to obtain simple robotic systems
capable of adapting, easily and quickly, to different environments and tasks
according to the requirements of the situation.
Multi-component robotic systems can be classified into three categories:
distributed, linked and modular robots [14]; however, in this work, only the
last category will be taken into account. Thus, we explore an approach based
on modular robotics, which basically seeks the re-utilization of pre-designed
robotic modules. We want to develop an architecture that with a small set of
modules can lead to many different types of robots for performing different
tasks.
In the last two decades, several proposals of modular architectures for
autonomous robots have been made [15, 16]. An early approach to modular
architectures resulted in what was called ‘modular mobile robotic systems.’
These robots can move around the environment, and they can connect to
one another to form complex structures for performing tasks that cannot be
carried out by a single unit. Examples are CEBOT [17] or SMC-Rover [18].
Another type of modular architecture is lattice robots. These robots can form
is designed for and their principal characteristics, the missions the robots will
need to perform in these environments and the implications these have on
the motion and actuation capabilities of the robots. Obviously, there are also
a series of general characteristics that should be fulfilled when considering
industrial operation in general. Consequently, we will start by identifying the main features and characteristics a modular architecture should display in order to be able to handle a general dynamic and unstructured industrial environment. This provides the requirements to be met by the
architecture so that we can address the problem of providing a set of solutions
to comply with these requirements. An initial list of required features would
be the following:
• Versatility: The system has to make it easy to build a large number of different configurations in order to adapt to specific tasks;
• Fast deployment: The change of configuration or morphology has to
be performed easily and in a short time so that robot operation is not
disrupted;
• Fault tolerance: In case of the total failure of a module, the robot has to
be able to continue operating minimizing the effects of this loss;
• Robustness: The modules have to be robust to allow working in dirty
environments and resisting external forces;
• Reduced cost: The system has to be cheap in terms of manufacturing and
operating costs to achieve an economically feasible solution;
• Scalability: The system has to be able to operate with a large num-
ber of modules. In fact, limits on the number of modules should be
avoided.
To fulfil these requirements, a series of decisions were made. Firstly, the
new architecture will be based on a modular chain architecture made up of
heterogeneous modules. This type of architecture has been selected because
it is well known that it is the general architecture that maximizes versatility.
On the other hand, using homogeneous modules is the most common option
in modular systems [15, 16, 21–23], because it facilitates module reuse.
However, it also limits the range of possible configurations and makes the
control of the robot much more complex. In the types of tasks we are
considering here, there are several situations that would require a very simple
module (e.g., a linear displacement actuator), but which would be very
difficult (complex morphology), or even impossible in some cases, to obtain
using any of the homogeneous architectures presented. Thus, for the sake
of flexibility and versatility, we have chosen to use a set of heterogeneous modules.
Figure 1.1 Diagram of the selected missions, tasks and sub-tasks considered, and the required
actuators and effectors.
The next layer shows the set of possible particular tasks we have considered
as necessary according to the previous types of mission, such as grit-blasting,
tank cleaning, etc. The sub-task layer represents the low-level operations the
modular system must carry out to accomplish the task of the previous layer.
The next layer represents the kinematic pairs that can be used to perform all the sub-tasks of the previous layer. As mentioned above, these pairs only have one degree of freedom. In this case, we have chosen only two kinds of kinematic pairs: prismatic and revolute joints. Nevertheless, each joint was implemented in two different modules in order to specialize the modules to different motion primitives. For the prismatic joint, we have defined a telescopic module with a contraction/expansion motion and a slider module with a linear motion along its structure. The revolute joint also leads to two specialized modules: a rotational module, where the rotational axis goes through the two parts of the module, as in wheels or pulleys, and a hinge module. Finally, in the last layer we can see five examples of different effector modules.
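To make the resulting taxonomy concrete, the two kinematic pairs and their four specialized module types can be captured in a small data structure. This is a minimal illustrative sketch only; the chapter does not prescribe any software representation, and all names here are hypothetical:

```python
from enum import Enum

class KinematicPair(Enum):
    PRISMATIC = "prismatic"
    REVOLUTE = "revolute"

class ModuleType(Enum):
    # Prismatic joint, specialized into two motion primitives:
    TELESCOPIC = ("telescopic", KinematicPair.PRISMATIC)  # contraction/expansion
    SLIDER = ("slider", KinematicPair.PRISMATIC)          # linear motion along its structure
    # Revolute joint, specialized into two motion primitives:
    ROTATIONAL = ("rotational", KinematicPair.REVOLUTE)   # axis through both parts, as in wheels
    HINGE = ("hinge", KinematicPair.REVOLUTE)             # axis at the junction of the two nodes

    def __init__(self, label, pair):
        self.label = label
        self.pair = pair

# Effector modules carry no kinematic pair of their own; illustrative subset:
EFFECTORS = {"magnet", "gripper", "ultrasonic_probe"}
```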
Once the actuator modules have been defined, we have to specify the shape
or morphology and the connecting faces of each module. Also, and again to
increase the versatility of the architecture, each module has been endowed with
a large number of attachment faces. This also permits reducing the number
of mechanical adapters needed to build different structures. The attachment faces are located on cubic nodes, or connection bays, within each module. This solution allows creating complex configurations,
even closed chains, with modules that are perpendicular, again increasing the
versatility of the architecture.
These mechanical connections have to be easily operated in order to
allow for the speedy deployment of different configurations. To this end,
each attachment face has been provided with mechanisms for transmitting
energy and communications between modules in order to avoid external wires.
We have also included mechanisms (proprioceptors) that allow the robot to
know its morphology or configuration, that is, what module is attached to what
face. This last feature is important because it allows the robot to calculate its
direct and inverse kinematics and dynamics in order to control its motion in
response to high-level commands from an operator.
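As an illustration of this morphological proprioception, a controller could aggregate the per-face attachment reports of each module into a connectivity graph and derive the kinematic chain from it. The following is a minimal sketch under that assumption; the data layout and function names are hypothetical, not the chapter's actual protocol:

```python
from collections import defaultdict

# Hypothetical per-face reports: for each module id, the neighbour attached
# to each face (None for a free face), as obtained over the local link.
face_reports = {
    1: {"top": 2, "bottom": None, "left": None, "right": 3},
    2: {"bottom": 1},
    3: {"left": 1},
}

def build_morphology(reports):
    """Aggregate per-face attachment reports into an adjacency map."""
    graph = defaultdict(set)
    for module, faces in reports.items():
        for face, neighbour in faces.items():
            if neighbour is not None:
                graph[module].add((face, neighbour))
    return dict(graph)

# A kinematics routine would walk this graph, together with each module's
# geometry, to assemble the direct and inverse kinematic models.
print(build_morphology(face_reports))
```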
The robots developed have to be connected to an external power supply
with one cable to guarantee the energy needed by all the actuators, effectors and
sensors. Nevertheless, the energy is shared among the modules to avoid wires
from module to module. In addition, each module contains a small battery
to prevent the risk of failure by a sudden loss of energy. These batteries,
combined with the energy bus between the modules, allow the robot to place
itself in a secure state, maximizing the fault tolerance and the robustness of the
system.
Finally, for the sake of robustness, we decided that the communications
between modules should allow three different communication paths: a fast and
global channel of communications between all the modules that make up a
robot, a local channel of communications between two attached modules and a
global and wireless communication method. These three redundant channels
allow efficient and redundant communications, even between modules that
are not physically connected or when a module in the communications path
has failed.
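A controller could exploit this redundancy by trying the channels in a fixed order of preference, for instance the CAN bus first, then the local inter-module link, then the wireless radio. A minimal sketch of such a fallback scheme follows (hypothetical interface; the chapter does not specify the software API):

```python
def send_with_fallback(message, channels):
    """Try each communication channel in order until one succeeds.

    `channels` is an ordered list of (name, send_fn) pairs, e.g. the CAN
    bus first, then the local inter-module link, then the wireless radio.
    Each send_fn returns True on success.
    """
    for name, send_fn in channels:
        try:
            if send_fn(message):
                return name  # report which channel carried the message
        except IOError:
            continue  # channel down or saturated: fall through to the next one
    raise RuntimeError("all communication channels failed")
```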
Summarizing, the general structure of a heterogeneous modular robotic
architecture has been obtained from the set of requirements imposed by
operation in an industrial environment and the tasks the robots must perform
within it. It turns out that given the complexity of shipyard environments, on
which the design was based, the design decisions that were made have led to
an architecture that can be quite versatile and adequate for many other tasks
and environments. In the following section, we will provide a more in-depth
description of the components of the architecture and their characteristics as
they were implemented for tests.
Figure 1.2 Different types of modules developed in this project: three effectors on the left
part, a linker on the top, a slider on the right, and in the middle there is a rotational module, a
hinge module and a telescopic module.
nodes, which act as connection bays. The shape of the nodes varies depending
on the type of module (e.g., it is a cube for the nodes of the slider and telescopic
modules). All of the free sides of these nodes provide a connection mechanism
that allows connecting them to other modules. The size of the nodes without
the connection mechanism is 48 × 48 × 48 mm; it is 54 × 54 × 54 mm including the
connectors.
On the other hand, the rotational modules have two nodes and allow
their relative rotation. These modules are differentiated by the position of
the rotation shaft. Whereas the rotational axis of the rotational module goes through the center of both nodes, in the hinge it is placed at the junction of the two nodes, perpendicular to the line connecting their centers. The main
characteristics of the actuator modules are described in Table 1.1.
node carries a servo with a gear that engages another gear coupled to the shaft. The reduction ratio is 15:46. The servo is modified so that its potentiometer is mounted externally on a shaft geared at a 1:2 ratio with respect to the main shaft. This configuration permits module rotations of 360°.
Two of the tracks are wider than the other two because they are employed
to transmit power (GND and +24V). The other two tracks are employed to
transmit data: a CAN bus and local asynchronous communications. The local
asynchronous communications track in each connector is directly connected to
the microcontroller, while the other tracks are shared for all the connectors of
the module. To share these tracks in the node, we chose a surface-mount insulation-displacement connector placed at the bottom of the PCB.
This solution is used to serially connect the PCBs of the node together in
a long string and it allows two modules on the same robot to communicate
even in the case of a failure in a module in the path of the message.
1.3.3 Energy
Requiring a wire or tether to obtain power or perform communications would limit the resulting robots’ motions and their independence. Therefore, one aim of this work is for the architecture
to allow for fully autonomous modular robots. This is achieved by means
of the installation of batteries in each module and, when the robot needs
more power, expansion modules with additional batteries can be attached
to it. However, in industrial environments it is often the case that the tools
the robots need to use do require cables and hoses to feed them (welding
equipment, sandblasting heads, etc.) and, for the sake of simplicity and length
of time the robot can operate, it makes a lot of sense to use external power
supplies. For this reason, the architecture also allows for tethered operation
when this is more convenient, making sure that the power line reaches just
one of the modules and then it is internally distributed among the rest of the
modules.
The modules developed in this work are powered at 24 V, but each module has its own DC/DC converter to reduce the voltage to 5 V to power the servomotors
and the different electronic systems embedded in each module.
1.3.4 Sensors
All of the modules contain specific sensors to measure the position of their
actuator. To this end, the linear modules have a quadrature encoder providing 0.32 mm position accuracy. The rotational modules are servo-controlled, so knowing the position of the module is not strictly necessary. However, in order to improve the precision of the system, we have added a circuit that senses the value of the potentiometer after applying a low-pass filter.
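The chapter does not give the form of this filter; assuming a simple first-order low-pass stage, a software equivalent could look as follows (illustrative only, with an assumed alpha value):

```python
class LowPassFilter:
    """First-order IIR low-pass: y[k] = y[k-1] + alpha * (x[k] - y[k-1]).

    alpha in (0, 1] trades noise rejection against response time; for a
    sampling period T and cut-off frequency fc, alpha = T / (T + 1/(2*pi*fc)).
    """
    def __init__(self, alpha):
        self.alpha = alpha
        self.state = None

    def update(self, raw_value):
        if self.state is None:
            self.state = float(raw_value)  # initialize on the first sample
        else:
            self.state += self.alpha * (raw_value - self.state)
        return self.state

pot_filter = LowPassFilter(alpha=0.1)    # alpha is an assumed value
angle_estimate = pot_filter.update(512)  # raw ADC reading of the potentiometer
```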
1.3.5 Communications
One of the most difficult tasks in modular robotics is the design of the
communications systems (local and global). On the one hand, it has to ensure
the adequate coordination between modules, and on the other hand, it has to
be able to respond quickly to possible changes in the robot’s morphology.
That is, it has to adapt when a new module is attached or detached, or even
when one module fails. The robot’s general morphology has to be detected
through the aggregation of the values of the local sensing elements in each
module as well as the information they have on the modules they are linked to.
For this, we use an asynchronous local communications line for inter-module
identification (morphological proprioception).
On the other hand, a CAN bus is used for global communications. It allows
performing tasks requiring a critical temporal coordination between remote
modules. Also, a MiWi wireless communications system is implemented as
a redundant system that is used when we have isolated robotic units or when
the CAN bus is saturated.
Additionally, all the modules, except the rotational one, have a micro-USB
connection to allow communications to an external computer. This feature and
a boot loader allow us to employ a USB memory to load the program without
the use of a programmer for microcontrollers. Figure 1.3 shows the printed
circuit board (PCB) of the slider module containing all the communications
elements.
Figure 1.3 Control board for the slider module and its main components.
1.3.6 Control
The system control is responsible for controlling and coordinating all the local
tasks within each module, as well as the behaviour of the robot. To do this,
in this work, each module carries its own electronics board with its micro-
controller (PIC32MX575F512) and a DC/DC converter for power supply.
The micro-controller is responsible for the low-level tasks of the module:
controlling the actuator, managing the communications stacks and measuring
the values of its sensors. As each actuator module has its own characteristics
(number of connection faces, encoder type, etc.) and the available space inside
the modules is very limited, we have developed a specific PCB for each kind
of actuator module. As an example, Figure 1.3 shows the top and bottom side
of the control board for the slider module.
Besides handling the low-level tasks, this solution permits choosing the type of control to be implemented: centralized or distributed. In a distributed control scheme, each module contributes to the final behaviour by controlling its own actions depending on its sensors or on communications with other modules. In a centralized control scheme, one of the modules is in charge of controlling the actions of all the other modules, with the advantage of having redundant units in case of failure. Additionally, all
modules employ the CAN bus to coordinate their actions and to synchronize
their clocks. Obviously, this architecture allows for any intermediate type of
control scheme.
1.4.1 Manipulators
Manipulators are one of the most important pillars of industrial automation. Traditional manipulators present a rigid architecture, which complicates their use in different tasks, and they are too heavy and large to be transported in dynamic and unstructured environments. Modular manipulators, however, can be very flexible, as they can be entirely reconfigured to adapt to a specific task, and the modules can be transported easily across complex environments and then assembled directly at the workplace.
The configuration choice of the manipulator is highly application depen-
dent and it is mostly determined by the workspace shape and size, as well as
other factors such as the load to be lifted, the required speed, etc. For instance,
the different types of modules in the architecture can also be used to easily
implement spherical or polar manipulators. This type of manipulator presents a rotational joint at its base, a linear joint for the radial movements and another rotational joint to control its height. Thus, a spherical manipulator is constructed using just five modules, as shown in the pictures of Figure 1.4: a magnetic effector to adhere to the metal surface; a rotational module, a hinge module and a prismatic module for motion; and a final magnetic effector to manipulate metal pieces. We can see how the robot is able to take an iron part using the electromagnet placed at the end of the manipulator and carry it to another place. The whole process takes around 10 seconds.
Figure 1.4 Spherical manipulator moving a load from one place to another.
Another very common type of manipulator is the cartesian robot. These robots are constructed using just linear joints and are characterized by a cubic workspace. Their major advantages are the ease with which speed and position control mechanisms can be produced for them, their ability to move large loads and their great stability.
An example of a very simple and fully functional cartesian robot is
displayed in the left image of Figure 1.5. It is constructed using only two linear modules and a telescopic module for the implementation of its motions, two magnetic effectors to adhere to the metal surface and a smaller magnet that is used as a final effector. The two large magnets used to adhere the robot to the metal surface provide better stability than the previous spherical robot and reduce the vibrations on the small magnetic end-effector. In addition, we could implement a gantry-style manipulator, as can be observed in the right image of Figure 1.5. This gantry manipulator has great stability as it uses
four magnets to adhere to the surface and provides a very stable structure
to achieve a high accuracy positioning of its end-effector. Furthermore, this
implementation can lift and move heavier loads as it has two pairs of modules
working in parallel.
Figure 1.7 Climber and Walker Robots for linear and surface missions.
a hinge module between them. This configuration allows the robot to move
and to turn, making it useful for surface inspection tasks performed with an
ultrasonic sensor or other final effectors.
More complex configurations with better locomotion capabilities can be created using other sets of modules. For example, a well-known way
to move through an environment is by walking. This way of moving also allows
stepping over small obstacles or irregularities. A very simple implementation
of a walking robot is shown in Figure 1.8. This configuration is made up of
two hinge modules, each one of them with a magnetic effector, joined together
by a rotational module. This biped robot is capable of walking over irregular
surfaces, stepping over small obstacles and even of moving from a horizontal
to a slanted surface.
1.6 Conclusions
A new heterogeneous modular robotic architecture has been presented which
permits building robots in a fast and easy way. The design of the architecture is based on the main features that, proceeding in a top-down fashion, we consider a modular robotic system should display in order to operate in dynamic and unstructured industrial environments.
References
[1] C. Fernandez-Andres, A. Iborra, B. Alvarez, J. Pastor, P. Sanchez, J. Fernandez-Merono and N. Ortega, ‘Ship shape in Europe: cooperative robots in the ship repair industry’, IEEE Robotics & Automation Magazine, vol. 12, no. 3, pp. 65–77, 2005.
[2] D. Souto, A. Faiña, A. Deibe, F. Lopez-Peña and R. J. Duro, ‘A robot
for the unsupervised grit-blasting of ship hulls’, International Journal of
Advanced Robotic Systems, vol. 9, no. 82, 2012.
[3] G. de Santos, M. Armada and M. Jimenez, ‘Ship building with ROWER’, IEEE Robotics & Automation Magazine, vol. 7, no. 4, pp. 35–43, 2000.
[4] B. Naticchia, A. Giretti and A. Carbonari, ‘Set up of a robotized system
for interior wall painting’, in Proceedings of the 23rd International
Symposium on Automation and Robotics in Construction (ISARC), pp.
3–5, 2006.
[5] Y. Kim, M. Jung, Y. Cho, J. Lee and U. Jung, ‘Conceptual design and
feasibility analyses of a robotic system for automated exterior wall
painting’, International Journal of Advanced Robotic Systems, vol. 4,
no. 4, pp. 417–430, 2007.
[6] S. Yu, S. Lee, C. Han, K. Lee and S. Lee, ‘Development of the curtain
wall installation robot: Performance and efficiency tests at a construction
site’, Autonomous Robots, vol. 22, no. 3, pp. 281–291, 2007.
[7] P. Gonzalez de Santos, J. Estremera, E. Garcia and M. Armada, ‘Power
assist devices for installing plaster panels in construction’, Automation
in Construction, vol. 17, no. 4, pp. 459–466, 2008.
2
The Dynamic Characteristics of a Manipulator with Parallel Kinematic Structure Based on Experimental Data

Abstract
The chapter presents two identification techniques which the authors found
most useful in examining the dynamic characteristics of a manipulator with
a parallel kinematic structure as an object of control. These techniques
emphasize a frequency domain approach. If all input/output signals of an
object can be measured, then the first one of such techniques may be used for
identification. In the case when the disturbances cannot all be measured, the second
identification technique may be used.
2.1 Introduction
Mechanisms with parallel kinematics [1, 2] form the basis for the construction of single-stage and multi-stage manipulators. A single-stage manipulator consists of an immobile base, a mobile platform and six guide rods. Each rod can be represented as two semi-rods $A_{ij}$ and an active kinematic pair $B_{ij}$ (Figure 2.1).
then the block diagram of the mechanism with parallel kinematics can be represented as shown in Figure 2.2, where $W_u$ is an operator which characterizes the influence of the control signal vector $u$ on the output vector $x$ and $W_\psi$ is an operator which describes the influence of the disturbance vector $\psi$ on the output vector $x$. In this case, in order to find the mathematical model,
it is necessary to define these operators. If we want to find such operators
based on experimental data, then two variants of the research task can be
formulated.
The first variant is applied if the components of the vectors $u_1$, $x$ and $\psi$ can be measured fully (the complete data). The second variant is applied in the case when only the components of the vectors $u_1$ and $x$ can be measured (the incomplete data).
So the research on the dynamics of the mechanism with parallel kinematics can be formulated as follows: find the transfer function matrices $W_u$ and $W_\psi$, and also estimate the influence of the vectors $u_1$ and $\psi$ on the vector $x$, on the basis of known complete or incomplete experimental data.
The solution of such a problem has been found as a result of three stages:
• The development of algorithms for the structural identification of a multivariable dynamic object with the help of complete or incomplete data;
• Collecting and processing experimental data about vectors u1 , x and ψ;
• The verification of the results of the structural identification.
$$Y_p = F_{1p} \cdot U_p, \qquad (2.4)$$
where $F_{1p}$ is a transfer function matrix, all the poles of which are located in the left half-plane (LHP) of the complex variable. It is equal to
After introducing the expressions (2.8) and (2.9) into Equation (2.6), the quality functional can be written as follows:
$$J = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{tr}\left\{ \begin{bmatrix} F_{1p} & -E_n \end{bmatrix} \begin{bmatrix} U_p U_{p*} & U_p Y_{p*} \\ Y_p U_{p*} & Y_p Y_{p*} \end{bmatrix} \begin{bmatrix} F_{1p*} \\ -E_n \end{bmatrix} A \right\} ds. \qquad (2.10)$$
$$A = A_{0*} \cdot A_0; \qquad (2.12)$$
$D$ is a fraction-rational matrix with particularities in the left half-plane (LHP), which is defined on the basis of the algorithms in articles [3, 4] from the following equation:
$$D \cdot L \cdot D_* = U_p \cdot U_{p*}, \qquad (2.13)$$
where $L$ is a singular matrix, each element of which is equal to one; the bottom index $*$ designates the Hermitian conjugate operation; $H_0 + H_+ + H_-$ is a fraction-rational matrix which is equal to
$$H_0 + H_+ + H_- = A_0 \cdot Y_p \cdot U_{p*} \cdot D_*^{-1} \cdot L_+; \qquad (2.14)$$
$L_+$ is the pseudo-inverse of matrix $L$ [5]; matrix $H_0$ is the result of the division; $H_+$ is a proper fraction-rational matrix with its poles in the LHP (analytic in the right half-plane, RHP); $H_-$ is a proper fraction-rational matrix with its poles in the RHP (analytic in the LHP). In accordance with the chosen minimization procedure, a stable and physically realizable variation $F_{1p}$ which delivers a minimum to the functional (2.10) is equal to
$$F_{1p} = A_0^{-1} \cdot (H_0 + H_+) \cdot D^{-1}. \qquad (2.15)$$
If one takes into account the matrices $W_2$, $F_{1p}$ from Equations (2.3) and (2.15), then the unknown object transfer function matrix $W_{ob}$ can be identified with the help of the following expression:
$$W_{ob} = W_2^{-1} \cdot F_{1p}. \qquad (2.16)$$
The separation [4] of the transfer function matrix (2.16) makes it possible to find the unstable part of the object transfer function matrix with the help of the equation
$$W_{ob2} = W_-, \qquad (2.17)$$
where $W_-$ is a fraction-rational matrix with particularities in the RHP.
An algorithm for the structural identification of a multivariable dynamic object with an unstable part on the basis of the vectors u and x requires the implementation of the following operations:
• Search for the matrix W2 as a result of the left-hand removal of the unstable poles from Xp which differ from the poles of Up;
• Factorization of the weight matrix A from (2.12);
• Identification of the analytical complex-variable matrix D from Equation (2.13);
• Calculation of H0 + H+ as a result of the separation (2.14);
• Calculation of F1p on the basis of Equation (2.15);
• Identification of Wob2 by the separation of the product (2.16).
In this way, we have substantiated the algorithm for the structural identification of the multivariable dynamic object with the help of the complete experimental data.
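The full algorithm above relies on spectral factorization and separation operations for which no standard library routine is assumed here. As a much simpler frequency-domain illustration of identification from complete input/output data, the empirical frequency response of an object can be estimated from cross- and auto-spectral densities; this is a simplification for intuition, not the authors' algorithm:

```python
import numpy as np
from scipy import signal

fs = 100.0                       # sampling frequency in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
u = np.random.randn(t.size)      # measured input (here, synthetic excitation)

# Stand-in "object": a second-order low-pass system acting on u
b, a = signal.butter(2, 5.0, fs=fs)
x = signal.lfilter(b, a, u)      # measured output

# Empirical transfer function estimate: W(jw) ~ S_ux(jw) / S_uu(jw)
f, S_uu = signal.csd(u, u, fs=fs, nperseg=1024)  # input auto-spectrum
_, S_ux = signal.csd(u, x, fs=fs, nperseg=1024)  # input-output cross-spectrum
W_hat = S_ux / S_uu

magnitude_db = 20 * np.log10(np.abs(W_hat))      # Bode magnitude of the estimate
```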
$$P \cdot x = M \cdot u_1 + \psi, \qquad (2.18)$$
$$\psi = \Psi \cdot \Delta, \qquad (2.19)$$
where $\Delta$ is the vector of unit-intensity $\delta$-correlated “white” noises.
If one takes into account expression (2.19), then Equation (2.18) can be rewritten as follows:
$$x = P^{-1} \cdot M \cdot u_1 + P^{-1} \cdot \Psi \cdot \Delta \qquad (2.20)$$
and a transfer function matrix which must be identified can be defined by the expression
$$\phi = \begin{bmatrix} \phi_{11} & \phi_{12} \end{bmatrix} = \begin{bmatrix} P^{-1} M & P^{-1} \Psi \end{bmatrix}. \qquad (2.21)$$
So, Equation (2.20) can be simplified to the form
$$x = \phi \cdot y, \qquad (2.22)$$
$$\varepsilon = x - \phi \cdot y. \qquad (2.25)$$
$S'_{\varepsilon\varepsilon}$ is the transposed matrix of the identification-error spectral densities:
$$S'_{\varepsilon\varepsilon} = S'_{xx} - S'_{yx}\,\phi_* - \phi\,S'_{xy} + \phi\,S'_{yy}\,\phi_*; \qquad (2.26)$$
$$S'_{yx} = \begin{bmatrix} S'_{ux} & S'_{\Delta x} \end{bmatrix}; \qquad (2.27)$$
$$S'_{yy} = \begin{bmatrix} S'_{uu} & O_{m\times n} \\ O_{n\times m} & S'_{\Delta\Delta} \end{bmatrix}, \qquad (2.28)$$
where $S'_{xx}$, $S'_{uu}$, $S'_{yy}$ are the transposed spectral density matrices of the vectors $x$, $u$, $y$; $S'_{xy}$, $S'_{yx}$, $S'_{ux}$ are the transposed cross-spectral density matrices between the vectors $x$ and $y$, $y$ and $x$, $u$ and $x$; and $S'_{\Delta x}$ is a transposed cross-spectral density matrix which is found on the basis of the Wiener factorization [3] of the additional connection equation
$$S'_{x\Delta}\,S'_{\Delta x} = S'_{xx} - S'_{xu}\,(S'_{uu})^{-1}\,S'_{ux}, \qquad (2.29)$$
$K_0 + K_+$ is a transfer function matrix with stable poles, which is defined as a result of the separation of the right part of the following equation [7]:
$$K_0 + K_+ + K_- = R_0\, S'_{yx}\, D_*^{-1}. \qquad (2.32)$$
Figure 2.5 Graphs of changes in the coordinates of the platform’s center of mass.
The state-space dynamic model of the mechanism is defined with the help of the System Identification Toolbox of the Matlab environment. Considering the structure of the vector u, defined by (2.33), allows obtaining the hexapod state equation as follows:
where y(s), x(s), u(s), ψ(s) are the Laplace images of the corresponding vectors, En is the identity matrix, and s is the independent complex variable.
After solving the system of Equations (2.35) with respect to the vector
of the initial coordinates of the mechanism x, the following matrices of the
transfer functions Wu and Wψ (Figure 2.2) were obtained
The analysis of the structure of the matrices (2.38) and (2.39) and of the Bode diagrams (Figure 2.6) shows that this mechanism can be classified as a multi-resonant mechanical filter, with the energy bands of the input signals and the disturbances lying in the filter's pass band. The eigenfrequency of such a filter is close to 11 s−1 and depends on the moments of inertia and the mass of the moving elements of the mechanism (Figure 2.1).
An ordinary differential equation model of the system dynamics can be obtained if the transfer function matrices $W_u$ and $W_\psi$ are presented as products of the polynomial matrices $P$, $M$ and $M_\psi$ with minimum-order polynomials:
$$W_u = P^{-1} M, \qquad (2.40)$$
$$W_\psi = P^{-1} M_\psi. \qquad (2.41)$$
To find the polynomial matrices $P$ and $M$ with minimum-order polynomials, we propose the following algorithm:
• By moving the poles to the right [3], the transfer function matrix $W_u$ should be introduced as follows:
$$W_u = N_R D_R^{-1}; \qquad (2.42)$$
$$M_\psi = P\, W_\psi. \qquad (2.44)$$
The application of the algorithms (2.42) and (2.43) to the original data represented by expressions (2.38) and (2.39) made it possible to obtain the polynomial matrices P, M, Mψ. Then, application of the inverse Laplace transform under zero initial conditions allowed determining the following system of ordinary differential equations:
$$S'_{uu} = \begin{bmatrix} \dfrac{7.87\,|s+0.12|^2}{|s^2+0.29s+0.034|^2} & 0 & \dfrac{12\,(s+0.095)(-s+0.15)}{|s^2+0.29s+0.034|^2} \\[2mm] 0 & \dfrac{45.5\,|s+0.075|^2}{|s^2+0.64s+0.16|^2} & 0 \\[2mm] \dfrac{12\,(-s+0.095)(s+0.15)}{|s^2+0.29s+0.034|^2} & 0 & \dfrac{2.88\,|s+0.095|^2}{|s^2+0.29s+0.034|^2} \end{bmatrix};$$
$$D = \begin{bmatrix} \dfrac{8.87\,(s+0.1)}{s^2+0.29s+0.034} & 0 & \dfrac{-0.54}{s^2+0.29s+0.034} & 0 \\[2mm] 0 & \dfrac{6.75\,(s+0.075)}{s^2+0.64s+0.16} & 0 & 0 \\[2mm] \dfrac{1.34\,(s+0.11)}{s^2+0.29s+0.034} & 0 & \dfrac{1.04\,(s-0.057)}{s^2+0.29s+0.034} & 0 \\[2mm] 0 & 0 & 0 & 1 \end{bmatrix}. \qquad (2.49)$$
$$S'_{\varepsilon\varepsilon} = \frac{0.023}{|s+0.037|^2}. \qquad (2.54)$$
The mathematical mean of the identification error is equal to zero, and its relative variance is equal to
$$E_\varepsilon = \frac{\displaystyle\int_{-j\infty}^{j\infty} S'_{\varepsilon\varepsilon}\, ds}{\displaystyle\int_{-j\infty}^{j\infty} S'_{xx}\, ds} = 0.0157. \qquad (2.55)$$
The main part of the power density of the error ε oscillations is clearly concentrated in the area of the infrasonic frequencies. The presence of such an error is explained by the limited duration of the experiment.
Figure 2.11 Scheme of the simulation model of the mechanism with parallel kinematics.
Figure 2.12 Graphs of the changes of the X coordinate of the center of mass of the platform.
Figure 2.13 Graphs of the changes of the Y coordinate of the center of mass of the platform.
2.8 Conclusions
The conducted research on the dynamics of the mechanism with a parallel structure made it possible to obtain the following scientific and practical results:
• Two new algorithms for the structural identification of the dynamic models of multivariable moving objects were substantiated. The first of them allows defining the structure and parameters of a transfer function matrix of an object with unstable poles using regular “input–output” vector records. The second allows identifying not only the model of a mobile object but also the model of the non-observed stationary stochastic disturbance;
• Three types of models which characterize the dynamics of the manipulator with parallel kinematics were identified. This allows using different modern multidimensional optimal control system synthesis methods for designing the optimal mechatronic system;
References
[1] J. P. Merlet, ‘Parallel Robots’, Solid Mechanics and Its Applications, vol. 74, Kluwer Academic Publishers, 2000, 394 pp.
[2] S. V. Volkomorov, J. P. Hagan, A. P. Karpenko, ‘Modeling and optimization of some parallel mechanisms’, Information Technology, Application 2010, no. 5, pp. 1–32 (in Russian).
[3] M. C. Davis, ‘Factoring the spectral matrix’, IEEE Trans. Automat. Contr., vol. AC-8, no. 4, pp. 296–305, 1963.
[4] V. N. Azarskov, L. N. Blokhin, L. S. Zhitetsky, ‘The Methodology of Constructing Optimal Systems of Stochastic Stabilization’, monograph, ed. L. N. Blokhin, Kyiv: NAU Book Publishers, 2006, 440 pp., bibliography pp. 416–428 (in Russian).
[5] F. R. Gantmakher, ‘Theory of Matrices’, 4th ed., Nauka, 1988, 552 pp. (in Russian).
[6] V. Kucera, ‘Discrete Linear Control: The Polynomial Equation Approach’, Praha: Academia, 1979, 206 pp.
[7] F. A. Aliev, V. A. Bordyug, V. B. Larin, ‘Time and Frequency Methods for the Synthesis of Optimal Regulators’, Baku: Institute of Physics of the Academy of Sciences, 1988, 46 pp. (in Russian).
3
An Autonomous Scale Ship Model for Parametric Rolling Towing Tank Testing
Abstract
This chapter presents the work carried out for developing a self-propelled
scale ship model for model testing, with the main characteristic of not having
any material link to a towing device to carry out the tests. This model has been
fully instrumented in order to acquire all the significant raw data, process them
onboard and communicate with an inshore station.
3.1 Introduction
Ship scale model testing has traditionally been the only way to accurately
determine ship resistance, propulsion, maneuvering and seakeeping charac-
teristics. These tests are usually carried out in complex facilities, where a large
carriage, to which the model is attached, moves it following a desired path
along a water tank.
Ship model testing can be broadly divided into four main types of tests. Resistance tests, either in waves or in still water, are intended to obtain the resistance of the ship without taking the effects of its propulsion system into consideration. Propulsion tests are aimed at analyzing the performance of the ship propeller when it is in operation together with the ship itself. Maneuvering tests analyze the capability of the ship for carrying out a set of defined maneuvers. And finally, seakeeping tests study the behavior of the ship while sailing in waves [1].
The former two tests are carried out in towing tanks, which are slender
water channels where the model is attached to the carriage that tows it along
the center of the tank. The latter two, on the other hand, are usually performed
in the so-called ocean basins where the scale model can be either attached to
a carriage or radio controlled with no mechanical connection to it.
However, there exist certain kinds of phenomena in the field of seakeeping
that can be studied in towing tanks (which are cheaper and have more
availability than ocean basins) and that are characterized by showing very
large amplitude nonlinear motions. The interactions between the carriage and the model due to these motions (which are usually limited to a maximum amplitude in most towing tanks), together with the lack of space under the carriage, reduce the applicability of slender channels for these types of tests.
One of these phenomena is that of ship parametric roll resonance, also
known as parametric rolling. This is a well-known dynamical issue affecting
ships, especially containerships, fishing vessels and cruise ships, and it can
generate very large amplitude roll motions in a very sudden way, reaching
the largest amplitudes in just a few rolling cycles. Parametric roll is due to
the periodic alternation of wave crests and troughs along the ship, which
produce the changes in ship transverse stability that lead to the aforementioned
roll motions.
Resonance is most likely to happen when the ship sails in longitudinal seas
and when a certain set of conditions is present, which includes a wave encounter frequency close to twice the ship’s natural roll frequency, a wavelength
almost equal to the ship length and a wave amplitude larger than a given
threshold [2].
Traditionally, parametric roll tests in towing tanks have been carried out
by using a carriage towed model, where the model is free to move in just
some of the 6 degrees of freedom (typically heave, roll and pitch, the ones
most heavily influencing the phenomenon) [3, 4]. However, this arrangement
limits the possibility of analyzing the influence of the restrained degrees of
freedom [5], which may also be of interest while analyzing parametric roll
resonance, or it may interfere on its development.
The main objective of the present work is to overcome the described
difficulties for carrying out scale tests in slender towing tanks where large
amplitude motions are involved, while preserving the model capabilities for
being used in propulsion (free-running), maneuvering and seakeeping tests.
default behavior. The user can switch between the two control methods at any time; there is always a human at the external transmitter to ensure maximum safety during the tests.
has been placed near the ship’s center of gravity with the objective of
improving its performance;
• Thrust sensor: a thrust gauge has been installed to measure the thrust
generated by the propeller at the thrust bearing;
• Revolution and torque sensor: in order to measure the propeller revolu-
tions and the torque generated by the engine, a torque and rpm sensor
has been installed between both elements;
• Sonars: intended to measure the distance to the towing tank walls and
feed an automatic heading control system;
• Not directly devoted to tested magnitudes, there are also a temperature
sensor, battery voltage sensors and current sensors.
Data acquisition is achieved through an onboard mounted PC, placed forward
on the bottom of the model. The software in charge of the data acquisition
and processing and ship control is written in Microsoft .Net, and installed in
this PC. This software is described in the following section. There is a Wi-Fi
antenna at the ship’s bow, connected to the onboard PC that enables a Wi-Fi
link to an external, inshore workstation. This workstation is used to monitor
the ship operation.
An overview of the model where its main components are shown is
included in Figure 3.1.
and control the ship: acquisition, computation and actuation. In every time
step, once the system is working, all the sensor measurements are collected.
The indispensable sensors that need to be connected to the system are: the
Inertial Measurement Unit (IMU) which provides the acceleration, magnetic,
angular rate and attitude measurements, the sonars and thrust gauge, in this
case connected to a data acquisition board, and the revolution and torque
sensor (Figure 3.2).
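Conceptually, each time step therefore performs an acquisition-computation-actuation cycle. A minimal sketch of such a loop is given below; all device objects and method names are hypothetical stand-ins for the drivers of the sensors and actuators described above, and the control period is an assumed value:

```python
import time

STEP = 0.05  # control period in seconds (assumed; the chapter gives no rate)

def control_loop(imu, sonars, thrust_gauge, torque_rpm, rudder, esc, controller):
    """Repeated acquisition-computation-actuation cycle."""
    while True:
        t0 = time.monotonic()
        # 1. Acquisition: collect all sensor measurements
        measurements = {
            "attitude": imu.read_attitude(),
            "acceleration": imu.read_acceleration(),
            "wall_distances": sonars.read_distances(),
            "thrust": thrust_gauge.read(),
            "torque_rpm": torque_rpm.read(),
        }
        # 2. Computation: derive the rudder and motor commands
        rudder_cmd, motor_cmd = controller.update(measurements)
        # 3. Actuation: drive the rudder servo and the motor ESC
        rudder.command(rudder_cmd)
        esc.command(motor_cmd)
        # Sleep for the remainder of the time step
        time.sleep(max(0.0, STEP - (time.monotonic() - t0)))
```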
Once the sensor data are acquired, the system computes the proper signals
to modify the rudder servo and the motor ESC commands. These signals can be
set manually (using the software interface from the Wi-Fi linked workstation,
or the external RC transmitter) or automatically. Applying a controller over
the rudder servo, using the information from the sonar signals, it is possible
to keep the model centered and in course along the towing tank. Another
controller algorithm is used to control the ship speed.
The software is based on VB.NET and, to allow interaction with the system from an external workstation, a simple graphical user interface has been implemented (Figure 3.3).
From an external workstation, the user can start and finish the test, activate the sensors, monitor the sensor data in real time, control the rudder servo and the motor manually using a slider, or stop the motor. All the acquired measurements are saved in a file for future analysis.
Figure 3.3 Graphical user interface to monitor/control the model from an external
workstation.
relationship. For these cases, the towing carriage is used as a reference and
the speed is maintained by keeping the ship model in a steady relative position
to the carriage.
The speed control strategy to cope with this composed speed was initially implemented as shown in Figure 3.4, by means of a double PID controller. The upper section of the controller tries to match the ship speed with a set point selected by the user, cv. This portion of the controller uses the derivative of the ship position along the tank, x, as an estimation of the ship speed, evx. The position x is measured by the Laser Range Finder sensor placed at the beginning of the towing tank, facing the ship's stern, to capture the ship position along the tank; this information is sent through a dedicated RF modem pair to the onboard Mini-PC. The bottom section, on the other hand, uses the integral of the ship acceleration along its local x-axis from the onboard IMU, va, as an estimation of the ship speed, eva. Each branch has its own PID controller, and the sum of both outputs is used to command the motor.
Both speed estimations come from different sensors, in different coordinate systems, with different noise perturbations and, above all, they have different natures. The estimation based on the derivative of the position along the tank has little or zero drift over time, its mean value matches the real speed along the tank x-axis, and it changes slowly. On the other hand, the estimation based on the acceleration along the ship's local x-axis is computed by the onboard IMU from its MEMS sensors, is prone to severe noise and drift over time, and changes quickly. In short, the former estimation captures the slow behavior of the ship speed, and the latter its quick changes. This is the reason for using different PID controllers with the two estimations. The resulting controller follows the user-selected speed set point, with the upper branch eliminating any steady-state speed error and the lower branch minimizing quick speed changes.
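A minimal sketch of this two-branch scheme follows; the gains and sampling period are placeholders, not the values used on the actual model:

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.05  # control period (assumed)
pid_position = PID(kp=1.0, ki=0.2, kd=0.0, dt=dt)   # upper branch: laser-derived speed
pid_inertial = PID(kp=0.5, ki=0.0, kd=0.05, dt=dt)  # lower branch: IMU-derived speed

def motor_command(cv, v_from_laser, v_from_imu):
    """Sum of both PID outputs drives the motor, as described above."""
    e_vx = cv - v_from_laser  # error on the tank-axis speed estimate
    e_va = cv - v_from_imu    # error on the inertial speed estimate
    return pid_position.update(e_vx) + pid_inertial.update(e_va)
```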
Later on, a different speed control approach was introduced in order to improve performance. Since the Laser Range Finder output is an absolute measure of the ship position, the speed obtained from its derivative is significantly more robust than the one estimated from the IMU in the first control scheme, and has no drift over time or with temperature. The new speed control algorithm is based on a complementary filter [6]. This filter estimates the ship speed from two different speed estimations, with different and complementary frequency components, as shown in Figure 3.5.
This two-signal complementary filtering is based upon the availability of two independent noisy measurements of the ship speed $v(s)$: the one from the derivative of the range-finder position estimation, $v(s)+n_1(s)$, and the one from the integration of the IMU acceleration, $v(s)+n_2(s)$. Each of these signals has its own spectral characteristics, here modeled by the different noise levels $n_1(s)$ and $n_2(s)$. If both signals have complementary
spectral characteristics, transfer functions may be chosen in such a way as to
minimize speed estimation error. The general requirement is that one of the
transfer functions complement the other. Thus, for both measurements of the
speed signal [7]:
$$H_1(s) + H_2(s) = 1.$$
This will allow the signal component to pass through the system undistorted, since the two transfer functions always sum to one. In this case, n1 is predominantly high-frequency noise and n2 is predominantly low-frequency noise, so H1 is chosen as a low-pass filter and H2 as the complementary high-pass filter, attenuating each noise component while preserving the speed signal.
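In discrete time, such a filter can be implemented by propagating the speed with the IMU acceleration and correcting it with the laser-derived speed. A minimal sketch, assuming a first-order filter pair with crossover time constant tau (an assumed value):

```python
def complementary_speed(v_laser, a_imu, dt, tau=1.0):
    """Discrete first-order complementary filter for the ship speed.

    v_laser: speed samples from differentiating the range-finder position
             (accurate at low frequency, no drift).
    a_imu:   longitudinal IMU acceleration samples (capture fast changes,
             but integrating them alone would drift).
    tau:     crossover time constant in seconds (assumed value).
    """
    alpha = tau / (tau + dt)  # H1 = low-pass, H2 = 1 - H1 = high-pass
    v_hat = v_laser[0]
    estimates = [v_hat]
    for k in range(1, len(v_laser)):
        # Propagate with the IMU (high-frequency path) and correct with
        # the laser-derived speed (low-frequency path).
        v_hat = alpha * (v_hat + a_imu[k] * dt) + (1 - alpha) * v_laser[k]
        estimates.append(v_hat)
    return estimates
```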
Enough room has been left to allow transversal and vertical mass adjustment. Moreover, two sliders with 0.5 kg weights have been installed for fine-tuning the mass distribution.
3.3 Testing
As mentioned above, the proposed model is mainly intended to be used in
seakeeping tests in towing tanks, where the carriage may interfere in the motion
of the ship. The application of the proposed model to one of these tests, aimed
at predicting and preventing the appearance of parametric roll resonance, will
be presented in this section.
As has already been described, parametric roll resonance can generate large roll motions and lead to fatal consequences. The need for a detection system, and even for a guidance system, has been repeatedly stated by the maritime sector [8]. However, the main goal is to obtain systems that can, in a first stage, detect the appearance of parametric roll resonance, but that, in a second stage, can prevent it from developing.
As mentioned, there are some specific conditions, regarding both ship and wave characteristics, that have to be present for parametric roll to appear. The wave encounter frequency should be around twice the ship's natural roll frequency, the wave amplitude should be over a certain threshold that depends on ship characteristics, and the wavelength should be near the ship's length. Ship roll damping should be small enough not to dissipate all the energy that is generated by the parametric excitation. And finally, the variations of the ship's restoring arm due to the wave passing along the hull should be large enough to counteract the effect of roll damping.
7.48 seconds. On the other hand, parametric roll fully develops in a short
period of time, usually no more than four rolling cycles [10].
The training of the artificial neural network forecaster has been carried out using the roll time series obtained in the towing tank tests that will be described in the following section.
These algorithms have been implemented within the ship onboard control
system, and their performance analyzed in some of the aforementioned towing
tank tests.
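As an illustration of how such a forecaster could be set up, the sketch below trains a small MLP (one hidden layer of 30 neurons, in line with the three-layer, 30-neuron network of Figure 3.15) to map a window of past roll samples to the roll value a fixed horizon ahead. The sampling rate, window length and file name are assumptions, not values taken from the chapter:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_dataset(roll, window, horizon):
    """Map `window` past roll samples to the value `horizon` steps ahead."""
    X, y = [], []
    for k in range(len(roll) - window - horizon):
        X.append(roll[k:k + window])
        y.append(roll[k + window + horizon])
    return np.array(X), np.array(y)

# Assumed setup: 10 Hz roll series, so 10 s ahead is a 100-sample horizon
roll_series = np.loadtxt("roll_run_01.txt")  # placeholder file name
X, y = make_dataset(roll_series, window=200, horizon=100)

forecaster = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000)
forecaster.fit(X, y)
roll_in_10_seconds = forecaster.predict(X[-1:])
```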
Figure 3.7 Roll and pitch motions in parametric roll resonance. Conventional carriage-towed
model.
This tank is 100 meters long, 3.8 meters wide and 2.2 meters deep. It is
equipped with a screen type wave generator, directed by a wave generation
software, capable of generating longitudinal regular and irregular waves
according to a broad set of parameters and spectra. The basin is also equipped
with a towing carriage able to develop a speed of up to 4.5 m/s. As it has already
been mentioned, in this test campaign, trials at different forward speeds and
also at zero speed have been carried out.
Regarding the zero-speed runs, in order to keep the model in position and to avoid as much as possible any interference of the restraining devices with the ship motions, two fixing ropes with two springs have been tightened to the sides of the basin and to a rotary element fixed to the model bow. Moreover, another restraining rope has been fitted between the stern of the model and the towing carriage, stowed just behind it. However, this last rope has been kept loose and partially immersed, which was enough to keep the model head to seas without a major influence on its motion. In the forward-speed test runs, the model has been left sailing completely free, with the exception of a security rope that would be used only in case control of the ship was lost.
In order to set the adequate speed for each test run, a previous calibration
for different wave conditions has been carried out to establish the needed
engine output power (engine servo command) for reaching the desired speed
as a function of wave height and frequency. The exact speed value developed
in each of the test runs has been measured by following the model with the
towing carriage, which has provided the instantaneous speed along the run.
The calibration curve has been updated as the different runs were completed,
providing more information for subsequent tests. However, considering that the model is free to move in the six degrees of freedom, the instantaneous speed of the model may be affected by surge motion, especially in the conditions with the highest waves.
A total number of 105 test runs have been carried out in head regular waves.
Different combinations of wave height (ranging from 0.255 m to 1.245 m
model scale) and ratio between encounter frequency and natural roll frequency
(from 1.80 to 2.30) have been considered for three different values of forward
speed (Froude numbers 0.1, 0.15 and 0.2) and zero speed, and two different
values of metacentric height (0.370 m and 0.436 m). From the whole set of
test runs, 55 correspond to the 0.370 m GM case, while 50 correspond to a
GM of 0.436 m.
The results obtained from the different test runs have been used for determining the ship roll damping coefficients, for validating the performance of the mathematical model described in the previous subsection and the correctness of the stability diagrams computed with it, and for training and testing the ANN detection system.
In addition to this, the results of pitch and roll motion obtained with the proposed model are presented in Figure 3.10, for the sake of comparison with the results obtained with the conventional model (Figure 3.7). As can be seen, the pitch time series does not present the peaks observed in the conventional model tests.
Figure 3.10 Roll and pitch motions in parametric roll resonance. Proposed model.
Figure 3.11 Comparison between experimental and numerical data. Resonant case.
Figure 3.12 Comparison between experimental and numerical data. Non-Resonant case.
Figure 3.15 Forecast results. 30-neuron, 3-layer MLP, 10-second prediction. Resonant case.
The model thus instrumented is able to move without any restriction along any of its six degrees of freedom; consequently, the system produces optimal measurements even in test cases presenting large-amplitude motions.
At its present development stage, the system only needs to use the towing carriage as a reference for speed and position. A more advanced version that could eliminate the use of this carriage is under development. The towing carriage, together with its rails, propulsion and instrumentation, is a very costly piece of hardware. The final version of the system could be constructed at a fraction of this cost, and it would be a true towless towing tank, as it would allow performing any standard towing tank test without the need for an actual tow.
Abstract
In this work, we investigate the development of a real-time intelligent system
allowing a robot to discover its surrounding world and to learn autonomously
new knowledge about it by semantically interacting with humans. The learning
is performed by observation and by interaction with a human. We describe the
system in a general manner, and then we apply it to autonomous learning of
objects and their colors. We provide experimental results both in simulated environments and with the approach implemented on a humanoid robot in a real-world environment including everyday objects. We show that our approach
allows a humanoid robot to learn without negative input and from a small
number of samples.
4.1 Introduction
In recent years, there has been substantial progress in robotic systems able to robustly recognize objects in the real world using a large database of pre-collected knowledge (see [1] for a notable example). There has been, however, comparatively less advance in the autonomous acquisition of such knowledge: while contemporary robots are often fully automatic, they are rarely fully autonomous in their knowledge acquisition. If this progress in recognition is unsurprising given the last decades' significant developments in methodological and algorithmic approaches to visual information processing, pattern recognition and artificial intelligence, the slow progress in machines' autonomous knowledge acquisition is equally understandable given the complexity of the additional skills needed to achieve such a "cognitive" rather than purely algorithmic task.
The emergence of cognitive phenomena in machines has been and remains an active area of research since the rise of Artificial Intelligence (AI) in the middle of the last century, but the fact that human-like machine cognition
is still beyond the reach of contemporary science only proves how difficult
the problem is. In fact, nowadays there are many systems, such as sensors,
computers or robotic bodies, that outperform human capacities; nonetheless,
none of the existing robots can be called truly intelligent. In other words, robots sharing everyday life with humans are still far away. This is partly because we are still far from fully understanding the human cognitive system, and partly because it is not easy to emulate human cognitive skills and the complex mechanisms relating those skills. Nevertheless, the concepts of
bio-inspired or human-like machine-cognition remain the foremost sources of
inspiration for achieving intelligent systems (intelligent machines, intelligent
robots, etc.). This is the way we have taken (i.e. through inspiration from biological and human knowledge acquisition mechanisms) to design the investigated human-like machine-cognition based system, able to acquire high-level semantic knowledge from visual information (i.e. from observation). It
is important to emphasize that the term “cognitive system” means here that
characteristics of such a system tend to those of human cognitive systems. This matters because a cognitive system that could comprehend the surrounding world on its own, but whose comprehension were non-human, would be incapable of communicating about it with its human counterparts. In fact, human-inspired knowledge representation and
human-like communication (namely semantic) about the acquired knowledge
become key points expected from such a system. To achieve the aforemen-
tioned capabilities, such a cognitive system should thus be able to develop
its own high-level representation of facts from low-level visual information
(such as images). In accordance with the expected autonomy, the processing from the "sensory level" (namely the visual level) to the "semantic level" should be performed solely by the robot, without human supervision. However, this does
not mean excluding interaction with humans, which is, on the contrary, vital for
see [13, 14]). The goal of this system is to allow a humanoid robot to anchor
the heard terms to its sensory-motor experience and to flexibly shape this
anchoring according to its growing knowledge about the world. The described
system can play a key role in linking existing object extraction and learning
techniques (e.g. SIFT matching or salient object extraction techniques) on one
side, and ontologies on the other side. The former are closely related to perceptual reality but unaware of the meaning of the objects they treat, while the latter can represent complex semantic knowledge about the world but are unaware of the perceptual reality of the concepts they handle.
The rest of this chapter is structured as follows. Section 4.2 describes
the architecture of the proposed approach. In this section, we detail our
approach by outlining its architecture and principles, we explain how beliefs
about the world are generated and evaluated by the robot and we describe
the role of human-robot interaction in the learning process. Validation of
the presented system on color learning and interpretation, using simulation
facilities, is reported in Section 4.3. Section 4.4 focuses on the implementation
and validation of the proposed approach on a real robot in a real-world
environment. Finally, Section 4.5 discusses the achieved results and outlines
future work.
Figure 4.1 General block diagram of the proposed curiosity-driven architecture (left) and the principle of the curiosity-based stimulation-satisfaction mechanism for knowledge acquisition (right).
Figure 4.2 A human would describe this apple as "red" in spite of the fact that this is not the only visible color.
Figure 4.3 A human would describe this toy-frog as green in spite of the fact that this is not the only visible color.
Figure 4.4 gives, through an example, an alternative scheme of the defined notions and their relationships. It depicts a scenario in which two observations o1 and o2 are made, corresponding to two descriptions U1 and U2 of those observations, respectively.
In the first observation, features i1 and i2 were obtained along with utterances u1 and u2, respectively. Likewise, in the second observation, features i3, i4 and i5 were obtained along with utterance u3. In this example, it is easily visible that the entire set of features I = {i1, ..., i5} contains two subsets I1 and I2. Similarly, the ensemble of all utterances {u1, u2, u3} gives the
Figure 4.4 Block diagram of the relations between observations, features, beliefs and utterances in the sense of the terms defined in the text.
wrong), this observation is recorded as new knowledge and a search for the
new interpretation is started.
Using these two modes of interactive learning, the robot's interpretation of the world evolves both in scope, covering more and more phenomena as they are encountered, and in quality, shaping the meaning of words (utterances) to conform with the perceived world.
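A minimal sketch of this interactive shaping of word meanings is given below. It is only an illustration of the stimulation-satisfaction idea under simple assumptions (co-occurrence counts as beliefs, halving as the demotion rule); it is not the authors' implementation, and all names are hypothetical.

```python
# Minimal illustrative sketch (assumptions, not the authors' implementation):
# beliefs are co-occurrence counts linking heard utterances to perceived
# feature clusters; a tutor correction demotes the contradicted association.
from collections import defaultdict

belief = defaultdict(lambda: defaultdict(float))  # belief[utterance][feature]

def observe(utterances, features):
    """Strengthen associations between everything heard and everything seen."""
    for u in utterances:
        for f in features:
            belief[u][f] += 1.0

def interpret(features):
    """Describe features with the currently most coherent utterance."""
    scores = {u: sum(w[f] for f in features if f in w) for u, w in belief.items()}
    return max(scores, key=scores.get) if scores else None

def correct(wrong_u, right_u, features):
    """Tutor feedback: record new knowledge, search for a new interpretation."""
    for f in features:
        belief[wrong_u][f] *= 0.5    # weaken the contradicted belief
    observe([right_u], features)     # strengthen the tutor's description
```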
Figure 4.5 Upper: the WCS color table. Lower: the WCS color table as interpreted by a robot taught to distinguish warm (marked in red), cool (blue) and neutral (white) colors.
The rate of correctly described objects from the test set was approximately
91% after the robot had fully learned. Figure 4.5 gives the result of interpre-
tation by the system of the colors of the WCS table regarding “Warm” and
“Cool” colors.
Figure 4.6 shows the learning rate versus the increasing number of exposures of each color. It is pertinent to emphasize the small number of training examples required to reach a correct recognition rate
Figure 4.6 Evolution of number of correctly described objects with increasing number of
exposures of each color to the simulated robot.
Figure 4.7 Examples of obtained visual color interpretations (lower images) and corresponding original images (upper images) for several test objects from the COIL database.
4.4.1 Implementation
The core of the implementation's architecture is split into five main units: Communication Unit (CU), Navigation Unit (NU), Low-level Knowledge Acquisition Unit (LKAU), High-level Knowledge Acquisition Unit (HKAU) and Behavior Control Unit (BCU). Figure 4.8 illustrates the block diagram of the implementation's architecture. The aforementioned units control the NAO robot (symbolized by its sensors, actuators and interfaces in Figure 4.8)
1 A video capturing different parts of the experiment may be found online at: http://youtu.be/W5FD6zXihOo
through its already available hardware and software facilities. In other words,
the above-mentioned architecture controls the whole robot’s behavior.
The purpose of NU is to allow the robot to position itself in space with
respect to objects around it and to use this knowledge to navigate within
the surrounding environment. Capacities needed in this context are obstacle
avoidance and determination of distance to objects. Its sub-unit handling
spatial orientation receives its inputs from the camera and from the LKAU.
To address the obstacle avoidance problem, we have adopted a technique based on ground color modeling. Inspired by the work presented in [22], a color model of the ground helps the robot to distinguish free space from obstacles. The assumption is made that obstacles rest on the ground (i.e. overhanging and floating objects are not taken into account). With this assumption, the distance of obstacles can be inferred from monocular camera data. In [23], some aspects of distance estimation from a static monocular camera have been addressed, giving the robot the capacity to infer distances and sizes of surrounding objects.
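The following sketch illustrates the ground-color-modeling idea with OpenCV histogram back-projection. The patch location, histogram bins and threshold are assumptions, and the authors' actual implementation may well differ.

```python
# Hedged sketch of ground-color-based free-space detection in the spirit of
# the approach above (names and thresholds are illustrative). Pixels close to
# a histogram model of the floor are "free"; everything else is treated as an
# obstacle, whose image row gives a monocular distance cue under the
# ground-plane assumption.
import cv2
import numpy as np

def ground_histogram(bgr):
    """Model the floor from a patch at the bottom of the image (assumed free)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    patch = hsv[-40:, :]                        # bottom rows assumed to be floor
    hist = cv2.calcHist([patch], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def free_space_mask(bgr, hist, thresh=50):
    """Return a boolean mask that is True where the pixel matches the floor."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    return back > thresh
```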
The LKAU ensures the gathering of visual knowledge, such as the detection of salient objects (by the sub-unit in charge of salient object detection), their learning and their recognition (see [18, 24]). Those activities are
carried out mostly in an “unconscious” manner, that is, they are run as
an automatism in “background” while collecting salient objects and learn-
ing them. The learned knowledge is stored in Long-term Memory for
further use.
The HKAU is the center where the intellectual behavior of the robot is constructed. Receiving its features from the LKAU (visual features) and from the CU (linguistic features), this unit handles belief generation and the emergence of the most coherent beliefs, and constructs the high-level semantic representation of the acquired visual knowledge. Unlike the LKAU, this unit represents conscious and intentional cognitive activity. In some way, it operates as a baby who learns from observation and from verbal interaction with adults about what he observes, developing in this way his own representation and opinion of the observed world [25].
The CU is in charge of robot communications. It includes an output communication channel and an input communication channel. The output channel is composed of a Text-To-Speech engine which generates a human voice through loudspeakers; it receives its text from the BCU. The input channel takes its input from a microphone and, through an Automated Speech Recognition engine (available on NAO) and the syntax and semantic analysis (designed and incorporated into the BCU), provides the BCU with a labeled chain of strings representing the heard speech. Since syntax analysis is not available on NAO, it has been incorporated into the BCU. To perform the syntax analysis, the TreeTagger tool is used. Developed by the ICL at the University of Stuttgart, TreeTagger annotates text with
part-of-speech and lemma information. Figure 4.9 shows, through a simple example of an English phrase, the operating principle of the syntactic analysis performed by this tool. The "Part-of-speech" row gives the explanation of the tokens and the "Lemma" row shows the lemma output, i.e. the neutral form of each word in the phrase. This information, along with known grammatical rules for the creation of English phrases, may further serve to determine the nature of the phrase as
Figure 4.9 Example of an English phrase and the corresponding syntactic analysis output generated by TreeTagger.
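As an illustration, the tagging step could be driven from Python through the third-party treetaggerwrapper package, given a local TreeTagger installation. This is an assumption for illustration: the chapter does not specify which binding the authors used.

```python
# Sketch of the syntactic analysis step, assuming a local TreeTagger
# installation and the third-party treetaggerwrapper package (an assumption;
# the chapter does not name the binding used).
import treetaggerwrapper

tagger = treetaggerwrapper.TreeTagger(TAGLANG="en")
raw = tagger.tag_text("This is a yellow chocolate box")
for tag in treetaggerwrapper.make_tags(raw):
    # each tag carries (word, part-of-speech, lemma), e.g. ('is', 'VBZ', 'be')
    print(tag.word, tag.pos, tag.lemma)
```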
Figure 4.10 Flow diagram of the communication between a robot and a human used in this work.
The first validation involving the robot has aimed at verifying the learning, color interpretation, human interaction and description abilities of the proposed system. To this end, the robot has been asked to learn a subset of the 25 objects, i.e. to associate the name of each detected object with that object. At the same time, a second learning process has been performed involving interaction with the tutor, who has successively pointed at the above-learned objects, describing (i.e. telling) to the robot the color of each object. An example of this human-robot interactive learning is reported below:
• Human: [pointing a red aid-kit] “This is a first-aid-kit!”
• Robot: “I will remember that this is a first-aid-kit.”
• Human: “It is red and white.”
• Robot: “OK, the first-aid-kit is red and white.”
After learning the names and colors of the observed objects, the robot is asked to describe a number of objects, including some of the already learned objects in a different posture (for example, the yellow chocolate box presented in a reversed posture) and a number of still unseen objects (for example, a red apple or a white teddy-bear). The robot has successfully described, in coherent language, the presented seen and unseen objects. Below is an example of the human-robot interaction during the recognition phase:
• Human: [pointing the unseen white teddy-bear]: “Describe this!”
• Robot: “It is white!”
• Human: [pointing the already seen, but reversed, yellow chocolate box]:
“Describe this!”
• Robot: “It is yellow!”
• Human: [pointing the unseen apple]: “Describe this!”
• Robot: “It is red!”
Figure 4.12 shows two photographs of the above-reported experimental vali-
dation, where the robot completes its knowledge by interacting with a human
and learning from him. Figure 4.13 shows another two photographs where
the robot describes different objects it could detect within its surrounding
environment.
In another experiment, taking into account the previously learned objects,
among which the robot has detected and has learned a black book (the only
available book in the learning set), the robot has been asked to search for the
book in a new (i.e. unseen) environment where a number of different objects,
including the above-mentioned black book, have been placed with two new
books (unseen) as well as other objects looking similar to a book. The robot’s
behavior and Human-Robot interaction during this experiment is given here
below:
• Human: “Go find the book!”
• Robot: “OK, I am searching for the book!”
Figure 4.12 Tutor pointing at an aid-kit detected by the robot, describing its name and color to the robot (left-side picture). Pointing, in the same way, at other visible objects detected by the robot, the tutor describes them to the robot (right-side picture).
Figure 4.13 Tutor pointing at a yellow chocolate box which has been seen, interpreted and learned (by the robot) in terms of colors, then asking the robot to describe the chosen object (left-side picture). Tutor pointing at an unseen white teddy-bear, asking the robot to describe the chosen object (right-side picture).
Figure 4.14 Images from a video sequence showing the robot searching for the book
(left-side picture) and robot’s camera view and visualization of color interpretation of the
searched object (right-side picture).
4.5 Conclusions
This chapter has presented, discussed and validated a cognitive system for
high-level knowledge acquisition from visual perception based on the notion
of artificial curiosity. Driving both the lower and the higher levels of the presented cognitive system, the emergent artificial curiosity allows such a system to learn new knowledge about the unknown surrounding world in an autonomous manner and to complete (enrich or correct) its knowledge by interacting with a human. Experimental results, obtained both on a simulation platform and on the NAO robot, show the pertinence of the investigated concepts as well as the effectiveness of the designed system. Although it is difficult to make a precise comparison due to different experimental protocols, the results we obtained show that our system is able to learn faster and from significantly fewer examples than most comparable implementations.
Based on the results obtained, it is thus justified to say that a robot
endowed with such artificial curiosity-based intelligence will necessarily
include autonomous cognitive capabilities. With respect to this, further perspectives regarding the autonomous cognitive robot presented in this chapter will focus on integrating the investigated concepts into other kinds of robots, such as mobile robots, where they will play the role of an underlying system for machine cognition and knowledge acquisition. This knowledge will subsequently be available as the basis for tasks proper to machine intelligence, such as reasoning, decision making and overall autonomy.
References
[1] D. Meger, P. E. Forssén, K. Lai, S. Helmer, S. McCann, T. Southey,
M. Baumann, J. J. Little and D. G. Lowe, ‘Curious George: An attentive
semantic robot’, Robot. Auton. Syst., vol. 56, no. 6, pp. 503–511, 2008.
[2] P. Kay, B. Berlin and W. Merrifield, ‘Biocultural Implications of Systems
of Color Naming’, Journal of Linguistic Anthropology, vol. 1, no. 1,
pp. 12–25, 1991.
Abstract
Remote robot control (telecontrol) includes the solution of the following
routine problems: surveillance of the remote working area, remote operation
of the robot situated in the remote working area, as well as pre-training of the
robot. The current paper describes a new technique for robot control using
intelligent multimodal human-machine interfaces (HMI). The first part of the
paper explains the robot control algorithms, including testing of the results
of learning and of movement reproduction by the robot. The application of
the new training technology is very promising for space robots as well as for
modern assembly plants, including the use of micro- and nano-robots.
5.1 Introduction
The concept of telesensor programming (TSP) and relevant task-oriented
robot control techniques for use in space robotics was first proposed by
G. Herzinger [1].
1 The paper is published with financial support from the Russian Foundation for Basic Research, projects 14-08-01225-a, 15-07-04415-a, 15-01-02021-a.
hand in space, but also the position of the object's (experimental models') characteristic points relative to the camera on the hand.
is defined as a method for storing information not only about the shape of the
EE objects, but also about the shape of the motion trajectory.
The description language used in the FSM is a multilevel hierarchical system of frames, similar to M. Minsky's frames [6], containing a description of the shape elements, metric characteristics, and methods and procedures
for working with these objects. MFM, as one of the FSM components,
stores the structure of the shape of motion trajectories demonstrated by the
human operator during the process of training the robot to perform specified
movements [7, 8].
The generalized FSM of the remotely operated robot IE includes:
• Object models, models of the objects’ topology (location) in a particular
IE (MIE);
• Models of different typical motions and topology models (interrelations,
locations) of these movements in a particular IE (MIE).
It is also proposed to store, in the MIE, the coordinates and images of
objects from positions convenient both for remote-camera observation (which
enables the most accurate measurement of the coordinates of the characteristic
features of object images) and for grabbing objects with the robot gripper
(Figure 5.1) [9].
Training of motion can be regarded as a transfer of knowledge of motor,
sensory, and behavioral skills from a human operator to the robot control
system (RCS), which in this case should be a multimodal man-machine
interface (MMI), developed to the greatest possible extent (intelligent) to
provide adequate and effective perception of human actions. Consequently,
Figure 5.1 Images of the Space Station for two positions: “Convenient for observation” and
“Convenient for grabbing” objects with the virtual manipulator.
Figure 5.2 “Sensitized Glove” with a camera and the process of training the robot by means
of demonstration.
modifications or with very minor changes, and may also be repeated several
times in a single operation.
Most of the various movements of the robot manipulator can be represented as a sequence of a limited number of elementary motions (motion fragments), for example:
• Transfer motion of the gripper along an arbitrary complex trajectory
g = g(l) from the current position to a certain final position;
• Correction motion, using the sequence of characteristic points (CP) of
the EE objects’ images, as input information;
• Surveillance movement, in the process of which the following are sequentially created: matrices of the gripper position $T_b, T_{b1}, T_{b2}$, joint coordinate vectors $g_b, g_{b1}, g_{b2}$, and the geometric trajectory $g = g(l)$;
• Movement to a convenient position for surveillance;
• Movement to a convenient position for grabbing;
• Movement for “tracking” the object (approaching the object);
• Movement to grab the object.
In traditional training systems using one or another method, a sequence of points of the motion trajectory of the robot gripper is obtained. It can be represented as a function of some parameter $l$, which can be considered the preliminary result of training the robot to perform a fragment of the gripper movement from one position to the other:
$$g = g(l), \quad l_b \le l \le l_e; \qquad g_b = g(l_b), \quad g_e = g(l_e),$$
where $l_b$ is the value of the trajectory parameter at the initial position, $l$ at the current position and $l_e$ at the final position.
In this case, the training algorithm for performing motions ensures the
formation of geometric trajectory g(l) and includes the following:
• Formation of a sequence of triplets of two-dimensional vectors $(x_{im\,b}^{(1)}, x_{im\,b}^{(2)}, x_{im\,b}^{(3)});\ (x_{im\,I}^{(1)}, x_{im\,I}^{(2)}, x_{im\,I}^{(3)});\ \ldots;\ (x_{im\,e}^{(1)}, x_{im\,e}^{(2)}, x_{im\,e}^{(3)})$, conforming to the image positions of the 3 CPs on the object during training;
• Formation of the sequence $T_b, T_I, T_{II}, \ldots, T_e$ of the matrices of the glove position;
• Solution of the systems of Equations (5.1):
$$x_{im}^{(i)} = (x_{im1}^{(i)}, x_{im2}^{(i)}) = k^{(i)}\,\bar{T}X^{(i)}, \qquad (5.1)$$
where $k^{(i)}$ is a variable scale defined as $k^{(i)} = f/(d^{(i)} - f)$, with $d^{(i)}$ the distance from the point to the image plane of the TV camera and $f$ the focal distance of the lens; $\bar{T}$ is a (2×4) matrix made up of the first two rows of the matrix
$$T = \begin{pmatrix} \alpha & X_n \\ 0 & 1 \end{pmatrix},$$
characterizing the rotation and displacement of the coordinate system (CS) attached to the camera on the glove relative to the object CS, where $\alpha$ is the direction cosine matrix of the reference CS rotation angle, $X_n$ is the displacement vector of the origin of the CS, and $x_{im}^{(i)}$ are the two-dimensional vectors of the position of the images of the characteristic points of the object in the image plane.
These data are sufficient to construct a sequence of matrices of the gripper positions $T_b, T_I, T_{II}, \ldots, T_e$ during movement. The orientation blocks in these matrices are the matrices $\alpha_b, \alpha_I, \alpha_{II}, \ldots, \alpha_e$. The block of the gripper pole position corresponds to the initial position of the gripper. From this sequence, the geometric, and, in line with it, the temporal motion trajectory of the gripper can be built.
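For reference, the projection model of Equation (5.1) can be written as a short routine; the pose convention (a 4×4 homogeneous matrix T) follows the text, while the function name is illustrative:

```python
# Worked form of the projection model in Equation (5.1): a characteristic
# point X (homogeneous, in the object frame) maps to image coordinates with
# the variable scale k = f / (d - f), d being its depth in the camera frame.
import numpy as np

def project(T, X, f):
    """T: 4x4 pose of the camera (glove) frame w.r.t. the object frame,
       X: homogeneous point (x1, x2, x3, 1), f: focal distance."""
    Xc = T @ X                 # point expressed in the camera frame
    d = Xc[2]                  # depth: projection on the camera's third axis
    k = f / (d - f)            # variable scale from the text
    return k * Xc[:2]          # two-dimensional image vector x_im
```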
When teaching this action, the operator must move his hand with the glove
on it in the manner in which the gripper should move during the process of
the surveillance motion, whereas the position of the operator’s wrist can be
arbitrary and convenient for the operator.
Furthermore, for each case of teaching a new motion, it is necessary to
memorize a new volume of motion information in the form of several sets of
coordinates mentioned above.
When teaching the motions, e.g. IE surveillance, it is necessary to store a
considerable amount of information in the memory of the robot control system
(RCS), including:
• Values of matrix T, which characterize the position and orientation
of the glove in the coordinate system of the operator’s workstation,
corresponding to initial Tb , final Te and several intermediate T1 , TII , . . .
gripper positions, which it must take when performing movements;
• Several images of the object from the glove-mounted TV camera, cor-
responding to the various gripper positions, to control the accuracy of
training;
• Characteristic identification signs, characteristic points (CP) of the dif-
ferent images of the object, at different glove positions during the training
process;
• Coordinates of the CP of the images of the object in the base coordinate
system;
$$\dot{l}_b = \dot{l}_e = 0.$$
In the simplest case of the formation of $l(t)$, three intervals can be singled out: the acceleration interval from the initial velocity $\dot{l}_b$ to some permissible speed $\dot{l}_d$, the interval of motion at the predetermined speed, and the deceleration interval from the predetermined velocity to zero ($\dot{l}_e$).
During acceleration and deceleration a constant acceleration $\ddot{l}_d$ must take place. Its value should be such that the velocity vector $\dot{g}$ and the acceleration vector $\ddot{g}$ remain physically implementable under the existing restrictions on the control vector $U$ of the robot manipulator's motors.
The values of these limitations can be determined from the dynamic model $R$ of the robot manipulator, which connects the control vector $U$ to the motion dynamics vectors $(g, \dot{g}, \ddot{g})$:
$$U = R(g, \dot{g}, \ddot{g}).$$
For motion reproduction, the transfer function $l = l(t)$ is defined by the following relations:
• During the acceleration interval ($0 \le t \le t_1$), where $t_1 = \dfrac{|\dot{l}_d|}{|\ddot{l}_d|}$:
$$l(t) = l_b + \mathrm{sign}(\ddot{l}_d)\,\frac{|\ddot{l}_d|\,t^2}{2};$$
• During the interval of motion at constant velocity ($t_1 \le t \le t_2$), where $t_2 = t_1 + \dfrac{|l_e - l_b|}{|\dot{l}_d|} - \dfrac{|\ddot{l}_d|\,t_1^2}{|\dot{l}_d|}$:
$$l(t) = \dot{l}_d\,(t - t_1) + l_b + \mathrm{sign}(\ddot{l}_d)\,\frac{|\ddot{l}_d|\,t_1^2}{2};$$
• During the deceleration interval ($t_2 \le t \le t_3$), where $t_3 = 2t_1 + \dfrac{|l_e - l_b|}{|\dot{l}_d|} - \dfrac{|\ddot{l}_d|\,t_1^2}{|\dot{l}_d|}$:
$$l(t) = \dot{l}_d\,(t - t_1) + l_b + \mathrm{sign}(\ddot{l}_d)\,\frac{|\ddot{l}_d|\,t_1^2}{2} - \mathrm{sign}(\ddot{l}_d)\,\frac{|\ddot{l}_d|\,(t - t_2)^2}{2}.$$
The reproduction of the movement over time by the robot is carried out by implementing the obtained function $l(t)$ in the motion trajectory $g(l)$:
$$g = g(l(t)).$$
To determine the drives' control vector $U = R(g, \dot{g}, \ddot{g})$, the values $g, \dot{g}, \ddot{g}$ are substituted by those obtained from $g = g(l(t))$. This results in the formation of the control function of the robot manipulator's motors over time.
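The reconstructed three-interval time law can be expressed numerically as below. This is a sketch of the trapezoidal profile under the stated boundary conditions ($\dot{l}_b = \dot{l}_e = 0$); variable names mirror the text.

```python
# Numeric sketch of the three-interval time law l(t) reconstructed above
# (trapezoidal velocity profile): accelerate at |a_d|, cruise at v_d, then
# decelerate to rest at l_e. Boundary velocities are assumed to be zero.
import numpy as np

def time_law(lb, le, v_d, a_d):
    """Return (t1, t2, t3) and a callable l(t) for the trapezoidal profile."""
    s = np.sign(le - lb)                           # direction of motion
    t1 = abs(v_d) / abs(a_d)                       # acceleration interval
    t2 = t1 + abs(le - lb) / abs(v_d) - abs(a_d) * t1**2 / abs(v_d)
    t3 = t2 + t1                                   # total motion time
    def l(t):
        if t <= t1:                                # acceleration phase
            return lb + s * 0.5 * abs(a_d) * t**2
        if t <= t2:                                # constant-velocity phase
            return lb + s * (0.5 * abs(a_d) * t1**2 + abs(v_d) * (t - t1))
        tau = min(t, t3) - t2                      # deceleration phase
        return lb + s * (0.5 * abs(a_d) * t1**2 + abs(v_d) * (t2 - t1)
                         + abs(v_d) * tau - 0.5 * abs(a_d) * tau**2)
    return t1, t2, t3, l
```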
It should be noted that a human performs natural motions with constant acceleration, in contrast to the robot manipulator, whose motions are characterized by a constant rate (speed). Therefore, the robot has to perform its motions according to its own dynamics, which differ from the dynamic properties of the human operator.
(CP1 ... CP3) on this plane at a variable scale $k^{(i)} = f/(d^{(i)} - f)$, inversely proportional to the distance $d^{(i)}$ from the point to the imaging plane of the lens, where $f$ is the focal length of the lens.
Let us assume that the CS, associated with the camera lens, and, therefore,
with the glove, is located as shown in Figure 5.3, i.e. axes x1 and x2 of the
CS are located in the image plane, x3 is perpendicular to them and is directed
away from the lens towards the object. In Figure 5.3, $x_1, x_2, x_3$ are the axes of the coordinate system associated with the object; $x^{(1)}, x^{(2)}, x^{(3)}$ are the vectors defining the positions of the characteristic points in the coordinate system of the camera lens; $x_{im1}^{(2)}, x_{im2}^{(2)}$ are the two projections of the vector from the center of the CCD matrix to the image of point 2 (the same can be shown for points 1 and 3).
Then the distance $d^{(i)}$ is equal to the projection of the $i$-th CP on the third axis of the CS associated with the camera, $d^{(i)} = x_{im3}^{(i)}$, and the location of the object $X^{(i)}$ in the image plane is represented by the two-dimensional vectors
$$x_{im}^{(i)} = (x_{im1}^{(i)}, x_{im2}^{(i)}) = k^{(i)}\,\bar{T}X^{(i)}, \qquad (5.2)$$
where $\bar{T}$ is a (2×4) matrix made up of the first two rows of the matrix
$$T = \begin{pmatrix} \alpha & X_n \\ 0 & 1 \end{pmatrix},$$
characterizing the rotation and displacement of the CS associated with the camera on the glove relative to the CS of the object; $\alpha$ is the direction cosine matrix of the rotation angle of the CS, $X_n = (X_{n1}, X_{n2}, X_{n3})$ is the displacement vector of the CS's origin, and
$$X^{(i)} = (x_1^{(i)}, x_2^{(i)}, x_3^{(i)}, 1).$$
Then $d^{(i)} = x_{im3}^{(i)} = T_3 X^{(i)}$, where $T_3$ is the third row of matrix $T$.
It is obvious that matrix $T$ completely determines the spatial position of the glove in the CS associated with the object, and its elements can be found by solving the abovementioned navigation problem of determining the spatial position of the glove during training.
During the CP image processing, the vectors $x_{im}^{(i)}$, $i = 1, 2, 3$ are determined, so the left-hand sides of Equations (5.2) are known, and these equations represent a system of six equations in the 12 unknown elements of matrix $T$: the three components of vector $X_n$ and the nine elements of matrix $\alpha$. Since the elements of matrix $\alpha$ are linked by six more equations of orthogonality and orthonormality, there are a total of 12 equations, that is, as many as there are unknowns. These are sufficient to determine the desired matrix $T$.
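As an illustration of this navigation problem, the pose can also be recovered numerically. The sketch below parametrizes the rotation by three Euler angles and solves a nonlinear least-squares problem with SciPy; this is an implementation choice for illustration, not the authors' method of solving the 12 algebraic equations.

```python
# Sketch of the navigation problem: recover the glove pose from the image
# vectors of three characteristic points. The rotation is parametrized by
# Euler angles instead of the nine direction cosines (an illustrative choice).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, X_obj, x_im, f):
    """params = (3 Euler angles, 3 translation components)."""
    R = Rotation.from_euler("xyz", params[:3]).as_matrix()
    Xn = params[3:]
    res = []
    for X, obs in zip(X_obj, x_im):        # three known CPs and their images
        Xc = R @ X + Xn                    # point in the camera (glove) frame
        k = f / (Xc[2] - f)                # variable scale, Equation (5.2)
        res.extend(k * Xc[:2] - obs)       # 2 residuals per point, 6 in total
    return res

def solve_pose(X_obj, x_im, f, guess=np.zeros(6)):
    """X_obj: three 3-D points in the object CS; x_im: their image vectors."""
    return least_squares(residuals, guess, args=(X_obj, x_im, f)).x
```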
During the "training" motion of the operator's hand, a procedure involving the selection of the object's CP images and the calculation of the values of the two-dimensional vectors $x_{im}^{(i)}$, $i = 1, 2, 3$ and their positions in the image plane must be performed at a given frequency.
As a result of these actions, a sequence of vector triplets from the starting one $(x_{im\,b}^{(1)}, x_{im\,b}^{(2)}, x_{im\,b}^{(3)})$ through $(x_{im\,I}^{(1)}, x_{im\,I}^{(2)}, x_{im\,I}^{(3)})$, $(x_{im\,II}^{(1)}, x_{im\,II}^{(2)}, x_{im\,II}^{(3)})$, ..., to the finishing one $(x_{im\,e}^{(1)}, x_{im\,e}^{(2)}, x_{im\,e}^{(3)})$ is accumulated in the IMI database, corresponding to the sequence of the glove's positions during the movement of the operator's hand, which will later be reproduced by the robot. Each element of this sequence carries enough information to solve the navigation task, that is, to obtain the sequence $T_b, T_I, T_{II}, \ldots, T_e$ of matrix values, which is the result of training.
After training, the robot reproduces the gripper motion based on the sequence $T_b, T_I, T_{II}, \ldots, T_e$, using a motion correction algorithm driven by the signals from the camera located in the gripper, by solving the so-called "correction task" of the gripper relative to the real OE objects.
Let us consider training for only the first action, in which the operator freely moves his hand, with the sensitized glove on it, from the initial position to the object (model), approaching from the side most convenient for grabbing, and places it at a short distance from the object with the desired orientation. Information about the motion path and the hand position relative to the object, at least at the end point of the motion, is memorized in the MFM through the IMI system; this is necessary for adjusting the robot's gripper position relative to the object during motion reproduction. It is also desirable that at least 1 or 2 CPs of the object's image fall into the camera's view in the "convenient for grabbing" gripper position, so that the grabbing process can be supervised.
The training of the grabbing motion is performed along the easiest path, in
order to reduce the guidance inaccuracy. If the gripper is equipped with object
detection sensors, then the movement ends upon receiving signals from these
sensors.
During the training to grab objects, it is necessary to memorize the transverse dimensions of the object at the spot of grabbing and the gripper compression force sufficient for holding the object without damaging it. This operation can be implemented using additional force-torque and tactile sensors in the robot gripper.
In the case of multiple OE objects, training implies a more complex process of identifying the objects' images and the necessity to train additional motions, such as obstacle avoidance or changing to an altitude convenient for the survey in case of shading, flashing or other interference with image recognition by the camera on the glove, as well as by the camera in the robot manipulator's gripper, during the reproduction of movements.
The motion reproduction of the robot manipulator among the real EE objects
based on the use of the virtual graphical models of the EE and the robot
manipulator with force-torque sensing system was also practiced in the
experimental environment.
Figure 5.4 Functional diagram of robot task training regarding survey motions and object
grabbing motions using THM and RCS.
Before starting, the human operator must be able to verify the THM in an
off-line mode. For this purpose, a graphical model (GM) and a special com-
munications module for controlling 6 coordinates were developed. Training
the robot manipulator to execute EE surveillance motions by demonstration
is carried out in the following way (Figure 5.5).
The human operator performs the EE inspection based on his personal
experience in object surveillance. The robot repeats the action, using the
surveillance procedure and shape of the trajectory of the human head move-
ment. In this case, the cursor can first be moved around the obtained panoramic
image, increasing (decreasing) the scale of separate fragments, and then,
after accuracy validation, the actual motion of the robot manipulator can be
executed.
Figure 5.5 Robot task training to execute survey movements, based on the movements of the operator's head: training the robot to execute survey motions to inspect the surroundings (a); training process (b); reproduction of earlier trained movements (c).
Figure 5.7 Using the special glove for training the robot manipulator.
on an additional manipulator, and with the hand he controls the position and
orientation of the main robot gripper (Figure 5.8).
Figure 5.8 Stand for teaching robots to execute motions of surveillance and grabbing objects.
Figure 5.9 Training of motion coordination of two robot manipulators by natural movements
of human head and hand.
Figure 5.10 Training with the use of a system for the recognition of hand movements and
gestures without “Sensitized Gloves” against the real background of the operator’s work station.
5.4 Conclusions
A new information technology approach for training robots (mechatronic systems) by demonstration of movements is based on the use of a frame-structured data representation in the models of the shape of the movements, which makes it easy to adjust the movement's semantics and topology both for the human operator and for the autonomous sensitized robot.
Algorithms for training by demonstration of natural movements of the human operator's hand, using a television camera fixed on the so-called "sensitized glove", allow the use during training not only of graphical models of the objects in the surroundings but also of full-scale models, which gives the operator the possibility to practice optimal motions of remote-controlled robots under real conditions.
It is sufficient to demonstrate the shape of a human hand movement to the intelligent system of the IMI and to enter it into the MFM; then this movement can be executed automatically, for example by a robot manipulator, with adjustment and navigation among the surrounding objects based on the signals from the sensors.
References
[1] G. Herzinger, G. Grunwald, B. Brunner and J. Heindl, 'A sensor-based telerobot system for the space robot experiment ROTEX', Proc. 2nd Int. Symp. on Experimental Robotics (ISER), Toulouse, France, June 25–27, 1991.
[2] G. Herzinger, J. Heindl, K. Landzettel and B. Brunner, 'Multisensory shared autonomy – a key issue in the space robot technology experiment ROTEX', Proc. IEEE Conf. on Intelligent Robots and Systems (IROS), Raleigh, July 7–10, 1992.
[3] K. H. Strobl, W. Sepp, E. Wahl, T. Bodenmüller, M. Suppa, J. F. Seara and G. Hirzinger, 'The DLR Multisensory Hand-Guided Device: the Laser Stripe Profiler', ICRA 2004, pp. 1927–1932.
[4] M. Pardowitz, S. Knoop, R. Dillmann and R. D. Zollner, 'Incremental Learning of Tasks From User Demonstrations, Past
Abstract
This paper presents a multi-agent control architecture for the efficient control of a multi-wheeled mobile platform. The control architecture is based on the decomposition of the platform into a holonic, homogeneous, multi-agent system. The multi-agent system incorporates multiple Q-learning agents, which permits it to effectively control every wheel relative to the other wheels. The learning process was divided into two steps: module positioning, where the agents learn to minimize the orientation error, and cooperative movement, where the agents learn to adjust the desired velocity in order to conform to the desired position in formation. From this decomposition, every module agent has two control policies, for forward and angular velocity, respectively. Experiments were carried out with a simulation model and the real robot. Our results indicate a successful application of the proposed control architecture both in simulation and on the real robot.
6.1 Introduction
Efficient robot control is an important task for the application of mobile robots in production. Important control tasks include power consumption optimization and optimal trajectory planning. Control subsystems should
Figure 6.1 Production mobile robot: Production mobile platform (a); Driving module (b).
The disadvantage of this technique is its low flexibility and high computational
complexity.
An alternative approach is to use the machine learning theory to obtain
an optimal control policy. The problem of multi-agent control in robotics is
usually considered as a problem of formation control, trajectory planning,
distributed control and others. In this paper we use techniques from multi-
agent systems theory and reinforcement learning to create the desired control
policy.
The content of this paper is as follows: Section 6.2 gives a short
introduction to the theory of holonic, homogenous, multi-agent systems
and reinforcement learning. Section 6.3 describes the steering of a mobile
platform in detail. Section 6.4 describes the multi-agent decomposition of a
mobile platform. Using this decomposition, we propose a multi-agent control
architecture based on the model described in Section 6.2. Section 6.5 contains
a detailed description of the multi-agent control architecture. The Conclusion
highlights important aspects of the presented work.
Figure 6.2 Organizational structure of a Holonic Multi-Agent System. Lines indicate the
communication channels.
The initial values of the Q-functions are unknown and set equal to zero. The learning goal is to approximate the Q-function (i.e. to find the true Q-values for each action in every state using the received sequences of rewards). A model of influence-based multi-agent reinforcement learning is depicted in Figure 6.3.
In this model, a set of body agents with identical policies π acts in a common, shared environment. The $i$-th body agent $M_i$ in state $s_i$ selects an action $a_i$ using the current policy π, and then moves to the next state $s_i'$. The head agent observes the changes resulting from the executed action and then calculates and assigns a reward $r_i$ to the agent as evaluative feedback.
Equation (6.2) is a variation of the Q-learning update rule [21] used to update the values of the Q-function, in which learning homogeneity and parallelism are applied. Learning homogeneity refers to all agents building the same Q-function, and parallelism requires that they can do so in parallel. The following learning rule is executed N times per step, once for each agent in parallel, over the single shared Q-function:
Figure 6.3 Model of Influence Based Multi-Agent Reinforcement Learning in the Case of a
Holonic Homogenous Multi-Agent System.
$$\Delta Q(s_i, a_i) = \alpha\Big[r_i + \gamma \max_{a \in A(s_i')} Q(s_i', a) - Q(s_i, a_i)\Big]. \qquad (6.2)$$
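A minimal sketch of this shared-Q update follows; the state/action encoding is an assumption, and the α and γ defaults are the simulation values quoted later in the chapter (0.4 and 0.7).

```python
# Minimal sketch of the shared-Q update (6.2): all body agents write, one
# after another within the same step, into one common Q-table (learning
# homogeneity). The encoding of states and actions is assumed.
from collections import defaultdict

Q = defaultdict(float)            # single Q-function shared by all N agents

def update(transitions, actions_of, alpha=0.4, gamma=0.7):
    """transitions: list of (s_i, a_i, r_i, s_next_i), one per body agent;
    actions_of(s): available actions in state s."""
    for s, a, r, s_next in transitions:            # executed N times per step
        best_next = max((Q[(s_next, b)] for b in actions_of(s_next)),
                        default=0.0)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```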
Figure 6.6 Holonic decomposition of the mobile platform. Dashed lines represent the boundary of the multi-agent system (the holon). The introduction of the head agent leads to a reduction of communication costs.
Figure 6.7 Virtual Structure with a Virtual Coordinate Frame composed of Four Modules
with a known Virtual Center.
$$d_i^{err} = d_i - d_i^C. \qquad (6.3)$$
Each module’s desired position can be defined relative to the virtual coordinate
frame. Once the desired dynamics of the virtual structure are defined, the
desired motion for each agent can be derived. As a result, path planning
and trajectory generation techniques can be employed for the centroid
while trajectory tracking strategies can be automatically derived for each
module [23].
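The bookkeeping implied by the virtual structure and by Equation (6.3) can be sketched as follows; function and variable names are illustrative only.

```python
# Sketch of the virtual-structure bookkeeping: each module's desired pose is
# fixed in the virtual coordinate frame, so moving the frame's center C
# yields desired module positions, and Equation (6.3) gives the per-module
# distance error.
import numpy as np

def desired_poses(C, theta, offsets):
    """C: virtual center (x, y); theta: frame heading; offsets: Nx2 module
    positions expressed in the virtual coordinate frame."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return C + offsets @ R.T          # desired world position of each module

def distance_errors(positions, C, d_C):
    """d_err_i = d_i - d_i^C, with d_i the measured distance of module i to
    the center and d_i^C its nominal distance in the structure."""
    d = np.linalg.norm(positions - C, axis=1)
    return d - d_C
```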
Figure 6.8 A unified view of the control architecture for a Mobile Platform.
is defined by the values in Table 6.1, and $s_h = \big(C, C_0, \bigcup_{i=1}^{N}\{x_i^d, y_i^d, \varphi_i^d\}\big)$ is the state of the head agent, describing the virtual coordinate frame.
The learning system is given a positive reward when the robot orientation moves closer to the goal orientation ($\varphi_{err} \to 0$) and the optimal speed $\omega_{opt}$ is used. A penalty is received when the orientation of the robot deviates from the goal orientation or the selected action is not optimal for the given position. The value of the reward is defined as:
$$r_\omega = R_\omega(\varphi_{err}^t, \omega^t). \qquad (6.6)$$
where $R_\omega$ is a reward function, represented by the decision tree depicted in Figure 6.10. Here, $\varphi_{stop}$ represents the value of the angle at which the robot reduces speed in order to stop at the correct orientation, and $\omega_{opt} \in [0.6, 0.8]$ rad/s is the optimal speed minimizing module power consumption. The parameter $\varphi_{stop}$ is used to decrease the search space for the agent: when the agent's angle error becomes smaller than $\varphi_{stop}$, an action that reduces the speed receives the highest reward. The parameter $\omega_{opt}$ expresses the possibility of power optimization through the value function: if the agent's angle error is greater than $\varphi_{stop}$ and $\omega_{opt}^{min} < \omega < \omega_{opt}^{max}$, the agent's reward is increased by a coefficient ranging between [0, 1]. This optimization encourages the use of the preferred speed with the lowest power consumption. A sketch of such a reward rule is given below.
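The sketch renders one plausible reading of the decision tree in code; the $\varphi_{stop}$ value and the bonus magnitude are assumptions, while the [0.6, 0.8] rad/s window comes from the text.

```python
# Hedged sketch of the reward rule R_omega of Figure 6.10: drive the
# orientation error toward zero, reward slowing down inside the phi_stop
# band, and add a bonus in [0, 1] for turning at the power-optimal rate.
# phi_stop and the bonus value are assumptions.
def reward(phi_err, omega, phi_prev, phi_stop=0.1, w_lo=0.6, w_hi=0.8,
           bonus=0.5):
    if abs(phi_err) < phi_stop:                # close to goal: reward braking
        return 1.0 if abs(omega) < w_lo else -1.0
    r = 1.0 if abs(phi_err) < abs(phi_prev) else -1.0   # progress toward goal
    if w_lo <= abs(omega) <= w_hi:             # power-optimal speed window
        r += bonus
    return r
```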
6.5.1.1 Simulation
The first task of the robot control is learning robot positioning through simulation. This step is done once for an individual module, before any cooperative simulation sessions. The learned policy is stored and copied to the other modules via knowledge transfer. The topology of the Q-function trained during 720 epochs is shown in Figure 6.11.
6.5.1.2 Verification
The learning of the agent was executed on the real robot after a simulation with the same external parameters. The learning process took 1440 iterations; real learning takes more iterations on average because the real system has noise and sensor errors. Figure 6.13 illustrates the result of executing the learned control system to turn the modules towards the center, which is on the rear right side of the images [24].
Figure 6.13 Execution of a Learned Control System to turn modules to the center, which is
placed on the rear right relative to the platform.
6.5.2.1 Simulation
Figure 6.14 shows the experimental results of the cooperative movement after learning positioning [23]. It takes 11,000 epochs on average. The external parameters of the simulation are:
• Learning rate α = 0.4;
• Discount factor γ = 0.7.
During the modules' learning, the control system did not use any stabilization of the driving direction, because the virtual environment has an ideal, flat surface. In the case of the real platform, stabilization is provided by the internal controllers of the low-level module software. This allows us to consider only the linear speed control.
6.5.2.2 Verification
The knowledge base of the learned agents was transferred to the agents of the control system on the real robot. Figure 6.15 demonstrates the process of the platform moving under the learned system [25]. At first, the modules turn in the driving direction relative to the center of rotation (the circle drawn on white paper), as shown in screenshots 1–6 of Figure 6.15. Then the platform starts driving around the center of rotation, as shown in screenshots 7–9. The stabilization of the real module orientation is based on a low-level feedback controller provided by the robot's software control system; this restricts the intelligent control system to manipulating the linear speed of the modules.
Figure 6.15 The experiment of modules turning as in the car kinematics scheme (screenshots 1–6) and movement around a white beacon (screenshots 7–9).
Figure 6.16 The experiment shows that the radius does not change during movement.
The distance to the center of rotation remains the same along the entire trajectory of the platform, as confirmed by Figure 6.16. Hence, the robot drives in a circle whose center coordinates and radius are known.
6.6 Conclusions
This paper focuses on an efficient, flexible, adaptive architecture for the control of a multi-wheeled production mobile robot. The system is based on a decomposition into a holonic, homogeneous, multi-agent system and on influence-based, multi-agent reinforcement learning.
References
[1] J. C. Andreas, 'Energy-Efficient Electric Motors, Revised and Expanded', CRC Press, 1992.
[2] A. T. de Almeida, P. Bertoldi and W. Leonhard, ‘Energy efficiency
improvements in electric motors and drives’, Springer Berlin, 1997.
Vladivostok, Russia
2 Dipartimento di Ingegneria dell’Informazione, Università Politecnica delle
Abstract
The chapter is devoted to the design of an intelligent neural network-based control system for underwater robots. A new algorithm for intelligent controller learning is derived using the speed gradient method. The proposed system provides robot dynamics close to the reference ones. Simulation results of neural network control systems for underwater robot dynamics with parametric and partial structural uncertainty have confirmed the promise and effectiveness of the developed approach.
7.1 Introduction
Underwater Robots (URs) hold great promise and have a broad scope of applications in the area of ocean exploration and exploitation. To provide exact movement along a prescribed space trajectory, URs need a high-quality control system. It is well known that URs can be considered multi-dimensional, nonlinear and uncertain controllable objects. Hence, the design of UR control laws is a difficult and complex problem [3, 8].
From
$$q_2 = q_{d2} - e_2 \qquad (7.6)$$
one has
$$D(q_1)\dot{q}_2 = D(q_1)\dot{q}_{d2} - D(q_1)\dot{e}_2. \qquad (7.7)$$
Using the first term of the dynamics Equation (7.2), one can get the following:
$$D(q_1)\dot{e}_2 = D(q_1)\dot{q}_{d2} + B(q_1, q_2)q_{d2} - B(q_1, q_2)e_2 + G(q_1, q_2) - U, \qquad (7.8)$$
and thus the time derivative of the function Q can be written in the following form:
$$\dot{Q} = e_2^T\big(D(q_1)\dot{q}_{d2} + B(q_1, q_2)q_{d2} - B(q_1, q_2)e_2 + G(q_1, q_2) - U\big) + \tfrac{1}{2}e_2^T\dot{D}e_2. \qquad (7.9)$$
After mathematical manipulation, one gets
$$\dot{Q} = e_2^T\big(D(q_1)\dot{q}_{d2} + B(q_1, q_2)q_{d2} + G(q_1, q_2) - U\big) + e_2^T\Big(\tfrac{1}{2}\dot{D}(q_1) - B(q_1, q_2)\Big)e_2.$$
As is known, the last term contains a skew-symmetric matrix; hence, this term is equal to zero, and we obtain the following simplified expression:
$$\dot{Q} = e_2^T\big(D(q_1)\dot{q}_{d2} + B(q_1, q_2)q_{d2} + G(q_1, q_2) - U\big). \qquad (7.10)$$
Our aim is to implement an intelligent UR control [1] based on neural
networks. Without loss of generality of the proposed approach, let’s choose
a two-layer NN (Figure 7.1). Let the hidden and output layers have H and
m neurons, respectively (m is equal to the dimension of e2 ). For the sake
of simplicity, one supposes that only the sum of weighted signals (without
nonlinear transformation) is realized in the neural network output layer. The
input vector has N coordinates.
Let us define $w_{ij}$ as the weight coefficient for the $i$-th input of the $j$-th neuron of the hidden layer. These coefficients compose the following matrix:
$$w = \begin{bmatrix} w_{11} & w_{12} & \ldots & w_{1N} \\ w_{21} & w_{22} & \ldots & w_{2N} \\ \ldots & \ldots & \ldots & \ldots \\ w_{H1} & w_{H2} & \ldots & w_{HN} \end{bmatrix}. \qquad (7.11)$$
As a result of the nonlinear transformation $f(w, x)$, the hidden layer output vector can be written in the following form:
$$f(w, x) = \begin{bmatrix} f_1(w_1^T x) \\ \vdots \\ f_H(w_H^T x) \end{bmatrix}, \qquad (7.12)$$
where $w_k$ denotes the $k$-th row of matrix $w$ and $x$ is the NN input vector.
Analogously, let us introduce the matrix $W$, whose element $W_{li}$ denotes the transfer (weight) coefficient from the $i$-th neuron of the hidden layer to the $l$-th neuron of the output layer.
Once the NN parameters are defined, the underwater robot control signal (NN output) is computed, and the NN weights are adjusted according to the learning algorithm (7.22):
$$\dot{W} = \gamma\, e_2 f^T(w, x), \qquad \dot{w} = \gamma\, \Phi\, W^T e_2\, x^T, \qquad (7.22)$$
where $\Phi = \Phi(w, x)$ denotes the matrix of derivatives of the hidden layer activations.
Such an integral law of the NN-regulator learning algorithm may cause unstable regimes in the control system, as happens in adaptive systems [4]. The following robust form of the same algorithm is also used:
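A discrete-time illustration of the integral law (7.22) is given below. Since the explicit control formula is lost to a page break above, the output U = W f(wx) and the tanh activation are assumptions; Φ is taken as the diagonal matrix of activation derivatives.

```python
# Illustrative discrete-time version of the learning laws (7.22) for the
# two-layer NN of Figure 7.1. The control signal U = W f(w x) and the tanh
# hidden activation are assumptions, not the authors' exact choice.
import numpy as np

def step(W, w, e2, x, gamma, dt):
    """One Euler step of the integral learning law for both weight layers.
    W: (m, H) output weights; w: (H, N) hidden weights; e2: (m,) velocity
    error; x: (N,) NN input."""
    z = w @ x                           # hidden pre-activations, shape (H,)
    f = np.tanh(z)                      # hidden layer output f(w, x)
    Phi = np.diag(1.0 - f**2)           # derivative of tanh on the diagonal
    U = W @ f                           # NN output = control signal (assumed)
    W = W + dt * gamma * np.outer(e2, f)                # W' = gamma e2 f^T
    w = w + dt * gamma * Phi @ np.outer(W.T @ e2, x)    # w' = gamma Phi W^T e2 x^T
    return W, w, U
```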
$$D = D_{RB} + D_A,$$
Figure 7.4 Examples of hidden layer weight coefficients evolution (α = 0.01,γ = 250).
Figure 7.5 Examples of output layer weight coefficients evolution (α = 0.01, γ = 250).
Figure 7.8 Examples of hidden layer weight coefficients evolution (α = 0.01, γ = 200).
There exist different ways to solve it. One of the possible approaches is
derived below:
Figure 7.9 Examples of output layer weight coefficients evolution (α = 0.01, γ = 200).
$$D_0 = D_{RB0} + D_{A0},$$
$$D_{RB0} = \begin{bmatrix} 1000 & 0 & 0 \\ 0 & 1000 & 0 \\ 0 & 0 & 11000 \end{bmatrix}, \quad D_{A0} = \begin{bmatrix} 1000 & 0 & 0 \\ 0 & 1100 & 0 \\ 0 & 0 & 9000 \end{bmatrix},$$
$$B_0 = \begin{bmatrix} 210 & 0 & 0 \\ 0 & 200 & 0 \\ 0 & 0 & 150 \end{bmatrix}, \qquad G_0 = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T,$$
and the matrix $\Gamma = \mathrm{diag}[0.02, 0.02, 0.02]$.
Note that the matrices D0 , B0 of the nominal dynamics model contain only
diagonal elements which are not equal to zero. This means that the nominal
model is simplified and does not take into account an interaction between
different control channels (of linear and angular velocities). The absence of
these terms in the nominal dynamics results in partial parametric and structural
uncertainty.
Figures 7.11–7.18 show the transient processes and control signals (forces and torque) in the designed system with the modified NN regulator. The experimental results demonstrate that the robot coordinates converge to
Figure 7.16 Forces and Torque with modified NN control (α = 0.001, γ = 200).
Figure 7.17 Examples of hidden layer weight coefficients evolution (α = 0.001, γ = 200).
Figure 7.18 Examples of output layer weight coefficients evolution (α = 0.001, γ = 200).
7.6 Conclusions
An approach to designing an intelligent NN controller for underwater robots, and to deriving its learning algorithm on the basis of the speed gradient method, has been proposed and studied in this chapter. The numerical experiments have shown that high-quality processes can be achieved with the proposed intelligent NN control. In the studied case of a UR control system, the NN learning procedure makes it possible to overcome the parametric and partial structural uncertainty of the dynamical object. The combination of the neural network approach with the proposed control, designed using the nominal model of the underwater robot dynamics, simplifies the control system implementation and improves the quality of the transient processes.
Acknowledgement
The work of A. Dyda and D. Oskin was supported by the Ministry of Science and Education of the Russian Federation, State Contract No. 02G25.31.0025.
References
[1] A. A. Dyda, ‘Adaptive and neural network control for complex dynamical
objects’, - Vladivostok, Dalnauka. 2007. pp. 149 (in Russian).
[2] A. A. Dyda, D. A. Oskin, ‘Neural network control system for underwater
robots.’ IFAC conference on Control Application in Marine Systems
“CAMS-2004”, - Ancona, Italy, 2004, pp. 427–432.
[3] T. I. Fossen, 'Marine Control Systems: Guidance, Navigation and Control of Ships, Rigs and Underwater Vehicles', Marine Cybernetics AS, Trondheim, Norway, 2002.
[4] A. A. Fradkov, ‘Adaptive control in large-scale systems’, - M.: Nauka.,
1990, (in Russian).
[5] K. S. Narendra, K. Parthasarathy, 'Identification and control of dynamical systems using neural networks', IEEE Transactions on Neural Networks, vol. 1, no. 1, 1990, pp. 4–27.
[6] A. Ross, T. Fossen and A. Johansen, ‘Identification of underwater vehicle
hydrodynamic coefficients using free decay tests’, Preprints of Int. Conf.
CAMS-2004, Ancona, Italy, 2004. pp. 363–368.
Abstract
This chapter discusses advanced trends in the design of modern tactile sensors
and sensor systems for intelligent robots. The main focus is the detection of slip
displacement signals corresponding to object slippage between the fingers of
the robot's gripper.
It provides information on three approaches to using slip displacement
signals, in particular for the correction of the clamping force, the identification
of the manipulated object's mass and the correction of the robot control algorithm.
The study presents an analysis of different methods for the detection of slip
displacement signals, as well as new sensor schemes, mathematical models
and correction methods. Special attention is paid to investigations of sensors
developed by the authors with capacitive and magnetically sensitive elements and
automatic adjustment of the clamping force. New research results on the
determination of the object slippage direction based on multi-component capacitive
sensors, in particular when the robot's gripper collides with the manipulated
object, are also considered.
8.1 Introduction
Modern intelligent robots possess highly dynamic characteristics and function
effectively under a particular set of conditions. The robot control problem is more
complex in uncertain environments, as robots usually lack flexibility.
Supplying robots with effective sensor systems provides essential extensions
of their functional and technological capabilities [11]. For example, a robot may
often encounter the problem of gripping and holding the i-th object during
manipulation processes with the required clamping force F_i^r, avoiding its
deformation or mechanical injury, i = 1...n. To successfully solve such tasks,
robots should possess the capability to recognize objects by means of their
own sensory systems. Besides, in some cases, the main parameter by which
a robot can distinguish objects of the same geometric shape is their mass
m_i (i = 1...n). The robot sensor system should identify the mass m_i of each
i-th manipulated object in order to identify the class (set) the object belongs to.
The sensor system should develop the required clamping force F_i^r corre-
sponding to the mass value m_i, as F_i^r = f(m_i). Such current data may be applied
when the robot functions in dynamic or random environments, for example,
when the robot should identify unknown parameters for an object of any type
and location in the robot's working zone. A visual sensor system may
not always be usable, in particular in poor visibility conditions. Furthermore,
in cases when the robot manipulates an object of variable mass m_i(t), its
sensor system should provide the appropriate change of the clamping force value
F_i^r(t) = f[m_i(t)] for the gripper fingers. This information can also be used
for the correction of the robot control algorithm, since the mass of the robot arm's
last component and its total moment of inertia vary.
Figure 8.1 Grasping and lifting an object with the robot’s arm: Initial positions of the gripper
fingers (1,2) and object (3) (a); Creating the required clamping force Fob by the gripper fingers
during object slippage in the lifting process (b).
F_ob or F_ie = kF_ob, where k is a coefficient which impacts the reliability of the
object's motion (by the robot arm) along the required path, k > 1.
According to Figure 8.2, the robot creates a minimal value of clamping
force F_min at time moment t_1. Then, step by step, the robot lifts the object a vertical
distance Δl, and the gripper increases the clamping force (F(t_1) + ΔF) whenever
a slip displacement signal appears. The grasping surface of the object is limited
by the value l_max. The first series of trial motions is finished at time moment
t_2, when l = l_max (Figure 8.1(b)). After that, the robot decompresses the
fingers (t_2...t_3), moves the gripper (t_3...t_4) to the initial position (Figure 8.1(a))
and creates (t_4...t_5) the initial value of the clamping force F(t_5) = F(t_2) +
ΔF = F_1 for the beginning of the second stage or second series of trial motions.
A sketch of one such series is given below.
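The following Python sketch illustrates one series of trial motions. The gripper and sensor API (set_clamping_force, lift_by, slip_detected) is hypothetical and stubbed only to make the loop runnable; the chapter does not prescribe an interface.

```python
import random

# Hypothetical gripper/sensor API (not specified in the chapter):
def set_clamping_force(f): pass            # stub actuator command
def lift_by(dl): pass                      # stub lift command
def slip_detected():                       # stub slip displacement signal
    return random.random() < 0.3

def trial_motion_series(f_min, df, dl, l_max):
    """One series of trial motions (Figure 8.2): lift step by step and
    increase the clamping force by df whenever a slip signal appears."""
    force, lifted = f_min, 0.0
    set_clamping_force(force)
    while lifted < l_max:                  # grasping surface limited by l_max
        lift_by(dl)
        lifted += dl
        if slip_detected():                # slip signal -> F(t) + dF
            force += df
            set_clamping_force(force)
    return force + df                      # F1 = F(t2) + dF for the next series

print(trial_motion_series(f_min=1.0, df=0.5, dl=0.005, l_max=0.05))
```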
Some sensor systems based on slip displacement sensors were con-
sidered in [24, 25], but random robot environments very often require the
development of new robot sensors and sensor systems to increase the speed
of operations, the positioning accuracy or the desired path-following
precision.
Thus, the task of registering slippage signals between the robot
fingers and manipulated objects is connected with: a) the necessity of
creating a force adequate to the object's mass value; b) the
recognition of objects; c) the correction of the robot control algorithm.
The idea of the trial motion regime comprises an iterative increase in the
compressive force value whenever a slippage signal is detected. The
continuous lifting regime provides a simultaneous object lifting process
with increasing clamping force until the slippage signal disappears. The choice
of the slip displacement data acquisition method depends on a robot's purpose,
Figure 8.2 Series of trial motions with increasing clamping force F of gripper fingers based
on object slippage.
the salient features of its functioning medium, the requirements of its response
speed and the performance in terms of an error probability.
Figure 8.3 illustrates the main tasks in robotics which can be solved based
on slip displacement signal detection.
Figure 8.3 The algorithm for solving different robot tasks based on slip signal detection.
the computer identifying the slip displacement signal by comparing the output
signals of both accelerometers.
The interference pattern change detection method. This method converts
the intensity changes of the interference pattern reflected from the moving
surface. The intensity variation of the interference pattern is converted into
a numerical code and the auto-correlation function is computed; it reaches
its peak when the slip displacement disappears.
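A minimal sketch of the auto-correlation step, assuming the interference-pattern intensity has already been digitized into a numerical code; numpy's correlate function is used purely for illustration.

```python
import numpy as np

def autocorrelation(intensity):
    """Normalized auto-correlation of the digitized interference-pattern
    intensity code, for non-negative lags only."""
    s = intensity - np.mean(intensity)
    acf = np.correlate(s, s, mode="full")[s.size - 1:]
    return acf / acf[0]

# While the object slips, the pattern decorrelates quickly from frame to
# frame; once slippage disappears, the function stays near its peak.
pattern = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.randn(200)
print(autocorrelation(pattern)[:5])
```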
The configuration change detection in the sensitive elements method.
The essence of this method is the measurement of the varying
parameters when the elastic sensitive element's configuration changes. The
sensitive elements, made of conductive rubber, coat the surface of the object
protruding above the gripper before the trial motion. When the object is
displaced from the gripper, the configuration changes and the electrical resistance
of the sensitive elements changes accordingly, confirming the existence of
slippage.
The data acquisition by means of the photoelastic effect method. An
instance of this method may be illustrated by a transducer in which, under
the applied effort, the deformation of the sensitive leather produces a voltage
in the photoelastic system. The object slippage results in a change of the
sensitive leather deformation, which is registered by the electronic visual
system. The photosensitive transducer is a device for transforming
interference patterns into a numerical signal. The obtained image is of binary
character; each pixel gives one bit of information. The binary representation
of each pixel makes it possible to reduce the processing time.
The data acquisition based on friction detection method. The method
ensures the detection of the moment when the friction between the gripper
fingers and the object to be grasped passes from static friction to dynamic
friction.
Method of fixing the sensitive elements on the object. The method is based
on fixing the sensitive elements on the surface of the manipulated objects
before the trial motions with the subsequent monitoring of their displacement
relative to the gripper at slipping.
Method based on recording oscillatory circuit parameters. The method is
based on a change in the oscillatory circuit inductance while the object slips.
It is implemented by an inductive slip sensor with a mobile core, a stationary
excitation winding and a solenoid winding forming one of the oscillatory circuit
branches. The core may move within the solenoid winding. A reduction
of the solenoid winding voltage indicates the process of lowering: the core
lowers under its own weight from the gripper center onto the object.
Figure 8.4 Magnetic SDS: 1– Rod; 2– Head; 3– Permanent magnet; 4– Hall sensor.
$$J = J_0 + \chi H,$$
$$B_{mes} = \mu_0 J_T \left( \frac{1}{2} + \frac{1}{\pi}\,\mathrm{arctg}\,\frac{c}{4l} \right), \qquad J_T = \frac{2\pi B_{mes}}{\mu_0 \left( \pi + 2\,\mathrm{arctg}\,\dfrac{c}{4l} \right)}.$$
$$B_y(P) = -\frac{B_{mes}}{\pi + 2\,\mathrm{arctg}\,\dfrac{c}{4l}} \left[ \mathrm{arctg}\,\frac{X_p + c/2}{l - Y_p} - \mathrm{arctg}\,\frac{X_p - c/2}{l - Y_p} - \mathrm{arctg}\,\frac{X_p + c/2}{-l - Y_p} + \mathrm{arctg}\,\frac{X_p - c/2}{-l - Y_p} \right]. \tag{8.2}$$
where f_1 denotes B_y for x ∈ [−20; 20] mm, y = l + 1 mm; f_2 denotes B_y for
x ∈ [−20; 20] mm, y = l + 5 mm; and f_3 denotes B_y for x ∈ [−20; 20] mm,
y = l + 20 mm.
Figure 8.6 Simulation results for By (P) based on the mathematical model (8.2).
$$U_{out}(Y_P) = 2.5 + 7.4 \times 10^{-3} \left[ \mathrm{arctg}\,\frac{0.01}{0.014 - Y_P} + \mathrm{arctg}\,\frac{0.01}{0.014 + Y_P} \right]. \tag{8.4}$$
The comparative results for the dependences U_out(Y_p), U_E(Y_p) and U_R(Y_p)
are presented in Figure 8.7, where U_out(Y_p) was calculated using MM (8.4),
U_E(Y_p) are the experimental results according to [7] and U_R(Y_p) is a nonlinear
regression model according to [8].
The comparative analysis (Figure 8.7) of the developed mathematical
model Uout (Yp ) with the experimental results UE (Yp ) confirms the correctness
and adequacy of the synthesized models (8.1)–(8.4).
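Model (8.4) is straightforward to evaluate numerically. The sketch below reproduces the U_out(Y_P) curve compared in Figure 8.7, under the assumption that Y_P is expressed in meters:

```python
import numpy as np

def u_out(y_p):
    """Sensor output voltage U_out(Y_P) per mathematical model (8.4)."""
    return 2.5 + 7.4e-3 * (np.arctan(0.01 / (0.014 - y_p))
                           + np.arctan(0.01 / (0.014 + y_p)))

# Sample the model over a displacement range inside +/-0.014 m.
y = np.linspace(-0.012, 0.012, 7)
print(np.round(u_out(y), 4))
```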
Figure 8.8 The ball as sensitive element of SDS: 1– Finger of robot's gripper; 2– Cavity for
SDS installation; 3– Guides; 4– Sensitive element (a ball); 5– Spring; 6– Conductive rubber;
7, 8– Fiber optic light guides; 9– Sleeve; 10– Light; 11– Photodetector; 13– Cover; 14– Screw;
15– Hole.
Figure 8.9 Light-reflecting surface of the sensitive ball with reflecting and absorbing
portions (12) for light signal.
Figure 8.10 Capacitive SDS for the detection of object slippage in different directions:
1– Main cavity of robot's gripper; 2– Additional cavity; 3– Gripper's finger; 4– Rod; 5– Tip;
6– Elastic working surface; 7– Spring; 8– Resilient element; 9, 10– Capacitor plates.
The SDS is placed on at least one of the gripper fingers (Figure 8.10). The
recording element consists of four capacitors distributed across the conical
surface of the additional cavity (2). One plate (9) of each capacitor is located
on the surface of the rod (4) and the second plate (10) is placed on the inner
surface of the cavity (2).
The intelligent sensor system (Figure 8.11) provides an identification of
signals corresponding to the object slippage direction {N, NE, E, SE, S, SW, W,
NW} in the gripper in cases when it contacts obstacles.
Figure 8.11 Intelligent sensor system for identification of object slippage direction:
3– Gripper’s finger; 4– Rod; 9, 10– Capacitor plates; 11– Converter “capacitance-voltage”;
12– Delay element; 13, 18, 23– Adders; 14, 15, 21, 26– Threshold elements; 16– Multi-Inputs
element OR; 17– Computer information-control system; 19, 20, 24, 25– Channels for sensor
information processing; 22, 27– Elements NOT; 28–39– Elements AND.
$$\frac{L_b}{L_{add}} = \frac{1}{5}$$
Table 8.1 The base of production rules "IF-THEN" for identification of the slip displacement
direction

Number of           Antecedent              Consequent
Production Rule     U1    U2    U3    U4    Direction of Slippage
1                   >     =     <     =     N
2                   >     >     <     <     NE
3                   =     >     =     <     E
4                   <     >     >     <     SE
5                   <     =     >     =     S
6                   <     <     >     >     SW
7                   =     <     =     >     W
8                   >     <     <     >     NW
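Table 8.1 maps directly onto a lookup structure. The sketch below encodes the eight production rules in software; the comparison tolerance tol is our own assumption, since the chapter realizes the comparisons in hardware with threshold elements.

```python
# Signs of U1..U4 relative to their quiescent values: '>' above, '<' below,
# '=' within tolerance. Keys follow Table 8.1 row by row.
RULES = {
    ('>', '=', '<', '='): 'N',  ('>', '>', '<', '<'): 'NE',
    ('=', '>', '=', '<'): 'E',  ('<', '>', '>', '<'): 'SE',
    ('<', '=', '>', '='): 'S',  ('<', '<', '>', '>'): 'SW',
    ('=', '<', '=', '>'): 'W',  ('>', '<', '<', '>'): 'NW',
}

def classify(u, u0, tol):
    """Map the four capacitor voltages to a slippage direction (Table 8.1)."""
    signs = tuple('>' if v > v0 + tol else '<' if v < v0 - tol else '='
                  for v, v0 in zip(u, u0))
    return RULES.get(signs)  # None if no production rule fires

print(classify((1.2, 1.0, 0.8, 1.0), (1.0, 1.0, 1.0, 1.0), tol=0.05))  # -> N
```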
Figure 8.12 Self-adjusting gripper of an intelligent robot with angle movement of clamp-
ing rollers: 1, 2– Finger; 3, 4– Guide groove; 5, 6– Roller; 7, 8– Roller axis; 9, 15,
20– Spring; 10– Object; 11, 18– Elastic working surface; 12– Clamping force sensor; 13,
14– Electroconductive contacts; 16, 19– Fixator; 17– Stock; 21– Adjusting screw;
22– Deepening; 23– Finger’s drive.
Figure 8.15 Intelligent robot with 4 degrees of freedom for experimental investigations
of SDS.
• the time of the sliding process including the moments between the
emergence of sliding and its disappearance;
• the minimal object displacement detected by the slip signal.
The problem of raising the sensors' response speed in measuring slip displacement
signals is tackled by improving their configuration and using measuring
circuit designs with high resolving power.
8.7 Conclusions
The slip displacement signal detection methods considered in the present chapter
furnish an explanation of the main detection principles and give robot
sensing systems broad capabilities. The authors developed a wide
variety of SDS schemes and mathematical models for capacitive, magnetic
and light-reflecting sensitive elements with improved characteristics (accuracy,
time response, sensitivity). It is very important that the developed multi-
component capacitive sensor allows identifying the direction of object slippage
based on slip displacement signal detection, which can appear in the case
of intelligent robot collisions with any obstacle in a dynamic environment.
The design of self-clamping grippers is also a very promising direction for
intelligent robot development.
The results of the research are applicable to the automatic adjust-
ment of the clamping force of a robot's gripper and to robot motion correction
algorithms in real time. The methods introduced by the authors may
also be used in random operational conditions, within problems of automatic
assembly, sorting, pattern and image recognition in the working zones of
robots. The proposed sensors and models can be used for the synthesis of
intelligent robot control systems [2, 16] with new features and for solving
orientation and control tasks during intelligent robot contacts with obstacles.
References
[1] R. S. Dahiya, G. Metta, M. Valle and G. Sandini. Tactile sensing:
from humans to humanoids. IEEE Transactions on Robotics, 26(1):1–20,
2010.
[2] A. A. Kargin. Introduction to Intelligent Machines. Book 1: Intelligent
Regulators. Nord-Press, DonNU, Donetsk, 2010.
[25] M. Ueda, K. Iwata and H. Shingu. Tactile sensors for an industrial robot
to detect a slip. In 2nd Int. Symp. on Industrial Robots, pages 63–76,
Chicago, USA, 1972.
[26] Y. M. Zaporozhets. Qualitative analysis of the characteristics of direct
permanent magnets in magnetic systems with a gap. Technical electro-
dynamics, (No. 3):19–24, 1980.
[27] Y. M. Zaporozhets, Y. P. Kondratenko and O. S. Shyshkin. Three-
dimensional mathematical model for calculating the magnetic induction
in magnetic-sensitive system of slip displacement sensor. Technical
electrodynamics, (No. 5):76–79, 2008.
[28] Y. M. Zaporozhets, Y. P. Kondratenko and O. S. Shyshkin. Mathematical
model of slip displacement sensor with registration of transversal con-
stituents of magnetic field of sensing element. Technical electrodynamics,
(No. 4):67–72, 2012.
9
Distributed Data Acquisition and Control
Systems for a Sized Autonomous Vehicle
Abstract
In this chapter, we present an autonomous car with distributed data processing.
The car is controlled by a multitude of independent sensors. For lane detection,
a camera is used, which detects the lane marks using a Hough transformation.
Once the camera detects these, one of them is selected to be followed by the
car. This lane is verified by the other sensors of the car. These sensors check
the route for obstructions or allow the car to scan a parking space and to park
on the roadside if the gap is large enough. The car is built on a scale of 1:10
and shows excellent results on a test track.
9.1 Introduction
In modern times, the question of safe traveling has become more and more
important. Most accidents are caused by human failure, so in many sectors
of industry the issue of "autonomous driving" is of increasing interest. An
autonomous car will not have problems like being in bad shape that day
or tiredness, and will suffer less from reduced visibility due to environmental
influences. A car with laser sensors to detect objects on the road, with sensors
that measure the grip of the road, that calculates its speed based on the signals
of these sensors and that has a fixed reaction time will reduce the number of
accidents and the related costs.
types of lane marks exist: from white lines, as on the test track, to pil-
lars and a missing center line, every type of lane mark is expected to
show up.
In order to simplify the lane detection on the test track, it is assumed that
at all times only one type of lane mark exists. The road has a fixed width and
the radius of the curves measures at least 1 meter. The test track has no slope,
but a flat surface.
data analysis that is positioned next to the sensors. The front boards provide
infrared sensors for object tracking at a distance.
The infrared sensors of the side board have the task of finding a parking
space and transmitting the information via CAN bus to the main controller,
which undertakes the control of parking, supported by the information it gets
from the sensors in the front and back of the car.
The rear board is equipped with infrared sensors, too. It serves the
back of the vehicle only and guarantees a safe distance to all objects behind
the car. The microcontrollers are responsible for the data processing of
each board and send the information to the main controller via CAN bus.
Each of the microcontrollers reacts to the incoming input signals of the
corresponding sensors according to its implemented control. The infrared
sensors are distributed alongside the car, as shown in Figure 9.3.
The motherboard with the integrated main controller is the main access
point of the car. It provides the CAN bus connection and the power supply for
the other boards and external sensors of the car. But primarily, it is the central
communication point of the car and manages the information that comes from
the peripheral boards, including the data from the external sensors, the control
signals for the engine and the servo for the steering angle.
The motherboard gets its power supply from three 5 V batteries. With these
three batteries, the model car is able to drive about one hour autonomously.
The main task of the main controller is the control of the vehicle. It
calculates the speed of the car and the steering angle based on a mathematical
model of the car and the information from the sensors. The external engine driver
sets the speed via PWM. The steering angle of the car is adjusted by the front
wheels; an additional servo controls the wheels' angle.
The camera and its lane detection are the most important components of the
vehicle. The camera is installed in the middle of the front of the car, see Figure 9.4.
The viewing angle is important for the position of the camera. If the viewing
angle is too small, the pictures of the camera show only a near area in front of the
car, but not the area in the middle distance. If the viewing angle is too
big, the camera shows a big area in front of the car covering near and far
distances, but the information about the road is so condensed that an efficient lane
detection isn't possible. The angle also depends on the height of the camera
and the numerical aperture of the lens. The higher the camera is positioned,
the smaller the viewing angle. For this project, the camera has a height of
30 cm and a viewing angle of 35 degrees. The height and the angle of the
camera are based on experimental research.
Figure 9.5 shows the reduced signal flow of the vehicle. The information
from the infrared sensors is sent to a small microcontroller, as visualized
by the dotted lines. In reality, each sensor has its own microcontroller, but to
reduce the complexity of the graphic, they are shown as one. The camera
has its own microcontroller. This controller must be able to accomplish
the necessary calculations for lane detection in time. For the control of the
vehicle by the main controller, it is necessary that the information from all
other controllers is updated in one calculation step; this is needed for
the mathematical model of the car. The main controller gathers for its
calculation the analyzed data provided by the smaller microcontrollers, the data
from the camera about the driving lane and the information from other sensors
like the gyroscope and the accelerometer. The essential signal flow of all these
components to the main controller is visualized by the solid lines in Figure 9.5.
After its calculation, the main controller sends control signals to the engine
and to the servo, which controls the steering angle of the car.
Incremental encoders on the rear wheels detect the actual speed and
calculate the path the vehicle has traveled during the last calculation step
of the mathematical model (a sketch of this computation is given below).
The sensors send the data via CAN bus to the main controller. The vehicle is
front-engined, so traction of the rear wheels is ensured and potential measurement
errors through wheel spin are avoided.
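A minimal sketch of this odometry step; the encoder resolution, wheel circumference and step time used here are assumed values, not project parameters.

```python
def wheel_odometry(tick_count, ticks_per_rev, wheel_circumference, dt):
    """Distance traveled and speed over one calculation step of the model,
    from the incremental encoder tick count of a rear wheel."""
    distance = (tick_count / ticks_per_rev) * wheel_circumference  # meters
    return distance, distance / dt                                 # m, m/s

# Example: 42 ticks in a 20 ms step with a 512-tick encoder on a 21 cm wheel.
d, v = wheel_odometry(tick_count=42, ticks_per_rev=512,
                      wheel_circumference=0.21, dt=0.02)
print(d, v)
```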
There are two modules that do not communicate with the main controller
via CAN bus: the first one is the camera, ensuring that the vehicle keeps
the track; the second is a sensor module, which includes the gyroscope
and the accelerometer. Both modules lack a CAN interface, so they
communicate with the main microcontroller via a USART interface.
In the future, the focus will be on an interactive network of several inde-
pendent vehicles based on radio transmission. This will allow all vehicles to
communicate with each other and share information like traction and behavior
of the road, the actual GPS position, or speed. The radio transmission is
carried out with the industry standard "ZigBee". An XBee module from
the company Digi undertakes the radio transmission. The module uses a
UART interface for the communication with the main microcontroller on the
vehicle. Via this interface, the car will get information from other cars nearby.
A detailed overview of the data processing system, including the XBee
module, is shown in Figure 9.6.
9.4.2 Hough-Transformation
The Hough-transformation is an algorithm to detect lines or circles in images,
which in this case means that it investigates the binary image from the In-Range
filter in order to find the lane marks.
The Hessian normal form is used to convert individual pixels so that they can be
recognized as lines in the Hough space. In this space, lines are expressed
by their distance to the origin and their angle to one of the axes. Due to
the fact that the exact angle of the marks is unknown, the distance to the origin
is calculated based on Equation (9.1), utilizing the most probable
angles:
r = x · cos(α) + y · sin(α). (9.1)
The intersection of the sinusoids provides the angle and the distance of
the straight line from the origin of the coordinates. These parameters create a
new line, so that the majority of the pixels can be detected. Furthermore, the
function from the OpenCV library returns the start and the end point of each
Hough line. As Figure 9.8 shows, the lines of the Hough transformation are
precisely mapped onto the lane marks of the road.
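A minimal OpenCV sketch of the described pipeline: the binary mask from an In-Range filter is fed to the probabilistic Hough transform, which returns the start and end point of each detected line. The image file name and the filter/transform thresholds are assumptions, not the project's actual values.

```python
import cv2
import numpy as np

frame = cv2.imread("road.png")            # assumed test-track image
assert frame is not None, "expected a test image at road.png"

# In-Range filter: keep near-white pixels (the lane marks) as a binary mask.
mask = cv2.inRange(frame, np.array([200, 200, 200]), np.array([255, 255, 255]))

# Probabilistic Hough transform on the binary image.
lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:    # start/end point of each Hough line
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("road_hough.png", frame)
```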
9.4.4 Polynomial
To describe the lane marks more efficiently, a second-degree polynomial is
used. The coefficients of the parabola are derived by the least-squares method.
A polynomial of a higher degree isn't needed: the effort to calculate
the coefficients would be too high to make sense in this context, since the speed
of the image processing is one of the critical points of the project. Furthermore,
the area of the road pictured by the camera is too small; the road segment
cannot take the typical form of a third-degree polynomial.
As visible in Figure 9.10, the parabolas derived from the sorted points are
mapped precisely onto the lane marks of the road. The algorithm to calculate
the coefficients from the points of the lane marks is handwritten; an equivalent
least-squares fit is sketched below.
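The sketch below shows the equivalent least-squares fit using numpy; the point coordinates are illustrative, not measured data.

```python
import numpy as np

# Sorted lane-mark points (illustrative pixel coordinates).
xs = np.array([12, 25, 40, 58, 80, 105], dtype=float)
ys = np.array([310, 295, 284, 276, 272, 271], dtype=float)

# Least-squares fit of a second-degree polynomial: y = a*x^2 + b*x + c.
a, b, c = np.polyfit(xs, ys, deg=2)
lane = np.poly1d([a, b, c])

print(lane(60.0))   # interpolated lane-mark position at row 60
```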
of two opposite points of the two parabolas is taken. According to (9.2), the
average of the x- and y-coordinates is calculated:
$$\begin{pmatrix} x_m \\ y_m \end{pmatrix} = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \frac{1}{2}\left[\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} - \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}\right]. \tag{9.2}$$
derived from the change of the difference between the third and second image
taken before the current one, and the difference between the second and the
first image before the current one.
The critical values for the difference also depend on this calculation. That
means that in curves, the critical values are higher. Otherwise, only the last three
images are used for the calculation, in order to reduce the noise of the driving
lane; however, in this case, the reaction time of the algorithm is shorter.
The reaction time also depends on the fps (frames per second) of the
camera. For this project, a camera with 100 fps is used and the last fifteen
driving lanes are stored. The number of stored driving lanes for 100 fps is
based on experimental research.
Figure 9.10 shows the driving lane in red color. The four edge points mark
the corners of the rectangle.
finds a potential stop line in the right area with a correct angle, the algorithm
checks the next two characteristics of a stop line: its length and its width.
The length of the stop line is easy to check. The stop line must be as long
as the road is wide, so the algorithm only needs to check the endpoints of the
line. On the left side, the endpoint of the stop line must lie on the middle road
marking. On the right side, the stop line borders on the left road marking of
the crossing road. The stop line and the road marking differ in just one point:
the width.
Since it is not possible to perceive the differences in width in every
situation, the stop line has no defined end point on this side. So, the algorithm
checks whether the end point of the potential stop line lies on or above the right
road marking. It is hard to measure, in an image, the width of a line that has
constant width and length in reality: the width of the line in pixels depends
on the camera position relative to the line, the numerical aperture of the
camera lens and the resolution of the camera. Because the position of the
camera changes from time to time in this project, measuring the width is not
a reliable way to perceive the stop line. Therefore, the width is not used as a
criterion for stop lines.
Figure 9.12 shows a typical crossing situation. The left image visualizes
the basic situation and the middle image shows the search area as a rectangle.
Here you can see that the stop line on the left side is not covered by the search
area, so the algorithm doesn't recognize the line as a stop line. In the right
image, the stop line ends correctly on the middle road marking. The line in
the image shows that the algorithm has found a stop line. Due to the left road
marking of the crossing road, the line ends outside the real stop line.
lateral deviation is expressed in meters, and the course angle in degrees. The
course angle is the angle of the driving lane as calculated by the camera.
The lateral deviation is the distance of the car's center of gravity to the
driving lane when they are at the same level. Since the lateral deviation
is needed in meters, the algorithm has to convert the pixel coordinates
from the image into meters in the real world. The course angle can be
calculated from the pixel coordinates in the image, but this method is
error-prone.
There are two different methods to convert the pixels into meters.
Pixels can be converted via Equations (9.3) and (9.4):
$$x(u,v) = \frac{h}{\tan\left(\bar{\theta} - \alpha + u\,\frac{2\alpha}{n-1}\right)} \cdot \cos\left(\bar{\gamma} - \alpha + v\,\frac{2\alpha}{n-1}\right) + l, \tag{9.3}$$
$$y(u,v) = \frac{h}{\tan\left(\bar{\theta} - \alpha + u\,\frac{2\alpha}{n-1}\right)} \cdot \sin\left(\bar{\gamma} - \alpha + v\,\frac{2\alpha}{n-1}\right) + l. \tag{9.4}$$
In the equations, x and y are the coordinates in meters. γ̄ stands for the drift
angle of the camera in the plane area and θ̄ stands for the pitch angle of the
camera. α is the numerical aperture of the camera, u and v are the coordinates
of one pixel in the image.
Using these equations, the complete image can be converted into
real-world coordinates. The drawback of this method is that all parameters
of the camera have to be known exactly; every difference between the
numerical aperture in the equation and the exact physical aperture of the
camera lens can cause large errors in the calculation. Furthermore, this
method needs more calculation time on the target hardware. A big plus of
this method is that the camera can be re-positioned during experimental
research.
The second method is to store references for some pixels in lookup tables.
For these pixels, the corresponding values in meters can be calculated or
measured. This method needs much less calculation time but is also much
less precise. With this method, the camera cannot be re-positioned during
experimental research: every time the camera is re-positioned, the reference
tables must be re-calculated.
Which method to prefer depends on the project requirements regarding
accuracy and on the project's hardware. For this project, the second method is
used. To meet the demands on accuracy, a reference is stored for every tenth
pixel of the camera, as sketched below.
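The lookup-table method can be sketched as follows; the table values are placeholders, and linear interpolation between the stored references is our own assumption.

```python
import numpy as np

# Sparse lookup table: a reference for every tenth image row, with the
# corresponding longitudinal distance in meters (placeholder values).
u_ref = np.arange(0, 481, 10)                # every tenth pixel row
x_ref = np.linspace(0.25, 2.10, u_ref.size)  # measured/calculated distances

def pixel_to_meters(u):
    """Interpolate the real-world distance for image row u between the
    stored references."""
    return np.interp(u, u_ref, x_ref)

print(pixel_to_meters(137))   # distance for an arbitrary pixel row
```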
Based on the state-space equation, a controller can be designed using
tools such as Matlab Simulink or Scilab Xcos.
Figure 9.13 Driving along a set path: Track model (a); Lateral deviation and heading
angle (b).
In order to keep the vehicle on the track, state variables such as the heading
angle, the yaw rate and the lateral deviation must be known. A gyroscope is
used to detect the yaw rate of the vehicle. The lateral deviation and the
course angle are calculated from the camera image and sent to the main
controller. Until the next image is analyzed, the coordinates
on the microcontroller stay the same as before. Between two pictures, the
course angle and the lateral deviation are recalculated after each motion.
This is possible because the velocity and yaw rate are known at any time.
Figure 9.13(b) illustrates the relationship of lateral deviation (ΔY) and heading
angle (α).
9.6 Results
This section gives an overview of the project results.
The autonomous car was built with the hardware suggested before.
Experiments on scaled test roads show that the car can drive autonomously.
However, the tests also showed the limitations of this prototype. The effort
for the image processing was underestimated: the on-board processor of the
camera isn't able to accomplish the necessary calculations in time. As a
consequence, the maximum speed of the car has to be kept very low. If
it isn't, the control of the vehicle becomes unstable, with a more or less random
driving path. In addition, the car has problems with very sharp curves. The
process of dividing the image isn't dynamic, so in curves the preset section
becomes incorrect and the algorithm isn't able to calculate a correct driving
path. Thanks to its laser sensors, the car is able to avoid collisions with
baffles.
To improve the performance of the car, the hardware for the image
processing has to be improved. The image processing itself works stably;
problems derive from the calculation algorithm for the driving path. At this
point in time, the algorithm doesn't contain the necessary interrupts for every
situation on the road, but this drawback will be corrected in the second
prototype.
9.7 Conclusions
In this chapter, an autonomous vehicle with distributed data acquisition and
control systems has been presented. For control, the vehicle has a number
of independent sensors. The main sensor is a camera with a lane tracking
algorithm, which contains edge detection and Hough transformation. The lane
is verified by laser sensors in the front and side of the vehicle. It is planned
Abstract
The authors examine the current relationship between the theory of poly-
metric measurements and the state of the art in intelligent system sensing.
The chapter contains commentaries about: concepts and axioms of polymetric
measurements theory, corresponding monitoring information systems used in
different technologies and some prospects for polymetric sensing in intelligent
systems and robots. The application of the described concepts in technological
processes ready to be controlled by intelligent systems is illustrated.
Figure 10.1 The main idea of the replacement of the distributed multi-sensor system by a
polymetric perceptive agent.
Figure 10.2 TDR Coaxial probe immersed into the liquid and the corresponding polymetric
signal.
Figure 10.3 Time diagrams for polymetric signal formation using additional reference time
intervals.
to count the delays between two sounding pulses τ ZZ and between the first
sounding pulse and the reflected pulse τ ZS (in terms of the number of ADC
readings).
The delay τ ZZ , expressed in ADC readings count, corresponds to the time
delay TT S (in seconds). The next equation should be used to calculate the time
scale of the transformed and digitized signal:
$$K_{DT} = \frac{T_{TS}}{\tau_{ZZ}}\ \mathrm{s/reading}. \tag{10.8}$$
It is possible to calculate the time value of the delay between the pulses t0
(see Figure 10.2):
$$t_0 = \frac{K_{DT}\,\tau_{ZS}}{K_{TS}} = \frac{\tau_{ZS}}{f_1\,\tau_{ZZ}}\ \mathrm{s}. \tag{10.9}$$
Finally, to calculate the distance to the surface of the liquid, it is necessary
to use the equation:
$$L_0 = \frac{c}{2\sqrt{\varepsilon_0}}\cdot\frac{\tau_{ZS}}{f_1\,\tau_{ZZ}} + b_1, \tag{10.10}$$
where b1 (zero shift) is calculated using information on the PCB layout and
generator parameters or during a simple calibration procedure.
Figure 10.4 Disposition of the measuring probe in the tank, position of the reflector and
corresponding signals for the cases with air and vapour.
The delays between the sounding pulse and the pulse reflected from the
special reflector, τ_ZR, and between the sounding pulse and the pulse reflected
from the surface of the liquid, τ_ZS, are different for the cases with air and vapour.
The dielectric constant ε0 can be calculated using the known distance LR .
Therefore, the distance to the surface of the liquid L0 can be calculated using
the corrected dielectric constant value:
$$L_0 = \frac{(L_R - b_1)\,\tau_{ZS}}{\tau_{ZR}} + b_1. \tag{10.11}$$
As it can be seen from the equation, the result of the measurement depends on
the time intervals between the sounding and reflected pulses and the reference
distance between the generator and the reflector.
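Equations (10.10) and (10.11) translate directly into code. The sketch below uses illustrative delay counts and calibration values, not measurements from the described system.

```python
# Level computation per Equations (10.10)-(10.11). The tau_* arguments are
# delays counted in ADC readings; f1 and the zero shift b1 come from the
# generator parameters or a calibration procedure.
C = 299_792_458.0  # speed of light in vacuum, m/s

def level_basic(tau_zs, tau_zz, f1, eps0, b1):
    """L0 from (10.10), assuming a known dielectric constant eps0."""
    return C / (2 * eps0 ** 0.5) * tau_zs / (f1 * tau_zz) + b1

def level_with_reflector(tau_zs, tau_zr, l_r, b1):
    """L0 from (10.11): a reference reflector at known distance L_R corrects
    for the actual wave velocity (air vs. vapour above the liquid)."""
    return (l_r - b1) * tau_zs / tau_zr + b1

print(level_with_reflector(tau_zs=840, tau_zr=420, l_r=1.5, b1=0.05))
```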
The above-described example of polymetric signal formation showed
that a single signal carries information about the time scale of this signal,
the velocity of propagation of electromagnetic waves in dielectrics and the
linear dimensions of these layers. This list of measurable parameters can be
easily continued by using additional information in the existing signals and
building new «hyper-signals» [9] for measuring some other characteristics of
controllable objects (e.g. on the basis of the spectral analysis of these signals
the controllable liquid can be classified and some quality characteristics of the
liquid can be calculated).
Figure 10.5 Example of the general structure of LASCOS hardware components and elements.
cargo quantity and quality monitoring and control (6); a set of polymetric
sensors for liquefied LPG or LNG cargo quantity and quality monitoring and
control (7); switchboards of the subsystem for actuating devices and operating
mechanisms control (8); a basic electronic block of the subsystem for liquid,
liquefied and loose cargo monitoring and control (9); a block with the sensors
for real-time monitoring of ship dynamic parameters (10).
The structure of the software part of the typical sensory intelligent
LASCOS is presented in Figure 10.6.
It consists of three main elements: a sensory monitoring agency (SMA)
which includes three other sensory monitoring agencies – SSM (sea state,
e.g. wind and wave model parameters), SPM (ship parameters) and NEM
(navigation environment parameters); an information environment agency
(INE) including fuzzy holonic models of ship state (VSM) and weather
conditions (WCM), and also data (DB) and knowledge (KB) bases; and
last but not least – an operator interface agency (OPIA) which provides the
decision-making person (DMP) with necessary visual and digital information.
Unfortunately, until now, the above-mentioned challenges have not been
combined together in an integrated model, which applies cutting-edge and
novel simulating techniques. Agent-based computations are adaptive to
information changes and disruptions, exhibit intelligence and are inherently
distributed [4]. Holonic agents inherently may help design and operational
Figure 10.6 The general structure of LASCOS software elements and functions.
Each tank is designated for the particular cargo type and equipped with
the required sensors. For example, for the measurement of the diesel oil
parameters, the corresponding tanks are equipped with the level sensors
(level sensors with the separation level measurement feature) and temperature
sensors. The total number of tanks and the corresponding sensors for a
typical supply vessel are shown in Table 10.1 (in the case of the traditional
sensory system).
All the information acquired from the sensors must be pre-processed
for the final calculation of the required cargo and ship parameters in the
computing system. Each of the sensors requires power and communication
lines, acquisition devices and/or interface transformers (e.g. current loop into
RS-485 MODBUS) and so on.
In contrast to the classical systems, the polymetric sensory system requires
only one sensor for the measurement of all required cargo parameters in a
tank. Therefore, if we assume that traditional and polymetric sensory systems
are equivalent in measurement information quality and reliability (the systems
are interchangeable without any loss of measurement information quality),
it is obvious that the polymetric system has an advantage in the number of
measurement channels.
The cost criterion can be used for the comparison of the efficiency
of traditional and polymetric sensory systems. Denoting the cost of one
Table 10.1 Quantity and Sensor Types for the Traditional Cargo Sensory System (Example)

Cargo Type       Measurable Parameters                       Tanks    Sensors       Total Sensors
                                                             Number   Number/Tank   Number
Diesel oil       Level in the tank, presence of water        6        3             18
                 in the tank, temperature
LPG              Level in the tank, quantity of the          1        3             3
                 liquid and vapor gas, temperature
Ballast water    Level in the tank                           6        1             6
Drinking water   Level in the tank, presence of other        6        2             12
                 liquids in the tank
Bulk cargo       Level in the tank, quality parameter        2        2             4
                 (e.g. moisture content)
Total                                                        21                     43
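As a rough illustration of the channel-count comparison, the sketch below tallies Table 10.1 against a polymetric system with one probe per tank, under the interchangeability assumption stated above.

```python
# (tank count, sensors per tank) for each cargo type in Table 10.1.
tanks = {"diesel oil": (6, 3), "lpg": (1, 3), "ballast water": (6, 1),
         "drinking water": (6, 2), "bulk cargo": (2, 2)}

traditional = sum(n * s for n, s in tanks.values())  # 43 measurement channels
polymetric = sum(n for n, _ in tanks.values())       # 21 channels, one probe/tank

print(traditional, polymetric)
```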
pumps, etc. The main window consists of the main menu, the toolbar,
the information panel and also the technological layout containing control
elements.
It is possible to start, change parameters or stop technological processes by
clicking a particular control element. The user interface enables the operator
to efficiently supervise and control every detail of any technological process.
All information concerning ship safety and operation monitoring and control,
event and alarm management, database management or message control is
structured in functional windows.
components depending on time and real temperature deviation during the pro-
duction process); real-time remote control of aggressive chemicals at a nuclear
power station water purification shop (quantitative and qualitative control of
the components depending on time and real temperature deviation during
the production process); water parameters control in the primary coolant
circuit at a nuclear power station in normal and post-emergency operation
state (fluidized bed-level control, pressure and temperature monitoring – all
in the conditions of increased radioactivity).
10.5 Conclusions
This chapter has been intended as a general presentation of a rapidly developing
area: the promising transition from traditional on-board monitoring systems
to intelligent sensory decision support and control systems based on novel
polymetric measuring, data mining and holonic agent techniques. The area is
especially attractive to those researchers who are attempting to develop the
most effective intelligent systems by squeezing the maximum information
from the simplest and most reliable sensors by means of sophisticated and
effective algorithms.
In order for monitoring and control systems to become intelligent, not only
for exhibition demonstrations and show presentations but for real industrial
applications, one needs to implement leading-edge solutions for each and
every component of such a system.
Combining polymetric sensing, data mining and holonic agency tech-
niques into one integrated approach seems rather promising, provided that
further research develops appropriate theoretical models and integrates them
into practice.
References
[1] K. Hossine, M. Milton and B. Nimmo, ‘Metrology for 2020s’, Middle-
sex, UK: National Physical Laboratory, 2012, p. 28.
[2] S. Russell and P. Norvig, ‘Artificial Intelligence: A Modern Approach’,
Upper Saddle River, NJ: Prentice Hall, 2003.
[3] L. Monostori, J. Vancza and S. Kumara, 'Agent-based systems for
manufacturing', Annals of the CIRP, vol. 55, no. 2, 2006.
[4] P. Leitao, ‘Agent–based distributed manufacturing control: A state-
of-the-art survey’, Engineering Applications of Artificial Intelligence,
no. 22, pp. 979–991, 2009.
Abstract
An intelligent femtocell-based sensor network is proposed for home monitor-
ing of elderly or people with chronic diseases. The femtocell is defined as a
small sensor network which is placed into the patient’s house and consists of
both mobile and fixed sensors disposed on three layers. The first layer contains
body sensors attached to the patient that monitor different health parameters,
patient location, position and possible falls. The second layer is dedicated for
ambient sensors and routing inside the cell. The third layer contains emer-
gency ambient sensors that cover burglary events or toxic gas concentration,
distributed by necessities. Cell implementation is based on the IRIS family
of motes running the embedded software for resource-constrained devices,
TinyOS. In order to reduce energy consumption and radiation level, adaptive
rates of acquisition and communication are used. Experimental results within
the system architecture are presented for a detailed analysis and validation.
11.1 Introduction
Recent developments in computing and communication systems applied to
healthcare technology give us the possibility to implement a wide range
of home-monitoring solutions for elderly or people with chronic diseases
[1]. Thus, people may perform their daily activities while being constantly
under the supervision of the medical personnel. The indoor environment is
optimized such that the possibility of injury is minimal. Alarm triggers and
smart algorithms send data to the right intervention units according to the
detected emergency [2].
When living inside closed spaces, small variables may be significant to
the entire person’s well-being. Therefore, the quality of the air, temperature,
humidity or the amount of light inside the house may be important parameters
[3]. Reduced costs, size and weight and energy-efficient operation of the
monitoring nodes, together with the more versatile wireless communications,
make the daily usage of the systems monitoring health parameters more
convenient. By wearing them, patients are free to move at their own will
inside the monitored perimeter, practically forgetting their presence. The goal
is to design the entire system operation for a long period of time without human
intervention and at the same time, triggering as few false alarms as possible.
Many studies have investigated the feasibility of using several sensors placed
on different parts of the body for continuous monitoring [4]. Home care for the
elderly and persons with chronic diseases becomes an economic and social necessity.
With a growing population of ageing people and the health care prices rising
all over the world, we expect a great demand for home care systems [5, 6].
An Internet-based topology is proposed in [7] for the remote home-monitoring
applications that use a broker server, managed by a service provider. The
security risks from the home PC are transferred to the broker server and
removed, as the broker server is located between the remote-monitoring
devices and the patient’s house. An early prototype of a mobile health service
platform that was based on Body Area Networks is MobiHealth [8]. The
most important requirements of the developer for an e-health application are
size and power consumption, as considered in [9]. Also, in [10], a thorough
comprehensive study of the energy conservation challenges in wireless sensor
networks is carried out.
In [11], a wireless body area network providing long-term health moni-
toring of patients under natural physiological states without constraining their
normal activities is presented.
Integrating the body sensors with the existing ambient monitoring network
in order to provide a complete view of the monitored parameters is one of the
issues discussed in this chapter. Providing a localization system and a basic
algorithm for event identification is also part of our strategy to fulfill all
possible user requests. Caregivers also value information about the quality
of air inside the living area. Many false health problems are usually related to
the lack of oxygen or high levels of CO or CO2 .
The chapter is organized as follows: Section 2 provides an overall view
on the proposed system architecture and detailed insight into the operation
requirements for each of the three layers for body, ambient and emergency
monitoring. Section 3 introduces the main criteria for efficient data collection
and a proposal for an adaptive data rate algorithm for both the body sensor
network and the ambient sensor network. This has the aim of reducing the
amount of data generated within the networks, considering processing, stor-
age, and energy requirements. Implementation details and experimental results
are evaluated in Section 4, where the path is set for long-term deployment and
validation of the system. Section 5 concludes the chapter and highlights the
main directions for future work.
While IEEE 802.15.4 and ZigBee enable large dense networks with complex
mesh topologies, the use of Bluetooth can become an advantage in applications
with higher data-rate requirements and low latency.
The layer characteristics and functionalities are further elaborated upon.
The main events that should be detected by the BSN cover fall detection,
activity recognition and variations in investigated parameters corresponding
to alert and alarm levels.
Figure 11.2 Data and information flow within and outside the femtocell.
body sensor will always be connected to the closest router. By using fixed
environmental sensors with their own IDs and previously known positions, we can
determine which room is presently used by the inhabitant. In order to have
an accurate localization of the patient, an attenuation map of the sensors from
each room must be created. Localizing the patient simply by the closest ASN
is not accurate, as the following scenario shows. Suppose we have an ASN
located in the bedroom, on the left side of the room. On the right side of the room
is the door to the living room, and near this door is the ASN for
the living room. If the patient is located in the bedroom but very close to the
door, the closest ASN will be the one from the living room, although he/she is
situated in the bedroom. In order to avoid this localization error, we introduce
the attenuation map of the sensors. Every ASN that localizes the BSN on the
patient transmits an attenuation factor. This way, using the attenuation
factors from each sensor, we can localize the patient very accurately. In our
example, if the bedroom ASN has a 10% factor and the living room ASN has
a 90% factor, then, using the attenuation map, we localize the patient as being in
the bedroom, very close to the door between the two rooms. A minimal sketch
of this decision rule is given below.
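The sketch assumes each ASN reports a scalar attenuation factor for the patient's BSN, with a lower factor meaning a stronger, less obstructed link; the identifiers and factors are illustrative.

```python
def locate(attenuation_by_asn, room_of_asn):
    """Pick the room whose ASN reports the lowest attenuation factor,
    i.e. the least obstructed link to the body sensor node."""
    best_asn = min(attenuation_by_asn, key=attenuation_by_asn.get)
    return room_of_asn[best_asn]

rooms = {"asn_bedroom": "bedroom", "asn_living": "living room"}
factors = {"asn_bedroom": 0.10, "asn_living": 0.90}
print(locate(factors, rooms))   # -> bedroom, matching the example above
```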
The position of a hybrid femtocell in the large wireless sensor network
system is presented in Figure 11.3. Its main purpose is to monitor and interpret
data, sending specific alarms when required. The communication between the
femtocells and the network is based on the Internet. The same method is used for
can be reliably evaluated locally and relayed to the gateway as small packets
of timely information. The module can be either powered by mains or draw
power from the energy source of the host node. Also, it can operate as a
stand-alone device through ModBus serial communication with other infor-
mation processing systems.
In case an alert is triggered, the information is automatically processed
and the alarm is sent to the specific intervention unit (hospital, police, fire
department, etc.). These have the option of remotely accessing the femtocell
management node in order to verify that the alarm is valid and to act by dispatching
an intervention team to solve the issue. Subsequently, all the alarms generated
over time are classified and stored in a central database for reporting purposes.
experiment, the sensor situated on the knee has ID 104, and the other one,
placed on the left hip, ID 103. In order to overcome the hardware limitations,
the three axes of a tridimensional representation are mapped onto those two
accelerometers as follows:
• X axis: front-back, X axis of node 104;
• Y axis: left-right, X axis of node 103;
• Z axis: bottom-up, Y axis of both nodes.
The following activities were executed during the experiment: standing, sitting
on a chair, standing again and slow walking from the bedroom to the living room.
Data acquisition has been performed using MOTE-VIEW [13], an interface
(client layer) between a user and a deployed network of wireless sensors.
Besides this main function, the user can change or update the individual
node firmware, switch from low-power to high-power mode and set the radio
transmit power. Collected data is stored in a local database and can be accessed
remotely by authorized third parties.
Multiple experiments have been conducted in order to determine
the trust values. In the X and Y axis charts presented in
Figures 11.7 and 11.8, readings outside the green lines indicate that an
event occurred, detected by thresholding, as sketched below. The following
events have been monitored during our experiment, in this order: standing,
sitting on a chair, standing again and slow walking from the bedroom to the
living room. The used sensors are ADXL202E [14].
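A minimal sketch of the thresholding step on one axis; the band limits and sample values are illustrative, not the experimental trust values.

```python
import numpy as np

def detect_events(samples, lower, upper):
    """Return indices of samples outside the trusted band (the 'green
    lines' of Figures 11.7-11.8), i.e. candidate events."""
    s = np.asarray(samples)
    return np.where((s < lower) | (s > upper))[0]

x_axis = [498, 502, 510, 640, 455, 501]   # raw axis readings, illustrative
print(detect_events(x_axis, lower=460, upper=540))  # -> indices 3 and 4
```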
These are low-cost, low-power, complete 2-axis accelerometers with a
digital output, all on a single monolithic IC. They are an improved version
of the ADXL202AQC/JQC. The ADXL202E measures accelerations
with a full-scale range of ±2 g, covering both dynamic acceleration
(e.g., vibration) and static acceleration (e.g., gravity). The outputs
are analog voltages or digital signals whose duty cycles (ratio of pulse width
to period) are proportional to acceleration. The duty cycle outputs can be
directly measured by a microprocessor counter, without an A/D converter
or glue logic. The duty cycle period is adjustable from 0.5 ms to 10 ms via a
single resistor (RSET).
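Converting a measured duty cycle to acceleration is then a one-line computation. The 50% zero-g duty cycle and 12.5%-per-g sensitivity used below are the nominal ADXL202E datasheet values and would be refined by calibration in practice.

```python
def duty_cycle_to_g(t_high, period, zero_g_dc=0.50, sens_per_g=0.125):
    """Acceleration in g from the duty-cycle output: t_high is the measured
    pulse width, period is T2 as set by RSET (0.5-10 ms)."""
    duty = t_high / period
    return (duty - zero_g_dc) / sens_per_g

# Example: 5.25 ms pulse in a 10 ms period -> 52.5% duty cycle -> ~0.2 g.
print(duty_cycle_to_g(t_high=0.00525, period=0.01))
```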
The radio base station is made up of an IRIS radio/processor board con-
nected to a MIB520 USB interface board via the 51-pin expansion connector.
The interface board uses an FTDI chip and provides two virtual COM ports
to the host system: COMx is used for programming the connected mote and
COMx+1 is used by middleware applications to read serial data. The network
protocol used is XMesh, owned by Crossbow and based on the IEEE 802.15.4
standard. Crossbow sensor networks can run different power strategies,
each of them being a trade-off between power, data rate and latency.
XMesh-HP [15] is the best approach for systems that have continuous
power, offering the highest message rate, usually proportional to the baud
rate of the radio. Radios and processors are continually powered, consuming
Because of their small size, nodes can be easily concealed in the
background, interfering as little as possible with the user's day-to-day routine.
We also have the possibility to set the sampling rate at a suitable level in order
to achieve low power consumption and, by this, a long operating range without
human intervention [16].
Our infrastructure also offers routing facilities, increasing the reliability
of our network by self-configuring into a multihop communication system
whenever direct links are not possible. After experimenting with different
topologies, we achieved a working test scenario which consisted of a four-
level multihop communication network, which is more than we expect to be
necessary in any of our deployment locations.
Extensive experimental work has been carried out for the ambient sensor
layer of the system based on MTS400 IRIS sensor nodes. One of the reasons
for choosing this specific board has been that it provides the needed sensors
to gather a variety of environmental parameters, like temperature,
humidity, relative pressure and ambient light. The experimental deployment
consists of three measurement nodes organized in a true mesh topology in a
testing indoor environment. These aim at modeling a real implementation in
the patient's home, and measurements were taken over the course of a week-long
deployment. In order to account for uneven sampling from the sensor nodes, we
use as reference time MATLAB serial time units, which are converted from
conventional time stamp entries of the form dd.mm.yyyy HH:MM:SS in the
MOTE-VIEW database.
In Figure 11.9(a), the evolution of the humidity parameter measured by the
indoor deployed sensors can be seen. The differences are accounted for by node
placement in the different rooms and by exposure to windows and doors.
Subsequent processing can compute average values and other correlations
between ambient and body parameters, feeding an intelligent information
system that can associate variations in ambient humidity and temperature
with influences on chronic disease. Figure 11.9(b) illustrates temperature
variations obtained from the ambient sensor network. These reflect the
circadian evolution of the measured parameter and show the importance of
correct node placement and data aggregation within the sensor network.
Barometric pressure (Figure 11.10(a)) is also observed by the sensor
network over the testing period. This is the parameter least influenced
by node placement and most influenced by general weather trends. Differences
of a few percentage points between individual sensor node values can be
attributed to sensing-element calibration or local temperature compensation.
Aggregating data in this case, too, can lead to higher-quality measurements.
Ambient light values, suitable for feeding data to an intelligent mood lighting
system, are shown in Figure 11.10(b). The light sensor saturates in full
daylight at around 1850 lux and responds quickly to variations in the measured
light. The most important periods of the day are dawn and dusk, when the
information provided can assure a smooth transition between artificial and
natural light.
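As a sketch of how such a transition could be driven (our own minimal illustration, not the system described in the chapter), a dimming setpoint for the artificial lighting can be derived from the measured illuminance, with the 1850 lux saturation level taken as the full-daylight reference:

def artificial_light_level(lux, daylight_lux=1850.0):
    """Return a dimming setpoint in [0, 1] for artificial lighting.
    The setpoint ramps down smoothly as natural light increases,
    reaching 0 at the sensor's assumed full-daylight saturation level."""
    return max(0.0, min(1.0, 1.0 - lux / daylight_lux))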
The data flow coming from the ambient sensor network is processed in multiple
steps: at the source node, in the network (e.g. through aggregation or sensor
fusion), and at the home server or gateway level. Each step converts raw data
into higher-level pieces of information which can be operated on more
efficiently and which become meaningful through correlation and interactive
visualization. Efficient management of this information is critical to correct
operation of the home-monitoring system. Alerts and alarms have to be reliable
and build trust among end users, leading to widespread acceptance, whilst
assuring a high level of integrity, security and user privacy.
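One plausible form of such a raw-to-higher-level conversion at the gateway is sketched below; the function is a hypothetical example of ours, reducing raw (time, node, value) samples to hourly averages per node:

from collections import defaultdict
from statistics import mean

def hourly_averages(samples):
    """Gateway-level aggregation step: reduce raw samples, assumed to be
    (matlab_datenum, node_id, value) tuples, to one average per node and
    per hour (1 hour = 1/24 of a serial day)."""
    buckets = defaultdict(list)
    for t, node, value in samples:
        buckets[(node, int(t * 24))].append(value)  # hour index bucket
    return {key: mean(vals) for key, vals in buckets.items()}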
Table 11.4 summarizes the aggregated experimental deployment values
for the three nodes over the investigated period. The minimum, maximum,
average and standard deviation of the collected time series for each of the
measured ambient parameters are listed.
Making effective use of the large quantities of data generated by the layers
of the femtocell structure, represented by the individual sensor networks,
poses a significant challenge. One idea is to apply computational intelligence
techniques.
11.5 Conclusion
The chapter introduced a system architecture composed of a smart hybrid
sensor network for indoor monitoring, using a multilayer femtocell for
ubiquitous intelligent home monitoring. The main components of the system are
three low-level wireless sensor networks: body, ambient and emergency, along
with a central coordination entity named the femtocell management node.
This also acts as a gateway towards the internet and the interested
stakeholders in an ambient-assisted living scenario. It has been argued that
efficient data collection and processing strategies, along with robust
networking protocols, can enable seamless integration into the patient's home
and bring added value to home care whilst reducing overall medical and
assistance costs. Recent advances in the miniaturization of discrete
electronic components and systems, along with enhanced computing and
communication capabilities of intelligent home devices, offer a good
opportunity in this application area. This can be exploited for both research
and development in the field of ambient-assisted living to increase quality
of life while dealing with increased medical costs.
The main experimental focus was on the body sensor network layer and
the ambient sensor layer, whose experimental deployment and implementation
have been illustrated. First, body-worn wireless accelerometers were used
to detect and classify human activity based on time-domain thresholding.
Second, extensive results from a medium-term deployment of an ambient
sensor network were illustrated and discussed. Its main purpose was
the collection of ambient parameters, like temperature, humidity, barometric
pressure and ambient light, while observing network protocol behaviour. The
main conclusion is that wireless sensor network systems and protocols offer
a reliable option for deployment in home monitoring, given the specific
challenges of indoor environments.
Plans for future development have been established on three main paths.
One direction includes extending the system with more measured parameters
through additional sensor integration with the wireless nodes. The focus here
would be on the body sensor network side, where a deep insight into the
patient's well-being and health status can be gained. Also, while raw data and
machine-learning algorithms can provide high-confidence recommendations
and alerts to the caregivers, data visualization in the home and for the
patient should not be neglected. This can be done by developing adaptive
and intuitive interfaces for the patient or elderly person which enhance
acceptance of the system. The quality and accuracy of the expected results
have to be increased by integrating state-of-the-art sensing, signal
processing and embedded computing hardware, along with the implementation of
advanced methods for experimental data processing.
References
[1] A. Măciucă, D. Popescu, M. Struu, and G. Stamatescu. Wireless
sensor network based on multilevel femtocells for home monitoring.
In 7th IEEE International Conference on Intelligent Data Acquisition
and Advanced Computing Systems: Technology and Applications,
pages 499–503, 2013.
[2] M. Fahim, I. Fatima, Sungyoung Lee, and Young-Koo Lee. Daily life
activity tracking application for smart homes using android smart-
phone. In Advanced Communication Technology (ICACT), 2012 14th
International Conference on, pages 241–245, 2012.
[3] M. Smolen, K. Czopek, and P. Augustyniak. Non-invasive sensors based
human state in nightlong sleep analysis for home-care. In Computing in
Cardiology, 2010, pages 45–48, 2010.
[4] Yibin Hou, Na Li, and Zhangqin Huang. Triaxial accelerometer-based
real time fall event detection. In Information Society (i-Society), 2012
International Conference on, pages 386–390, 2012.
[5] Andrew D. Jurik and Alfred C. Weaver. Remote medical monitoring.
Computer, 41 (4):96–99, 2008.
[6] P. Campbell. Population projections: States, 1995–2025. Technical
report, U.S. Census Bureau, 1997.
[7] Chao-Hung Lin, Shuenn-Tsong Young, and Te-Son Kuo. A remote
data access architecture for home-monitoring health-care applications.
Medical Engineering and Physics, 29(2):199–204, 2007.
[8] Aart Van Halteren, Dimitri Konstantas, Richard Bults, Katarzyna Wac,
Nicolai Dokovsky, George Koprinkov, Val Jones, and Ing Widya. Mobi-
health: ambulant patient monitoring over next generation public wireless
of Bucharest, Romania
Corresponding author: M.C. Caraivan <[email protected]>
Abstract
This paper follows the development of real-time applications for marine
operations, focusing on modern modelling and simulation methods, and
presents a common framework model for multi-purpose underwater sensors
used in offshore exploration. It addresses the deployment challenges of
underwater sensor networks, called "Safe-Nets" by the authors, by using
Remotely Operated Vehicles (ROVs).
12.1 Introduction
The natural disaster following the explosion of the BP Deepwater Horizon
offshore oil-drilling rig in the Gulf of Mexico has raised questions more
than ever about the safety of mankind's offshore oil quests. For three months
in 2010, almost 5 million barrels of crude oil formed the largest accidental
marine oil spill in the history of the petroleum industry. The frequency of
maritime disasters and their effects appear to have increased dramatically
during the last century [1], and this draws considerable attention from
decision makers in communities and governments. Disaster management requires
the collaboration of several management organizations, resulting in
heterogeneous systems. Interoperability of these systems is fundamental in
order to assure effective collaboration between different organizations.
Research efforts in the exploration of offshore resources have increased
during the last decades, contributing to greater global interest in the area
of underwater technologies. Underwater sensor networks are going to become,
in the near future, the background infrastructure for applications enabling
geological prospection, pollution monitoring and oceanographic data
collection. Furthermore, these data collection networks could improve
offshore exploration control by replacing the on-site instrumentation data
systems used today in the oil industry near well heads or in well-control
operations, e.g. underwater webcams which can provide important visual aid
for surveys or for offshore drilling explorations. These facts lead to the
idea of deploying multi-purpose underwater sensor networks alongside oil
companies' offshore operations. This study tries to show the collateral
benefits of deploying such underwater sensor networks, and we address
state-of-the-art ideas and possible implementations of different
applications, like military surveillance of coastal areas, assisted
navigation [2] or disaster prevention systems (including earthquake and
tsunami detection with advance warning alarms), all in order to overcome the
biggest challenge of development: the cost of implementation.
It is instructive to compare current terrestrial sensor network practices
to underwater approaches: terrestrial networks emphasize low-cost nodes
(around US$100 at most), dense deployments (at most a few hundred metres
apart) and multi-hop short-range communication. By comparison, typical
underwater wireless communications today are expensive (US$10,000 per
node or even more) and sparsely deployed (a few nodes, placed kilometres
apart), typically communicating directly with a "base station" over long
ranges rather than with each other. We seek to reverse the design points
which make land networks so practical and easy to expand, so that underwater
sensor nodes can be inexpensive, densely deployed, and able to communicate
peer-to-peer [3].
Multiple Unmanned or Autonomous Underwater Vehicles (UUVs, AUVs),
equipped with underwater sensors, will also find application in exploration.
The proposed framework model provides multiple layers and drawers for
components which can be used for different purposes, but mainly for
underwater data collection and monitoring. This development, using Enterprise
Architecture Principles, is sustainable through time, as it is backed by
different solutions to our research challenges, such as the power supply
problem, fouling and corrosion, self-configuration, self-troubleshooting
protocols, communication protocols and hardware methods.
Two-thirds of the surface of the Earth is covered by water and, as history
has proved, there is a constantly increasing number of ideas for using this
space. One of the most recent is perhaps moving entire buildings of servers,
Google's data centres [8], overseas, literally, because of their tremendous
cooling needs. These produce a heat footprint clearly visible even from
satellites; by transferring them to the offshore environment, their
overheating problems could be solved by cheaper cooling methods relying on
the almost constant temperature of ocean seawater. We also discuss the
electrical power supply possibilities further in the following section.
generators, wind turbines, gas pressure turbines. We can overcome this design
issue with cable connections to jackets or to autonomous buoys with solar
panels, which are currently undergoing research [9, 10].
Industrial applications such as oil fields and production lines use extensive
instrumentation, sometimes needing video feedback from the underwater
operations site. Considering the depths at which these cameras would operate,
there is also an imperative need for proper lighting of the area; therefore,
we can anticipate that these nodes will be tethered in order to have a power
source at hand.
Battery power problems can, in our case, be overcome not only by sleep-awake
energy-efficient protocols [11–13], but also by having connectivity at hand
to future systems producing electricity from renewable resources, like the
wave energy converter types catalogued by the European project Aquatic
Renewable Energy Technologies (Aqua-RET) [14]:
• Attenuator-type devices (Figure 12.1): Pelamis Wave Energy Converter [15];
• Axially symmetric point absorbers (Figure 12.2): WaveBob [16],
AquaBuoy, OE Buoy [17] or PowerBuoy [18];
• Wave-level oscillation converters: the completely submerged WaveRoller or
the surface-mounted Oyster [19];
• Overtopping devices (Figure 12.3): Wave Dragon [20].
12.2.2 Communications
Until now, there have been several attempts to deploy underwater sensors
that record data during their mission, but the sensors were always recovered
afterwards. This did not give the flexibility needed for real-time monitoring
situations like surveillance or environmental and seismic monitoring. The
recorded data could not be accessed until the instruments were recovered. It
was also impossible to detect failures before retrieval, and this could
easily lead to the complete failure of a mission. Also, the amount of data
stored was limited by the capacity of the on-board devices (flash memories,
hard disks).
Two possible implementations are buoys with high-speed RF-based
communications, or wired connections to some sensor nodes. Communication
bandwidth can also be provided by satellite connections, which are usually
present on offshore facilities. If linked to an autonomous buoy, the device
provides GPS telemetry and has communication capabilities of its own.
Therefore, once the information gets to the surface, radio communications
are considered to be already provided as standard. Regarding underwater
communications, the typical physical layer technology is acoustic
communication. Radio waves have long-distance propagation issues through
sea water and can only be used at extra low frequencies, below 300 Hz [22].
This requires large antennae and high transmission power, which we would
prefer to avoid. Optical waves do not suffer from such high attenuation but
are affected by scattering; moreover, transmission of optical signals
requires high precision in pointing the narrow laser beams. The primary
advantage of this type of data transmission is the higher theoretical
transmission rate, while the disadvantages are the limited range and the need
for line-of-sight operation. We did not consider this a feasible solution
due to marine snow, non-uniform illumination issues and other possible
interferences.
We do not intend to mix different communication protocols with different
physical layers, but we analyze the compatibility of each with existing
underwater acoustic communications, state-of-the-art protocols and routing
algorithms. Our approach will be a hybrid system, like the one in Figure 12.6,
incorporating both tethered sensors and wireless acoustic links where
absolutely no other solution can be implemented (e.g. a group of sensor
nodes anchored to the sea floor near an oil pipe, interconnected to one or
more underwater "sinks" which are in charge of relaying data from the ocean
bottom network to a surface station [23]).
Regarding the propagation of acoustic waves in the frequency range we are
interested in for the multi-level communication between Safe-Net sensor
nodes, we rely on already known models [24]. One of the major problems in
fluid dynamics is that the equations of motion are non-linear, which implies
that there is no general exact solution. Acoustics represents the first order
of approximation, in which the non-linear effects are neglected [25].
Figure 12.6 Possible underwater sensor network deployment nearby a Jack-up rig.
Acoustic waves propagate because of the medium's compressibility; the
acoustic (or sound) pressure represents the local deviation of the pressure
whose root cause can be traced back to a sound wave acting against the local
environment. In air, the sound pressure can be measured using a microphone,
while in water it is measured using a hydrophone.
Considering the case of acoustic wave propagation in real fluids, for our
general mathematical formalism we have made the following assumptions:
gravity forces can be neglected, so equilibrium pressure and density have
uniform values over the fluid's volume (p0 and ρ0); dissipative effects such
as viscosity and thermal conductivity are negligible; the medium is
homogeneous, isotropic and perfectly elastic; and the fluid particles undergo
only small displacements. Under these assumptions, the state equation can be
expanded in a Taylor series:
$$p = p_0 + \left.\frac{\partial p}{\partial \rho}\right|_{\rho=\rho_0}(\rho-\rho_0) + \frac{1}{2}\left.\frac{\partial^2 p}{\partial \rho^2}\right|_{\rho=\rho_0}(\rho-\rho_0)^2 + \dots, \qquad (12.1)$$
where the partial derivatives are constant for the adiabatic process around
the ρ0 equilibrium density of the fluid.
If the density fluctuations are small, meaning ρ̄ ≪ ρ0, then the higher-order
terms can be dropped and the adiabatic state equation becomes linear:
$$p - p_0 = K\,\frac{\rho-\rho_0}{\rho_0}. \qquad (12.2)$$
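Here K is the adiabatic bulk modulus. As a complementary step (standard in acoustics, though not spelled out in the original), note that the sound speed satisfies $c^2 = \left.(\partial p/\partial \rho)\right|_{\rho_0} = K/\rho_0$, so (12.2) can equivalently be written as

$$p - p_0 = c^2\,(\rho - \rho_0).$$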
The pressure p generated by the sound (12.4) is directly related to the
particle velocity v and the displacement amplitude ξ through equation (12.3):
$$\xi = \frac{v}{2\pi f} = \frac{v}{\omega} = \frac{p}{Z\omega} = \frac{p}{2\pi f Z}, \qquad (12.3)$$

$$p = \rho c\,2\pi f\,\xi = \rho c\,\omega\xi = Z\omega\xi = 2\pi f \xi Z = \frac{aZ}{\omega} = Zv = c\sqrt{\rho E} = \sqrt{\frac{P_{ac}\,Z}{A}}, \qquad (12.4)$$
where the symbols, together with their S.I. measurement units, are presented
in the following table:
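As a numerical illustration of (12.3) and (12.4), the following Python fragment (our own sketch, with assumed textbook values for seawater density and sound speed) computes the particle displacement amplitude for a given sound pressure and frequency:

import math

RHO = 1025.0   # assumed seawater density, kg/m^3
C = 1500.0     # assumed sound speed in seawater, m/s
Z = RHO * C    # characteristic acoustic impedance Z = rho*c, ~1.54e6 Pa*s/m

def particle_displacement(p_pa, f_hz):
    """Particle displacement amplitude xi = p / (2*pi*f*Z), per (12.3)."""
    return p_pa / (2.0 * math.pi * f_hz * Z)

# A 100 Pa tone at 10 kHz displaces seawater particles by only about 1 nm.
print(particle_displacement(100.0, 10e3))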
The fundamental attenuation describes the power loss of a tone at frequency
f as it travels across a distance d. The first level of our summary
description takes into consideration this loss, which occurs over the
transmission distance d. The second level calculates the site-specific loss
caused by reflections and refractions at the upper and lower surfaces, i.e.
the sea surface and bottom, and also the sound speed variations due to depth
differences; the result is a better prediction model for a specific
transmitter. The third level addresses the apparently random power shifts of
the received signal, by considering an average over a period of time. These
changes are due to slow variations of the propagation environment, e.g. tidal
waves. All these phenomena are relevant for determining the transmission
power needed to accomplish efficient and successful underwater communication.
We can also think of a separate model addressing much faster changes of the
instantaneous signal power at any given time, but on a far smaller scale.
The signal-to-noise ratio for different transmission distances, as a function
of frequency, can be viewed in Figure 12.8. Sound absorption limits the
bandwidth which can be used for transmission and makes it dependent on
distance:
By evaluating the quantity A(d,f)·N(f) as a function of the ideal propagation
attenuation A(d,f) and of the typical power spectral density of the
background noise N(f), which drops about 18 dB per decade, we find the
combined effect of attenuation and noise, which determines the frequency
band best suited to each transmission distance.
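To make this trade-off concrete, the following Python sketch evaluates 1/(A(d,f)·N(f)) using Thorp's empirical absorption formula and the 18 dB-per-decade noise slope mentioned above; the spreading exponent k and the noise reference level are our own assumptions, chosen only for illustration:

import math

def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical absorption coefficient (dB/km, f in kHz)."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1.0 + f2) + 44.0 * f2 / (4100.0 + f2)
            + 2.75e-4 * f2 + 0.003)

def attenuation_db(d_km, f_khz, k=1.5):
    """Path loss 10*log10 A(d,f): spreading (exponent k) plus absorption."""
    return (k * 10.0 * math.log10(d_km * 1000.0)
            + d_km * thorp_absorption_db_per_km(f_khz))

def inv_an_db(d_km, f_khz, noise_ref_db=50.0):
    """Relative 1/(A(d,f)*N(f)) in dB, with N(f) dropping 18 dB/decade."""
    noise_db = noise_ref_db - 18.0 * math.log10(f_khz)
    return -(attenuation_db(d_km, f_khz) + noise_db)

# The frequency maximizing 1/(A*N) shrinks as the distance grows:
for d in (1.0, 10.0, 50.0):
    best = max(range(1, 101), key=lambda f: inv_an_db(d, float(f)))
    print("d = %5.1f km -> best band around %d kHz" % (d, best))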
12.2.3 Maintenance
The ocean can be a very harsh environment, and underwater sensors are prone
to failures because of fouling and corrosion. The sensor's construction could
include one miniaturized copper-alloy anode for anti-corrosion, as well as
one miniaturized aluminium-alloy anode to fight fouling. Modern anti-fouling
systems already installed on rigs use microprocessor-controlled anodes; the
current flowing to each anti-fouling and anti-corrosion anode is quite low,
and the technology could be adapted by miniaturizing the existing anodes.
Although we are considering the environmental impact of deploying such a high
number of underwater devices, our primary concerns are the feasibility and
durability of the network, and how we can address these factors so as to be
able to expand the network through time while keeping its enlargement
backwards compatible with already deployed nodes. Besides the communication
protocols being backwards compatible, underwater Safe-Net nodes must possess
self-configuration capabilities, i.e. they must be able to coordinate their
operation, location or movement and their data communication handshake
protocols by themselves. So, to state the obvious, this is only possible if
the Safe-Net nodes are resistant enough to the salty, corrosive sea water.
aerial drones in the U.S.A., which could take high-resolution pictures of the
agricultural terrain; the algorithm targeted correlation with data from
meteorological stations on the ground by matching these pictures with the
low-resolution ones taken from satellites. The purpose was to introduce a
new methodology for transforming low-resolution remote sensing data about
soil moisture into higher-resolution information that contains better
knowledge for use in hydrologic studies or water management decision making.
The goal of the study was to obtain a high-resolution data set with the help
of a combination of ground measurements from instrumentation stations and
low-altitude remote sensing, typically images obtained from a UAV. The study
introduces optimal trajectories and launching points of UAV remote sensors in
order to solve the problem of maximum terrain coverage using the least
hardware, which was also expensive in their case.
We have taken this study further by matching the agricultural terrain
to our underwater environment and making an analogy between the fixed
ground instrumentation systems and meteorological stations, and all the
fixed offshore structures already in place throughout the sea. The mobile
drones are represented by remotely operated vehicles or autonomous
underwater vehicles, which can carry data collection sensors on board and can
be used as mobile network nodes. The optimisation of the best distribution
pattern of the nodes in the underwater environment can be extrapolated only
by neglecting the environmental constants, which were not taken into account
by the study [33]. This issue remains to be investigated.
$$\langle v, \cdot \rangle = \langle v, \cdot \rangle_{L^2(D)}. \qquad (12.9)$$
When D does not depend on time, the actuator (D, g) is said to be fixed or
stationary. Otherwise, it is a moving or mobile actuator, denoted by
(Dt, gt), where D(t) and g(t) are, respectively, the geometrical support and
the spatial distribution of the actuator at time t, as in Figure 12.11:
Figure 12.11 Illustration of the geometrical support and spatial distribution of an actuator.
12.4 ROV
A remotely operated vehicle (ROV) is a non-autonomous underwater robot.
ROVs are commonly used in deep-water industries such as offshore
hydrocarbon extraction. An ROV may sometimes be called a remotely operated
underwater vehicle to distinguish it from remote-control vehicles operating
on land or in the air. ROVs are unoccupied, highly manoeuvrable, and
operated by a person aboard a vessel by means of commands sent through a
tether. They are linked to the ship by this tether (sometimes referred to
as an umbilical cable), a group of cables that carry electrical power, video
and data signals back and forth between the operator and the vehicle. ROVs
are used at offshore oilfield production sites, for underwater pipeline
inspection, welding operations and subsea BOP (Blow-Out Preventer)
manipulation, as well as other tasks:
• Seabed Mining – deposits of interest: gas hydrates, manganese nodules,
metals and diamonds;
• Aggregates Industry – used to monitor the action and effectiveness of
suction pipes during extraction;
• Cable and Node placements – 4D or time lapse Seismic investigation of
crowded offshore oilfields;
In the Black Sea area, operating along Romania's territorial coast line,
we identified four working-class ROVs, of which two are manufactured by
Perry Slingsby (U.K.): one Triton XLX and one Triton XLR, the first prototype
of its kind, which led to the models used in our simulation.
Our aim is to deploy Safe-Net underwater sensors in the areas surrounding
offshore oil and gas drilling operations; the skill sets needed are
3D Studio Max modelling and ".lua" script programming.
In order to safely deploy our Safe-Nets' sensor balls into the water and fix
them to the metallic structures of jack-up rigs or to any other offshore
constructions, we first develop models of those structures and include them
in a standard fly-alone ROV simulation scenario. This is a two-step process,
as any object's model has to be created in the 3D Studio Max software before
it can be inserted programmatically into the simulation scenario. The
simulation scenarios are initialized by a series of Lua scripts; the language
is very similar to C++, and the VMAX scenario creation framework is open
source. The scripts are plain text files that can be edited with many
programs, including Microsoft Windows Notepad. The file names end with the
.lua extension, and the recommended editor is jEdit, an open-source editor
which requires the installation of the Java Runtime Environment (JRE).
We have altered the simulation scenarios, as can be seen in Figures 12.18
and 12.19, in order to obtain a better model of the Black Sea floor along
Romania's coast line, which usually contains more sand because of the Danube
sediments coming from the Danube Delta. Geologists working on board the
Romanian jack-ups considered the sea floor in the VMAX ROV Simulator very
similar to the one in the geological and oil-petroleum interest zones up to
150-160 miles out at sea. Throughout these zones the water depth does not
exceed 80-90 m, which is the limit at which drilling jack-up rigs can
operate (the legs are 118 m long).
The open-source simulator was the starting point for a scenario in which we
translated the needs of the ROV, in terms of sensor handling, tether
positioning and pilot techniques, combined with the specifications of the
sea floor where the Safe-Nets will be deployed. The scenarios are initialized
by a series of .lua scripts, and the typical hierarchical file layout is
presented in Figure 12.20.
The resources are divided into two large classes of information: Scenarios-
related data and Assets. The former contains among others: Bathymetry,
Lua, Manipulators, Tooling, TMS (Tether Management System), Vehicles,
Components and IP (Internet Protocol communications between assets).
Bathymetry directory contains terrain information about a specific loca-
tion, where we could alter the sand properties on the sea floor. The terrain
stored here may be used across several scenarios. We could add a graphic
asset by using the template for the bathymetry part. The collision geometry
can be generated later, based on the modelled geometry. The bathymetry
template defines, among other things, the water current and depth, e.g.:
currentDirectionTable = { 0 },
currentSpeedTable = { 1 },
depthTable = { 0 }}
The bathymetry template uses triangle mesh collision for the terrain. This
provides collisions that are contoured to the bathymetry model.
The Manipulators directory contains sub-directories for each arm; each
sub-directory contains a model file with a Lua script function used to create
the manipulator and add it to the ROV. We plan to create a new manipulator
usable for each sensor deployment case.
The Tooling directory holds either categories of tools or uncategorized ones,
each having a model file (".ive" or ".flt") and a Lua script file with the
code to create that specific tool [41].
Whereas the typical training scenarios mainly include a fly-around to get
the pilot and assistant pilot used to the ROV commands, we used the auxiliary
commands to simulate planting the Safe-Net around a jacket or a buoy, for
example. As far as the training scenario is concerned, we covered the basics
needed for a pilot to get around a jacket safely, carrying some sort of
object in the Titan 4 manipulator arm without dropping it and without the
ROV's tether tangling with the jacket's metallic structure. The tether
contains hundreds of fibre-optic cables covered with a Kevlar reinforcement,
but even with this strengthened cover it is recommended that no more than
4 full 360° turns are made in one direction, clockwise or counter-clockwise,
in order to avoid any loss of communication between the control console and
the ROV itself. Any interaction between the tether and the metallic structure
could represent a potential threat to the ROV's integrity.
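A simple way to enforce this limit in software is to accumulate the signed heading changes of the ROV and warn when the total approaches four full turns; the sketch below is our own illustration, assuming headings are sampled often enough that successive changes stay below 180°:

def check_tether_turns(headings_deg, max_turns=4):
    """Track cumulative one-direction rotation against the tether limit.

    headings_deg -- sequence of ROV headings in degrees.
    Returns (ok, cumulative_rotation_deg)."""
    total = 0.0
    for prev, cur in zip(headings_deg, headings_deg[1:]):
        delta = (cur - prev + 180.0) % 360.0 - 180.0  # shortest signed change
        total += delta
        if abs(total) >= max_turns * 360.0:
            return False, total  # limit reached: unwind before continuing
    return True, total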
ensure the sealing of the device: the upper and lower hemispheres close on
top of an O-ring seal, which can additionally be lubricated with
water-repellent grease. We have also designed a unidirectional valve through
which a vacuum pump can evacuate the air inside; the vacuum strengthens the
seal against water pressure. In Figure 12.22, we present a few prototypes
which we have tried to model and simulate.
Within the same common modular framework, we have thought of a
multi-deployment method for three or more sensors at the same time. The
following ideas actually arose from repeated failed trials in which an ROV
had to grab and hold a Safe-Net sensor long enough to place it on a hook
hanging from an autonomous buoy above the sea surface, while affected by wave
length and height. Because of the real difficulties encountered, especially
when introducing higher waves into the scenario, we devised a way to get the
job done more quickly (Figure 12.23):
Moreover, the spherical framework of the sensor, the basic node of the
Safe-Net, proved very difficult to handle using the simple manipulator, as it
tends to slip, and the objective is to carry it without dropping it.
Therefore, we designed a "cup-holder" shape that grabs the sphere more
easily; if the sphere carries a cable connection, the cable is not disturbed
by the grabber, as can be seen in Figure 12.24:
12.7 Conclusions
Most state-of-the-art solutions for underwater sensor networks rely on
task-specific sensors, which are developed and launched by different means
and with no standardization. The studies we found usually use battery power
and all sorts of resilient algorithms to minimize battery drain, together
with sleep-awake node states, and the nodes are finally recovered from the
water in order to retrieve the collected data. Our approach tries to regulate
the ways of deploying and fixing the sensors to offshore structures and,
moreover, to offer solutions to more than one application task. This may seem
a general approach, but it is needed in order to avoid deploying nodes of
different technologies which afterwards will not be able to communicate with
each other. The development of a virtual environment-based training system
for ROV pilots could be the starting point for deploying underwater sensor
networks worldwide, as these are the people who will actually be in the
position to implement them.
This chapter has investigated the main challenges for the development of an
efficient common framework for multi-purpose underwater data collection.
References
[1] K. Eshghi and R. C. Larson, ‘Disasters: lessons from the past 105 years’,
Disaster Prevention and Management, Vol. 17, pp.61–82, 2008.
[2] D. Green, ‘Acoustic modems, navigation aids and networks for undersea
operations’, IEEE Oceans Conference Proceedings, pp.1–6, Sydney,
Australia, May 2010.
[3] J. Heidemann, Y. Li and A. Syed, ‘Underwater Sensor Networking:
Research Challenges and Potential Applications’, USC Information
Sciences Institute, USC/ISI Technical Report ISI-TR-2005–603, 2005.
[4] T. Melodia, Ian F. Akyildiz and D. Pompili, ‘Challenges for Effi-
cient Communication in Underwater Acoustic Sensor Networks’, ACM
Sigbed Review, vol.1, no.2, 2004.
[5] A. Cerpa et al., ‘Habitat monitoring: Application driver for wireless
communications technology’, ACM SIGCOMM Workshop on Data
Communications in Latin America and the Caribbean, San Jose, Costa
Rica, 2001.
[6] D. Whang, N. Xu and S. Rangwala, ‘Development of an embedded
sensing system for structural health monitoring’, International Workshop
on Smart Materials and Structures Technology, pp. 68–71, 2004.
[7] D. Steere, A. Baptista and D. McNamee, ‘Research challenges in envi-
ronmental observation and forecasting systems’, 6th ACM International
Conference on Mobile Computing and Networking, Boston, MA, USA,
2000.
Abstract
Machine-to-machine communication (M2M) is one of the major innovations
in the ICT sector. Especially in agricultural business, with heterogeneous
machinery, diverse process partners and high machine operating costs, M2M
offers large potential for process optimization. Within this paper, a concept
for process optimization in agricultural business using M2M technologies is
presented by means of three application scenarios. Within that concept,
standardization, communication and security aspects are discussed.
Furthermore, corresponding business models building on the presented
scenarios are discussed and results from an economic analysis are presented.
13.1 Introduction
Machine-to-machine communication (M2M) is currently one of the major
innovations in the ICT sector. The agricultural sector is characterized by
heterogeneous machinery, diverse process partners and high operational costs.
The data is visualized within the portal and helps the farmer to optimize
business processes to meet documentation requirements or to build data-based
business models. Especially when it comes to complex and detailed records
of many synchronized machines, the system shows its advantages.
Communication between machines takes place either directly from
machine to machine or via a mobile communication network (e.g. GSM or
UMTS). Within agricultural processes operating in rural areas, the availability
of mobile communication networks is not always given. There are two
strategies to increase the availability of network coverage:
• National roaming SIM cards;
• Femtocells.
With national roaming SIM cards, which are able to roam into all available
networks, the availability of mobile network coverage can be increased, while
with standard SIM cards only one network can be used in the home country
[10]. National roaming SIM cards operate in a country different from their
home location (e.g. a Spanish SIM card operating in Germany). The SIM card
can roam into any available network as long as the issuing provider and the
network operator have signed a roaming agreement. Although network coverage
can be increased, a communication channel cannot be guaranteed.
With femtocells [2], a dedicated base station is placed on the field where
the machines are operating; the concept is presented in Figure 13.3. Machines
communicate with the base station, e.g. via WLAN or GSM/UMTS, while the base
station is connected to the portal by GSM/UMTS or a satellite connection.
The location of the femtocell base station should be chosen such that
coverage is given at every location within the corresponding area, either via
the femtocell or via a direct connection to a mobile network. This strategy
enables communication even without network coverage by the operator. However,
the implementation effort is significantly higher than when using national
roaming SIM cards.
portal to give the production manager the opportunity to optimize the process
in near real-time. Before starting the process, a production plan is prepared
by the production manager, either manually with support from the system or
automatically by the system. Within the plan, machines are allocated time
and position data. When the system registers a plan deviation, the plan is
updated either manually or automatically. This approach allows reducing idle
times, saving costs and resources.
Table 13.1 Revenue, costs and business potential for partners along the M2M value chain

Partner                      Role                                      Revenue Development (per unit)                        Cost Development (per unit)    Business Potential
Module manufacturer          Manufacturer of black-boxes               Constant                                              Declining                      +
Machine manufacturer         Manufacturer of machines                  Progressive                                           Declining                      ++
Mobile network operator      Data transport, SIM management            Constant                                              Declining                      +
3rd-party software provider  Software developer, application provider  Constant/progressive (depending on business model)    Depending on business model    +
Portal provider              Portal operator                           Progressive                                           Declining                      ++
The module manufacturer produces the black-boxes (see Figure 13.2) built
into the machines or used to retrofit older machines. Revenues for module
manufacturers mostly come from black-box sales. Costs per unit are expected
to decline with an increasing number of sold units.
The machine manufacturer's revenues come from machine sales as well
as service delivery and savings due to remote software updates. The cost of
development per unit is expected to decline with the increasing number of
sold units.
The mobile network operator’s role is to deliver data through a mobile
network. SIM card management may also be done by the network operator
but can also be done by an independent partner. Revenues consist of fees
for data traffic as well as service fees for SIM card supply and management.
Additional costs for extra data volume over an existing network are very low.
Third-party software providers can be part of the value chain; however,
this is not compulsory. They either supply an application bringing additional
functions to the machinery or implement their own business model based on
the data held in the portal.
The software is sold through the portal and delivered to the machinery by
the remote software update process described above. The development of
revenues per unit depends on the business model employed: when only software
is sold, revenues per unit are constant; with additional business models,
revenues may also develop progressively.
The variable costs for the business cases can be calculated using the
equation system (13.1).
$$\begin{aligned}
c_1 &= a_{11}b_1 + a_{12}b_2 + \dots + a_{1n}b_n\\
&\;\;\vdots\\
c_m &= a_{m1}b_1 + a_{m2}b_2 + \dots + a_{mn}b_n,
\end{aligned} \qquad (13.1)$$
The system of linear equations is built from the relation matrix A = (aij)
and the transfer price vector B = (bj); the vector C = (ci) of variable costs
can then be represented by Equation (13.2).
C = A · B. (13.2)
Using the vector D = (di), consisting of the sales prices of the finally
receiving services, Equation (13.3) leads to the vector M = (mi) of gross
margin per unit for all business cases, i.e. the finally receiving services.
M = D − A · B. (13.3)
Figure 13.7 exemplifies the input matrix A and the vectors B and D with
estimated quantities. In matrix A, the rows indicate the business cases PT
(row 1), ODA (row 2) and RSU (row 3); the columns represent delivering
services, indicated as white circles in Figure 13.6. The elements of vector B
represent the transfer prices of delivering services, and the elements of
vector D represent the sales prices of the three business cases.
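The computation of (13.2) and (13.3) is straightforward to reproduce; the NumPy sketch below uses illustrative numbers of our own, since the actual values of Figure 13.7 are not reproduced here:

import numpy as np

# Rows of A: business cases PT, ODA, RSU; columns: delivering services.
# All values below are illustrative assumptions, not the project's figures.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])
B = np.array([0.50, 0.30, 0.20])   # transfer prices of delivering services
D = np.array([2.00, 1.50, 1.80])   # sales prices of the business cases

C = A @ B        # variable costs per unit, Equation (13.2)
M = D - A @ B    # gross margin per unit, Equation (13.3)
print("C =", C)  # [1.1, 0.5, 0.9] with these example numbers
print("M =", M)  # positive for every business case in this example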
The results of the economic analysis are shown in Figure 13.8. Elements of
the calculated vector C indicate the variable costs of the three business
cases. It can be seen that the marginal return per unit is positive for all
three business cases, with the highest marginal return for the business case
Process Transparency.
Figure 13.7 Relation matrix A, transfer price vector B and sales price vector D.
Figure 13.8 Vector of variable costs C and vector of marginal return per unit M.
13.7.1 CA
The central instance of the security structure is a CA (Certificate
Authority), which provides services like issuing certificates (upon receiving
a CSR (Certificate Signing Request)), revoking certificates and checking
whether certificates have been revoked (through CRLs/OCSP). During the key
creation process, a CSR is created and passed to the PKI. The CSR is signed
and the certificate is sent back to the device (machine).
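For illustration, the key-and-CSR step can be reproduced with the Python cryptography library as sketched below; the subject names are hypothetical placeholders, and the actual field contents used in the project are not specified here:

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import Encoding
from cryptography.x509.oid import NameOID

# Key pair generated on the machine; the private key never leaves the device.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build and sign a CSR identifying the machine (names are examples only).
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "harvester-0042"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Farm"),
    ]))
    .sign(key, hashes.SHA256())
)

# The PEM-encoded CSR is what gets passed to the PKI / CA for signing.
print(csr.public_bytes(Encoding.PEM).decode())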
13.8 Resume
This paper has presented a concept for the optimization of the process
information chain to improve efficiency in the agricultural harvesting
process. Machine-to-machine communication plays a central role in
synchronizing data between diverse process partners.
The information gathered by sensors on agricultural machines plays the
central role in building new business models. The business model analysis
shows that all parties along the value chain gain good business potential.
It has been shown that the three described business models can be operated
with a positive marginal return per unit under the assumptions made in the
project.
However, security issues and business models play an important role in
successful system operation. With the described security measures, system
operation can ensure confidentiality, integrity as well as availability.
13.9 Acknowledgement
The presented work was done in the research project “M2M-Teledesk”, funded
by the state government of North Rhine-Westphalia and the European Union
Fund for Regional Development (EUROPÄISCHE UNION - Europäischer Fonds
für regionale Entwicklung - Investition in unsere Zukunft). Project partners
of M2M-Teledesk are the University of Applied Sciences and Arts, Dortmund,
VIVAI Software AG, Dortmund, and Claas Selbstfahrende Erntemaschinen
GmbH, Harsewinkel.
References
[1] K. Moummadi, R. Abidar and H. Medromi, ‘Generic model based on
constraint programming and multi-agent system for M2M services and
agricultural decision support’, In Multimedia Computing and Systems
(ICMCS), 2011:1–6.
[2] G. Wu, S. Talwar, K. Johnsson, N. Himayat and K. Johnson, ‘M2M: From
Mobile to Embedded Internet’, In IEEE Communications Magazine,
2011:36–42.
[3] Y. Daesub, C. Jongwoo, K. Hyunsuk and K. Juwan, ‘Future Automotive
Insurance System based on Telematics Technology’, In 10th Interna-
tional Conference on Advanced Communication Technology (ICACT),
2008:679–681.
[4] A. Juliandri, M. Musida and Supriyadi, ‘Positioning cloud computing in
machine to machine business models’, In Cloud Computing and Social
Networking (ICCCSN), 2012:1–4.
[5] V. Goncalves and P. Dobbelaere, ‘Business Scenarios for Machine-to-
Machine Mobile Applications’, In Mobile Business and 2010 Ninth
Global Mobility Roundtable (ICMB-GMR), 2010.
Ship scale model 49, 53, 61
simulation 46, 83, 123, 153, 177
Simulink 45, 206
slider module 8, 11, 15, 19
slip displacement 168, 172
slippage registration 168
software 53, 86, 141, 286, 300
space robot 95, 104
spectral density 33, 44
speed control 55, 141
speed gradient 147, 165
stability diagrams 61, 67
Standardization 296
static characteristic 174, 178
stroboscopic transformation 218
structural identification 27, 33, 47
system architecture 51, 235, 245

T
tactile sensor 112, 167
task training by demonstration 95, 99, 113, 115
TDR 216
teaching 101, 106, 148
Telecontrol 95, 104, 113
telescopic module 8, 11, 17
testing 22, 49, 59, 88, 194
tests 3, 49, 62, 214
topology 3, 16, 97, 138, 236
towing tank 49, 70
towing tank tests 49, 62
training 64, 95, 102, 110, 290
training by demonstration 95, 102, 119
trajectory 100, 124, 153, 184
transfer function 30, 47, 57
transient process 155, 160
trial motion 168, 173, 187
two-way HTTPS connection 307

U
UART 198
uncertain dynamics 147
uncertainty 77, 147, 160
underwater component 257
Underwater robot 149, 156, 165, 277
underwater sensors 257, 268, 282
USART 198

V
value chain 302, 309
variable costs 305
VB.NET 54
vector 28, 36, 107, 153, 306
vehicle 124, 133, 193, 207, 257
virtual coordinate frame 132, 135
Visual saliency 73
VMAX 259, 278, 283

W
weight coefficient 148, 156, 164
wireless sensor network 235, 242, 253
WLAN security 300, 308

X
XBEE 198
Editor’s Biographies
Richard J. Duro received the B.Sc., M.Sc., and Ph.D. degrees in physics
from the University of Santiago de Compostela, Spain, in 1988, 1989, and
1992, respectively.
He is currently a Full Professor in the Department of Computer Science
and head of the Integrated Group for Engineering Research at the University
of A Coruña, Coruña, Spain. His research interests include higher order neural
network structures, signal processing, and autonomous and evolutionary
robotics.
Author’s Biographies
Francisco Bellas received the B.Sc. and M.Sc. degrees in physics from the
University of Santiago de Compostela, Spain, in 1999 and 2001, respectively,
and the Ph.D. degree in computer science from the University of A Coruña,
Coruña, Spain, in 2003.
He is currently a Profesor Contratado Doctor at the University of A
Coruña. He is a member of the Integrated Group for Engineering Research
at the University of A Coruña. His current research interests are related
to evolutionary algorithms applied to artificial neural networks, multiagent
systems, and robotics.
Mitru Corneliu Caraivan received his B.Sc. degree in Automatic Control
and Computers in 2009 from Politehnica University of Bucharest, Romania.
Following his master thesis at the Fakultät für Ingenieurwissenschaften,
University of Duisburg-Essen, Germany, he earned the Ph.D. degree in Systems
Engineering in 2013 from Politehnica University of Bucharest. Since 2009
he has been a part-time Assistant Professor in the Faculty of Applied
Sciences and Engineering, Ovidius University of Constanta, Romania. His main
research
interests focus on offshore oil and gas automatic control systems on drilling
and exploration rigs, programmable logic controller networks, instrumentation
data sensors and actuators, systems redundancy and reliability. Since 2010,
he has gained experience on offshore jack-up rigs as IT/Electronics Engineer
focusing on specific industry equipment: NOV Amphion Systems, Cameron
(former TTS-Sense) X-COM Cyber-Chairs 5th gen, drilling equipment instru-
mentation, PLCs, SBCs, fire and gas alarm systems, satellite communications
and networking solutions.
Valentin Dache received his B.Sc. degree in Automatic Control and
Computers in 2008 from the Politehnica University of Bucharest, Romania.
Following his master thesis at the same university, he is currently a Ph.D.
student in Systems Engineering, with ongoing research focusing on intelligent
building management systems. Other interests include the field
Álvaro Deibe Díaz received the M.S. degree in Industrial Engineering in 1994
from the University of Vigo, Spain, and a Ph.D. in Industrial Engineering in
2010 from the University of A Coruña. He is currently Titular de Universidad in
the Department of Mathematical and Representation Methods at the same
University. He is a member of the Integrated Group for Engineering Research
at the University of A Coruña. His research interests include Automation and
Embedded Systems.
Andrés Faíña received the M.Sc. and Ph.D. degrees in industrial engineering
from the University of A Coruña, Spain, in 2006 and 2011, respectively.
He is currently a Postdoctoral Researcher at the IT University of
Copenhagen, Denmark. His interests include modular and self-reconfigurable
robotics, evolutionary robotics, mobile robotics, and electronic and mechani-
cal design.
Boris Gordeev received the PhD degree from Leningrad Institute of High-
precision Mechanics and Optics, Leningrad, USSR, in 1987 and Doctor of
Science degree from Institute of Electrodynamics of National Academy of
Sciences of Ukraine in Elements and Devices of Computer and Control
Systems in 2011.
He is currently a Professor of Marine Instrumentation Department at
National University of Shipbuilding, Ukraine. His current research interest is
related to polymetric signal generation and processing for measuring systems.
322 Author’s Biographies
Andrei Maciuca obtained his PhD in 2014 from the Department of Automatic
Control and Industrial Informatics, University “Politehnica” of Bucharest with
a thesis on “Sensor Networks for Home Monitoring of Elderly and Chronic
Patients”.
Author’s Biographies 323
Dominik Maximilián Ramík received his PhD in 2012 in signal and image
processing from University of Paris-Est, France. Member of LISSI laboratory
of University Paris-Est Creteil (UPEC), his current research topic con-
cerns processing of complex images using bio-inspired artificial intelligence
approaches and consequent extraction of semantic information with use in
mobile robotics control and industrial processes supervision.
Christian Rusch studied Mechatronics between 1999 and 2005 and graduated
from the Technical University of Braunschweig. He then started his scientific
career as a Research Scientist at the Technical University of Berlin. The
title of his PhD thesis is “Analysis of data security of self-configuring
radio networks on mobile working machines illustrated by the process
documentation”; he finished the PhD (Dr.-Ing.) in 2012. In 2011 he moved
into industry and worked for CLAAS as a project manager with focus on
wireless communication for mobile working machines. Since 2014 he has been
a system engineer at CLAAS Electronic Systems. Furthermore, he is project
leader of the standardization group “Wireless ISOBUS Communication”.
Christophe Sabourin received his PhD in Robotics and Control from Uni-
versity of Orleans (France) in November 2004. Since 2005, he has been
a researcher and a staff member of Images, Signals and Intelligent Sys-
tems Laboratory (LISSI/EA 3956) of University Paris-Est Creteil (UPEC).
326 Author’s Biographies
Daniel Souto received the M.Sc. degree in industrial engineering from the
University of A Coruña, Coruña, Spain, in 2007. He is working towards
the Ph.D. degree in the Department of Industrial Engineering at the same
university.
He is currently a Researcher at the Integrated Group for Engineering
Research. His research activities are related to automatic design and mechan-
ical design of robots.
Mircea Strutu obtained his PhD in 2014 from the Department of Automatic
Control and Industrial Informatics, University “Politehnica” of Bucharest with
a thesis on “Wireless Networks for Environmental Monitoring and Alerting
based on Mobile Agents”.
Yuriy Zhukov received the PhD and Doctor of Science degrees from
Nikolayev Shipbuilding Institute (now NUOS), Ukraine, in 1981 and 1994,
respectively.
He is currently a Professor, Chief of Marine Instrumentation Department
and Neoteric Naval Engineering Institute at National University of Shipbuild-
ing, Ukraine. He is the author of more than 250 scientific publications in fields
of ship design and its operational safety, dynamic systems behavior monitoring
328 Author’s Biographies
and control, decision making support systems and artificial intelligence, etc.
His current research interest is related to intelligent multi-agent sensory
systems application in the above mentioned fields.
Alexey Zivenko received the PhD degree in Computer Systems and Compo-
nents from Petro Mohyla Black Sea State University, Ukraine, in 2013.
He is currently an Associate Professor of Marine Instrumentation Depart-
ment at National University of Shipbuilding, Ukraine. His current research
interests are polymetric measurements, non-destructive assay of liquids,
intellectual measurement systems and robotics, data acquisition systems and data
mining. He is the author of more than 20 scientific publications in the above
mentioned fields.