Robotic Systems

Preface
Over the last few decades the focus of robotics research has moved beyond the
traditional area of industrial applications to newer areas, including healthcare and
domestic applications. These newer applications have given a strong impetus to the
development of advanced sensors, control strategies and algorithms. The first section
of this book contains advanced applications of robotics in surgery, rehabilitation,
modular robotics among others. This is followed by sections on control and sensors,
while the fourth section is devoted to robot algorithms.
I would like to thank all the authors for entrusting us with their work, and especially the
editorial members of InTech publishing.
Part 1. Applications
Modular Robotic Approach in Surgical Applications
– Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery –

1. Introduction
The trend in surgical robots is moving from traditional master-slave robots to miniaturized
devices for screening and simple surgical operations (Cuschieri, A. 2005). For example,
capsule endoscopy (Moglia, A. 2007) has been conducted worldwide over the last five years
with successful outcomes. To enhance the dexterity of commercial endoscopic capsules,
capsule locomotion has been investigated using legged capsules (Quirini, M. 2008) and
capsules driven by external magnetic fields (Sendoh, M. 2003; Ciuti, G. 2010; Carpi, F. 2009).
Endoscopic capsules with miniaturized arms have also been studied to determine their
potential for use in biopsy (Park, S.-K. 2008). Furthermore, new surgical procedures known
as natural orifice transluminal endoscopic surgery (NOTES) and Single Port Access surgery
are accelerating the development of innovative endoscopic devices (Giday, S. 2006; Bardaro,
S.J. 2006). These advanced surgical devices show potential for the future development of
minimally invasive and endoluminal surgery. However, the implementable functions in
such devices are generally limited owing to space constraints. Moreover, advanced capsules
or endoscopes with miniaturized arms have rather poor dexterity because the diameter of
such arms must be small (i.e. a few millimeters), which results in a small force being
generated at the tip.
A modular surgical robotic system known as ARES (Assembling Reconfigurable
Endoluminal Surgical system) has been proposed based on the aforementioned
motivations (Harada, K. 2009; Harada, K. 2010; Menciassi, A. 2010). The ARES system is
designed for screening and interventions in the gastrointestinal (GI) tract to overcome the
intrinsic limitations of single capsules or endoscopic devices. In the proposed system,
miniaturized robotic modules are ingested and assembled in the stomach cavity. The
assembled robot can then change its configuration according to the target location and task.
Modular surgical robots are interesting owing to their potential for application as self-
reconfigurable modular robots and innovative surgical robots. Many self-reconfigurable
modular robots have been investigated worldwide (Yim, M. 2007; Murata, S. 2007) with the
goal of developing systems that are robust and adaptive to the working environment. Most
of these robots have been designed for autonomous exploration or surveillance tasks in
unstructured environments; therefore, there are no strict constraints regarding the number
of modules, modular size or working space. Because the ARES has specific applications and
is used in the GI tract environment, it raises many issues that have not been discussed in
depth in the modular robotic field. Modular miniaturization down to the ingestible size is
one of the most challenging goals. In addition, a new interface must be developed so that
surgeons can intuitively maneuver the modular surgical robot.
The purpose of this paper is to clarify the advantages of the modular approach in surgical
applications, as well as to present proof of concept of the modular robotic surgical system.
The current paper is organized as follows: Section 2 describes the design of the ARES
system. Section 3 details the design and prototyping of robotic modules, including the
experimental results. Section 4 describes a reconfigurable master device designed for the
robotic modules, and its preliminary evaluation is reported.
2. Design of the ARES system

The assembled robot changes its configuration according to preoperative planning by repeated docking and undocking of the modules (the
undocking mechanism and electrical contacts between modules are necessary for
reconfiguration, but they have not been implemented in the presented design). The robotic
modules are controlled via wireless bidirectional communication with a master device
operated by the surgeon, while the progress of the procedure is observed using intraoperative
imaging devices such as fluoroscopy and cameras mounted on the modules. After the
surgical tasks are completed, the robot reconfigures itself to a snake-like shape to pass
through the pyloric sphincter and travel to examine the small intestine and the colon, or it
completely disassembles itself into individual modules so that it can be brought out without
external aid. One of the modules can bring a biopsy tissue sample out of the body for
detailed examination after the procedure is complete.
ii. Since several functional modules can be used simultaneously, the modular robot may
perform rather complicated tasks that a single endoscopic capsule or an endoscopic
device is not capable of conducting. For example, if more than two camera modules are
employed, the surgeon can conduct tasks while observing the site from different
directions.
iii. Surgical tools of relatively large diameter can be brought into the body cavity.
Conventionally, small surgical forceps that can pass through an endoscopic channel of a
few millimeters have been used for endoluminal surgery. Conversely, surgical devices
that have the same diameter as an endoscope can be used in the modular surgical
system. Consequently, the force generated at the tip of the devices would be rather
large, and the performance of the functional devices would be high.
iv. The surgical system is more adaptive to the given environment and robust to failures.
Accordingly, the surgical robot does not need to carry all modules that might be
necessary in the body, because the surgeons can decide whether to add modules with
different functionalities, even during the surgical operation. After use, the modules can
be detached and discarded if they are not necessary in the following procedures.
Similarly, a module can be easily replaced with a new one in case of malfunction.
As these advantages suggest, a modular surgical robot would be capable of achieving rather
complicated tasks that have not been performed using existing endoluminal surgical
devices. These advantages are valid for modular robots that work in any body cavity with a
small entrance and exit. Moreover, this approach may be introduced to NOTES or Single
Port Access surgery, in which surgical devices must reach the abdominal cavity through a
small incision.
In Section 3, several robotic modules are proposed, and the performance of these modules is
reported to show the feasibility of the proposed surgical system.
3. Robotic modules
3.1 Design and prototyping of the robotic modules
Figure 3 shows the design and prototypes of the Structural Module and the Biopsy Module
(Harada, K. 2009, Harada, K. 2010). The Structural Module has two degrees of freedom (±90°
of bending and 360° of rotation). The Structural Module contains a Li-Po battery (20 mAh,
LP2-FR, Plantraco Ltd., Canada), two brushless DC geared motors that are 4 mm in
diameter and 17.4 mm in length (SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan)
and a custom-made motor control board capable of wireless control (Susilo, E. 2009). The
stall torque of the selected geared motor is 10.6 mNm and the speed is 112 rpm when
controlled by the developed controller. The bending mechanism is composed of a worm and
a spur gear (9:1 gear reduction), whereas the rotation mechanism is composed of two spur
gears (no gear reduction). All gears (DIDEL SA, Switzerland) were made of nylon, and they
were machined to be implemented in the small space of the capsule. Two permanent
magnets (Q-05-1.5-01-N, Webcraft GmbH, Switzerland) were attached at each end of the
module to help with self-alignment and modular docking. The module is 15.4 mm in
diameter and 36.5 mm in length; it requires further miniaturization before clinical
application. The casing of the prototype was made of acrylic plastic and fabricated by 3D
rapid prototyping (Invison XT 3-D Modeler, 3D systems, Inc., USA). The total weight is 5.6 g.
Assuming that the module would weigh 10 g with the metal chassis and gears, the
maximum torque required for lifting two connected modules is 5.4 mNm for both the
bending DOF and rotation DOF. Assuming that the gear transmission efficiency for the
bending mechanism is 30%, the stall torque for the bending DOF is 28.6 mNm. On the other
hand, the stall torque for the rotation DOF is 8.5 mNm when the transmission efficiency for
the rotation mechanism is 80%. The torque was designed to be sufficient for surgical
operation, but the actual transmission efficiency of the miniaturized plastic gears was much
lower than the theoretical value, as explained in the next subsection.
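These stall-torque figures follow from the geared motor's stall torque (10.6 mNm) scaled by each external gear stage's ratio and its assumed transmission efficiency; as a quick check of the arithmetic (our restatement, using only values quoted above):

τ_bending = 10.6 mNm × 9 (gear reduction) × 0.30 (efficiency) ≈ 28.6 mNm
τ_rotation = 10.6 mNm × 1 (no reduction) × 0.80 (efficiency) ≈ 8.5 mNm

Both values exceed the 5.4 mNm required to lift two connected modules.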
• Controller
The aforementioned brushless DC motor came with a dedicated motor driving board
(SSD04, Namiki Precision Jewel Co., Ltd., 19.6 mm × 34.4 mm × 3 mm). This board only
allows driving of one motor; hence, two boards are required for a robotic module with 2
DOFs. Because there was not sufficient space for the boards in the robotic module, a custom-made, high-density control board was designed and developed in-house. This control board
consisted of one CC2430 microcontroller (Texas Instrument, USA) as the main wireless
controller and three sets of A3901 dual bridge motor drivers (Allegro MicroSystem, Inc.,
USA). The fabricated board is 9.6 mm in diameter, 2.5 mm in thickness and 0.37 g in weight,
which is compatible with swallowing. The A3901 motor driver chip was originally intended
for a brushed DC motor, but a software commutation algorithm was implemented to control
a brushless DC motor as well. An IEEE 802.15.4 wireless personal area network (WPAN)
was introduced as an embedded feature (radio peripheral) of the microcontroller. The
implemented algorithm enables control of the selected brushless DC motor in Back Electro-
Motive Force (BEMF) feedback mode or slow speed stepping mode. When the stepping
mode is selected, the motor can be driven with a resolution of 0.178°.
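For illustration only, the sketch below models such a software-commutated drive; the six-step table is the standard trapezoidal sequence for a three-phase brushless motor, and the arithmetic reproduces the 0.178° figure quoted above under the assumption of one pole pair and six commutation steps per electrical revolution (the actual firmware and motor pole count are not detailed in the text):

# Illustrative model of six-step commutation for the geared brushless motor;
# the pole count and table orientation are assumptions, not the authors' firmware.
GEAR_RATIO = 337            # 337:1 gearhead (SBL04-0829PG337)
COMMUTATION_STEPS = 6       # six-step (trapezoidal) commutation
POLE_PAIRS = 1              # assumed

# Half-bridge states per step: 1 = high side on, -1 = low side on, 0 = floating.
SIX_STEP_TABLE = [
    ( 1, -1,  0), ( 1,  0, -1), ( 0,  1, -1),
    (-1,  1,  0), (-1,  0,  1), ( 0, -1,  1),
]

def step_resolution_deg():
    # Output-shaft angle traversed per commutation step.
    return 360.0 / (COMMUTATION_STEPS * POLE_PAIRS * GEAR_RATIO)

print(round(step_resolution_deg(), 3))  # 0.178, matching the value in the text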
For the modular approach, each control board shall be equipped with a wired locating
system for intra-modular communication in addition to the wireless communication. Aside
from wireless networking, the wired locating system, which is not implemented in the
presented design, would be useful for identification of the sequence of the docked modules
in real time. The wired locating system is composed of three lines, one for serial multidrop
communication, one for a peripheral locator and one as a ground reference. When the
modules are firmly connected, the intra-modular communication can be switched from
wireless to wired to save power while maintaining the predefined network addresses. When
one module is detached intentionally or by mistake, it will switch back to wireless mode.
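A minimal sketch of that switching policy, with hypothetical transport functions standing in for the serial bus and the CC2430 radio (the wired locating system itself is not implemented in the presented design):

# Illustrative sketch only: falling back from wired to wireless intra-modular
# communication when a module undocks; all names here are hypothetical.
class IntraModularLink:
    def __init__(self, address):
        self.address = address   # network address kept across both transports
        self.docked = False      # updated from the peripheral locator line

    def send(self, payload):
        if self.docked:
            self._send_wired(payload)      # multidrop serial line, saves power
        else:
            self._send_wireless(payload)   # IEEE 802.15.4 WPAN radio

    def on_dock_event(self, docked):
        self.docked = docked

    def _send_wired(self, payload):
        pass  # placeholder: write to the serial multidrop bus

    def _send_wireless(self, payload):
        pass  # placeholder: transmit via the embedded radio peripheral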
• Battery
The battery capacity carried by each module may differ from one module to another (e.g.
from 10 mAh to 50 mAh) depending on the available space inside the module. For the
current design, a 20 mAh Li-Po battery was selected. Continuous driving of the selected
motor at its maximum speed using this battery was found to last up to 17 minutes. A
module does not draw power continuously, because the actuation mechanisms can
maintain their position without current to the motor owing to the high gear reduction
(337:1). A module consumes power during actuation, but its power use is very low in
stand-by mode.
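As a rough consistency check (our inference, not a figure reported by the authors), 17 minutes of continuous driving from a 20 mAh cell implies an average current draw of roughly

I ≈ 20 mAh / (17/60 h) ≈ 71 mA

during actuation, underscoring why the stand-by strategy matters for battery life.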
• Biopsy Module
The Biopsy Module is a Functional Module that can be used to conduct diagnosis. The
grasping mechanism has a worm and two spur gears, which allows wide opening of the
grasping parts. The grasping parts can be hidden in the casing at the maximum opening to
prevent tissue damage during ingestion. The motor and other components used for the
Biopsy Module are the same as for the Structural Module. The brushless DC geared motors
(SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan), the control board, a worm gear
and two spur gears (9:1 gear reduction) were implemented in the Biopsy Module. A
permanent magnet (Q-05-1.5-01-N, Webcraft GmbH, Switzerland) was placed at one side to
be connected to another Structural Module.
Fig. 3. Design and prototypes of the structural module (left) and the biopsy module (right)
The Biopsy Module can generate a force of 7.1 N at its tip, and can also open the grasping
parts to a width of 19 mm with an opening angle of 90 degrees. These values are much
larger than those of conventional endoscopic forceps, which are 2-4 mm in diameter. As a
demonstration, Figure 3 shows the Biopsy Module holding a coin weighing 7.5 g.
In conventional endoscopy, forceps are inserted through endoscopic channels that are
parallel to the direction of the endoscopic view, which often results in the forceps hiding the
target. Conversely, the Biopsy Module can be positioned at any angle relative to the
endoscopic view owing to the modular approach, thereby allowing an adequate approach to
the target.
Fig. 4. Bending angle measurement (left), rotation angle measurement (middle), and torque
measurement (right) (Menciassi, A. 2010)
In addition to the angle measurements, both bending and rotation torque were measured.
The torque was measured by connecting cylindrical loads, fitted with permanent magnets at
both ends, one after another until the bending/rotational motion stopped. The length and
weight of each cylinder were designed in advance, and several types of cylinders were
prepared. The measured
bending torque was 6.5 mNm and the rotation torque was 2.2 mNm. The figure also shows
one module lifting up two modules attached to its bending mechanism as a demonstration.
The performance in terms of precision and generated torque, which are very important for
reconfiguration and surgical tasks, was sufficient; however, the precision was limited owing
to the aforementioned fabrication problems. The thin walls of the casing made of acrylic
plastic were easily deformed, which caused friction between the parts. A casing made of
metal or PEEK, together with tailor-made, high-precision metal gears, would improve the
rigidity and performance of the mechanism, thus producing optimal stability.
Fig. 6. Robotic modules (top line) and the reconfigurable master device (bottom line): one
module (left), assembled modules (middle) and prototypes (right)
4. Reconfigurable master device
The advantages of the proposed master device include intuitive manipulation. For example,
the rotational movement of a structural module used to twist the arm is limited to ± 180°,
and the master module also has this limitation. This helps surgeons intuitively understand
the range of the motion and the reachable working space of the modules. Using a
conventional master manipulator or an external console, it is possible that the slave
manipulator cannot move owing to its mechanical constraints, while the master manipulator
can still move. However, using the proposed master device, the surgeon can intuitively
understand the mechanical constraints by manipulating the master device during
practice/training. Furthermore, the position of the master arm can indicate where the
robotic modules are, even if they are outside of the camera module's view. These
characteristics increase the safety of the operation. This feature is important because the
entire robotic system is placed inside the body. In other surgical robotic systems, the
position or shape of the robotic arms is not important as they are placed outside of the body
and can be seen during operation. Unlike other master devices, it is also possible for two or
more surgeons to move the reconfigurable master device together at the same time using
multiple arms with redundant DOFs.
4.2 Evaluation
A simulation-based evaluation setup was selected to simplify the preliminary evaluation of
the feasibility of the reconfigurable master device. The authors previously developed the
Slave Simulator to evaluate workloads for a master-slave surgical robot (Kawamura, K.
2006). The Slave Simulator can show the motion of the slave robot in CG (Computer
Graphics), while the master input device is controlled by an operator. Because the simulator
can virtually change the parameters of the slave robot or its control, it is easy to evaluate the
parameters as well as the operability of the master device. This Slave Simulator was
appropriately modified for the ARES system. The modified Slave Simulator presents the CG
models of the robotic modules to the operator. The dimensions and DOFs of each module in
CG were determined based on the design of the robotic modules. The angle of each joint is
given by the signal from the potentiometers of the reconfigurable master device, and the
slave modules in CG move in real time to reproduce the configuration of the master device.
This Slave Simulator is capable of altering joint positions and the number of joints of the
slave arms in CG so that the workspace of the reconfigurable master device can be
reproduced in a virtual environment for several types of topologies. The simulator is
composed of a 3D viewer that uses OpenGL and a physical calculation function. This
function was implemented to detect a collision between the CG modules and an object
placed in the workspace.
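To make the simulator's role concrete, the sketch below (ours, not the actual Slave Simulator code) reduces its two core operations to the plane: chaining the CG modules from the potentiometer-derived joint angles, and testing for collision with an object in the workspace. The link length reuses the module length from Section 3.1; the collision threshold is an assumed placeholder.

import numpy as np

MODULE_LENGTH = 0.0365   # m, the module length reported in Section 3.1

def forward_kinematics(joint_angles_rad, base=(0.0, 0.0), base_angle=0.0):
    # Chain the modules in the plane; returns joint and tip positions.
    points = [np.asarray(base, float)]
    angle = base_angle
    for q in joint_angles_rad:
        angle += q
        step = MODULE_LENGTH * np.array([np.cos(angle), np.sin(angle)])
        points.append(points[-1] + step)
    return np.array(points)

def collides_with_bar(points, bar_xy, radius=0.005):
    # The target bar is vertical, so in the plane it reduces to a point with a radius.
    return bool(np.any(np.linalg.norm(points - np.asarray(bar_xy), axis=1) < radius))

# Example: a three-module arm (Topology II-like) read from the potentiometers.
angles = np.deg2rad([45.0, -30.0, 20.0])
print(collides_with_bar(forward_kinematics(angles), bar_xy=(0.08, 0.05)))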
To simplify the experiments to evaluate the feasibility of the proposed master device and
usefulness of the developed simulator, only one arm of the reconfigurable master device
was used. Three topologies that consist of one Biopsy Module and one or two Structural
Module(s) were selected as illustrated in Fig.7. Topology I consists of a Structural Module
and a Biopsy Module, and the base is fixed so that the arm appears with an angle of 45
degrees. One Structural Module is added to Topology I to configure Topology II, and
Topology III is identical to Topology II, but placed at 0 degrees. Both Topology II and
Topology III have redundant DOFs. The projection of the workspace of each arm and the
shared workspace are depicted in Fig.8. A target object on which the arm works in the
experiments must be placed in this shared area, which makes it easy to compare topologies.
A bar was selected as the target object instead of a sphere because the height of the collision
point is different for each topology when the object appears in the same position in the 2D
plane.
The experiment was designed so that a bar appears at random in the shared workspace. The
bar represents a target area at which the Biopsy Module needs to collect tissue samples, and
this experiment is a simple example to select one topology among three choices given that
the arm can reach the target. We assumed that this choice may vary depending on the user,
and this experiment was designed to determine if the reconfigurability of the master device,
i.e. customization of the robot, provides advantages and improves performance.
After finishing the experiment, the subjects were asked to fill in a questionnaire (described
below) for each topology. The subjects were also asked which topology they preferred.
4.3 Results
The time spent conducting the given task, the workload score evaluated using the NASA-TLX
questionnaire and the preference of the topology determined by the subjects are
summarized in Table 1. For each item, a smaller value indicates a more favorable evaluation
by the subject.
Considering the results of the time and workload score, Topology II was the most difficult. The
difference between Topology I and III was interesting. Two of the subjects ((b) and (c))
preferred Topology I, which did not have a redundant DOF. Conversely, three of the
subjects ((a), (d) and (e)) preferred Topology III because they could select the path to reach
the target owing to the redundant DOF. The average scores of the NASA TLX parameters
shown in Fig.10 suggest that the Physical Demand workload was high for Topology I, while
the Effort workload was high for Topology III.
The two subjects who preferred Topology I rather than Topology III claimed that it was not
easy to determine where the bar was located when Topology III was used owing to the lack
of 3D perception. In addition, they reported that the bar seemed to be placed far from the
base. However, the bar appeared randomly within the same area for all topologies;
therefore, the bar that appeared in the experiment that employed Topology III was not
placed farther from the base than in the experiments that used Topology I or Topology II.
Accordingly,
these two subjects may have had difficulty obtaining 3D perception from the gridded plane.
In Topology III, the arm was partially out of view in the initial position; thus, the operator
needed to obtain 3D perception by seeing the grids. It is often said that most surgeons can
obtain 3D perception even if they use a 2D camera, and our preliminary experimental results
imply that this ability might differ by individual. Some people appear to obtain 3D perception
primarily by seeing the relative positions between the target and the tool they move.
Redundant DOFs may also be preferred by operators with better 3D perception capability.
Although the experiments were preliminary, there may be other factors that account for
the preference of the user. Indeed, it is likely that the preferable topology varies depending
on the user, and the developed simulator would be useful to evaluate these variations. The
proposed reconfigurable master device will enable individual surgeons to customize the
robot and interface as they prefer.
Topology            Subject                                    Average
                     a      b      c      d      e
Time (s)      I     5.7    4.1    4.0    5.8    5.0      4.9
              II    7.6    6.2    4.8    5.5    6.7      6.1
              III   4.9    4.3    5.6    4.4    4.7      4.8
Work Load:    I    30.7   11.3   28.3   32.0   73.3     35.1
NASA-TLX      II   47.6   26.7   28.0   53.0   68.3     44.7
Score         III  37.0    5.0   24.3   14.3   61.3     28.4
Preference    I     3      1      1      3      3        2.2
              II    2      3      2      2      2        2.2
              III   1      2      3      1      1        1.6

Table 1. Task completion time, NASA-TLX workload score, and topology preference for each subject
Fig. 10. Average scores of the NASA-TLX parameters (Mental Demand, Physical Demand, Temporal Demand, Performance, Effort and Frustration) for Topologies I, II and III
5. Conclusion
A modular robot was proposed for endoluminal surgery. The design, prototyping and
evaluation of the modules were reported. Although some issues related to fabrication
remain, the results of the performance tests show the feasibility of the
modular surgical system. A reconfigurable master device has also been proposed, and its
feasibility was evaluated by simulation-based experiments. The preliminary results
showed that the preferred topology may vary depending on the user. Moreover, the
reconfigurable master device would enable each surgeon to customize the surgical system
according to his/her own preferences. Development of the robotic modules and the
reconfigurable master device provided proof of concept of the modular robotic system for
endoluminal surgery, suggesting that the modular approach has great potential for
surgical applications.
6. Acknowledgments
This study was supported in part by the European Commission in the framework of the
ARES Project (Assembling Reconfigurable Endoluminal Surgical System, NEST-2003-1-
ADVENTURE/15653), by the European Union Institute in Japan at Waseda University (EUIJ
Waseda, http://www.euij-waseda.jp/eng/) within the framework of its Research
Scholarship Programme, and by the Global COE Program "Global Robot Academia" from
the Ministry of Education, Culture, Sports, Science and Technology of Japan. The authors
are grateful to Mr. Nicodemo Funaro for manufacturing the prototypes and Ms. Sara
Condino for her invaluable technical support.
7. References
Bardaro, S. J. & Swanström, L. (2006). Development of advanced endoscopes for Natural
Orifice Transluminal Endoscopic Surgery (NOTES). In: Minim. Invasive Ther. Allied
Technol., 15(6), pp. 378–383.
Target Point Manipulation Inside a Deformable Object

1. Introduction
Target point manipulation inside a deformable object by a robotic system is necessary in
many medical and industrial applications such as breast biopsy, drug injection, suturing,
and precise machining of deformable objects. However, this is a challenging problem because
of the difficulty of imposing the motion of the internal target point by a finite number of
actuation points located at the boundary of the deformable object. In addition, there exist
several other important manipulative operations that deal with deformable objects such as
whole body manipulation [1], shape changing [2], biomanipulation [3] and tumor
manipulation [4] that have practical applications. The main focus of this chapter is the target
point manipulation inside a deformable object. For instance, a positioning operation called
linking in the manufacturing of seamless garments [5] requires manipulation of internal
points of deformable objects. Mating of a flexible part in the electric industry also results in the
positioning of mated points on the object. In many cases these points cannot be manipulated
directly since the points of interest in a mating part are inaccessible because of contact with the
mated part. Additionally, in the medical field, many diagnostic and therapeutic procedures
require accurate needle targeting. In case of needle breast biopsy [4] and prostate cancer
brachytherapy [6], needles are used to access a designated area to remove a small amount of
tissue or to implant radio-active seed at the targeted area. The deformation causes the target
to move away from its original location. To clarify the situation we present a schematic of
needle insertion for a breast biopsy procedure as shown in Figure 1. When the tip of the needle
reaches the interface between two different types of tissue, its further insertion will push the
tissue, instead of piercing it, causing unwanted deformations. These deformations move the
target away from its original location as shown in Figure 1(b). In this case, we cannot
manipulate the targeted area directly because it is internal to the organ. It must be
manipulated by controlling some other points where forces can be applied as shown in
Figure 1(c). Therefore, in some cases one would need to move the positioned points to the
desired locations of these deformable objects (e.g., mating two deformable parts for sewing
seamlessly) while in other cases one may need to preserve the original target location (e.g.,
guiding the tumor to fall into the path of needle insertion). In either of these situations, the
ability of a robotic system to control the target of the deformable object becomes important,
which is the focus of this chapter.
To control the position of the internal target point inside a deformable object requires
appropriate contact locations on the surface of the object. Therefore, we address the issue of
determining the optimal contact locations for manipulating a deformable object such that
the internal target point can be positioned to the desired location by three robotic fingers
using minimum applied forces. A position-based PI controller is developed to control the
motion of the robotic fingers such that the robotic fingers apply minimum force on the
surface of the object to position the internal target point to the desired location. However,
the controller for target position control is non-collocated since the internal target point is
not directly actuated by the robotic fingers. It is known in the literature that non-collocated
control of a deformable object is not passive, which may lead to instability [7]. In order to
protect the object and the robotic fingers from physical damage and also in order to
diminish the deterioration of performance caused by unwanted oscillation, it is
indispensable to build stable interaction between the robotic fingers and the object. Here we
consider that the plant (i.e., the deformable object) is passive and does not generate any
energy. So, in order to have stable interaction, it is essential that the controller for the robotic
fingers must be stable. Thus, we present a new passivity-based non-collocated controller for
the robotic fingers to ensure safe and accurate position control of the internal target point.
The passivity theory states that a system is passive if the energy flowing out does not exceed the
energy flowing in. Creating a passive interface adds the required damping force to make
the output energy lower than the input energy. To this end we develop a passivity observer
(PO) and a passivity controller (PC) based on [8] for each individual robotic finger, where the PO
monitors the net energy flow out of the system and the PC supplies the necessary damping
force to make the PO positive. Our approach extends the concept of PO and PC in [8] to multi-
point contacts with the deformable object.
Fig. 1. Schematics of needle breast biopsy procedure: (a) needle insertion, (b) target
movement, and (c) target manipulation
The remainder of this chapter is organized as follows: we discuss various issues and prior
research in Section 2. The problem description is stated in Section 3. Section 4 outlines the
mathematical modelling of the deformable object. A framework for optimal contact
locations is presented in Section 5. The control methods are discussed in Section 6. The
effectiveness of the derived control law is demonstrated by simulation in Section 7. Finally,
the contributions of this work and the future directions are discussed in Section 8.
2. Issues and prior research

Saha and Isto [12] presented a motion planner for manipulating deformable linear objects using
two cooperating robotic arms to tie self-knots and knots around simple static objects. Zhang
et al. [13] presented a microrobotic system that is capable of picking up and releasing
microobjects. Sun et al. [14] presented a cooperation task of controlling the
reference motion and the deformation when handling a deformable object by two
manipulators. In [15], Tavasoli et al. presented a two-time-scale control design for trajectory
tracking of two cooperating planar rigid robots moving a flexible beam. However, to the
best of our knowledge the works on manipulating an internal target point inside a
deformable object are rare [4, 5]. Mallapragada et al. [4] developed an external robotic
system to position the tumor in image-guided breast biopsy procedures. In their work, three
linear actuators manipulate the tissue phantom externally to position an embedded target
inline with the needle during insertion. In [5] Hirai et al. developed a robust control law for
manipulation of 2D deformable parts using tactile and vision feedback to control the motion
of the deformable object with respect to the position of selected reference points. These
works are very important to our present application, but they did not address the optimal
locations of the contact points for effecting the desired motion.
A wide variety of modeling approaches have been presented in the literature dealing with
computer simulation of deformable objects [16]. These are mainly derived from physically-
based models to produce physically valid behaviors. Mass-spring models are one of the
most common forms of deformable objects. A general mass-spring model consists of a set of
point masses connected to its neighbors by massless springs. Mass-spring models have been
used extensively in facial animation [17], cloth motion [18] and surgical simulation [19].
Howard and Bekey [20] developed a generalized method to model an elastic object with the
connections of springs and dampers. Finite element models have been used in the computer
simulation to model facial tissue and predict surgical outcomes [21, 22]. However, these
works have not attempted to control an internal point in a deformable object.
In order to manipulate the target point to the desired location, we must know the
appropriate contact locations for effecting the desired motion. There can be an infinite
number of possible ways of choosing the contact location based on the object shapes and
task to be performed. Appropriate selection of the contact points is an important issue for
performing certain tasks. The determination of optimal contact points for rigid objects was
extensively studied by many researchers with various stability criteria. Salisbury [23] and
Kerr [24] discussed that a stable grasp was achieved if and only if the grasp matrix is full
row rank. Abel et al. [25] modelled the contact interaction by point contact with Coulomb
friction and they stated that optimal grasp has minimum dependency on frictional forces.
Cutkosky [26] discussed that the size and shape of the object has less effect on the choice of
grasp than by the tasks to be performed after examining a variety of human grasps. Ferrari
et al. [27] defined grasp quality to minimize either the maximum value or sum of the finger
forces as optimality criteria. Garg and Dutta [28] shown that the internal forces required for
grasping deformable objects vary with size of object and finger contact angle. In [29],
Watanabe and Yoshikawa investigated optimal contact points on an arbitrary shaped object
in 3D using the concept of required external force set. Ding et al. proposed an algorithm for
computing form-closure grasps on a 3D polyhedral object using a local search strategy in [30].
In [31, 32], various concepts and methodologies of robot grasping of rigid objects were
reviewed. Cornella et al. [33] presented a mathematical approach to obtain optimal solution
of contact points using the dual theorem of nonlinear programming. Saut et al. [34]
presented a method for solving the grasping force optimization problem of multi-fingered
dexterous hand by minimizing a cost function. All these works are based on grasp of rigid
objects.
There are also a few works on deformable object grasping. Gopalakrishnan and
Goldberg [35] proposed a framework for grasping deformable parts in assembly lines based
on form closure properties for grasping rigid parts. Foresti and Pellegrino [36] described an
automatic way of handling deformable objects using vision technique. The vision system
worked along with a hierarchical self-organizing neural network to select proper grasping
points in 2D. Wakamatsu et al. [37] analyzed grasping of deformable objects and introduced
bounded force closure. However, position control of an internal target point in a deformable
object by a multi-fingered gripper has not been attempted. In our work, we address the issue
of determining the optimal contact locations for manipulating a deformable object such that
the internal target point can be positioned to the desired location by three robotic fingers
using minimum applied forces.
The idea of passivity can be used to guarantee the stable interaction without exact
knowledge of model information. Anderson and Spong [38] published the first solid result
by passivation of the system using scattering theory. A passivity based impedance control
strategy for robotic grasping and manipulation was presented by Stramigioli et al. [39].
Recently, Hannaford and Ryu [40] proposed a time-domain passivity control based on the
energy consumption principle. The proposed algorithm did not require any knowledge
about the dynamics of the system. They presented a PO and a PC to ensure stability under a
wide variety of operating conditions. The PO can measure energy flow in and out of one or
more subsystems in real time, confining the analysis to systems with a very fast sampling
rate. Meanwhile the PC, which is an adaptive dissipation element, absorbs exactly the net
energy output measured by the PO at each time sample. In [41], a model-independent
passivity-based approach to guarantee stability of a flexible manipulator with a non-
collocated sensor-actuator pair is presented. This technique uses an active damping element
to dissipate energy when the system becomes active. In our work we use a similar concept
of PO and PC to ensure stable interaction between the robotic fingers and the deformable
object. Our work also extends the concept of PO and PC for multi-point contact with the
deformable object.
3. Problem description
Consider a case in which multiple robotic fingers are manipulating a deformable object in a
2D plane to move an internal target point to a desired location. Before we discuss the design
of the control law, we present a result from [42] to determine the number of actuation points
required to position the target at an arbitrary location in a 2D plane. The following
definitions are given according to the convention in [42].
Manipulation points: are defined as the points that can be manipulated directly by robotic
fingers. In our case, the manipulation points are the points where the external robotic fingers
apply forces on the deformable object.
Positioned points: are defined as the points that should be positioned indirectly by
controlling the manipulation points appropriately. In our case, the target is the positioned point.
The control law to be designed is non-collocated since the internal target point is not directly
actuated by the robotic fingers. The following result is useful in determining the number of
actuation points required to accurately position the target at the desired location.
Result [42]: The number of manipulation points must be greater than or equal to the number of
positioned points in order to realize any arbitrary displacement.
In our present case, we assume that the number of positioned points is one, since we are
trying to control the position of the target. Hence, ideally the number of contact points
would also be one. But practically, we assume that there are two constraints: (1) we do not
want to apply shear force on the deformable object to avoid the damage to the surface, and
(2) we can only apply control force directed into the deformable object. We cannot pull the
surface since the robotic fingers are not attached to the surface. Thus we need to control the
position of the target by applying only unidirectional compressive force.
In this context, there exists a theorem on force direction closure in mechanics that helps
us determine the equivalent number of compressive forces that can replace one
unconstrained force in a 2D plane.
Theorem [43]: A set of wrenches w can generate a force in any direction if and only if there
exists a three-tuple of wrenches {w1, w2, w3} whose respective force directions f1, f2, f3
satisfy:
i. Two of the three directions f1, f2, f3 are independent.
ii. A strictly positive combination of the three directions is zero:

α f1 + β f2 + γ f3 = 0        (1)

where α, β, and γ are strictly positive constants. The ramification of this theorem for our problem is that
we need three control forces distributed around the object such that the end points of their
direction vectors draw a non-zero triangle that includes their common origin point. With
such an arrangement we can realize any arbitrary displacement of the target point. Thus the
problem can be stated as:
Problem statement: Given the number of actuation points, the initial target and its desired
locations, find appropriate contact locations and control action such that the target point is
positioned to its desired location by controlling the boundary points of the object with
minimum force.
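Both conditions of the theorem above can be checked numerically. The sketch below is our illustration (not from the chapter): it normalizes three planar force directions, tests their rank, and inspects the null-space vector for a strictly positive combination.

import numpy as np

def force_direction_closure(f1, f2, f3, tol=1e-9):
    # Check the theorem's two conditions for three planar force directions.
    F = np.column_stack([f1, f2, f3]).astype(float)
    F /= np.linalg.norm(F, axis=0)                # unit direction vectors
    if np.linalg.matrix_rank(F, tol=1e-6) < 2:    # (i) two directions independent
        return False
    # (ii) a strictly positive (alpha, beta, gamma) with F @ x = 0: for a
    # rank-2 2x3 matrix the null space is one-dimensional, so it suffices
    # to check the signs of the single null vector.
    null = np.linalg.svd(F)[2][-1]
    if np.min(np.abs(null)) < tol:                # a zero entry is not strictly positive
        return False
    null *= np.sign(null[np.argmax(np.abs(null))])
    return bool(np.all(null > 0))

# Directions 120 degrees apart admit closure; three nearly parallel ones do not.
print(force_direction_closure([1, 0], [-0.5, 0.866], [-0.5, -0.866]))  # True
print(force_direction_closure([1, 0], [1, 0.1], [1, -0.1]))            # False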
wi = −∂U/∂pi        (2)
w = Σ(i=1..3) fi ni(ri) ,  (∀w ∈ ℜ²)(∃fi ≥ 0, 1 ≤ i ≤ 3)        (3)
where, ni(ri) is the unit inner normal of the i-th contact and fi denotes the i-th finger's force.
We assume that the contact forces should lie within the friction cone to manipulate objects
without slip of the fingertips. Now we need to find three distinct points, r1(θ1), r2(θ2), and
r3(θ3), on the boundary of the object such that Equation (3) is satisfied. Here, θ1, θ2, and θ3
are the three contact point locations measured anti-clockwise with respect to the x axis as
shown in Figure 4. In addition, we assume that the normal forces have to be non-negative to
avoid separation and slippage at the contact points, i.e.,

fi ≥ 0 ,  i = 1, 2, 3        (4)
A unique solution to realizable grasping may not always exist. Therefore, we develop an
optimization technique that minimizes the total force applied on to the object to obtain a
particular solution. The optimal locations of the contact points would be the solution of the
following optimization problem.
min fT f
subject to
w = Σ(i=1..3) fi ni(ri) ,  fi ≥ 0 ,  i = 1, 2, 3        (6)
Once we get the optimal contact locations, all three robotic fingers can be located at their
respective positions to effect the desired motion at those contact points.
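For concreteness, the sketch below shows one way this optimization could be posed numerically; it assumes a circular cross-section, so that the inner normal at boundary angle θi is −[cos θi, sin θi], and uses SciPy's SLSQP solver. Both the object shape and the solver are our assumptions for illustration, not necessarily what produced the contact locations in Figures 6-8.

import numpy as np
from scipy.optimize import minimize

def inner_normal(theta):
    # Inner unit normal of a circular boundary at angular position theta.
    return -np.array([np.cos(theta), np.sin(theta)])

def optimal_contacts(w):
    # Decision variables: x = [theta1, theta2, theta3, f1, f2, f3].
    w = np.asarray(w, dtype=float)
    x0 = np.array([0.0, 2.1, 4.2, 1.0, 1.0, 1.0])    # arbitrary starting guess

    objective = lambda x: float(x[3:] @ x[3:])       # min f^T f
    balance = lambda x: sum(f * inner_normal(t)      # w = sum_i f_i n_i(r_i)
                            for t, f in zip(x[:3], x[3:])) - w

    res = minimize(objective, x0, method="SLSQP",
                   bounds=[(0.0, 2 * np.pi)] * 3 + [(0.0, None)] * 3,  # f_i >= 0
                   constraints=[{"type": "eq", "fun": balance}])
    return np.degrees(res.x[:3]), res.x[3:]          # contact angles (deg), forces

angles_deg, forces = optimal_contacts(w=[0.5, 0.5])
print(angles_deg, forces)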
p = [x y]T        (7)
where, x and y are the position coordinates of point A in the global reference frame w . The
desired target position is represented by point B whose position vector is given by
pd = [xd yd]T        (8)
where, xd and yd are the desired target position coordinates. The target position error, e, is
given by
e = pd − p (9)
Once the optimal contact locations are determined from Equation (6), the planner generates
the desired reference locations for these contact points by projecting the error vector
between the desired and the actual target locations in the direction of the applied forces,
which is given by
ei* = e · ni        (10)
where,
ni = [nxi nyi]T        (11)
All robotic fingers are controlled by their individual controllers using the following
proportional-integral (PI) control law
fi = KPi ei* + KIi ∫ ei* dt ,  i = 1, 2, 3        (12)

where, KPi and KIi are the proportional and integral gains. Note that in the control law
(12), mechanical properties of the deformable object are not required. Forces applied by the
fingers on the surface of the deformable object are calculated by projecting the error vector
in the direction of the applied forces. But the Equation (12) does not guarantee that the
system will be stable. Thus a passivity-based control approach based on energy monitoring
is developed to guarantee the stability of the system.
f = [f1 f2 f3]T        (13)

v = [v1 v2 v3]T        (14)
The desired target velocity is obtained by differentiating (8) with respect to time and is
given by
ṗd = [ẋd ẏd]T        (15)

where, ẋd and ẏd are the components of the desired target velocity. The desired target velocity
along the direction of actuation of the i-th robotic finger is given by

vdi = ṗd · ni        (16)
The trajectory generator essentially computes the desired target velocity along the direction
of actuation of the robotic fingers. If the direction of actuation of the robotic fingers, ni, and
the desired target velocity, ṗd, are known with respect to a global reference frame, then the
trajectory generator computes the desired target velocity along the direction of actuation of
the fingers using Equation (16).
The connections between the trajectory generator and the controller, which traditionally
consist of a one-way command information flow, are modified by the addition of a virtual
feedback of the conjugate variable [41]. For the system shown in Figure 5, output of the
trajectory generator is the desired target velocity, vdi, along the direction of the i-th finger, and
the output of the controller is calculated from Equation (12).
Fig. 5. Network representation of the control system. α1i and α2i are the adjustable damping
elements at each port, i = 1, 2, 3
For both connections, virtual feedback is the force applied by the robotic fingers. Integral of
the inner product between the trajectory generator output (vdi) and its conjugate variable (fi)
defines "virtual input energy." The virtual input energy is generated to give a command to
the controller, which transmits the input energy to the plant through the controller in the
form of “real output energy.” Real output energy is the physical energy that enters to the
plant (deformable object) at the point where the robotic finger is in contact with the object.
Therefore, the plant is a three-port system since three fingers manipulate the object. The
conjugate pair that represents the power flow is (fi, vi), the force and the velocity of the i-th
finger, respectively. The reason for defining virtual input energy is to transfer the source of
energy from the controllers to the trajectory generator. Thus the controllers can be
represented as two-ports which characterize energy exchange between the trajectory
generator and the plant. Note that the conjugate variables that define power flow are
discrete time values and so the analysis is confined to systems having a sampling rate
substantially faster than the system dynamics.
For regulating the target position during manipulation, vdi = 0. Hence the trajectory
generator is passive since it does not generate energy. However, for target tracking, vdi ≠ 0
and fi ≠ 0. Therefore the trajectory generator is not passive because it has a velocity source
as a power source. It is shown that even if the system has an active term, stability is
guaranteed as long as the active term is not dependent on the system states [45]. Therefore,
passivity of the plant and controllers is sufficient to ensure system stability.
Here, we consider that the plant is passive. Now we design a PO for sufficiently small time-
step ΔT as:
Ei(tk) = ΔT Σ(j=0..k) [ fi(tj) vdi(tj) − fi(tj) vi(tj) ]        (17)

Passivity requires

ΔT Σ(j=0..k) fi(tj) vdi(tj) ≥ ΔT Σ(j=0..k) fi(tj) vi(tj) ,  ∀tk ≥ 0 ,  i = 1, 2, 3        (18)
where, E1Ti(k) and E2Ti(k) are the energy flowing in and out at the trajectory side of the
controller port, respectively, whereas E1Pi(k) and E2Pi(k) are the energy flowing in and out at
the plant side of the controller port, respectively. So the time domain passivity condition is
given by

Ei(k) = E1Ti(k) − E2Ti(k) + E1Pi(k) − E2Pi(k) + α1i(k−1) vdi(k−1)² ΔT + α2i(k−1) vi(k−1)² ΔT ≥ 0        (24)

where, the last two terms are the energy dissipated at the previous time step. α1i(k−1) and
α2i(k−1) are the damping coefficients calculated based on the PO, as discussed below.
Passivity Controller (PC)
In order to dissipate excess energy of the controlled system, a damping force should be
applied to its moving parts depending on the causality of the port. As it is well known, such
a force is a function of the system's velocities giving the physical damping action on the
system. Mathematically, the damping force is given by
fd = α v (25)
where α is the adjustable damping factor and v is the velocity. From this simple
observation, it seems necessary to measure and use the velocities of the robotic fingers in the
control algorithm in order to enhance the performance by means of controlling the damping
forces acting on the systems. On the other hand, velocities measurements are not always
available and in these cases position measurements can be used to estimate velocities and
therefore to inject damping.
When the observed energy becomes negative, the damping coefficient is computed using
the following relation (which obeys the constitutive Equation (25)). Therefore, the algorithm
used for a 2-port network with impedance causality (i.e., velocity input, force output) at
each port is given by the following steps:
1. Two series PCs are designed for several cases as given below:
Case 1: If Ei(k) ≥ 0, i.e., the output energy is less than the input energy, there is no
need to activate any PC.
Case 2: If Ei(k) < 0 because E2Pi(k) > E1Ti(k), i.e., the output energy at the plant side exceeds
the input energy, then only the plant side PC needs to be activated:

α1i(k) = 0 ,  α2i(k) = −Ei(k) / vi(k)²        (26)

Case 3: Similarly, if Ei(k) < 0 and E2Ti(k) > E1Pi(k), then only the trajectory side PC needs to
be activated:

α1i(k) = −Ei(k) / vdi(k)² ,  α2i(k) = 0        (27)

2. The contributions of the PCs are converted into power variables as

fit(k) = α1i(k) vdi(k) ,  fip(k) = α2i(k) vi(k)        (28)
where, fit(k) and fip(k) are the PCs' outputs at the trajectory and plant sides of the controller
ports, respectively, and fiT(k) and fiP(k) are the corresponding modified controller outputs at
the trajectory and plant sides of the controller ports.
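For illustration, the observer of Eq. (17) and the series PCs of Eqs. (26)-(28) condense to a few lines per port. In this sketch (ours, not the authors' code) the case selection is reduced to which port velocity is non-negligible, and the sample time is kept explicit in the damping coefficients so that a single step restores Ei ≥ 0 (the chapter's Eqs. (26)-(27) absorb ΔT into α):

class PassivityObserverController:
    # Time-domain PO (Eq. 17) with series PCs (Eqs. 26-28) for one port pair.
    def __init__(self, dt):
        self.dt = dt
        self.energy = 0.0   # observed energy E_i, including PC dissipation

    def step(self, f, v_d, v):
        # PO: accumulate virtual input power minus real output power (Eq. 17).
        self.energy += self.dt * (f * v_d - f * v)

        alpha1 = alpha2 = 0.0
        if self.energy < 0.0:          # the controlled system became active
            if abs(v) > 1e-9:          # plant-side PC (cf. Eq. 26)
                alpha2 = -self.energy / (self.dt * v * v)
            elif abs(v_d) > 1e-9:      # trajectory-side PC (cf. Eq. 27)
                alpha1 = -self.energy / (self.dt * v_d * v_d)

        f_t = alpha1 * v_d             # Eq. (28): PC outputs as power variables
        f_p = alpha2 * v
        # Book-keep the energy the PCs dissipate during this sample.
        self.energy += self.dt * (alpha1 * v_d * v_d + alpha2 * v * v)
        return f_t, f_p

The returned f_t and f_p are then combined with the PI output to form the modified controller outputs fiT(k) and fiP(k) referred to above.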
Fig. 6. Optimal contact locations (θ1, θ2, θ3): (a) 59.98°, 204.9°, 244.9°, (b) 14.96°, 159.9°,
199.9°, (c) 7.54°, 182.54°, 327.54°, and (d) 48.59°, 88.59°, 234.39°
Fig. 7. Optimal contact locations (θ1, θ2, θ3): (a) 0°, 170°, 253.8°, (b) 29.07°, 116.93°, 233.86°,
(c) 0°, 175°, 320°, and (d) 76.93°, 116.93°, 261.94°
Fig. 8. Optimal contact locations (θ1, θ2, θ3): (a) 25.18°, 199.48°, 262.22°, (b) 0°, 175°, 262.62°,
(c) 141.05°, 303.66°, 343.66°, and (d) 96.37°, 169.35°, 288.29°
Task 2:
In Task 2, we present a target positioning operation when the robotic fingers are not located
at their optimal contact locations. For instance, we choose that the robotic fingers are located
at 0, 120, and 240 degrees with respect to the x-axis as shown in Figure 9. We assume that
the initial position of the target is at the center of the section of the deformable object, i.e., (0,
0) mm. The goal is to position the target at the desired location (5, 5) mm along a smooth
straight-line trajectory.
Fig. 9. Deformable object with contact points located at 0, 120 and 240 degrees with respect
to x-axis
Fig. 10. The desired (red dashed) and the actual (blue solid) straight lines when robotic
fingers are located at 0, 120, and 240 degrees with respect to x-axis
In this simulation, we choose KPi = 1000 and KIi = 1000, i = 1, 2, 3. Figure 10 shows the actual
and desired position trajectories of the target point. It is noticed that some error is present in
the tracking of the desired trajectory. The robotic finger forces generated by the PI controller
are presented in Figure 11, and the POs fall to negative values as shown in Figure 12.
Negative values of the POs signify that the interaction between the robotic fingers and the
deformable object is not stable.
Fig. 11. Controller forces when robotic fingers are located at 0, 120, and 240 degrees with
respect to x-axis
Fig. 12. POs when robotic fingers are located at 0, 120, and 240 degrees with respect to x-axis
Task 3:
In Task 3, we consider the same task as discussed above under Task 2 but the robotic fingers
are positioned at their optimal contact locations (Figure 8(a)) and the target is following the
desired straight line trajectory. In this case, PCs are not turned on while performing the task.
A simple position based PI controller generates the control command based on the error
between the desired and the actual location of the target. Figure 13 shows that the target
tracked the desired position trajectory.
Fig. 13. The desired (red dashed) and the actual (blue solid) straight lines when PCs are not
turned on
Fig. 14. Controller forces when PCs are not turned on
Fig. 15. (a) POs for three robotic fingers when PCs are not turned on, (b) a magnified version
of (a) for first few seconds
The robotic finger forces generated by the PI controller are presented in Figure 14. The force
values in Figure 14 are considerably smaller than those in Figure 11 because of the optimal
contact locations of the robotic fingers. However, the POs for robotic fingers 2 and 3 become
negative as shown in Figure 15. Negative values of the POs signify that the output energy of
the 2-port network is greater than the input energy. Since the plant is considered to be
passive, the only source of the extra energy is the controller, which makes the whole system
unstable. So we must engage the passivity controllers to modify the controller outputs by
dissipating the extra amount of energy.
Task 4:
In Task 4, the PCs are turned on and the robotic fingers are commanded to effect the desired
motion of the target. The PCs are activated when the POs cross zero from a positive value.
The required damping forces are generated to dissipate only the excess amount of energy
generated by the controller. In this case, the target tracks the desired straight line trajectory
well with the POs remaining positive. Figure 16 represents the actual and the desired
trajectories of the target when PCs are turned on. For this case, only the PCs on the plant side
are activated, whereas the PCs on the trajectory side remain idle. Figure 17 shows the PC
forces generated at the plant side when the POs cross zero. The POs become positive during
interaction between the robotic fingers and the object as shown in Figure 18. Hence, the
stability of the overall system is guaranteed. The PCs on the trajectory side are shown in
Figure 19, which are all zeros. The modified controller outputs to move the target point are
shown in Figure 20.
Fig. 16. The desired (red dashed) and the actual (blue solid) straight lines when PCs are
turned on
Fig. 17. Required forces supplied by PCs at the plant side when PCs are turned on
Fig. 18. POs for three robotic fingers when PCs are turned on
Fig. 19. PCs forces at the trajectory side when PCs are turned on
Fig. 20. Modified controller forces when PCs are turned on
3
Novel Assistive Robot for Self-Feeding
1. Introduction
Assistive robots, with which users can interact directly, have attracted worldwide attention.
They can assist people with disabilities and older persons in the activities of daily living.
Assistive robots could be employed to improve quality of life, and they can be adapted in response to demographic changes. There are several crucial issues to be considered with
regard to these robots, such as customizing them according to the specific culture of the
users as well as ensuring cost-effectiveness (Mann, 2005).
In Korea, the official number of registered people with disabilities due to illnesses, injuries,
and the natural aging process has already exceeded two million (Employment Development
Institute, 2009). More than one-third of these disabled people are elderly. Moreover, owing to longer life spans and a decline in the birthrate, the elderly make up over 10% of the population in
Korea. As a result, effective caregiving with restricted resources is an urgent problem.
In order to achieve efficient caregiving for people with disabilities and elderly persons,
caregivers should physically interact with the people. For example, caregivers have to assist
people in performing the routine activities of their daily lives, such as eating, changing
clothes, changing their posture, moving from one location to another, and bathing. Among
these activities, eating meals is one of the most essential daily activities. In this regard,
caregivers must interact with people frequently to assist with food selection, feeding
interval, etc. Existing robotic technologies can be utilized to take over the functions of the
caregivers. Thus, assistive robots represent one of the solutions by which disabled or elderly
people can receive support for performing the activities of daily life.
The design of assistive robots to help with self-feeding depends strongly on the specific
culture of the user. Korean food consists chiefly of boiled rice, soup, and side dishes such as
Kimchi. The procedure of having a meal is as follows: the user eats the boiled rice first and
then the side dishes. These steps are performed repetitively. In comparison with foreign
boiled rice, Korean boiled rice sticks together very well after cooking. Handling this sticky
boiled rice can be problematic. In addition, Korean soup includes meat, noodles, and
various vegetables; thus, existing feeding robots find it difficult to handle Korean foods.
Various assistive robots have been developed since the late 1980s, as shown in Fig. 1.
Handy1 (Topping & Smith, 1999) is an assistive robot for daily activities such as eating,
drinking, washing, shaving, teeth cleaning, and applying make-up. Handy1 consists of a
five-DOF (degree-of-freedom) robot, a gripper, and a tray unit. The major function of
Handy1 is to help with eating. Handy1 allows a user to select food from any part of the tray.
A cup is attached to enable users to drink water with their meal. The walled columns of the food dish serve an important purpose: food can be scooped onto the dish without mixing of the food items.
The Mealtime Partner Dining System (Mealtime Partners) is positioned in front of a user’s
mouth. Three bowls can rotate in front of the mouth. The spoon picks up the food and then
moves a short distance toward the preset location of the mouth. This system reduces the
chances of the spoon slipping on wet food because the underside of the spoon is wiped off
after scooping. Because of the way the system is positioned, the user does not need to lean
toward the feeder. In some systems, a beverage straw is located beside the spoon
(Pourmohammadali, 2007). Other systems are designed for multiple users (Guglielmelli,
2009).
Most feeding systems scoop the food with a spoon. Such systems are not suitable for boiled rice, which is a staple food in Korea. In addition, some systems have a single dish, and thus different types of food might be mixed during scooping. My Spoon uses a grasping function to pick up food, but it has difficulty serving Korean rice owing to the fixed grasping strength and opening width of its gripper. As a result, My Spoon's gripper sometimes gets a lot of rice stuck to its surface. The previously mentioned self-feeding robotic systems also have difficulty scooping this staple Korean food.
Feeding robots enable users to enjoy food independently during mealtimes. After preparing
food, users can choose when they want to eat the desired food. We developed an assistive
robot for self-feeding by taking into consideration the feedback of user candidates and
clinical experts. We evaluated the self-feeding robot by performing a series of user tests. The
overall process, i.e., concept formulation, design, and evaluation, involves feedback from
users and clinical experts. The development process is performed on the basis of the
philosophy of participatory action design (Ding et al., 2007).
In this paper, we introduce a newly designed self-feeding robot that will be suitable in the
case of Korean food, including sticky rice, and we report the results of tests that were
performed with several disabled people. In Section 2, we will present an overview of the
new self-feeding robot for Korean food. Basic operational procedures of the self-feeding
robot will be presented in Section 3. Section 4 contains the results and discussions of tests
performed with users with spinal cord injuries. Finally, we will present the conclusion in
Section 5.
2.1 Users
The primary users of self-feeding robots are people with physical disabilities who have
difficulty moving their upper limbs. Such people include those suffering from high-level
spinal cord injuries, cerebral palsy, and muscular diseases. For example, people with
cervical level-4 spinal cord injuries have difficulty moving their upper limbs and retain full
movement only above their necks. Some people afflicted with cerebral palsy cannot move
their arms and hands, and they often have difficulty moving their necks. When the spoon of
a self-feeding robot approaches such a user’s mouth, that person has a hard time putting the
food in his or her mouth. People with muscular diseases have weak muscle movements.
Even though they can move their hands, they have limited motor functions in their elbows
and shoulder joints. We can also include senior citizens who have difficulties with the motor
functions of their upper limbs, e.g., the frail elderly, among the abovementioned disabled people. It is clear that the number of target users of self-feeding robots will grow in the near future.
Fig. 2. Korean food on a standard food container. (From the lower left-hand side going
counterclockwise: rice, soup, and side dishes.)
Secondly, the specialists and the user candidates believe that the feeding systems are
designed more for western-style food. Those systems are not suitable for Korean food,
which includes boiled rice, soup, and side dishes. A user eats one of the side dishes and then
the boiled rice in turn. These steps are performed repetitively during mealtime. In
comparison with foreign boiled rice, Korean boiled rice sticks together very well after
cooking. One of the problems of self-feeding systems is handling this sticky boiled rice. In
addition, Korean soup includes meat, noodles, and various vegetables. Therefore, existing
feeding robots have difficulty handling Korean foods (Fig. 2).
Thirdly, a feeding robot should be suitable for use in private homes and facilities. From an
economic point of view, a feeding robot is effective in facilities that have many persons with
upper-limb disabilities. Such facilities do not have enough caregivers to help with feeding owing to the highly time-consuming nature of this task. Thus, a robot reduces the burden of
caregiving for feeding. A feeding robot can also be used in an ordinary home to improve the
quality of life of the users and their families. Members of a family can face each other and
freely enjoy talking. The other members of the family can go out for a few hours because
they are freed from the burden of having to help with feeding.
The location of bowls or a tray is another important factor. According to Korean culture, the
location of bowls or a tray is strongly related to the dignity of the person. A simple feeding
system can be made with the bowls located in front of a user’s mouth. However, some
senior user candidates hate having the bowls right in front of their mouth; they prefer to eat
the food like ordinary people. Thus, we focus mainly on using a tabletop tray.
Other comments of user candidates are as follows: plain machines that can serve simple
dishes with water are required. When a caregiver goes out for a while, a user needs to be
able to eat cereal with milk with the help of a self-feeding robot. The water supply tool
should be located next to the user’s body. The meal tray must have a cover to protect the
food from dust contamination. The cost should be reasonable, e.g., the price should be
between US$1800 and $2700. Obviously, a low-cost system is preferred. The feeding robot
should be able to deal with noodles. The feeding robot should be able to accommodate the
posture of the user. Finally, the feeding robot should be lightweight.
We concentrate on rice handling. We do not take into account the handling of soup in this
development. We will handle the requirements of feeding Korean soup in a future version’s
design. Instead of handling Korean soup via a self-feeding robot, a user drinks soup stored
in a cup. Generally, we assume that a user drinks soup or water through a straw.
Technically, we considered four types of feeding robots in order to ensure that the robot can grip and release boiled rice effectively, as shown in Fig. 3.
In the first concept, a number of bowls are located in front of a user’s mouth, and the food is
presented by the spoon with a short traveling distance. For example, if there are three bowls,
one bowl has rice and two bowls have side dishes. However, two side dishes are not enough
to constitute a satisfying meal. In general, Korean people eat three or four side dishes with
boiled rice at a time. Therefore, we need four or five bowls.
In the third concept, the system, together with a food tray, is located on a table. The robotic arm picks
up food and then moves it to a user’s mouth. These tasks are divided into two steps: one is
picking up the food and the other is moving the food to the user’s mouth. Two arms can be
used to perform these two tasks, respectively. One of the user candidates stressed that the feeding robot, especially a dual-arm manipulator, should be easy to install. This is
significant because some caregivers might be elderly people who are not familiar with
brand-new machines.
Finally, one bowl is located in front of the user’s mouth. The mixed food with rice is loaded
in that bowl. Some users do not like the mixed food, even though they do prefer a simple
feeding system.
We decided on the third concept, which is located on a table, based on the opinions of
specialists and user candidates.
Fig. 4. Assistive robot for self-feeding. Spoon-arm (Arm #1) uses a spoon to transfer the food from a container to a user's mouth. Grab-arm (Arm #2) picks up the food from a container and then loads it onto the spoon of Arm #1
The two arms have different functions; accordingly, the end-effector of each arm can be designed effectively. To pick up or release the food stably, the grab-arm can use an
odd-shaped gripper, as shown in the bottom left-hand side of Fig. 4, because that gripper
does not need to approach a user’s mouth. However, the end-effector of a spoon-arm has an
intact round-shaped spoon to serve food to a user’s mouth safely. If an end-effector has an
unusual shape, then it might pose a danger to the user as it approaches his or her face.
The two proposed arms with their different end-effectors mimic Korean eating behavior.
Specifically, Koreans use a spoon and steel chopsticks during mealtime, as shown in Fig. 5
(a) and (b). Some people use a spoon and chopsticks simultaneously [Fig. 5 (c)]. In the
designed system, the gripper of a grab-arm and the spoon of a spoon-arm take on the roles
of chopsticks and a spoon, respectively. Many Korean caregivers pick up food with
chopsticks and then put it on a spoon in order to serve food to users such as children and
patients. In that sense, the proposed two-armed system stems from Korean eating tools.
Fig. 6. The joint configuration of a novel feeding robot for Korean foods. P1 (Prismatic Joint #1) is optionally applied. R = Revolute. P = Prismatic
The feeding robot uses a microcontroller unit to control a spoon-arm and a grab-arm, as
shown in Fig. 7. We add a small-sized PC with a touch screen to enable the user to enjoy
entertainment and to test various kinds of user interfaces. During mealtime, a user wants to
enjoy multimedia such as movies or music. In addition, the small-sized PC has a Windows
operating system, and we can effectively add assistive devices for human computer
interaction, i.e., switches, a joystick, and biosignal interface devices.
Fig. 7. The control architecture of the feeding robot: a microcontroller unit drives the two arms, and a PC is connected through a USB hub to the input devices (joystick/switch) and a mobile Internet device
provide enough space behind a food container; therefore, the grab-arm should be located on
the left-hand side of a food container.
Fig. 9. The design of a novel feeding robot for Korean food. A spoon-arm (lower left-hand-
side figure) for transferring food and a grab-arm (lower right-hand-side figure) for picking
up and releasing food
The spoon-arm has two additional degrees of adjustment, namely a motorized prismatic motion toward the user's mouth and a manual change of the link length between its first and second axes. Fig. 11 shows the overall workspace of the spoon of the spoon-arm. In accordance with the position of a user's mouth, the predefined location in front of the mouth is adjusted when the system is installed.
The height of the spoon on the spoon-arm is 250–381 mm with respect to the surface of a
table. The height of the spoon-arm depends on the table height. We assume that the height
of a table is 730–750 mm. The spoon could be located at a height of 980–1131 mm with
respect to the ground. According to the statistics of the Korean disabled, the height of a
user’s mouth could be 1018 mm (female) and 1087 mm (male), as shown in Table 1. Thus,
the height of the spoon corresponds with the height of the user’s mouth.
In the single-arm configuration, the system uses only the spoon-arm, and a caregiver takes the role of the grab-arm. We explain the two arm
configurations of the feeding robot in the following sections.
Fig. 12. The sequential motions of the self-feeding robot in dual-arm configuration. From the
top left-hand side, the robot puts a gripper into a bowl of water and then grasps rice
The self-feeding robot has three operation modes: an automatic mode, a semiautomatic mode, and a manual mode. The automatic mode has a fixed serving sequence of dishes; the user only pushes a start button when he or she wants to eat the next food on the spoon. In the semiautomatic mode, the user can choose the dishes according to his or her intention and thus have the dishes that he or she wants to eat. In the manual mode, the user can choose food and also control the posture of the robot. In all three modes, the user selects the feeding timing.
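The dispatch between the three modes can be illustrated with a short sketch; the mode names, the dish-selection interface, and the fixed sequence below are illustrative assumptions rather than the actual controller software.

```python
from enum import Enum, auto

class FeedingMode(Enum):
    AUTOMATIC = auto()      # fixed dish sequence; the user presses only "start"
    SEMIAUTOMATIC = auto()  # the user chooses the dish for each serving
    MANUAL = auto()         # the user chooses the dish and steers the robot

def choose_dish(mode, step, dish_sequence, selected_dish=None):
    """Return the dish for one feeding cycle (illustrative sketch only)."""
    if mode is FeedingMode.AUTOMATIC:
        # serve dishes in a fixed rotation; 'step' advances on each start press
        return dish_sequence[step % len(dish_sequence)]
    # in the semiautomatic and manual modes the user's choice is used directly;
    # the manual mode additionally lets the user command the arm posture
    return selected_dish

# In every mode the user triggers the cycle, so feeding timing stays with the user.
print(choose_dish(FeedingMode.AUTOMATIC, 3, ["rice", "side dish 1", "side dish 2"]))
```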
In experiments on handling boiled rice, we observed that releasing rice is as important as
picking up rice. The stickiness of the boiled rice can change depending on its temperature.
Slightly cool rice is difficult to release from the gripper. In order to solve this problem, the
feeding robot automatically puts the gripper of the grab-arm in water before grasping the
food. The water is located in a bowl next to the rice. When this is done, the gripper can
release the rice on the spoon because the stickiness of the rice has decreased. Fig. 12 shows
the whole operation of the self-feeding robot.
The amount of rice picked up is adjusted on the basis of actual rice-grasping experiments. The gripper mechanism is a simple opening/closing of the gripper fingers via a linear actuator. The weight of rice obtained in one grasping motion increases with the open/close width of the gripper fingers at the moment grasping begins, as shown in Fig. 13. The default open/close width of the gripper is 32 mm, which grasps an average of 10 g of rice. The closing width of the gripper determines the grasping force applied to the food. Thus, various foods can be grasped by adjusting the open/close width of the gripper.
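Since the grasped weight grows with the opening width, the width for a desired portion can be read from a measured calibration curve. The sketch below interpolates in a hypothetical table; only the (32 mm, 10 g) pair comes from the text, and the remaining points are invented placeholders.

```python
import numpy as np

# Hypothetical calibration: opening width of the fingers (mm) -> grasped rice (g).
# Only the (32 mm, 10 g) point is from the text; the others are placeholders.
WIDTH_MM = np.array([24.0, 28.0, 32.0, 36.0, 40.0])
WEIGHT_G = np.array([4.0, 7.0, 10.0, 14.0, 18.0])

def width_for_portion(grams):
    """Opening width of the gripper fingers for a desired rice portion."""
    grams = float(np.clip(grams, WEIGHT_G[0], WEIGHT_G[-1]))
    return float(np.interp(grams, WEIGHT_G, WIDTH_MM))

print(width_for_portion(10.0))  # -> 32.0, the default opening width
```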
Fig. 13. The open/close width of the gripper fingers
Fig. 14. The motions of the self-feeding robot in single-arm configuration. The caregiver
picks food up on the spoon of the self-feeding robot, and then the user presses the button to
receive the food
The next step is similar to that of the dual-arm configuration. A caregiver only provides a
user with food when the spoon is empty in the home position. From the perspective of the
caregiver, he or she can reduce the amount of time needed to check or wait while the user is
chewing. From the perspective of the user, the food can be eaten when he or she wants to
eat. Although a user may have difficulty choosing food in an automatic manner, he or she
can chew the food sufficiently without considering the next spoon serving from a caregiver.
From an economic point of view, the single-arm configuration has a lower cost in
comparison with the dual-arm configuration. Applying a spoon-arm has advantages in
facilities such as hospitals or nursing homes. One caregiver supports more than one
consumer. That means one caregiver can serve food on the spoons of multiple users’ spoon-
arms in turn. This configuration is especially useful if the labor cost of a caregiver is not
high, as in developing countries.
4. User test
At first, we designed the self-feeding robot on the basis of users’ opinions in order to
develop a practical system. After constructing the prototype of the self-feeding robot, we
performed a user test in which seven people with disabilities used the developed system. Participants used the self-feeding robot to eat Korean food, and we collected their feedback.
Some users commented on a threatening feeling when the spoon moves on the sagittal plane. If the spoon approaches a user's mouth from the side, then the user can feel safer. Those motions are
similar to most people’s motions when they eat food. In addition, a user wants to touch the
side surface of a spoon. As a remedy, we will consider how the spoon approaches, as shown
in Figs. 15 (c), (d), and (e). However, the sideways approach may require a large installation
area for the self-feeding robot.
Fig. 15. Spoon approaching modes. (a) The sagittal plane on which a spoon moves. (b), (c),
(d), and (e) represent the top view of a spoon when a robot approaches a user's mouth. The red object represents the spoon. (c), (d), and (e) are more comfortable than (b)
A spoon should have a safety function because it frequently comes in contact with a user's mouth. The spoon can be fixed at the tip of the arm with a spring as a compliant component to guarantee safety. Alternatively, the spoon can be attached by magnets: when a large force acts on the spoon, the magnetic connection detaches for the user's safety. A user who has spasticity needs the compliance of a spoon.
Users request a smoother motion of a spoon when it contains food. In addition, the vibration
of a spoon should be reduced at that time.
Users want a small-sized self-feeding robot for easy installation on a desk. In addition, a
user wants to be able to adjust the spoon level of the self-feeding robot. For example, a user
who uses a power wheelchair has various height levels from a table. When users first use
the self-feeding robot, the level of the spoon on the robotic arm should be adjustable.
As a basic requirement, food should rarely drop onto the table when a user fails to eat the round-shaped food. When food is too stiff, grasping failure occurs. Therefore, we consider the motion, i.e., the speed, of the spoon as well as the shape of the spoon.
Some users want to eat rice with a side dish simultaneously. In general, the disabled person
who lives alone and receives a caregiving service usually eats rice and a side dish on one
spoon at the same time. Some people eat rice mixed with side dishes. The self-feeding robot
should mimic that task. The self-feeding robot optionally puts two kinds of food, i.e., rice
and a side dish, on a large spoon simultaneously.
The spoon should be returned to the home position after a predefined time interval. The robot has two ways of returning the spoon to the home position: one is triggered by an input signal, and the other by a time interval.
Sometimes a robot should regrasp food in order to avoid grabbing too much of a side dish.
When a robot tries to grab some curry rice, the gripper shape should be changed. The
gripper should be cleaned frequently. The amount of grabbed food could be adjustable. A
stop or pause function is required.
Other comments are as follows: the robots should be less noisy, have a small-sized input
device, serve water, and enable users to eat Korean soup. Users who do not have experience
eating food by themselves have a hard time using a self-feeding robot for the first time. Such
a person needs to be trained on how to use it. Some users want to eat noodles, such as
spaghetti. Most users wish to use a low-cost feeding robot.
The filtering of an involuntary input signal of a user should be considered. For instance, in
order to reduce malfunctions, a user’s input could be ignored immediately after a previous
input. One user who can push a button by himself prefers to use buttons rather than
joysticks. One user prefers using a small-sized joystick. Two joysticks are better than one
joystick with buttons. The length of a joystick should be adjustable. Buttons should be large
sized. Operation without a PC is required. A user who has limited head motion prefers a long joystick because such a user cannot make fine motions. The unification of an
input device of a wheelchair and a self-feeding robot should be considered. Wireless input
devices are preferred.
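The involuntary-input filtering suggested above amounts to a refractory period after each accepted command. A minimal sketch, assuming timestamps in seconds and an arbitrary one-second lockout:

```python
class DebouncedInput:
    """Ignore user inputs that arrive too soon after the previous one."""

    def __init__(self, lockout_s=1.0):   # the lockout period is an assumption
        self.lockout_s = lockout_s
        self._last_accepted = None

    def accept(self, t_now):
        """Return True if an input arriving at time t_now should be acted on."""
        if (self._last_accepted is not None
                and t_now - self._last_accepted < self.lockout_s):
            return False   # probably involuntary: too close to the last input
        self._last_accepted = t_now
        return True
```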
Fig. 16. Input devices of a self-feeding robot. (a) Buttons, (b) Joysticks (shown after modification)
4.3.1 Buttons
The self-feeding robot has a basic input device consisting of a set of buttons. We usually use
six buttons that correspond with start, return, and four directions. The buttons were
originally developed to check the basic operations of the self-feeding robot. However, quadriplegics who can move only their neck and head have difficulty pushing a button with their chin. Because the buttons are out of the field of view, such a user has a hard time knowing where the buttons are and whether or not they have been pushed. Additionally, pushing these buttons requires excessive force and can result in muscle fatigue in the user's neck.
Users prefer to use joysticks rather than buttons. On the basis of users’ opinions, we tested
input devices that have joysticks.
4.3.2 Joysticks
Two joysticks are employed because it was determined that users prefer two joysticks to one joystick with buttons. The length of the joystick was modified from 10
to 53 mm according to users’ opinions. Because of the gap between the two joysticks, a user
can easily manipulate one of the joysticks without any resulting malfunction of the other
joystick.
The length of the joystick depends on user preference. Some users prefer a long joystick
while others like a short one. Most users prefer the wide gap between the two joysticks
because a short gap can result in the malfunction of the unused joystick. The moving angles
of a joystick are ±25°. Users express satisfaction with the flexibility of the joystick and its silicone cover, which comes into contact with the user's skin.
5. Concluding remarks
We have developed a novel assistive robot for self-feeding that is capable of handling
Korean food, including sticky rice. This paper presents the overall operation of the self-
feeding robot. The proposed robot has three distinguishing points: handling sticky rice,
using an ordinary meal tray, and a modular design that can be divided into two arms. Users
are people with physical disabilities who have limited arm function. During the
development of the robot, we considered the feedback provided by several users and
experts. In addition, the user candidates tested the actual self-feeding robot. It was
determined that the input device has the most important role. Many users prefer a dual
joystick for self-feeding. Most of the users who participated in the experiments gave us
positive feedback. Some users were impressed that they were able to eat their desired food
when they wanted to eat it. In future work, we will add several functions to the robot,
including improving the reliability of basic operations and adding a safety feature. We will
also simplify the system components and perform user evaluations.
6. Acknowledgment
This research was supported by a grant (code #10-A-01, #11-A-04) from the Korea National
Rehabilitation Research Institute, Korea National Rehabilitation Center, Korea. The authors
acknowledge the input of consumer candidates, including Mr. Hongki Kim and Mr.
Kwangsup Lee. In addition, the authors gratefully acknowledge the work of Mr. Won-Jin
Song and the comments of clinical specialists, including Dr. Bum-Suk Lee, Dr. Sung-Il
Hwang, and Ms. Mi-Ok Son at the Korea National Rehabilitation Center.
7. References
Ding, D.; Cooper, R. & Pearlman, J. (2007). Incorporating Participatory Action Design into
Research and Education, International Conference on Engineering Education (ICEE)
2007, Coimbra, Portugal.
Employment Development Institute (2009). 2009 Disability Statistics (in Korean), Survey &
Statistics Team, Korea, ISBN 978-89-5813-737-5.
Guglielmelli, E.; Lauro, G.; Chiarugi, F.; Giachetti, G.; Perrella, Y.; Pisetta, A. & Scoglio, A.
(2009). Self-feeding Apparatus, US Patent 2009/0104004.
Hermann, R.; Phalangas, A.; Mahoney, R. & Alexander, M. (1999). Powered Feeding Devices:
An Evaluation of Three Models, Arch Phys Med Rehabil, Vol. 80, pp. 1237–1242, 1999.
Mann, W. (2005). Smart Technology for Aging, Disability, and Independence: The State of the
Science, Wiley, Hoboken, New Jersey, ISBN 978-0-471-69694-0.
Mealtime Partners, http://www.mealtimepartners.com.
Neater Solutions, http://www.neater.co.uk.
Sammons Preston, http://www.sammonspreston.com.
Song, W.-K.; Kim, J.; An, K.-O.; Lee, I.-H.; Song, W.-J.; Lee, B.-S.; Hwang, S.-I.; Son, M.-O. &
Lee, E.-C. (2010a). Design of Novel Feeding Robot for Korean Food, ICOST2010, LNCS
6159, Yeunsook Lee et al., Eds. Springer, pp. 152–159.
Song, W.-K.; Kim, J.; An, K.-O.; Lee, I.-H.; Song, W.-J. & Lee, B.-S. (2010b). New Dual-Arm
Assistive Robot for Self-Feeding, Second International Symposium on Quality of Life
Technology, Las Vegas, USA.
Soyama, R.; Ishii, S. & Fukase, A. (2003). The Development of Meal-assistance Robot MySpoon,
International Conference on Rehabilitation Robotics, pp. 88–91, Daejeon, Korea.
Topping, M. & Smith, J. (1999). The Development of HANDY 1, a Robotic System to Assist the
Severely Disabled, International Conference on Rehabilitation Robotics, 1999.
Pendleton, H. & Schultz-Krohn, W. (2001). Pedretti’s Occupational Therapy, Elsevier.
Pourmohammadali, H.; Kofman, J. & Khajepour, A. (2007). Design of a Multiple-user
Intelligent Feeding Robot for Elderly and Disabled People, 30th Canadian Medical and
Biological Engineering Conference (CMBEC30), Toronto, Ontario, Canada.
4
Robot Handling Fabrics Towards Sewing Using Computational Intelligence Methods
1. Introduction
In the cloth-making industry, the need for high flexibility in automation is imperative owing to the extensive variation in product style and material and to fast fashion changes. The automated handling of fabric materials is one of the most challenging problems in the field of robotics, since it offers a vast potential for high productivity and cost reduction. The main difficulty in exploiting this potential is that it is practically impossible to predict the behaviour of fabrics during handling because of their low bending rigidity. Moreover, fabrics are difficult to model owing to their unpredictable, non-linear and complex mechanical behaviour.
Sewing is the most important joining technology in garments and textiles manufacturing. The
fact that sewing fabrics is a “sensitive” operation due to the erratic behaviour of fabrics poses
barriers to the development of automated sewing systems. A solution to manipulation tasks
that are afflicted with uncertainty, subjectivity, ambiguity and vagueness is an intelligent
control approach. The development of intelligent control systems with vision feedback enables
robots to perform skilful tasks in more realistic environments and make the research efforts for
flexibility and automation really promising. Thus, industrial robots supported by machine
vision and sensors can contribute towards advanced automation in apparel manufacturing.
On the other hand, sewing fabrics with the exclusive use of robots, without human
intervention, is a complicated task. The stitching process is special in the sense that the error
cannot be corrected after a part of the cloth has been stitched, implying that the stitching
process is not reversible. This limitation implies that clothes that do not conform to the
specifications are defective and should be withdrawn from the production. This imposes a
great demand on precision during the sewing process. In practice, there is a predefined
tolerance considered to be acceptable, as errors resulting from robot accuracy and camera
resolution cannot be eliminated.
The main focus of this work lies on the design of an innovative intelligent robot controller
based on visual servoing that aims to enable robots to handle flexible fabric sheets towards
sewing. A great challenge is the design of a robot controller showing robustness against
deformations that are likely to appear on the fabric towards robot handling. Special
emphasis is given on robot handling fabrics comprised of curved edges with arbitrary
curvature. This task is even harder, because up-to-date industrial robots present limitations
in their movement, owing to the fact that they can only be programmed to make straight or
circular motions.
(Koustoumpardis & Aspragathos, 2003; Koustoumpardis et al., 2006; Moulianitis et al., 1999;
Zoumponos & Aspragathos, 2008). After the fabric has been placed at a random location on
the working table, a number of sub-tasks should be performed before the sewing process
starts. These preliminary sub-tasks are: the recognition of the fabric’s shape, the extraction of
the ‘seam line’, the detection of the edges targeted for sewing, planning of the stitching
process and the location of the end-effector on the fabric (Koustoumpardis et al., 2007). After
the preprocess planning, the robot sewing process is considered and divided into three sub-
tasks: the manipulation of the fabric towards the needle, the sewing of the seam segment
and the rotation around the needle.
Concerning visual servoing, the developed system combines image-based and position-based control. The image-based analysis is used for the identification of
the fabric’s shape. After the image acquisition of the fabric, the features (vertices of the
edges), the needle-point and the sewing line’s orientation are derived from the image
analysis. Besides, the position of the needle is also known in the robot coordinate system.
The end-effector’s position is unknown in the image coordinate system; however, the robot
system gives feedback to the control system of the current end-effector’s position in the
robot base coordinate system. Moreover, the relation between the robot- and the image-
coordinate system is known from the calibration of the camera. This approach makes the
system more flexible and limits the calibration errors.
\[
\begin{aligned}
x' &= x + u \cos(\varphi)\, dt \\
y' &= y + u \sin(\varphi)\, dt \\
\theta' &= \theta - \omega\, dt
\end{aligned} \tag{1}
\]
Therefore, the robot end-effector moves along the direction of r (defined by φ) and
simultaneously rotates around the end-effector’s z-axis, which is vertical to the table, until
the angle becomes θ′ (Zacharia et al., 2005). The fabric is transferred to a new position with
new orientation as a result of the movement and rotation of the end-effector, which is stuck
on the fabric to avoid slipping between the gripper and the fabric. This motion stops when
the seam segment reaches the needle with the desired orientation within an acceptable
tolerance.
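Equation (1) can be stepped numerically until the seam segment reaches the needle within the tolerance. The loop below is a minimal sketch; the initial pose, the velocity commands, and the tolerance are placeholder values, not those of the actual system.

```python
import math

def transfer_step(x, y, theta, u, omega, phi, dt):
    """One integration step of Eq. (1): translate along r and rotate about z."""
    return (x + u * math.cos(phi) * dt,
            y + u * math.sin(phi) * dt,
            theta - omega * dt)

x, y, theta = 50.0, 30.0, 0.6          # placeholder initial pose (mm, mm, rad)
needle = (0.0, 0.0)                    # needle position in the same frame
dt, tol = 0.05, 1.0                    # placeholder time step (s) and tolerance (mm)

while math.hypot(x - needle[0], y - needle[1]) > tol:
    phi = math.atan2(needle[1] - y, needle[0] - x)  # direction r toward the needle
    u, omega = 10.0, 0.5 * theta       # placeholder velocity commands
    x, y, theta = transfer_step(x, y, theta, u, omega, phi, dt)
```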
Fig. 1. Scene of the fabric lying on the work table (a) without deformations (b) with deformations
The stitching process. During sewing, the fabric is guided along the sewing line with a
constant velocity, which should be the same with the velocity of the sewing machine, so that
good seam quality is ensured. When the end-effector’s velocity is higher than the sewing
velocity, puckering will appear, whereas in the case where it is lower, low seam quality will
be produced due to the tension increase. At each time step, the current image of the fabric is
captured in order that the orientation error is determined. The orientation error is fed back
to be corrected by rotating the fabric around the needle, while simultaneously the robot
moves the fabric towards the direction of the sewing line. To circumvent the problem of
uncertainty due to the distortions of the fabric’s shape, fuzzy logic control is necessary. The
inputs are the orientation error and its rate and the output is the rotation angle around the
needle.
The rotation around the needle. When the seam segment coincides with the sewing line, the
robot rotates the fabric around the needle until the next seam segment is aligned to the
sewing line. It is worth noting that the end-effector is enforced to make a circular motion
around the needle, since it has penetrated into the fabric. The orientation error (eθ) of the
next seam segment in relation to the sewing line and its time rate are the inputs to the fuzzy
system that controls the rotation of the fabric around the needle, whereas the output is the
angular velocity (ω) of the end-effector’s motion around an axis perpendicular to the table at
the needle-point.
The controller inputs are the position error $e_r$ and the orientation error $e_\theta$, which are computed on the image, together with their time rates $\dot{e}_r$ and $\dot{e}_\theta$. The outputs are the linear velocity $u$ and the angular velocity $\omega$ of the end-effector around the z-axis.
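The structure of such a fuzzy controller can be sketched with triangular membership functions and weighted-average defuzzification. The breakpoints and consequent values below are invented placeholders, not the tuned values of Fig. 3, and only the linear-velocity channel is shown:

```python
def mu_tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_linear_velocity(e_r, de_r):
    """Zero-order Sugeno sketch: u (mm/s) from the position error (px) and its rate."""
    rules = [
        # (degree of fulfilment, consequent u); all numbers are placeholders
        (min(mu_tri(e_r, -1, 0, 30), mu_tri(de_r, -200, 0, 200)), 2.0),     # small error
        (min(mu_tri(e_r, 20, 75, 130), mu_tri(de_r, -200, 0, 200)), 10.0),  # medium error
        (mu_tri(e_r, 110, 400, 800), 20.0),                                 # large error
    ]
    den = sum(w for w, _ in rules)
    return sum(w * u for w, u in rules) / den if den > 0 else 0.0

print(fuzzy_linear_velocity(60.0, 5.0))  # a mid-range error yields a cruising speed
```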
Fig. 3. The membership functions of (a) the position error (b) the position error rate (c) the linear velocity after manual tuning
An approach based on the polygonal approximation of the curved edges of the fabric is developed for robot handling of fabrics towards the sewing process.
During the stitching process the robot’s end-effector is commanded to make two motions
simultaneously: a straight motion with constant speed (equal to the sewing machine’s speed
to ensure good seam quality) in the direction of the sewing line, and a rotation around the
needle-point. Therefore, the final stitch is not a straight line, but a line that is yielded as a
result of the combined end-effector’s motion. After all the straight-line segments have been
stitched, the deviation error is computed. Then, a new image is taken and the process is
iterated. The stitching process stops when the whole curve has been sewn, i.e. when the last
point reaches the needle.
The problem is formulated as follows: The curve section of the fabric is defined by N data
points acquired by the vision system in the form of pixel coordinates in clockwise order, and
can be described by a sequence of these points: G={p1, p2, . . ., pN}. Given the N data-points,
find the optimal polygon that approximates the curve satisfying two criteria: the criterion
for sewing time minimization and the criterion for acceptable accuracy. In this case, the time
minimization is translated to the requirement for minimal length of the approximating
polygon. Initially, a local search technique is developed to extract the dominant points and
then a global search technique is applied taking the dominant points as inputs (Zacharia et
al., 2008). The idea behind this is to filter out the initial curve points in an attempt to speed
up the convergence of the algorithm. The proposed approach combines two algorithms to
benefit from the advantages of each one.
The contribution of the work addressed in this Section is twofold. Firstly, a strategy for
handling fabrics with curved edges towards the sewing process using a visual servoing
controller is developed based on polygonal approximation. Secondly, the algorithms are
further extended based on polygonal approximation that exhibited superiority in many
other applications, and a new algorithm is proposed based on the dominant point detection
approach and the genetic algorithm.
defined by the dominant point detection approach. Consequently, the input for the micro-
GA is the output of the dominant point detection approach.
Let $G^{**}=\{P_1, P_2, \ldots, P_m\}$ be a subset of $G^{*}=\{P_1, P_2, \ldots, P_{N_s}\}$, $G^{**} \subseteq G^{*}$, where $m$ ($m < N_s$) is the number of vertices $P_i=(x_i, y_i)$ of the polygon that approximates the curve section captured in the current image. The function expressing the total length of a polygon with $m$ vertices can be written as

\[
L_{total} = \sum_{i=1}^{m-1} L_i \tag{2}
\]

\[
L_i = \left\| P_{i+1} - P_i \right\|_2 \tag{3}
\]

The constraint that should be satisfied is expressed as the maximum deviation between the curve section and the approximating polygon section. For two points $P_i = (x_i, y_i)$ and $P_{i+1} = (x_{i+1}, y_{i+1})$ defining an edge of the polygon and a data point $p_j = (x_j, y_j)$ between $P_i$ and $P_{i+1}$, the perpendicular distance of the arc point $p_j$ from the chord $P_iP_{i+1}$ is computed by

\[
\varepsilon_j = \frac{\left| (P_{i+1} - P_i) \times (P_i - p_j) \right|}{\left\| P_{i+1} - P_i \right\|} \tag{4}
\]

and the deviation of the whole section is

\[
\varepsilon = \max\left(\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_j, \ldots, \varepsilon_q\right) \tag{5}
\]

The fitness of a candidate polygon is then

\[
\mathrm{fitness} =
\begin{cases}
\dfrac{1}{L_{total}}, & \text{if } \varepsilon \le \varepsilon_0 \\[4pt]
0, & \text{otherwise}
\end{cases} \tag{6}
\]

where $L_{total}$ is given by Eq. (2).
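Equations (2)–(6) translate directly into code. The following sketch assumes the curve points are given as an N×2 array of pixel coordinates and that a chromosome is an ascending list of point indices with distinct entries; the function names are illustrative.

```python
import numpy as np

def max_deviation(curve, idx):
    """Eqs. (4)-(5): maximum perpendicular distance of the curve from the polygon."""
    eps = 0.0
    for i in range(len(idx) - 1):
        P0, P1 = curve[idx[i]], curve[idx[i + 1]]
        chord = P1 - P0
        norm = np.linalg.norm(chord)                  # assumes distinct vertices
        for p in curve[idx[i]:idx[i + 1] + 1]:
            v = P0 - p
            cross = chord[0] * v[1] - chord[1] * v[0]  # 2-D cross product magnitude
            eps = max(eps, abs(cross) / norm)
    return eps

def fitness(curve, idx, eps0):
    """Eq. (6): reciprocal of the polygon length if the deviation constraint holds."""
    L_total = sum(np.linalg.norm(curve[idx[i + 1]] - curve[idx[i]])
                  for i in range(len(idx) - 1))        # Eqs. (2)-(3)
    return 1.0 / L_total if max_deviation(curve, idx) <= eps0 else 0.0
```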
The representation mechanism: In this algorithm, variable-length chromosomes are defined to represent a polygon with m vertices approximating the curve. The maximum number of polygon vertices is defined by the user and determines the maximum possible length of the chromosome. The encoding mechanism maps each approximating polygon to a string composed of integers, where the genes of the chromosome represent the curve data-points used to construct the polygon, i.e. the vertices of the polygon. It should also be mentioned that the first (1) and the last (m) genes are fixed for each chromosome.
Crossover: Crossover is a recombination operator and follows reproduction. Individuals are randomly selected according to a predefined probability (the crossover rate). The one-point crossover is modified so that the produced offspring are feasible chromosomes. The cut-point lies between the first and the last gene of the parent chromosome with the minimum length. Next, the numbers in each string are reordered into ascending order, so that the polygonal section is created by joining the successive points.
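The modified one-point crossover can be sketched as follows; the chromosome is taken to be an ascending list of curve-point indices whose first and last genes are fixed, and the re-sorting step is the feasibility repair mentioned above. This is an illustrative reconstruction, not the authors' code.

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Order-preserving one-point crossover for variable-length chromosomes."""
    shortest = min(len(parent_a), len(parent_b))
    cut = random.randint(1, shortest - 2)   # strictly between the first and last gene
    child_a = parent_a[:cut] + parent_b[cut:]
    child_b = parent_b[:cut] + parent_a[cut:]

    def repair(c):
        # keep the fixed endpoints and re-sort the interior genes in ascending
        # order, so joining successive vertices still traces the curve
        return [c[0]] + sorted(set(c[1:-1])) + [c[-1]]

    return repair(child_a), repair(child_b)

print(one_point_crossover([0, 30, 80, 184], [0, 10, 60, 120, 184]))
```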
Population size: The population size depends on the nature and the complexity of the
current problem. In this work, the proposed algorithm was tested for various small
population sizes, so that both quick convergence and a near-optimum solution are achieved.
Finally, the selected population size is equal to 20.
The fact that fuzzy approaches have been successfully used for other applications in robotics (Marichal et al., 2001; Rusu et al., 2003; Hui et al., 2006) gave rise to the idea of using them for robot sewing of fabrics with curved edges. The main purpose is to overcome difficulties arising from the fact
that the fabric pieces have curved edges with arbitrary curvatures; thus, a learning
technique is used. The proposed ANFIS based on genetic-oriented clustering combines the
concepts of fuzzy logic and neural networks to form a hybrid intelligent system that
enhances the ability to automatically learn and adapt. The system learns from the
information obtained from the fabrics used for training and is able to respond to new
fabrics, which had not been included in the training process.
Fig. 4. Geometry of seam tracking around the needle, showing the seam point A, the distances d and D from the sewing line, the angle α, the angular velocity ω of the end-effector, and the sewing direction
The position error is defined as eD=D-D0, where the desired distance is D0=0 and the
orientation error is defined as eφ=φ-φ0, where the desired angle is φ0=0. To compensate for
the seam errors (eD and eφ), the robot is assigned to rotate the fabric around the needle with an angular velocity (ω) about the z-axis. The angular velocity (ω) of the robot's end-effector is
derived through a Sugeno-type fuzzy decision system, which takes as inputs the position
error eD and the orientation error eφ.
The end-effector guides the fabric towards the sewing direction and the camera monitors the
fabric to compensate for the seam error. The fuzzy controller, which is tuned through
ANFIS, inputs the position and orientation error (eD and eφ) of the new fabric piece and
outputs the predicted angular velocity (ω) of the robot’s end-effector. Next, the robot is
commanded to make a circular motion around the sewing needle by an angle θ and a
translational motion in the sewing direction. The angle θ is derived by the relationship
θ≈ω·Δt, where Δt is the sampling period.
The ANFIS maps inputs through input membership functions and their associated parameters, and then through output membership functions and associated parameters, to the outputs; this mapping can be used to interpret the input–output map. Considering a first-order Sugeno fuzzy model with two inputs x and y and one output f, a typical rule set with two fuzzy if–then rules can be expressed as

Rule 1: If x is A1 and y is B1, then f1 = p1 x + q1 y + r1
Rule 2: If x is A2 and y is B2, then f2 = p2 x + q2 y + r2

where A_i and B_i are fuzzy sets and p_i, q_i, r_i are the consequent parameters.
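The corresponding inference step can be sketched as follows: each rule's firing strength is the product of Gaussian membership degrees, and the output is the weighted average of the linear consequents. The parameter values below are placeholders, not the trained ANFIS parameters:

```python
import math

def gauss(x, c, sigma):
    """Gaussian membership function, as used for the ANFIS inputs."""
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno_omega(e_d, e_phi, rules):
    """First-order Sugeno inference: omega = sum(w_i * f_i) / sum(w_i)."""
    weights, outputs = [], []
    for (cD, sD), (cP, sP), (p, q, r) in rules:
        w = gauss(e_d, cD, sD) * gauss(e_phi, cP, sP)  # product t-norm
        weights.append(w)
        outputs.append(p * e_d + q * e_phi + r)        # f_i = p*eD + q*ePhi + r
    den = sum(weights)
    return sum(w * f for w, f in zip(weights, outputs)) / den if den > 0 else 0.0

# Two illustrative rules with placeholder parameters:
rules = [((0.0, 5.0), (0.0, 10.0), (0.01, 0.02, 0.0)),
         ((20.0, 8.0), (30.0, 12.0), (0.03, 0.05, 0.1))]
print(sugeno_omega(4.0, 6.0, rules))
```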
6. Experimental results
The experiments were carried out using a robotic manipulator with 6 rotational degrees of
freedom (RV4A) and controlled by a PC. The robot is programmed in Melfa-Basic language
in Cosirop environment, while the analysis of the visual information is performed with
Matlab 7.1. The vision system consists of a Pulnix analog video camera at 768×576 pixels
resolution RGB and an analog camera of the same resolution, which are fixed above the working table in a vertical position (Fig. 5). Vision control is transferred to the second camera when the position error becomes less than 10 pixels (≈ 12.3 mm) and the orientation error is less than 5°. Using the second camera, the accepted position and orientation errors are set equal to 10 pixels (≈ 1.389 mm) and 1°, respectively. A simple gripper has been designed so that the arm of the robot is kept away from the fabric, as shown in Fig. 5. The fabric is stuck on the gripper to avoid slipping; however, the placement of the gripper onto the fabric is beyond the scope of this work.
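The coarse-to-fine handoff between the two cameras reduces to a threshold test on the current errors. A sketch using the thresholds quoted above, assuming a one-way handoff:

```python
def use_fine_camera(err_pos_px, err_orient_deg, fine_active):
    """Decide whether the needle-area camera should take over (sketch).

    Handoff occurs once the wide-view errors drop below 10 px and 5 deg;
    afterwards the fine camera stays active (an assumed one-way handoff).
    """
    if fine_active:
        return True
    return err_pos_px < 10.0 and err_orient_deg < 5.0
```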
In this work, buckling during robot handling is tolerable on the condition that it does not appear in the segment to be sewn. However, it still remains a problem that should be avoided. Thus, the position where the gripper touches the fabric is chosen so as to reduce the possibility of deformations appearing, taking into account the properties of the fabric (Koustoumpardis & Aspragathos, 2003). It is worth noting that this work deals with buckling in the context of achieving successful seam tracking, and not with correction strategies against folding or wrinkling. The experimental tests also showed that deformations are likely to appear close to the gripper position on the fabric, and not at the edge.
Fig. 6. (a) Fabric A with a curved edge of small curvature (b) Fabric B with a curved edge of large curvature
In the experiments, a considerable number of fabric pieces of different materials, shape and
colour have been used. Both simulation and experimental tests are conducted to verify the
efficiency of the proposed approach. A fabric piece is indicatively presented to show the
results for both simulation and experimental tests. The shape of this fabric consists of two
straight-line segments and an arbitrary curve, its colour is red and it has medium bending
rigidity (Fig. 6 (a)).
Initially, simulation tests were carried out in order to verify the effectiveness of the proposed polygonal approximation based on a micro-Genetic Algorithm (micro-GA). A
number of points along a section of the curved edge are captured by the camera that focuses
on the area of the needle-point. The array that contains the curve data-points has
dimensions 185×2 and is the input to the micro-GA. The maximum length of the
chromosomes is set to 6 and the maximum acceptable deviation is arbitrarily set to 8 pixels
(≈1 mm).
Fig. 7 (a) shows the optimum polygon section resulting from the micro-GA, which approximates the given data-points of the curved edge. The curve section is approximated by a polygon section consisting of three sides, which deviate from the corresponding arc sections by 6.87, 5.55 and 7.07 pixels, respectively. Increasing the maximum acceptable deviation from 8 pixels to 12 pixels, the arc section is again approximated by a polygon section with three sides, as shown in Fig. 7 (b), where the maximum deviation errors between the curve and each polygon side are 4.48, 5.88 and 10.37 pixels, respectively. Decreasing the maximum acceptable deviation to 4 pixels, the arc is approximated by a polygon section with six sides, as shown in Fig. 7 (c), where the maximum deviation errors are 3.37, 2.52, 2.32, 0.88, 3.15 and 3.08 pixels. Fig. 7 (d) zooms in on the first two sides of the polygon shown in Fig. 7 (c).
Fig. 7. Polygonal approximation with (a) ε=8 pixels (b) ε=12 pixels (c) ε=4 pixels (d) detail of the approximation with ε=4 pixels
These simulation tests demonstrate that the polygon approximation of the curve serves as a
trade-off between rapidity and smoothness affected by the tolerance for imprecision. The
results show that the approximation leads to a satisfactory seam approximation, while
simultaneously the time for the entire process is minimized.
In practice, the sewing process is repeated many times and the deviation errors are collected and processed. The maximum acceptable deviation error for all tested cases is set to 8 pixels (≈1 mm). Fig. 8 (a) shows the deviation errors for Fabric A, a fabric piece with small curvature (Fig. 6 (a)), and Fig. 8 (b) shows the deviation errors for Fabric B, a fabric piece with large curvature (Fig. 6 (b)).
Fig. 8. Deviation between real and desired path using the micro-GA for (a) fabric A (b) fabric B
Fabric A: The proposed micro-GA is experimentally tested using the dominant points as
input. The maximum deviation (in pixels) between the needle-point and polygon
approximation is computed from the image captured. The sewing process for the curved
edge is accomplished after 15 steps and the results are shown in Fig. 8 (a). The maximum
deviation is 6.63 pixels (≈0.83 mm) and is lower than the maximum acceptable limit of 8
pixels. The average value for the deviation is 2.75 pixels (≈0.34 mm), which is really
satisfactory. The steps 6-10 correspond to the part of the curve with high curvature.
Fabric B: Using the dominant points as input, the micro-GA terminates with 9 steps and the
average value for the deviation is 4.14 pixels (≈0.52 mm). The maximum deviation for each
method is depicted in Fig. 8 (b). The deviation error for this fabric piece is greater than that for the previous one, which is reasonable, since the curvature is larger.
More fabric pieces were used to test the proposed approach, but detailed results are not presented owing to space limits. Table 2 indicatively shows the total steps, as well as the average and maximum deviations, for some of the tested fabrics. The task is accomplished in fewer steps (corresponding to a lower total time) when the micro-Genetic-based approach that uses the dominant points as input is applied, while satisfying the predefined bound of 1 mm. In the apparel industry, the maximum allowable deviation, which is evaluated empirically, lies in the range of 0.5–5 mm. For all tested cases, the proposed algorithm proved to be quite robust and efficient, since the achieved deviation is less than the maximum predefined deviation of 1 mm set by the user. Bearing in mind that the robot accuracy is within ± 0.03 mm, the deviations resulting from the vision and robot position errors are very satisfactory.
Fig. 9. (a) Robot–camera–sewing machine system (b) close-up view of the camera
The initial membership functions for the inputs are shown in Fig. 14 (a). The rule base obtained through the genetic-oriented clustering approach consists of 7 rules, shown in Table 3.
1. If (eD is Small) and (eφ is Extremely Small) then (ω is Extremely Small) (1)
2. If (eD is Very Small) and (eφ is Small) then (ω is Very Small) (1)
3. If (eD is Extremely Small) and (eφ is Extremely Large) then (ω is Small) (1)
4. If (eD is Large) and (eφ is Very Small) then (ω is Medium) (1)
5. If (eD is Medium) and (eφ is Large) then (ω is Large) (1)
6. If (eD is Very Large) and (eφ is Medium) then (ω is Very Large) (1)
7. If (eD is Extremely Large) and (eφ is Very Large) then (ω is Extremely Large) (1)
Fig. 13. Cluster centers resulting from the genetic-oriented clustering method
Fig. 14. The membership functions for the inputs using (a) genetic-based clustering (b) ANFIS
The next step is the training process that aims at tuning the fuzzy inference system. Fig. 14
(b) shows the final Gaussian membership functions derived after training the system. In
contrast to the first input, there is a considerable change in the final membership functions
concerning the second input, since the supports of all fuzzy sets are broadened. The root
mean squared errors of the output over 100 training epochs, which are obtained by using
300 training data sets and 50 checking data sets, are plotted in Fig. 15. Both the training and the checking errors gradually decrease with the number of epochs.
The data points fall very close to the predicted points, indicating good correlation. The average error of the prediction of the angular velocity is around 2.57%, which means that the accuracy is as high as 97.43%.
An interesting and important conclusion drawn from the results is that the proposed approach is capable of accurately estimating data points outside the training space by exploiting the advantages of fuzzy logic, neural networks and genetic algorithms. This conclusion reflects the model's ability to predict the output based on the input data used for training. In practice, this implies that the method is effective for robot handling of fabrics with curved edges that have not been used in the training process.
The strategy for robot-sewing fabrics with curved edges described above has the advantage
of performing well regardless of the fabric’s deformations. This advantage is of major
importance, as fabrics are limp materials that have a tendency to distort and change their
shape when handled on a work table. The fact that this approach is based on the
information taken from the image that captures the fabric in the neighbourhood of the
needle amplifies the system’s robustness against deformations. Therefore, deformations that
may appear on the fabric during robot handling do not affect the effectiveness of the
stitching process.
7. Conclusion
The main focus of this work lies on the design of an innovative visual servoing manipulator
controller based on fuzzy logic that aims to enable robots to handle flexible fabric sheets
lying on a work table. The use of visual servoing and fuzzy logic in robot motion control
increases the intelligence of robots and the flexibility of the system. The designed visual
servoing control system can deal with a variety of fabrics that are likely to deform, and can
cope with possible deformations due to buckling (wrinkling, folding) during handling
without degrading its performance, on condition that the seam segment to be sewed is
undistorted. The desired accuracy in approaching the needle is achieved even in cases
where deformations appear.
This work focuses on the difficult case, where fabrics consist of edges with arbitrary
curvatures. The need for approximating the curved edges through straight lines arises from
the fact that current industrial robots can only be programmed to make straight or circular
motions. From the standpoint of apparel industry manufacturing, it is important to ensure
that the fabric is sewn in the minimum time, while satisfactory accuracy and smoothness are
simultaneously achieved.
In our approach, the curve is approximated through small straight segments applying a
micro-GA approach that uses the dominant point detection method as input. The proposed
approach aims at the minimization of the execution time satisfying a predefined maximum
tolerance. Based on the results, some consideration is made concerning the trade-offs
between running times and the quality of the final approximation.
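As a rough illustration of this idea (not the chapter's exact micro-GA with dominant-point input), the sketch below uses a small-population genetic search to select polyline vertices from a densely sampled curve, minimizing the number of segments, a proxy for execution time, under a user-defined maximum deviation tolerance. All parameters (population size, mutation rate, test curve) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def seg_dist(p, a, b):
    """Distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def max_deviation(pts, mask):
    """Max distance of curve points from the polyline through masked vertices."""
    idx = np.flatnonzero(mask)
    dev = 0.0
    for i, j in zip(idx[:-1], idx[1:]):
        for k in range(i + 1, j):
            dev = max(dev, seg_dist(pts[k], pts[i], pts[j]))
    return dev

def fitness(pts, mask, tol):
    """Segment count (time proxy) plus a hard penalty above the tolerance."""
    return (mask.sum() - 1) + (1e3 if max_deviation(pts, mask) > tol else 0.0)

def micro_ga(pts, tol, pop=5, gens=200):
    n = len(pts)
    def new():
        m = rng.random(n) < 0.3; m[0] = m[-1] = True; return m
    P = [new() for _ in range(pop)]
    best = min(P, key=lambda m: fitness(pts, m, tol))
    for _ in range(gens):
        scored = sorted(P, key=lambda m: fitness(pts, m, tol))
        if fitness(pts, scored[0], tol) < fitness(pts, best, tol):
            best = scored[0].copy()
        kids = []
        for _ in range(pop - 1):
            cross = rng.random(n) < 0.5            # uniform crossover of elites
            child = np.where(cross, scored[0], scored[1])
            child = child ^ (rng.random(n) < 0.05)  # bit-flip mutation
            child[0] = child[-1] = True             # keep curve endpoints
            kids.append(child)
        P = [best.copy()] + kids                    # elitist micro-population
    return best

# Densely sampled test curve (quarter circle, mm units).
t = np.linspace(0, np.pi / 2, 80)
pts = np.c_[100 * np.cos(t), 100 * np.sin(t)]
mask = micro_ga(pts, tol=1.0)
print('segments:', mask.sum() - 1, 'max dev:', round(max_deviation(pts, mask), 3))
```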
To alleviate the computational burden of geometrical computations, an innovative method
for robot handling fabrics with curved edges towards sewing has been proposed, which is
based on a novel genetic-oriented clustering method and an adaptive neuro-fuzzy inference
system. This work presents the design and tuning of an ANFIS network with the minimum
number of fuzzy rules for modelling the complex process of robot sewing of fabrics. The
proposed adaptive neuro-fuzzy inference system benefits from the advantages of fuzzy
logic, neural networks and genetic algorithms. This feature makes this approach a powerful
tool to deal with uncertainty embedded in the curved edges of real cloth parts and to cope
with new fabric pieces that have not been used for the training process.
The experimental results show that the proposed control scheme is effective and efficient in
guiding the fabric towards the sewing needle, sewing it and rotating it around the needle.
After extensive experimentation, it has been proved to be rather simple, flexible and robust
due to its capability to respond to any position and orientation error for a variety of fabrics
that are likely to present deformations. The experimental data were obtained using the
robot-camera-sewing machine system and real cloth parts. The proposed method gave good
results when applied to fabrics with curved edges of unknown curvatures, which manifests
the validity of the proposed approach. This method is applicable to any piece of fabric with
edges of arbitrary curvature, since it has proved efficient in estimating the appropriate
angular velocity for fabrics that were not included in the training process.
Although no direct comparison with human stitching is possible, as the equipment
deployed differs, the achieved accuracy is very promising for future use in industrial
applications.
8. Acknowledgment
The author would like to acknowledge Robotics Group of Mechanical Engineering &
Aeronautics Dep., University of Patras (http://robotics.mech.upatras.gr/www/) for help
and cooperation.
9. References
Amin-Nejad, S.; Smith, J.-S. & Lucas, J. (2003). A visual servoing system for edge trimming
of fabric embroideries by laser, Mechatronics, Vol.13, No.6, pp.533–551.
Delgado, M.R.; Von Zuben, F. & Gomide, F. (2001). Hierarchical genetic fuzzy systems,
Information Sciences, Vol.136, No.1, pp.29–52.
Gershon, D. & Porat, I. (1986). Robotic sewing using multi-sensory feedback, Proceedings of
the 16th International Symposium of Industrial Robots, Brussels, pp. 823-34.
Gershon, D. & Porat, I. (1988). Visual servo control of a robotic sewing system, Proceedings of
the IEEE International Conference on Robotics and automation, pp.1830–1835,
Philadelphia, PA, USA.
Gershon, D. (1990). Parallel process decomposition of a dynamic manipulation task: robotic
sewing, IEEE Transactions on Robotics and Automation, Vol.6 No.3, pp. 357-366.
Gershon, D. (1993), Strategies for robotic handling of flexible sheet material, Mechatronics,
Vol.3 No.5, pp. 611-23.
Hui, N.B.; Mahendar, V. & Pratihar, D.K. (2006). Time-optimal, collision-free navigation of a
carlike mobile robot using neuro-fuzzy approaches, Fuzzy Sets and Systems, Vol.157,
No.16, pp.2171–2204.
Jang, J.-S.R. (1993). ANFIS: Adaptive network-based fuzzy inference system, IEEE
Transactions on Systems, Man and Cybernetics, Vol.23, No.3, pp.665–685.
Jang, J.-S.R.; Sun, C.-T. & Mizutani, E. (1997). Neuro-Fuzzy and Soft Computing: A
Computation Approach to Learning and Machine Intelligence, Prentice Hall, Upper
Saddle River.
Yin, P.-Y. (2004). A discrete particle swarm algorithm for optimal polygonal approximation,
Journal of Visual Communication and Image Representation, Vol.15, No.2, pp.241–260.
Zacharia P.Th. (2010). An adaptive neuro-fuzzy inference system for robot handling fabrics
with curved edges towards sewing, Journal of Intelligent and Robotic Systems, Vol.58,
No.3, pp.193-209.
Zacharia, P.; Mariolis, I.; Aspragathos, N. & Dermatas, E. (2005). Visual servoing of a robotic
manipulator based on fuzzy logic control for handling fabric lying on a table,
I*PROMS Virtual Conference on Innovative Production Machines and Systems, pp. 411–
416, July 4–15, 2005.
Zacharia, P.; Mariolis, I., Aspragathos, N. & Dermatas, E. (2006). Visual servoing controller
for robot handling fabrics of curved edges, I*PROMS Virtual Conference on
Innovative Production Machines and Systems, pp. 301–306, July 3–14, 2006.
Zacharia, P.Th. & Aspragathos, N.A. (2008), Genetically oriented clustering using variable
length chromosomes, I*PROMS Virtual International Conference on Intelligent
Production Machines and Systems, pp.204-209, July 1-14, 2008.
Zacharia, P.Th.; Mariolis, I.G.; Aspragathos, N.A. & Dermatas E.S. (2008). Robot handling
fabrics with curved edges based on visual servoing and polygonal approximation,
Special Issue of Proceedings of the Institution of Mechanical Engineers, Part B, Journal of
Engineering Manufacture, Vol.222, No.10, pp. 1263-1274.
Zacharia, P.Th.; Mariolis, I.G.; Aspragathos, N.A. & Dermatas E.S. (2009). A robotic system
based on fuzzy visual servoing for handling flexible sheets lying on a table,
Industrial Robot, Vol. 36, No.5, pp.489-496.
Zhang, H. & Guo, J. (2001). Optimal polygonal approximation of digital planar curves using
meta heuristics, Pattern Recognition, Vol.34, No.7, pp.1429–1436.
Zoumponos, G. & Aspragathos, N. (2008). Fuzzy logic path planning for the robotic
placement of fabrics on a work table, Robotics & Computer-integrated Manufacturing,
Vol.24, No.2, pp.174-186.
Robotic Systems for Radiation Therapy
1. Introduction
Medical robotics is an exciting and relatively new field in which robotics plays an important
role in medical engineering. Medical robots were first used in the 1980s, in the field of urology,
where robotic arms were developed and used for prostate resection. Such robots can be highly
specialized and assist in diagnosing and treating patients. While there is still much more
work to be done, using robots can enhance medical treatments in terms of both the quality
and accessibility of care. Using robots can help reduce human error and bring highly
specialized information to remote areas without requiring physicians' direct intervention.
In radiation therapy, high-energy radiation from x-rays, gamma rays, neutrons, and other
sources has been used to kill cancer cells and shrink tumors. Radiation may come from a
machine outside the body (external-beam radiation therapy), or it may come from
radioactive materials placed in the body near cancer cells (internal radiation therapy,
implant radiation, or brachytherapy).
The use of robotic systems to improve cancer treatment outcomes is a new field that
overlaps with electronics, computer science, artificial intelligence, mechatronics,
nanotechnology, and bioengineering. For this purpose, robots can be used in medical
facilities to perform tasks such as delivering radiation sources, real-time tumor
tracking during radiation delivery, or external beam delivery.
The only product on the market for robotic radiotherapy is the CyberKnife Robotic Radiosurgery
System. The robotic system has provision for so-called real-time tracking during beam
delivery. The device itself is a 6 MV linear accelerator mounted on a six degree-of-freedom
(DOF) Keller und Knappich Augsburg (KUKA) industrial robot. This system has real-time
image-guided control; consequently, there is a significant time delay (about 200 ms)
between the acquisition of tumor coordinates and repositioning of the linear accelerator. The
CyberKnife-KUKA robot with linear accelerator end-effector is suited for radiation therapy
to any body site. Its field size is restricted to the limited geometry of 12 discrete circular
fields ranging from 5 mm to 60 mm in diameter. The workspace is therefore confined, and the
radiation therapy community has not yet fully embraced the idea of using an industrial
articulated robotic manipulator.
Details of the CyberKnife robotic system are not included in this chapter. Instead, the basic
idea is to present novel research results in the field of robotic radiation therapy and its
applications.
2. Brachytherapy robotics
Brachytherapy is a method of treatment in which sealed radioactive sources are used to
deliver radiation at a short distance by interstitial, intracavitary, or surface application. With
this mode of therapy, a high radiation dose can be delivered locally to the tumor with rapid
dose falloff in the surrounding normal tissue.
Recently, several research groups have reported the investigation and development of the
robotic systems for the prostate seeds implants, (Stoianovici et al., 2003), (Wei et al., 2004),
(Fichtinger et al., 2006), (Yu et al., 2007), (Meltsner, 2007), (Meltsner et al., 2007), (Stoianovici
et al., 2007), (Podder et al., 2007), (Salcudean et al., 2008), (Lin et al., 2008), (Moerland et al.,
2008). The potential advantages of the robotic seed implants include improving the accuracy
of the needle placement and seed delivery, as well as improving the consistency of the seed
implant. In medical applications of robots, especially prostate brachytherapy, precise
end-effector positioning, steady-state behavior and positioning accuracy are required. In such
applications, even small positioning errors at the manipulator end-effector can have dangerous
and costly consequences (Mavrodis et al., 1997). To achieve the enhancements of robotic seed
delivery, the robots need to be calibrated. Properly calibrated robotic systems have higher
absolute positioning accuracy, and the deposited seed positions correspond better to the
ones calculated in the planning systems. Brachytherapy robots are usually ultrasound
(US) or magnetic resonance imaging (MRI) guided. Generally, to improve needle placement
and seed deposition in the brachytherapy procedure, several methods have been presented in
the literature, such as parameter optimization, different needle rotation techniques, robotic
insertion, force modeling, and needle steering techniques. Robot-assisted therapeutic
delivery systems are attractive for several reasons; the potential advantages are increased
accuracy, reduced human variability, reduction of the clinician's fatigue and reduction of
operation time.
There are two methods for robotic needle insertion and seed deposition: the single-channel
approach and the multi-channel approach. In the single-channel approach, one needle is
inserted at a time, and typically 2-5 seeds are deposited along the needle track in the prostate
according to the dosimetry plan. A multi-channel system, on the other hand, is capable of
placing several needles at a time, so seed delivery can be faster. Since the prostate is not rigidly
mounted, it can move, rotate and deform unpredictably. When several needles are
inserted concurrently, the prostate is pushed back symmetrically to a more stable
position, and the deformation of the tissue can be better estimated for precise delivery.
In the following sections, two brachytherapy robotic systems are presented: a single-channel
and a multi-channel system, both designed, developed and manufactured in our research
laboratories. The surgery module consists of two kinematic chains, the needling mechanism
and an ultrasound probe driver, with five and two DOF, respectively.
Fig. 1. a) EUCLIDIAN – image-guided brachytherapy robotic system; b) Kinematic DH
schema with reference frames
Various functions of the surgical module, such as needle insertion, needle rotation, seed
deposition, x-y movement of the gantry, and system abort, can be commanded by the surgeon.
The three DOF cart (two translations and a rotation about the vertical axis) provides gross
movement of the robotic system to align with the patient, while the six DOF platform
enables finer movement for desired positioning and orientation of the robot in three-
dimensional (3D) space.
The computer, system electronics, and cable junctions are housed in the electronics housing,
Fig. 1. The EUCLIDIAN's surgery module is fully autonomous; all the motors are fitted
with high-resolution optical encoders and precision gearboxes. The robot is controlled by
an industrial computer, which has proven robust and reliable for working in harsh
industrial environments and military applications, and which has a special metallic casing
to minimize electromagnetic interference.
Two Galil control cards (Model DMC-1842; Galil Motion Control, Inc., Rocklin, CA) are
used: one card controls the TRUS probe driver and gantry motions, and the other controls
the needle driver and the seed pusher. A robust and stable proportional, integral
and derivative (PID) controller has been developed for controlling the motorized surgical
module. The PID gains are tuned so that the system's stability is maintained when the
needle changes its state from pure position control in the air to combined position and
force control in the patient's body. The needle can be driven at a maximum velocity of
approximately 100 mm/s; however, a lower velocity setting of 60 mm/s is used as the
default. A frame grabber (FlashBus, Integrated Technologies, Indianapolis, IN) is used for
TRUS image capturing. Three force-torque sensors (Nano17, ATI Industrial Automation,
Apex, NC, and M13, Honeywell, Morristown, NJ) are used for needle insertion force
monitoring and robot control feedback. Each of the motors is fitted with an optical encoder
(MicroMo Electronics, Inc., Faulhaber Group, Clearwater, FL), which provides final motion
resolutions, considering gear ratios and screw leads, of 0.0007 mm for gantry x-y translations,
0.004 mm for stylet and cannula motions, and 0.005 mm and 0.06° for TRUS probe translation
and rotation, respectively.
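A minimal sketch of such a gain-scheduled PID loop is shown below: one gain set while the needle travels in air (pure position control) and a softer set once tissue contact is detected from the force reading, with the velocity command saturated at the 60 mm/s default limit. The gains, contact threshold and toy plant are assumptions for illustration, not EUCLIDIAN's tuned values.

```python
import numpy as np

# Illustrative gain sets (assumed, not the EUCLIDIAN values): stiff in air,
# softer once the force sensor reports tissue contact.
GAINS = {'air':    dict(kp=8.0, ki=2.0, kd=0.2),
         'tissue': dict(kp=3.0, ki=0.8, kd=0.4)}
F_CONTACT = 0.5       # N, contact-detection threshold (assumed)
V_MAX = 60.0          # mm/s, default velocity limit (100 mm/s hardware max)

def pid_step(err, state, gains, dt):
    """One PID update returning a saturated velocity command."""
    state['i'] += err * dt
    d = (err - state['e']) / dt
    state['e'] = err
    u = gains['kp'] * err + gains['ki'] * state['i'] + gains['kd'] * d
    return np.clip(u, -V_MAX, V_MAX)

# Toy 1-DOF needle axis: velocity-commanded stage plus a spring-like
# tissue reaction once the tip passes an assumed skin depth.
x, skin, k_tissue = 0.0, 30.0, 0.05
state = {'i': 0.0, 'e': 0.0}
dt, target = 0.001, 80.0
for n in range(int(3.0 / dt)):
    force = k_tissue * max(0.0, x - skin)        # simulated sensor reading
    mode = 'tissue' if force > F_CONTACT else 'air'
    v = pid_step(target - x, state, GAINS[mode], dt)
    x += v * dt
print(f'final position: {x:.2f} mm, mode: {mode}')
```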
The effect of the disturbance appears at the output of the system on the other side; when the
reactive force is minimized, tissue displacement decreases. Force prediction control was also
used to minimize the effect of the system time delay. Because this procedure requires an
accurate model of the robot system, a more precise model was needed, which is the reason
for adopting a robotic system model in the form of a singular system of differential equations.
Fig. 4 represents the contact surface where the reactive force appears during surgery. A
mathematical equation of the contact surface is calculated using the EUCLIDIAN robotic
system software.
Fig. 4. a) 3D representation of the contact surface (prostate gland) during insertion. b) 5DOF
surgical module. Absolute and local coordinate systems together with generalized
coordinates qi
Assume that the contact surface is described by a scalar function Φ: ℝ^m → ℝ^1, Fig. 4a:

Φ(p) = 0 (1)

The external contact force is

f = D^T(p)λ (2)

where λ is a scalar multiplier for the constraint function and D(p) is the gradient of the
constraint function, described as in (3):

D(p) = ∂Φ(p)/∂p (3)
In the previous equation, p ∈ ℝ^3 is the position vector from the fixed reference frame to the
constrained surface. An additional constraint at the contact point is p = Φ(q). The matrix
function J(q) is defined as the Jacobian matrix function

J(q) = ∂Φ(q)/∂q (4)

The constrained dynamics of the robot are

M(q)q̈ + G(q, q̇) = τ + J^T(q)f (5)
where M(q) denotes the inertia matrix function and G is the vector function which describes
the Coriolis, centrifugal and gravitational effects; q is the vector of generalized coordinates and
τ is the torque vector. The influence of the external contact forces is described by the term f.
Combining equations (1)-(5), the mathematical model of the system is

$$
\begin{bmatrix} I & 0 & 0\\ 0 & M(q) & 0\\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} \dot q\\ \ddot q\\ \dot\lambda \end{bmatrix}
=
\begin{bmatrix} 0 & I & 0\\ 0 & 0 & J^{T}D^{T}\\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} q\\ \dot q\\ \lambda \end{bmatrix}
+
\begin{bmatrix} 0\\ -G(q,\dot q)\\ \Phi(p) \end{bmatrix}
+
\begin{bmatrix} 0\\ I\\ 0 \end{bmatrix}\tau
\qquad (7)
$$
System (7) is linearized in the neighborhood of the point where the needle is inserted into the
patient. The linearized mathematical model of the system is

Eẋ = Ax + Bu + d (8)

with singular matrix E ∈ ℝ^{n×n} and vector d, which represents the unmeasured disturbances
such as needle deflection, tissue deformation and displacement; A ∈ ℝ^{n×n}, B ∈ ℝ^{n×r}. x and u
represent the system state-space vector and control vector, respectively, x ∈ ℝ^n, u ∈ ℝ^r.
The system defined by (8) is known as a singular system, descriptor system, semi-state
system, system of differential-algebraic equations or generalized state space system.
They arise naturally in many physical applications, such as electrical networks, aircraft
and robotic systems, neutral delay and large-scale systems, economics, optimization
problems and constrained mechanics. The matrices of the linearized system (8) are defined as
follows:

$$
A=\begin{bmatrix} 0 & I & 0\\ \frac{\partial}{\partial q}\bigl(G-J^{T}D^{T}\lambda\bigr)\big|_{0} & 0 & J^{T}D^{T}\big|_{0}\\ DJ\big|_{0} & 0 & 0 \end{bmatrix},\quad
E=\begin{bmatrix} I & 0 & 0\\ 0 & M(q_{0}) & 0\\ 0 & 0 & 0 \end{bmatrix},\quad
B=\begin{bmatrix} 0\\ I\\ 0 \end{bmatrix},\quad u=\delta\tau,\quad
d=\begin{bmatrix} 0\\ \Delta\tau\\ 0 \end{bmatrix}
\qquad (9)
$$
Now it is possible to find subspaces S and F, as in (Buzurovic et al., 2010.a), having in mind
that the state space X of system (8) can be represented as the direct sum of S and F,
X = S ⊕ F. As a result of the matrix transformation M applied to system (8), the slow and
fast subsystems can be described by the mathematical formulation

$$
\dot x_{s}=L_{s}x_{s}+B_{s}u+d_{s},\qquad
L_{f}\dot x_{f}=x_{f}+B_{f}u+d_{f}
\qquad (10)
$$
with Ls = MA|S, Lf =MA|F, Bs=PMB, Bf=QMB, ds=PMd and df=QMd for some linear
transformations Q and P represented by matrices Q and P, respectively. For that case, the
initial conditions are chosen to satisfy x0s = Px0 and x0f = Qx0. The solution of system (10) is
x = xs + xf, where

$$
x_{s}=e^{L_{s}t}x_{0s}+\int_{0}^{t}e^{L_{s}(t-\tau)}B_{s}u(\tau)\,d\tau+\int_{0}^{t}e^{L_{s}(t-\tau)}d_{s}(\tau)\,d\tau,\qquad
x_{f}=-\sum_{i=0}^{\nu-1}L_{f}^{\,i}B_{f}u^{(i)}
\qquad (11)
$$
Now it is possible to apply the MPC control, Fig. 5, and to investigate the dynamical behavior
of the system. The disturbance d is given to the MPC, and its effect is predicted and
compensated before it appears at the system output.
Fig. 5. Block diagram of the MPC control scheme: the dynamic model predicts the effect of the disturbance on the robot states q, q̇, λ and compensates it in the commanded torque τ
With this control scheme, the reactive forces during insertion are minimized. These results
give reason to believe that tissue deformation can be decreased more effectively than with
the traditional PID control.
Fig. 6. Controlled force with MPC (red) and insertion force with PID controller (blue)
The insertion force did not change significantly in the range of insertion speeds higher than
40 mm/s (60 mm/s or 80 mm/s). The conclusion based on this fact is that the insertion speed
range can be divided into two regions with different insertion parameters.
The average maximum insertion force was less than 45 N for the moderate speed range
(insertion speed 40 mm/s and stylet speed 30 mm/s, 16 needles in the stage) and 52 N for the
high speed range (insertion speed greater than 40 mm/s and stylet speed 50 mm/s, 16 needles
in the stage). Insertion time per seed was 5-8 seconds. Plan delivery time was 8 min for the
high speed range and 12 min for the moderate speed range.
In summary, it was observed that the insertion force increased as more needles were inserted
together; however, the maximum insertion force did not change between 8 and 16 needles
inserted at the same time. Furthermore, the force did not change significantly for insertion
speeds higher than 40 mm/s. Consequently, the insertion speed range can be divided into
two regions with different insertion parameters.
Preliminary results reveal that MIRAB is efficacious for delivering seeds accurately. MIRAB
can potentially be used for image-guided robotic brachytherapy.
Fig. 9. a) Insertion force for different numbers of needles installed on the template stage, for
an insertion speed of 40 mm/s. b) Insertion force for different numbers of needles installed on
the template stage, for an insertion speed of 80 mm/s. c) Insertion force for different insertion
speeds; 8 needles inserted at the same time. d) Insertion force for different insertion speeds;
16 needles inserted at the same time
$$
\frac{d}{dt}\frac{\partial L}{\partial \dot q}-\frac{\partial L}{\partial q}=\tau
\qquad (13)
$$

where q ∈ ℝ^n is the vector of generalized coordinates, and τ is the generalized force (or
torque) applied to the system through the actuators.
The dynamical behavior of a 6DOF robotic couch has been investigated; the proposed
approach is explained below.
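Equation (13) can be applied mechanically with a computer algebra system. The sketch below derives the equation of motion of a single vertically actuated mass (a 1-DOF stand-in for one couch coordinate) from an assumed Lagrangian with kinetic and gravitational terms only.

```python
import sympy as sp

t = sp.symbols('t')
m, g = sp.symbols('m g', positive=True)
q = sp.Function('q')(t)                  # one generalized coordinate
tau = sp.Symbol('tau')

# Assumed Lagrangian of a vertically actuated mass: L = T - P.
L = sp.Rational(1, 2) * m * sp.diff(q, t) ** 2 - m * g * q

# Euler-Lagrange, eq. (13): d/dt(dL/dq') - dL/dq = tau
eom = sp.Eq(sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q), tau)
print(sp.simplify(eom))                  # -> m*q'' + m*g = tau
```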
Fig. 10. HexaPOD robotic couch, a) External isometric view with moving coordinate system,
b) External schematic view with fixed coordinate system, c) Schematic of the leg
In this study, the following notation is used for modeling the HexaPOD, i.e. the Stewart
platform. Referring to Fig. 10, an inertial frame (X,Y,Z) is assigned at the center O of the lower
platform (the BASE), and a moving coordinate system (x,y,z) is assigned at the center of the
top platform (the TOP). The BASE frame is fixed, while the TOP frame moves with the
platform.
To specify the configuration of the 6DOF Stewart platform, six independent position-
orientation variables are needed. Denote the position of the origin of the TOP frame with
respect to the BASE frame as [pX pY pZ]^T. The orientation is not defined by the standard
Euler angles, but by rotating the TOP frame first about the fixed X-axis by α, then about the
fixed Y-axis by β and finally about the Z-axis by γ. We denote by RX(α), RY(β) and RZ(γ) the
three matrices that represent these basic rotations:
$$
R_{X}(\alpha)=\begin{bmatrix}1&0&0\\0&C\alpha&-S\alpha\\0&S\alpha&C\alpha\end{bmatrix},\quad
R_{Y}(\beta)=\begin{bmatrix}C\beta&0&S\beta\\0&1&0\\-S\beta&0&C\beta\end{bmatrix},\quad
R_{Z}(\gamma)=\begin{bmatrix}C\gamma&-S\gamma&0\\S\gamma&C\gamma&0\\0&0&1\end{bmatrix}
\qquad (14)
$$
where C(.) = cos(.) and S(.) = sin(.). This definition of the orientation not only provides a clear
physical meaning but also avoids violating the one-to-one relationship between the system
configuration and the values of X_{p-o}, which may cause the Jacobian matrix to lose its rank
even if the system is not in a singular configuration. Thus, the position and orientation of the
upper platform are specified by the Cartesian coordinates X_{p-o} = [pX, pY, pZ, α, β, γ]^T.
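With this parameterization, each leg length follows directly from the pose: it is the distance between the leg's base attachment and the rotated-and-translated top attachment. The sketch below composes R = R_Z(γ)R_Y(β)R_X(α) as in (14) and evaluates the six leg lengths for illustrative attachment geometry (the radii and angles are assumptions, not the HexaPOD dimensions).

```python
import numpy as np

def R_xyz(alpha, beta, gamma):
    """Composed rotation R_Z(gamma) @ R_Y(beta) @ R_X(alpha), eq. (14)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def leg_lengths(pose, base_pts, top_pts):
    """Inverse kinematics: L_i = | p + R t_i - b_i | for each leg i."""
    px, py, pz, alpha, beta, gamma = pose
    p = np.array([px, py, pz])
    R = R_xyz(alpha, beta, gamma)
    return np.linalg.norm(p + top_pts @ R.T - base_pts, axis=1)

# Illustrative attachment points (m): two hexagons, not HexaPOD geometry.
ang_b = np.deg2rad([10, 110, 130, 230, 250, 350])
ang_t = np.deg2rad([50, 70, 170, 190, 290, 310])
base_pts = 0.5 * np.c_[np.cos(ang_b), np.sin(ang_b), np.zeros(6)]
top_pts  = 0.3 * np.c_[np.cos(ang_t), np.sin(ang_t), np.zeros(6)]

pose = (0.01, -0.02, 0.40, np.deg2rad(1), np.deg2rad(-2), np.deg2rad(3))
print(leg_lengths(pose, base_pts, top_pts))
```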
Now, the dynamic equations in Cartesian space (or task space) are expressed as

M(X)Ẍ + ξ(X, Ẋ) = J^T F_td (15)

where F_td = (f1, f2, f3, f4, f5, f6)^T is the vector of forces applied by the actuators of the legs
and J is the full system Jacobian matrix. For convenience, the HexaPOD robotic couch is
divided into two subsystems: the upper platform (TOP) and the six legs, Fig. 10. The kinetic
and potential energies of these subsystems are computed, and then the global dynamic
equations are derived.
The kinetic energy of the upper platform is

$$
K_{up}=K_{up(trans)}+K_{up(rot)}=\frac{1}{2}m_{u}\bigl(\dot p_{X}^{2}+\dot p_{Y}^{2}+\dot p_{Z}^{2}\bigr)+\frac{1}{2}\Omega_{up(mf)}^{T}\,I\,\Omega_{up(mf)}
\qquad (16)
$$

and its potential energy is

P_up = m_u g p_Z (17)
The kinetic energy of leg i is

$$
K_{Li}=\frac{1}{2}(m_{1}+m_{2})\Bigl[h_{i}\,V_{Tj}^{T}V_{Tj}-k_{i}\,V_{Tj}^{T}(u_{i})(u_{i})^{T}V_{Tj}\Bigr]
\qquad (18)
$$

where L_i is the length of the leg in the current configuration, (m1 + m2) is the mass of the leg,
V_Tj is the velocity of the leg, and

$$
h_{i}=\frac{\hat l^{2}}{L_{i}^{2}}+\frac{m_{2}}{m_{1}+m_{2}},\qquad
\hat l^{2}=\frac{\delta^{2}m_{1}-m_{2}l_{2}^{2}}{m_{1}+m_{2}},\qquad
k_{i}=h_{i}-\frac{m_{2}}{m_{1}+m_{2}}
\qquad (19)
$$

The potential energy of the legs is

$$
P_{Legs}=\sum_{i=1}^{3}(m_{1}+m_{2})\,g\left[\hat l^{2}\left(\frac{1}{L_{2i-1}}+\frac{1}{L_{2i}}\right)+\frac{m_{2}}{m_{1}+m_{2}}\bigl(p_{Z}+Z_{Ti}\bigr)\right]
\qquad (20)
$$
The dynamic equations of motion for the HexaPOD robotic couch have been discussed above.
These equations are essential in developing the proposed dynamics-based coordinated
control system. In the following part it can be seen that the tracking speeds are slow;
however, it was necessary to use the dynamic-model approach to fulfill the strong
requirements regarding tracking accuracy. Since tracking is paired with real-time radiation
delivery, it was of significant importance to have an accurate model and to decrease the
possibility of inaccurate radiation delivery during tracking.
$$
M(X)\ddot X+\xi(X,\dot X)=\Im
\qquad (22)
$$

$$
\hat M\bigl[\ddot X_{d}+K_{V}(\dot X_{d}-\dot X)+K_{P}(X_{d}-X)\bigr]+\hat\xi=\Im
\qquad (23)
$$
where M̂ and ξ̂ are the estimates of the system's model parameters, K_V is the derivative
gain matrix and K_P is the proportional gain matrix. For the HexaPOD,
K_V = diag(k_v1, k_v2, k_v3, k_v4, k_v5, k_v6) and K_P = diag(k_p1, k_p2, k_p3, k_p4, k_p5, k_p6). If the
estimates of the model parameters are close approximations of the actual system parameters,
then from equations (22) and (23) it can be written
$$
\ddot X=\ddot X_{d}+K_{V}(\dot X_{d}-\dot X)+K_{P}(X_{d}-X)
\qquad (24)
$$

i.e., with the tracking error ε = X_d − X,

$$
\ddot\varepsilon+K_{V}\dot\varepsilon+K_{P}\varepsilon=0
\qquad (25)
$$
Equation (25) guarantees asymptotic reduction of the tumor tracking error, or at least keeps
the error within acceptable margins. However, to reduce any steady-state errors, an integral
control part is also incorporated. Thus, the final control equation becomes
$$
\ddot\varepsilon+K_{V}\dot\varepsilon+K_{P}\varepsilon+K_{I}\int_{0}^{t}\varepsilon\,dt=0
\qquad (26)
$$
where, KI is the integral gain. Thus, equation (26) ensures asymptotic decay of the transient
errors as well as reduction of steady-state errors.
Now, filter vector becomes W(n) = W(n-1)+μ X(n)e(n), where μ is the learning rate. In order
to ensure the stability of the algorithm it is necessary to adjust LMS to its normalized form
nLMS. For the nLMS algorithm the filter vector is:
$$
W(n)=W(n-1)+\frac{\mu\,X(n)e(n)}{X(n)^{H}X(n)}
\qquad (29)
$$
Finally, the prediction module calculates the next position of the tumor based on an algorithm
which incorporates the nLMS error e1(n) and the physical error e2(n); the resultant error is the
sum of the prediction and tracking errors.
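The sketch below implements the nLMS update (29) as a one-step-ahead predictor for a single Cartesian component of tumor motion; the filter length, learning rate and synthetic respiratory trace are illustrative assumptions.

```python
import numpy as np

def nlms_predict(signal, taps=10, mu=0.5, eps=1e-6):
    """One-step-ahead prediction with the nLMS filter update of eq. (29)."""
    w = np.zeros(taps)
    pred = np.zeros_like(signal)
    for n in range(taps, len(signal)):
        x = signal[n - taps:n][::-1]          # regressor of past samples
        pred[n] = w @ x
        e = signal[n] - pred[n]               # prediction error e(n)
        w += mu * x * e / (x @ x + eps)       # normalized step
    return pred

# Illustrative respiratory trace: 0.25 Hz sinusoid with drift and noise (mm).
t = np.arange(0, 60, 0.05)
sig = 10 * np.sin(2 * np.pi * 0.25 * t) + 0.02 * t + 0.3 * np.random.randn(len(t))
p = nlms_predict(sig)
print('RMS prediction error (mm):', np.sqrt(np.mean((sig[20:] - p[20:]) ** 2)).round(3))
```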
3.3 Simulation
The computer simulation results for the HexaPOD robotic couch are presented in Figs. 12
and 13. Fig. 12 shows the position of each HexaPOD leg during the tracking task. The
combined motion of the robotic legs results in tracking of the desired trajectory; that is, the
robotic couch moves its legs according to the desired trajectory obtained from 4D-CT. The
combined motion of the HexaPOD legs results in patient motion opposite in direction to
the tumor motion. Consequently, the tumor appears steady, and the beam irradiates a
smaller PTV that need not include all possible tumor positions during the respiratory cycle.
Fig. 13. Overall system errors: a) εX, εY, εZ in the X, Y and Z directions for the HexaPOD
couch; b) the error amplitudes at steady state
The average system error after the transient time was εXH = εYH = εZH = ±0.2 mm, and
0.4 mm for an irregular motion pattern, Fig. 13. The tracking error cannot be zero, but it can
be kept within a tolerable limit.
From the simulation results it appears that the proposed methods can yield superior
prediction and tracking of tumor motion induced by respiration. Analyzing the simulation
results, it can be concluded that the robotic system shows a maximum tracking error of
1 mm. Based on dosimetric studies (Buzurovic et al., 2010.b,c), it was noticed that the
implementation of real-time tracking techniques can minimize irradiation of healthy tissues
and improve sparing of critical organs. It was shown in these studies that the dosimetric
effect on PTV and CTV coverage caused by the prediction error in the tumor tracking
procedure is insignificant. Consequently, the tumor tracking error of the proposed method
will not compromise the patient treatment outcome.
Implementation of the proposed technique can potentially improve real-time tracking of the
tumor volume to deliver a highly conformal, precise radiation dose at almost 100% duty cycle,
while minimizing irradiation of healthy tissues and sparing critical organs. This, in turn, will
potentially improve the quality of patient treatment by lowering the toxicity level and
increasing survival.
In this study, we have deployed a closed-loop PID control. Adaptive control can be a good
choice because of the variability in the payload on the system, i.e., the weight of the patient.
Results of the adaptive control applied to the proposed system can be found in (Buzurovic
et al., 2011.b).
4. Conclusion
In this chapter the use of robotic systems in radiation therapy has been presented. It was
shown that robotic systems can greatly influence current methods of radiation therapy, both
in delivering radioactive material inside cancerous tissue and in compensating for tumor
motion during external beam radiation therapy. The presented systems are novel, and
clinical applications of such systems will be part of modern cancer treatment in the near
future. It has been shown many times that modern robotics can improve many aspects of
human work; the presented robotic systems are an example of this.
5. References
Benchetrit, G. (2000). Breathing pattern in humans: diversity and individuality, Respir.
Physiol., 22(2-3), pp. 123–129
Buzurovic, I.; Podder, T.; Yan, K.; et al. (2008). Parameter optimization for brachytherapy
robotic needle insertion and seed deposition, Medical Physics, Vol. 35, No. 6, p. 2865
Buzurovic, I.; Podder, T.K.; and Yu, Y. (November 2008). Force prediction and tracking for
image-guided robotic system using neural network approach, Proceedings of IEEE
Biomedical Circuits and Systems Conference (BIOCAS ’08), Baltimore, MD, USA, pp.
41–44
Buzurovic, I.; Podder, T.K.; and Yu, Y. (2010). Prediction Control for Brachytherapy Robotic
System, Journal of Robotics, Vol. 2010, Article ID 581840, doi:10.1155/2010/581840
Buzurovic, I.; Werner-Wasik, M.; Biswas, T.; Galvin, J.; Dicker, A.P.; Yu, Y.; Podder, T. (2010).
Dosimetric Advantages of Active Tracking and Dynamic Delivery. Med. Phys., 37,
p. 3191
Buzurovic, I.; Huang, K.; Werner-Wasik, M.; Biswas, T.; Galvin, J.; Dicker, A.P.; Yu, Y.;
Podder, T. (2010). Dosimetric Evaluation of Tumor Tracking in 4D Radiotherapy,
Int. J. Radiat. Oncol. Biol. Phys., 78(3) Suppl.1, p. S689
Buzurovic, I.; Huang, K.; Yu, Y.; Podder, T. (2011). Robotic Approach to 4D Real-time Tumor
Tracking for Radiotherapy, Phys. Med. Biol., 56(5), pp.1299-1320
Buzurovic, I.; Huang, K.; Yu, Y.; Podder, T. (May-June, 2010). Tumor Motion Prediction and
Tracking in Adaptive Radiotherapy, Proc. of 2010 IEEE Int. Conf. on Bioinformatics
and Bioengineering (BIBE), pp. 273-278
Buzurovic, I., Yu, Y.; Podder, T.K. (September, 2011). Active Tracking and Dynamic Dose
Delivery for Robotic Couch in Radiation Therapy, Proc. of 2011 IEEE Int. Conf. on
Engineering in Medicine and Biology (EMBC), Boston, MA, USA
Chen, W.K. (1993). Linear Networks and Systems: Algorithms and Computer-Aided
Implementations, Belmont Wadsworth, pp. 123–135
Chung, H.; et al. (2006). Mechanical Accuracy of a Robotic Couch, Medical Physics, 33(6),
p.2041
Cobb, D. (May, 1983). Descriptor Variable Systems and State Regulation, IEEE Transactions
on Automatic Control, Vol. AC-28, No.5
D’Souza, W. & McAvoy, T.J. (2006). Analysis of the treatment couch and control system
dynamics for respiration-induced motion compensation, Medical Physics, 33(12),
p. 4701
D’Souza, W.; Naqvi, S.A.; Yu, C.X.; (2005). Real-time intra-fraction-motion tracking using the
treatment couch: a feasibility study, Phys. Med. and Biol., 50, pp. 4021-4033
Fichtinger, G.; Burdettem, E.C.; Tanacsm, A.; et al. (2006). Robotically assisted prostate
Brachytherapy with transrectal ultrasound guidance - Phantom experiments,
Brachytherapy 5, pp. 14-26
Kamino Y.; Takayama, K.; Kokubo, M.; et al. (2006). Development of a four-dimensional
image-guided radiotherapy system with a gimbaled x-ray head, Int. J. Radiation
Oncology Biol. Phys., 66(1), pp. 271–278
Keall, P.J.; Mageras, G.S.; Balter, J.M.; et al. (2006). The management of respiratory motion
in radiation oncology report of AAPM Task Group 76, Medical Physics, 33(10), pp.
3874-3900
Keall, P.J.; Cattell, H.; Pokhrel, D.; et al. (2006). Geometric accuracy of a real-time target
tracking system with dynamic multileaf collimator tracking system, Int. J. Radiation
Oncology, Biol. Phys., 65(5), pp. 1579-1584
Lin, A.; Trejos, A.L.; Patel R.V.; and Malthaner, R.A. (2008). Robot-Assisted Minimally
Invasive Brachytherapy for Lung Cancer, Teletherapy (book chapter), ISBN: 978-3-
540-72998-3, Springer, Berlin, Germany, pp. 33-52
Mavrodis, C.; Dubowsky, S.; et al. (April 19-23, 1997). A Systematic Error Analysis of
Robotic Manipulators: Application to a High Performance Medical Robot, IEEE Int.
Conf. Robotics and Automation (ICRA), Albuquerque, NM, pp. 2975-2981
Meltsner, M.A. (2007). Design and optimization of a brachytherapy robot, PhD Thesis,
Department of Medical Physics, University of Wisconsin-Madison
Meltsner, M.A.; Ferrier N.J.; and Thomadsen, B.R. (2007). Observations on rotating needle
insertions using a brachytherapy robot, Phys. Med. Biol. 52, pp. 6027-6037
Mills, J.K. (1988). Constrained Manipulator Dynamic Models and Hybrid Control, Proc. of the
IEEE International Symposium on Intelligent Control, Vol. I, Arlington, VA, USA, pp.
424-429
Moerland, M; Van den Bosch, M.; Lagerburg, V.; et al. (2008). An MRI scanner compatible
implant robot for prostate brachytherapy, Brachytherapy, 7, p.100
Ozhasoglu, C.; Murphy, M.J. (2002). Issues in respiratory motion compensation during
external-beam radiotherapy, Int. J Radiat Oncol Biol Phys., 52(5), pp. 1389–1399
Podder, T.K.; Ng, W.S.; and Yu, Y. (August 23-26, 2007). Multi-channel robotic system for
prostate brachytherapy, Int. Conf. of the IEEE Engineering in Medicine and Biology
(EMBS), Lyon, France, pp. 1233-1236
Podder, T.K.; Buzurovic, I.; and Yu, Y. (May-June 2010). Multichannel Robot for Image-
guided Brachytherapy, Proc. of 10th IEEE International Conference on Bioinformatics&
Bioengineering (BIBE), Philadelphia, PA, USA, pp.209-213
Podder, T.K.; Buzurovic, I.; Hu, Y.; Galvin, J.M.; Yu, Y. (2007). Partial transmission high-
speed continuous tracking multi-leaf collimator for 4D adaptive radiation therapy,
IEEE Int. Conf. on Bioninformatics and Bioengineering (BIBE), Boston, MA, USA, pp.
1108-1112
Podder, T.K.; Buzurovic, I.; Hu, Y.; Galvin, J.M.; Yu, Y. (2008). Dynamics-based
decentralized control of robotic couch and multi-leaf collimators for tracking tumor
motion, IEEE Int. Conf. on Robotics and Automation (ICRA), Pasadena, CA, USA, pp.
2496-2502.
Salcudean, S.E.; Prananta, T.D.; et al. (May 19-23, 2008). A robotic needle guide for prostate
brachytherapy, IEEE Int. Conf. Robotics and Automation (ICRA), Anaheim, CA, pp.
2975-2981
Schweikard, A.; Glosser, G.; Bodduluri, M.; et al. (2000). Robotic motion compensation for
respiratory movement during radiosurgery, Comput. Aided Surg., 5(4), pp. 263–277
Sharp, G.C.; Jiang, S.B.; Shimizu, S.; Shirato, H. (2004). Prediction of respiratory tumour
motion for real-time image-guided radiotherapy, J.Phys. Med Biol., 49(3), pp. 425–
440
Stoianovici, D.; Cleary, K.; Patriciu, A.; et al. (2003). AcuBot: A robot for radiological
percutaneous interventions, IEEE Trans. on Robotics and Automation, Vol. 19, pp.
927-930
Stoianovici, D.; Song, S.; Petrisor, D.; et al. (2007). MRI Stealth robot for prostate
interventions, Minimally Invasive Therapy 16, pp. 241-248
Vedam, S.S.; Keall, P.J.; Docef, A.; et al. (2004). Predicting respiratory motion for four-
dimensional radiotherapy, J. Medical Physics, 31(8), pp. 2274–2283
Wei, Z; Wan, G; Gardi, L; Fenster, A.; et al. (2004). Robot-assisted 3D-TRUS guided prostate
brachytherapy: system integration and validation, Med. Phys. 31, pp. 539-548
Widrow, B.; Walach, E. (1994). Adaptive Inverse Control, Prentice-Hall, NJ, USA
Yu Y.; Podder, T.K.; Zhang, Y.D.; Ng, W.S.; et al. (2007). Robotic system for prostate
brachytherapy, J. Comp. Aid. Surg. 12, pp. 366-370
Robotic Urological Surgery: State of the Art and Future Perspectives
1. Introduction
Minimally invasive surgery has gained popularity over the last decade by offering shorter
convalescence, improved peri-operative outcomes and enhanced cosmesis.
Community urologists have typically performed standard laparoscopic nephrectomies,
given the short learning curve, low complication rate, and limited requirement for
sophisticated laparoscopic skills. However, advanced minimally invasive operations such as
partial nephrectomies, pyeloplasties and prostatectomies necessitate advanced laparoscopic
adeptness. The emergence of robotics, with 3D vision and articulated instruments, has
allowed wider application of minimally invasive techniques to more complex urological
procedures.
Though robotics has overcome some shortcomings of laparoscopic surgery, limitations remain
to its acceptance as a standard amongst the urological community. The lack of
tactile feedback, displacement of the surgeon from the bedside, the fixed-port system, longer
operating room times and cost remain barriers to widespread acceptance of robotics. In
addition, the deficiencies within the robotic platform have propagated an evolution in the
field with micro- and nano-robotics. This chapter highlights the history of robotic surgery
along with current and future applications.
2.1 Prostatectomy
Robotic-assisted laparoscopic radical prostatectomy (RARP) was first reported in 2000
(Abbou et al., 2000). Since then the number of patients undergoing RARP for prostate cancer
has steadily increased. Early comparisons between radical retropubic prostatectomy (RRP)
and RARP show encouraging results. Menon et al., in a prospective nonrandomized study,
compared the results of 30 consecutive patients undergoing RRP with those of 30 initial
patients undergoing RARP. Estimated blood loss (EBL), blood transfusions, pain score,
hospital stay, and mean duration of postoperative catheterization were improved in the
RARP group; however, the mean operating time was longer for RARP (Menon et al. 2002).
In a recent review, Ficarra et al. found that the mean OR time for RARP ranged between 127
and 288 minutes, corroborating a longer operative time with robotics versus open surgery.
Nevertheless, transfusion rates and hospital stay were lower with RARP than with open RRP.
In addition, there were comparable complication rates between the RARP and open RRP
patients (Ficarra et al., 2009).
Positive surgical margins are a surrogate for oncological outcomes reported for RARP. A
significant advantage in positive margin rates was demonstrated for RALP over RRP. This
same difference amongst the robotic and open groups does not exist when comparing
laparoscopic radical prostatectomy (LRP). The positive surgical margin rates ranged from
11% to 37% after RRP, from 11% to 30% after LRP, and from 9.6% to 26% after RALP (Ficarra
et al., 2009).
Recently, survival outcomes have been reported after RARP. Barocas et al. compared 491
open RRP with 1,413 patients undergoing RARP, over a median follow-up of 10 months
(Barocas et al., 2010). The robotic group had a lower pathological stage (80.5% pT2 vs 69.6% pT2,
p <0.01). The 3-year biochemical recurrence-free survival rate was similar between the 2
groups, even after adjusting for pathological stage, grade and margin status. More recently,
comparisons of 522 consecutive RARP were matched to patients who underwent LRP and
RRP evaluating oncological outcomes (Magheli et al., 2011). Positive surgical margin rates
were higher in the robotic group (19%), compared to LRP (13%) and RRP (14%). This
difference was not significant for pT2 disease. The mean follow-up was 2.5, 1.4 and 1.3 years
for RRP, LRP and RARP respectively, with no statistically significant difference in
biochemical-free survival between groups.
Erectile function and continence are major outcomes evaluated after prostatectomy. A
matched cohort comparing 294 RARP for clinically localized prostate cancer with 588 RRP
showed no difference in continence between the two approaches at the 1-year follow-up
(Krambeck et al. 2009). Furthermore, Rocco et al. showed 1-year continence rates of 97% vs
88% after RARP and RRP, respectively (P = 0.014). The 1-year overall potency recovery rate
was 61% vs 41% after RARP and RRP, respectively (P = 0.003). Overall, RRP seems to be a
faster procedure; however, EBL, hospitalization time, and functional outcomes were
superior with RARP, and early oncological outcomes seemed equivalent in the two groups
(Rocco et al., 2009).
The drawback of RARP is the cost related to purchasing and maintaining the instruments of
the robotic system. Bolenz et al. compared the cost of 262 RALP, 220 LRP, and 161 RRP
performed at the same institution. The direct cost was higher for the robotic approach than
laparoscopic or open. The median direct cost was US$ 6752, US$ 5687, and US$ 4437 for
RALP, LRP, and RRP respectively. The most important difference was due to surgical
supply and operating room cost (Bolenz et al., 2010). However, the surgical volume may
reduce this difference. Scales et al. showed that the cost of RALP is volume dependent, and
cost equivalence is achievable with RRP at a surgical volume of 10 cases weekly (Scales et
al., 2005). Even though the cost of robotic surgery is a difficult question to assess, the cost-
effectiveness of robotics will probably continue to improve with time (Wilson & Torrey,
2011).
The available data demonstrate improvements in blood loss, hospital stay, and pain control
with RALP. However, the lack of long-term cancer-specific mortality data limits robust
oncological comparisons of RARP to RRP.
2.2 Radical nephrectomy
One platform where robotics may be applied to radical nephrectomy is the complex tumor
with caval thrombosis. Abaza reported five patients with a renal tumor and inferior vena
cava (IVC) thrombus who underwent robotic nephrectomy. The robotic system allowed for
venotomy into the vena cava with suture of the defect and no complications (Abaza, 2011).
Robotic-assisted nephrectomy has also been applied to live kidney donors; a series of 35 first
cases was reported with a mean warm ischemia time of 5.9 minutes (Louis et al., 2009).
However, the cost of the robotic approach may limit its applicability for radical
nephrectomies, except in complex renal cases with caval thrombosis.
2.3 Partial nephrectomy
After a mean follow-up of 10 months, no recurrences had occurred, indicating that robotic
partial nephrectomy (RPN) is a safe and feasible option for highly complex renal masses
(White et al., 2011).
RPN seems to be an effective and safe alternative to LPN. The surgical technique for RPN has
improved, and indications have expanded to more challenging tumors. Currently
available comparative studies are retrospective with limited follow-up; future trials
are expected to confirm the encouraging findings of early reported series.
2.4 Adrenalectomy
Robotic-assisted adrenalectomy (RAA) was first reported in 1999 and 2001 using the Aesop
(Hubens et al., 1999) and Da Vinci systems, respectively (Horgan & Vanuno, 2001). Since
then, the few published studies of RAA have used the Da Vinci system.
Brunaud et al. prospectively evaluated 100 consecutive patients who underwent RAA. The
mean operative time was 95 minutes, with a conversion rate of 5%. Complication and
mortality rates were 10% and 0%, respectively. The mean operative time decreased by 1
minute every 10 cases and improved more for junior surgeons than for senior surgeons after
the first 50 cases. Surgeon's experience, first assistant level and tumor size were
independent predictors of operative time. The robotic procedure was 2.3 times more costly
than laparoscopic adrenalectomy (Brunaud et al., 2008a).
The same authors prospectively compared perioperative data of 50 patients who underwent
RAA with 59 patients who underwent laparoscopic adrenalectomy (LA). RAA was
associated with lower blood loss but longer operative times; however, the difference in
operative time was not significant after the learning curve of 20 cases. Operative time
increased only in the LA group for obese patients (body mass index >30 kg/m2) and
patients with large tumors (>55 mm). Length of hospital stay, complication and conversion
rates were equivalent between the groups (Brunaud et al., 2008b).
Recently, Giulianotti et al. examined 42 patients who underwent RAA by a single surgeon.
Median hospital stay was 4 days with postoperative complication rate of 2.4% and mortality
rate of 2.4% (Giulianotti et al., 2011).
Suggestions have been made that robot assistance may be beneficial for obese patients with
large tumors (Brunaud et al., 2008a; Giulianotti et al., 2011), as well as for surgeons with
limited laparoscopic experience. Beyond feasibility, prospective studies must focus
on the potential improvement in the learning curve with robotic utilization, along with a cost
analysis evaluating RAA against current standards (Brunaud et al., 2008a).
2.5 Pyeloplasty
The first clinical experience of robot-assisted pyeloplasty (RAP) was reported in 2002
(Gettman et al., 2002a). Since then, numerous studies have evaluated the efficiency of RAP.
Gupta et al. prospectively evaluated results of 85 consecutive patients who had
transperitoneal RAP, using four or five ports. Based on anatomic considerations, different
types of pyeloplasty were completed. The mean operative time was 121 min, 47 min of
which was for the anastomosis. Mean EBL was 45 mL, with hospital stay of 2.5 days. Three
patients had stent migration and delayed drainage. Mean follow-up was 13.6 months with
an overall success rate of 97%, based on imaging assessment (Gupta et al., 2010).
In a comparative nonrandomized study, 98 RAP were compared with 74 LP, with mean
operative times of 189.3 and 186.6 minutes for RAP and LP, respectively. Complication rates
were similar: 5.1% and 2.7% for RAP and LP, respectively. The suturing time was shorter
for RAP, but without statistical significance (48.3 vs 60 minutes, P = 0.30, for RAP and LP,
respectively). RAP had a success rate of 93.4% versus 95% for LP, based on renal
scintigraphy (Bird et al., 2011).
Hemal and colleagues illustrated the successful application of robotics to pyeloplasty in a
nonrandomized study comparing the results of 30 RAP with 30 LP, performed via a
transperitoneal approach by a single surgeon. The mean total operating times were 98
minutes and 145 minutes, the mean EBL was 40 mL and 101 mL, and the mean hospital
stay was 2 days and 3.5 days, for RAP and LP, respectively. At follow-up, one patient in the
LP group had an obstruction managed by balloon dilation (Hemal et al., 2010).
Insufficient evidence exists for the retroperitoneal approach to RAP. Kaouk et al. reported
results of 10 patients who underwent retroperitoneal RAP. Four ports were placed for the
robot, and the operations were accomplished successfully. Operative time was 175 min with
minimal complications. The advantage of the retroperitoneal approach is direct access
to the UPJ; however, retroperitoneal surgery has a limited working space with anatomy
unfamiliar to most urologists (Kaouk et al., 2008).
Cestari et al. compared 36 patients who underwent retroperitoneal RAP and 19
transperitoneal RAP for UPJO. Median operative time and hospital stay were similar.
Complication rates were comparable (Cestari et al., 2010).
Studies of RAP demonstrate feasibility, efficacy and safety. However, the cost of robotic
surgery continues to limit the widespread application of this platform.
2.6 Cystectomy
Radical cystectomy with pelvic lymphadenectomy (PLND) remains the gold-standard
treatment for patients with muscle-invasive bladder cancer. However, the morbidity related
to open radical cystectomy (ORC) presents a real challenge; thus, robotic-assisted radical
cystectomy (RRC) represents a potential alternative to the open approach. Similar to other
robotic procedures, the potential advantages of RRC over ORC are decreases in blood loss,
pain, and hospital stay. On the other hand, the oncologic outcomes of RRC remain
largely unknown.
In a series of 100 consecutive patients who underwent RRC for clinically localized bladder
cancer, the mean operative time was 4.6 hours with an EBL of 271 ml. The mean hospital stay
was 4.9 days, with 36 patients experiencing postoperative complications (8% major
complications). Urinary diversion was by ileal conduit in 61% of cases (Pruthi et al., 2010).
Kauffman et al. presented the results of 85 consecutive patients treated with RRC for bladder
cancer. The median age was 73.5 years, with a high proportion of patients having
comorbidities (46% of ASA class ≥3). Extended pelvic lymphadenectomy was performed in
almost all patients (98%). Extravesical disease was found in 36.5% of cases, and positive
surgical margins were present in 6% of patients. At a mean follow-up of 18 months, 20 (24%)
patients had presented recurrence, of whom three (4%) had only a local recurrence. The
overall survival and disease-specific survival rates for the cohort at 2 years were 79% and
85%, respectively. Extravesical disease, positive lymph nodes, and lymphovascular invasion
were associated with worse prognosis (Kauffman et al., 2011). Even with the encouraging
results of this study, comparative analyses and long-term outcomes are still needed.
Josephson et al., in their report of 58 RRC, found different results, with an overall survival
rate of 54% and a disease-specific survival rate of 76% at 2 years (Josephson et al., 2010).
However, stratification of survival outcomes by pathological stage was not reported, making
comparisons with other studies difficult.
In another series of 100 patients with a mean follow-up of 21 months, 15 patients had
disease recurrence and 6 died of bladder cancer (Pruthi et al., 2010). Comparisons between
open and robotic cystectomy are lacking because of the heterogeneity in disease burden and
patient selection.
In a multi-institutional study of 527 patients who underwent RRC, 437 (82.9%) had a
lymphadenectomy. The surgeon's experience on the robot influenced whether a
lymphadenectomy was performed (Hellenthal, 2010a). In the same database, the positive
surgical margin rate was 6.5% (Hellenthal, 2010b). In a study of 100 consecutive patients with
RRC, no positive margin was found (Pruthi et al., 2010); however, in this series only 13% of
cases were pT3/T4 at the final pathology, which may explain this result.
RRC offers the potential for improving peri-operative outcomes with the advantages of
minimally invasive surgery. It may offer a shorter hospital stay, decreased blood loss and
pain, and a smaller incision, with complication rates and oncological outcomes equal to the
current gold standard, ORC. Future studies with longer follow-up are essential to obtain
robust data on the comparability of RRC to ORC.
Single-site surgery has been advanced with the advent of the novel robotic platform (VeSPA,
Intuitive Surgical, California, USA) (Figures 4 & 5). These curved cannulae and semi-rigid
instruments have been designed to compensate for the limitations encountered with
conventional LESS surgery. The VeSPA curved cannulae and semi-rigid instrument design
allow the cannulae and instruments to be inserted in close proximity while allowing
approximate triangulation intra-abdominally. The VeSPA platform was successfully applied
in a porcine model without conversions or addition of ports (Haber et al., 2010b). However,
it is recognized that robotic LESS for the kidney is technically more challenging in humans
than in the animal model. Therefore, further clinical research is required to confirm these
early experimental results.
Fig. 2. The robot scope and two arms are inserted through a 2.5-cm incision in the umbilicus
Fig. 5. VeSPA instruments and accessories. (A) Curved cannulae; (B) multichannel single-
port, 8.5-mm robotic scope, flexible instruments passed through the cannulae
We recently reported our initial experience with single port robotic partial nephrectomy in
two patients without conversions or complications (Kaouk et al. 2009b). A multichannel port
(Triport; Advanced Surgical Concepts, Bray, Co Wicklow, Ireland) was utilized. Pediatric 5-
mm robotic instruments were used. A 30° robotic lens placed in the upward configuration
minimized clashing between the scope and instruments. A 2.8 cm left lower pole tumor and
a 1.1 cm right lower pole tumor were excised without renal hilar clamping using the
Robotic Urological Surgery: State of the Art and Future Perspectives 117
harmonic scalpel. EBL was 100 ml, operative time was 170 min, length of stay was 3.5 days,
and visual analog pain scale at discharge was 1.0/10.
Stein et al. reported robotic LESS using a gel port (Applied Medical, Rancho Santa
Margarita, California, USA) as the access platform (Stein et al., 2010). Four clinical
procedures were performed, including two pyeloplasties, one radical nephrectomy, and one
partial nephrectomy. The gel port system was used to allow for a larger working platform.
The partial nephrectomy was completed in 180 min. The mass was excised without hilar
clamping, using the harmonic scalpel and Hem-o-lok clips (Weck Closure Systems, Research
Triangle Park, North Carolina, USA). A radical nephrectomy was performed for a 5-cm left-
sided lower pole mass. The renal vein and artery were secured with an endoscopic stapler,
and the remainder of the dissection was completed with hook cautery. The specimen was
retrieved with an entrapment sac and removed via the umbilical incision extended to 4 cm.
EBL was 250 ml, operative time was 200 min, and the hospital stay was 2 days.
The Da Vinci system has several advantages over conventional laparoscopy, allowing an
increasing number of urologists to adopt this method. Preliminary results of these minimally
invasive approaches are encouraging for the LESS and NOTES techniques. Further refinement
in instrumentation, improved triangulation and the development of robotic systems specific
to LESS and NOTES may define new horizons in single-site surgery.
4. Flexible robots
The development of robotic technologies has also flourished in endoscopic surgery. A
flexible robotic catheter manipulator (Sensei, Hansen Medical, Mountain View, CA, USA),
thought to allow enhanced ergonomics and improved efficiency over standard flexible
ureteroscopy, has been developed. Initial experience in the porcine model showed promising
outcomes.
Fig. 6. Pictorial depiction of components of flexible robotic catheter control system. Surgeon
console (workstation) showing three LCD screens, one touch screen, and MID
Miniaturization has been the key advancement for the progression of robotics in endoscopy.
Further studies are needed to corroborate these early clinical results.
Multiple robots could be inserted through a single incision rather than the traditional use of
multiple port sites. Capabilities such as tissue retraction, single-incision surgery,
supplementary visualization, or lighting can be delivered by these micro-robots.
A well-known limitation of the current robotic platform is the lack of tactile feedback.
However, the improved visualization and accuracy of robotic instrumentation may
overcome this limitation. Furthermore, newer technologies with multiple compact robots,
tracking tools, and tactile feedback apparatus may further expand the application of robotic
surgery.
7. Conclusion
Robotic applications in urology are effective and safe technologies. Surgical techniques have evolved and indications have expanded to more challenging scenarios. New technologies, especially robotics, are constantly reshaping the urological sphere. Further refinements in robotic systems, along with a reduction in cost, are key components for rapid assimilation amongst the urological community.
8. References
Abaza R. (2011) Initial series of robotic radical nephrectomy with vena caval tumor
thrombectomy. Eur Urol. 2011 Apr;59(4):652-6.
Abbou CC, Hoznek A, Salomon L, Lobontiu A, Saint F, Cicco A, Antiphon P, & Chopin D.
(2000) [Remote laparoscopic radical prostatectomy carried out with a robot. Report
of a case]. Prog Urol. 2000 Sep;10(4):520-3.
Barocas DA, Salem S, Kordan Y, Herrell SD, Chang SS, Clark PE, Davis R, Baumgartner R,
Phillips S, Cookson MS, & Smith JA Jr. (2010) Robotic assisted laparoscopic
prostatectomy versus radical retropubic prostatectomy for clinically localized
prostate cancer: comparison of short-term biochemical recurrence-free survival. J
Urol. 2010 Mar;183(3):990-6.
Bird VG, Leveillee RJ, Eldefrawy A, Bracho J, & Aziz MS. (2011) Comparison of robot-
assisted versus conventional laparoscopic transperitoneal pyeloplasty for patients
with ureteropelvic junction obstruction: a single-center study. Urology. 2011
Mar;77(3):730-4.
Benway BM, Bhayani SB, Rogers CG, Porter JR, Buffi NM, Figenshau RS, & Mottrie A. (2010)
Robot-assisted partial nephrectomy: an international experience. Eur Urol. 2010
May;57(5):815-20
Boger M, Lucas SM, Popp SC, Gardner TA, & Sundaram CP. (2010) Comparison of robot-
assisted nephrectomy with laparoscopic and hand-assisted laparoscopic
nephrectomy. JSLS. 2010 Jul-Sep;14(3):374-80.
Bolenz C, Gupta A, Hotze T, Ho R, Cadeddu JA, Roehrborn CG, & Lotan Y. (2010) Cost
comparison of robotic, laparoscopic, and open radical prostatectomy for prostate
cancer. Eur Urol. 2010 Mar;57(3):453-8.
Box GN, Lee HJ, Santos RJ, Abraham JB, Louie MK, Gamboa AJ, Alipanah R, Deane LA,
McDougall EM, & Clayman RV. (2008) Rapid communication: robot-assisted
NOTES nephrectomy: initial report. J Endourol. 2008 Mar;22(3):503-6.
Brunaud L, Ayav A, Zarnegar R, Rouers A, Klein M, Boissel P, & Bresler L. (2008a)
Prospective evaluation of 100 robotic-assisted unilateral adrenalectomies. Surgery.
2008a Dec;144(6):995-1001.
Brunaud L, Bresler L, Ayav A, Zarnegar R, Raphoz AL, Levan T, Weryha G, & Boissel P.
(2008b) Robotic-assisted adrenalectomy: what advantages compared to lateral
transperitoneal laparoscopic adrenalectomy? Am J Surg. 2008b Apr;195(4):433-8.
Cestari A, Buffi NM, Lista G, Sangalli M, Scapaticci E, Fabbri F, Lazzeri M, Rigatti P, &
Guazzoni G. (2010) Retroperitoneal and transperitoneal robot-assisted pyeloplasty
in adults: techniques and results. Eur Urol. 2010 Nov;58(5):711-8.
Desai MM, Aron M, Berger A, Canes D, Stein R, Haber GP, Kamoi K, Crouzet S, Sotelo R,
& Gill IS. (2008a) Transvesical robotic radical prostatectomy. BJU Int. 2008a
Dec;102(11):1666-9.
Desai MM, Aron M, Gill IS, Haber GP, Ukimura O, Kaouk JH, Stahler G, Barbagli F, Carlson
C, & Moll F. (2008b) Flexible robotic retrograde renoscopy: description of novel
robotic device and preliminary laboratory experience. Urology. 2008b Jul;72(1):42-6.
Desai MM, Grover R, Aron M, Haber GP, Ganpule A, Kaouk I, Desai M, & Gill IS. (2008c)
Remote robotic ureteroscopic laser lithotripsy for renal calculi: initial clinical
experience with a novel flexible robotic system. J Urol 2008c; 179(4S):435 [Abstract #
1268].
Ficarra V, Novara G, Artibani W, Cestari A, Galfano A, Graefen M, Guazzoni G,
Guillonneau B, Menon M, Montorsi F, Patel V, Rassweiler J, & Van Poppel H.
(2009) Retropubic, laparoscopic, and robot-assisted radical prostatectomy: a
systematic review and cumulative analysis of comparative studies. Eur Urol. 2009
May;55(5):1037-63.
Gettman MT, Neururer R, Bartsch G, & Peschel R. (2002a) Anderson–Hynes dismembered
pyeloplasty performed using the da Vinci robotic system. Urology. 2002a;60(3):509-13.
Gettman MT, Lotan Y, Napper CA, & Cadeddu JA. (2002b) Transvaginal laparoscopic
nephrectomy: development and feasibility in the porcine model. Urology 2002b;
59:446–450.
Gettman MT, Blute ML, Chow GK, Neururer R, Bartsch G, & Peschel R. (2004) Robotic-
assisted laparoscopic partial nephrectomy: technique and initial clinical experience
with DaVinci robotic system. Urology. 2004 Nov;64(5):914-8.
Giulianotti PC, Buchs NC, Addeo P, Bianco FM, Ayloo SM, Caravaglios G, & Coratti A.
(2011) Robot-assisted adrenalectomy: a technical option for the surgeon? Int J Med
Robot. 2011 Mar;7(1):27-32.
Gupta NP, Nayyar R, Hemal AK, Mukherjee S, Kumar R, & Dogra PN. (2010) Outcome
analysis of robotic pyeloplasty: a large single-centre experience. BJU Int. 2010
Apr;105(7):980-3.
Haber GP, Crouzet S, Kamoi K, Berger A, Aron M, Goel R, Canes D, Desai M, Gill IS, &
Kaouk J. (2008a) Robotic NOTES (Natural Orifice Translumenal Endoscopic
Surgery) in reconstructive urology: initial laboratory experience. Urology. 2008a
Jun;71(6):996-1000.
Haber GP, Kamoi K, Vidal C, Koenig P, Crouzet S, Berger A, Aron M, Kaouk J, Desai M, &
Gill IS. (2008b) Development and Evaluation of a Transrectal Ultrasound Robot for
Targeted Biopsies and Focal Therapy of Prostate Cancer. EUS 2008 Annual
Meeting; 2008b. p. 85.
Haber GP, White WM, Crouzet S, White MA, Forest S, Autorino R, & Kaouk JH. (2010a)
Robotic versus laparoscopic partial nephrectomy: single-surgeon matched cohort
study of 150 patients. Urology. 2010a Sep;76(3):754-8.
Haber GP, White MA, Autorino R, Escobar PF, Kroh MD, Chalikonda S, Khanna R, Forest S,
Yang B, Altunrende F, Stein RJ, & Kaouk J. (2010b) Novel robotic da Vinci
instruments for laparoendoscopic single-site surgery. Urology. 2010b
Dec;76(6):1279-82.
Hellenthal NJ, Hussain A, Andrews PE, Carpentier P, Castle E, Dasgupta P, Kaouk J, Khan
S, Kibel A, Kim H, Manoharan M, Menon M, Mottrie A, Ornstein D, Palou J,
Peabody J, Pruthi R, Richstone L, Schanne F, Stricker H, Thomas R, Wiklund P,
Wilding G, & Guru KA. (2011) Lymphadenectomy at the time of robot-assisted
radical cystectomy: results from the International Robotic Cystectomy Consortium.
BJU Int. 2011 Feb;107(4):642-6.
Hellenthal NJ, Hussain A, Andrews PE, Carpentier P, Castle E, Dasgupta P, Kaouk J, Khan
S, Kibel A, Kim H, Manoharan M, Menon M, Mottrie A, Ornstein D, Palou J,
Peabody J, Pruthi R, Richstone L, Schanne F, Stricker H, Thomas R, Wiklund P,
Wilding G, & Guru KA. (2010) Surgical margin status after robot assisted radical
cystectomy: results from the International Robotic Cystectomy Consortium. J Urol.
2010 Jul;184(1):87-91.
Hemal AK, Mukherjee S, & Singh K. (2010) Laparoscopic pyeloplasty versus robotic
pyeloplasty for ureteropelvic junction obstruction: a series of 60 cases performed by
a single surgeon. Can J Urol. 2010 Feb;17(1):5012-6.
Ho HS, Mohan P, Lim ED, Li DL, Yuen JS, Ng WS, Lau WK, & Cheng CW. (2009) Robotic
ultrasound-guided prostate intervention device: system description and results
from phantom studies. Int J Med Robot. 2009 Mar;5(1):51-8
Horgan S, & Vanuno D. (2001) Robots in laparoscopic surgery. J Laparoendosc Adv Surg Tech.
2001;11:415-9.
Hubens G, Ysebaert D, Vaneerdeweg W, Chapelle T, & Eyskens E. (1999) Laparoscopic
adrenalectomy with the aid of the AESOP 2000. Acta Chir Belg 1999;99:125-9.
Joseph JV, Oleynikov D, Rentschler M, Dumpert J, & Patel HR. (2008) Microrobot assisted
laparoscopic urological surgery in a canine model. J Urol. 2008 Nov;180(5):2202-5.
Josephson DY, Chen JA, Chan KG, Lau CS, Nelson RA, & Wilson TG. (2010) Robotic-
assisted laparoscopic radical cystoprostatectomy and extracorporeal continent
urinary diversion: highlight of surgical techniques and outcomes. Int J Med Robot.
2010 Sep;6(3):315-23.
Kaouk JH, Hafron J, Parekattil S, Moinzadeh A, Stein R, Gill IS, & Hegarty N. (2008) Is
retroperitoneal approach feasible for robotic dismembered pyeloplasty: initial
experience and long-term results. J Endourol. 2008 Sep;22(9):2153-9.
Kaouk JH, Goel RK, Haber GP, Crouzet S, & Stein RJ. (2009a) Robotic single-port
transumbilical surgery in humans: initial report. BJU Int. 2009a Feb;103(3):366-9.
Kaouk JH, & Goel RK. (2009b) Single-port laparoscopic and robotic partial nephrectomy. Eur
Urol. 2009b;55:1163-9.
Kauffman EC, Ng CK, Lee MM, Otto BJ, Wang GJ, & Scherr DS. (2011) Early oncological
outcomes for bladder urothelial carcinoma patients treated with robotic-assisted
radical cystectomy. BJU Int. 2011 Feb;107(4):628-35.
Krambeck AE, DiMarco DS, Rangel LJ, Bergstralh EJ, Myers RP, Blute ML, & Gettman MT.
(2009) Radical prostatectomy for prostatic adenocarcinoma: a matched comparison
of open retropubic and robot-assisted techniques. BJU Int. 2009 Feb;103(4):448-53.
Lehman AC, Wood NA, Farritor S, Goede MR, & Oleynikov D. (2011) Dexterous miniature
robot for advanced minimally invasive surgery. Surg Endosc. 2011 Jan;25(1):119-23.
Louis G, Hubert J, Ladriere M, Frimat L, & Kessler M. (2009) [Robotic-assisted laparoscopic
donor nephrectomy for kidney transplantation. An evaluation of 35 procedures].
Nephrol Ther. 2009 Dec;5(7):623-30
Magheli A, Gonzalgo ML, Su LM, Guzzo TJ, Netto G, Humphreys EB, Han M, Partin AW, &
Pavlovich CP. (2011) Impact of surgical technique (open vs laparoscopic vs robotic-
assisted) on pathological and biochemical outcomes following radical
prostatectomy: an analysis using propensity score matching. BJU Int. 2011
Jun;107(12):1956-62.
Menon M, Tewari A, Baize B, Guillonneau B, & Vallancien G. (2002) Prospective comparison
of radical retropubic prostatectomy and robot-assisted anatomic prostatectomy: the
Vattikuti Urology Institute experience. Urology. 2002 Nov;60(5):864-8.
Mozer PC, Partin AW, & Stoianovici D. (2009) Robotic image-guided needle interventions of
the prostate. Rev Urol. 2009 Winter;11(1):7-15
Patriciu A, Petrisor D, Muntener M, Mazilu D, Schär M, & Stoianovici D. (2007) Automatic
brachytherapy seed placement under MRI guidance. IEEE Trans Biomed Eng. 2007
Aug;54(8):1499-506
Pruthi RS, Nielsen ME, Nix J, Smith A, Schultz H, & Wallen EM. (2010) Robotic radical
cystectomy for bladder cancer: surgical and pathological outcomes in 100
consecutive cases. J Urol. 2010 Feb;183(2):510-4.
Rocco B, Matei DV, Melegari S, Ospina JC, Mazzoleni F, Errico G, Mastropasqua M, Santoro
L, Detti S, & de Cobelli O. (2009) Robotic vs open prostatectomy in a
laparoscopically naive centre: a matched-pair analysis. BJU Int. 2009 Oct;104(7):991-
5.
Rogers CG, Singh A, Blatt AM, Linehan WM, & Pinto PA. (2008a) Robotic partial
nephrectomy for complex renal tumors: surgical technique. Eur Urol. 2008a
Mar;53(3):514-21
Scales CD Jr, Jones PJ, Eisenstein EL, Preminger GM, & Albala DM. (2005) Local cost
structures and the economics of robot assisted radical prostatectomy. J Urol. 2005
Dec;174(6):2323-9.
Stein RJ, White WM, Goel RK, Irwin BH, Haber GP, & Kaouk JH. (2010) Robotic
laparoendoscopic single-site surgery using GelPort as the access platform. Eur Urol.
2010 Jan;57(1):132-6
Wang AJ, & Bhayani SB. (2009) Robotic partial nephrectomy versus laparoscopic partial
nephrectomy for renal cell carcinoma: single-surgeon analysis of >100 consecutive
procedures. Urology. 2009 Feb;73(2):306-10.
White MA, Haber GP, Autorino R, Khanna R, Hernandez AV, Forest S, Yang B, Altunrende
F, Stein RJ, & Kaouk JH. (2011) Outcomes of robotic partial nephrectomy for renal
masses with nephrometry score of ≥7. Urology. 2011 Apr;77(4):809-13.
Wilson T, Torrey R. (2011) Open versus robotic-assisted radical prostatectomy: which is
better? Curr Opin Urol. 2011 May;21(3):200-5.
Yohannes P, Rotariu P, Pinto P, Smith AD, & Lee BR. (2002) Comparison of robotic versus
laparoscopic skills: is there a difference in the learning curve? Urology. 2002.
Jul;60(1):39-45
7
Reconfigurable Automation of
Carton Packaging with Robotic Technology
Wei Yao and Jian S. Dai
King’s College London
United Kingdom
1. Introduction
In the highly competitive food market, a wide variety of cartons is used for packaging with
attractive forms, eye-catching shapes and various structures from a logistical and a
marketing point of view. These frequent changes demand innovative packaging. Hence, for
some complex styles, such as origami cartons and cartons for small-batch products, most
confectionery companies have to resort to manual effort, a more expensive process that adds
cost to the final products, particularly when an expensive seasonal labour supply is
required. Such a manual operating line requires an expensive induction and training
program to teach employees how to erect a carton.
Current machines handle only the same general style or are specifically designed for a
fixed type of carton, and all new cartons require the development and manufacture of new
tooling for this machinery. New tooling is also required for each different pack size and
format. The development and manufacture of this tooling can be very expensive and
increases the manufacturer's lead time for introducing new products to market, thereby
reducing the manufacturer's ability to match changes in consumer demand. Tooling
changeovers, when switching production from one packaging format to another, also add
cost for the manufacturer. It is uneconomical to produce a dedicated machine for small
batch production. Hence, in this highly seasonal and competitive market, manufacturers need
a dexterous and reconfigurable machine to handle the variety and complexity of cartons
designed to attract customers.
The machines are expected to reconfigure themselves to adapt to different types of cartons
and to follow different instructions for various closing methods. Rapid expansion of robotics
and automation technology in recent years has led to development of robots in packaging
industry. Though pick and place robots were extensively used, complex tasks for erecting
and closing origami-type cartons still resort to manual operations. This presents a challenge
to robotics.
Work on creating 3-D sheet metal parts from flat blanks by bending is closely related to
carton folding and has been explored by several researchers, providing a starting point for
exploration in this field. These planners determine folding sequences based on an objective
function and typically consist of a high-level planner that determines the sequence of bends
and a number of low-level planners for individual actions. de Vin et al. (de Vin et al., 1994)
described a computer-aided process planning system to determine bending sequences for
sheet-metal manufacturing. Shpitalni and Saddan (Shpitalni & Saddan, 1994) described a
system to automatically generate bending sequences; domain-specific costs, for example the
number of tool changes and part reorientations, were used to guide an A* search algorithm.
Radin et al. (Radin et al., 1997) presented a two-stage algorithm that generates a bending
sequence using collision-avoidance heuristics and then searches for lower-cost solutions
without violating time constraints. Inui et al. (Inui et al., 1987) developed a method to plan
sheet metal parts and provide a bending simulation. Gupta et al. (Gupta et al., 1997)
described a fully automated process planning system based on a state-space search approach.
Wang and Bourne (Wang & Bourne, 1997) explored the use of shape symmetry to reduce
planning complexity for sheet-metal layout planning, bend planning, part stacking, and
assembly; the geometric constraints in carton folding parallel those in assembly planning.
Wang (Wang, 1997) developed methods to unfold 3-D products into 2-D patterns and
identified unfolding bend sequences that avoid collisions with tools.
In computational biology, protein folding, in which a one-dimensional amino acid chain
folds into a three-dimensional protein structure, is one of the most important outstanding
problems. Song and Amato (Song & Amato, 2001a, 2001b) introduced a new method that
models such foldable objects as tree-like multi-link objects in order to apply a motion
planning technique to the folding problem. This motion planning approach is based on the
successful probabilistic roadmap (PRM) method (Kavraki et al., 1996), which has been used
to study the related problem of ligand binding (Singh et al., 1999; Bayazit et al., 2001).
Advantages of the PRM approach are that it efficiently covers a large portion of the
planning space and provides an effective way to incorporate and study various initial
conformations, which has been of interest in drug design. Molecular dynamics simulations
are another approach (Levitt, 1983; Duan & Kollman, 1998) that attempts to simulate the
true dynamics of the folding process using classical Newtonian equations of motion. They
provide a means to study the underlying folding mechanism, to investigate folding
pathways, and to identify intermediate folding states.
Automation of the carton folding process involves several subtasks, including automatic folding sequence generation, motion planning of carton folding, and the development of robotic manipulators for reconfigurability.
Investigating origami naturally leads to the study of folding machinery, which extends to
folding a carton. Lu and Akella (Lu & Akella, 1999, 2000) explored a fixture technique to
describe the folding of a conventional cubical carton using a SCARA-type robot, exploiting
the similarity between carton motion sequences and robot operation sequences. This
approach uses the motion planner to aid a human designer in designing hardware of
minimal complexity. However, the technique focused only on rectangular cartons, not on
origami-type cartons, which include spherical closed chains at their vertexes. Balkcom and
Mason (Balkcom & Mason, 2004; Balkcom et al., 2004) investigated closed-chain folding of
origami and developed a machine that allows a 4-DOF Adept SCARA robot arm to make
simple folds. However, for complex cartons involving multiple closed chains, the flexibility
and reconfigurability of this method are limited. This requires a quantitative description of
a carton, its folding motion and operations. Dai (Dai & Rees Jones, 2001, 2002, 2005; Liu &
Dai, 2002, 2003; Dubey & Dai, 2006) and Yao (Yao & Dai, 2008, 2010, 2011) at King's College
London developed a multi-fingered robotic system for carton folding.
the joints of the box paper do not produce unexpected deformation. Five revolute joint axes
that intersect at the point where the creases meet define the spherical linkage, and their
movement is fully determined.
Figure 2 shows a half-erected origami carton, which contains four sections at vertexes
located at the four corners of the carton, labeled O1, O2, O3, and O4. Folding manipulation
of the carton is dominated by these sections. The rectangular base of the carton is fixed to
the ground. The four creases along the rectangular base are given by the four vectors b1, b2,
b3 and b4. Four centrally movable creases are given as s, s′, s′′ and s′′′, each of which is
active in a gusset corner. The carton is symmetric about the line AA′.
Fig. 2. A half-erected origami carton with gusset vertexes O1, O2, O3 and O4, base creases b1, b2, b3 and b4, and centrally movable creases s, s′, s′′ and s′′′
Fig. 3. The equivalent mechanism of the base of a double wall tray
The corner located at O1 forms one spherical mechanism. Its folding joint s moves, controls
the configuration of the mechanism, and drives joints t and p. When the two joints are
attached, the mobility of the mechanism reduces from 2 to 1 and the mechanism reaches a
singularity. The joint s is active and is called the control joint. During manual folding of this
origami carton, human fingers attach to the control joints and drive them to their goal
positions.
Fig. 4. The equivalent spherical five-bar linkage at corner O1, with joint angles ρ1 to ρ5 between the creases b1, t, s, p and b2
ρ1 = arccos(b · t)
ρ2 = arccos(t · s)
ρ3 = arccos(s · p)
ρ4 = arccos(p · d)
ρ5 = arccos(d · b)    (1)
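Equation (1) simply takes the angle between neighbouring unit crease vectors. A minimal numerical sketch follows (Python with NumPy; the unit vectors below are placeholders rather than the actual carton geometry):

import numpy as np

def joint_angle(u, v):
    # Angle between two unit crease vectors: rho = arccos(u . v)
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

# Placeholder unit vectors for the creases meeting at one gusset vertex.
b = np.array([1.0, 0.0, 0.0])
t = np.array([np.cos(np.pi/4), np.sin(np.pi/4), 0.0])
s = np.array([0.0, 1.0, 0.0])
p = np.array([-np.sin(np.pi/4), np.cos(np.pi/4), 0.0])
d = np.array([-1.0, 0.0, 0.0])

rho = [joint_angle(u, v) for u, v in [(b, t), (t, s), (s, p), (p, d), (d, b)]]
print([np.degrees(r) for r in rho])   # the five fixed link angles of the linkage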
t · s = cρ2    (4)

cρ2 = cφ cξ cσ sρ1 + sξ sσ sρ1 − sφ cρ1 cξ    (5)
Collecting the terms in cσ and sσ, the above equation can be rewritten as

A(φ, ξ) cσ + B(φ, ξ) sσ + C(φ, ξ) = 0    (6)

where

A(φ, ξ) = sρ1 cξ cφ,
B(φ, ξ) = sξ sρ1,    (7)
C(φ, ξ) = −sφ cρ1 cξ − cρ2.
The side elevation angle σ, varying with the transmission angles φ and ξ, can hence be
given as:

σ(φ, ξ) = tan⁻¹(A/B) ± cos⁻¹(C / √(A² + B²))    (8)

Hence,
p = [cρ5 0 sρ5; 0 1 0; −sρ5 0 cρ5] [cς −sς 0; sς cς 0; 0 0 1] [cρ4 0 sρ4; 0 1 0; −sρ4 0 cρ4] [0; 0; 1]    (10)

which, with ρ4 = ρ5 = π/2 for the right-angled creases considered here, reduces to

p = [0; sς; −cς]    (11)
Substituting p from equation (11) and the expression for s into the relation s · p = cρ3 gives

cρ3 = cς sφ cξ + sς sξ    (12)
Collecting the terms in cς and sς, the above can be rearranged as

D(φ, ξ) cς + E(φ, ξ) sς + F = 0    (13)

where

D(φ, ξ) = sφ cξ,
E(φ, ξ) = sξ,    (14)
F = −cρ3.
The main elevation angle ς, varying with the transmission angles φ and ξ, can hence be
obtained:

ς(φ, ξ) = tan⁻¹(D/E) ± cos⁻¹(F / √(D² + E²))    (15)
s = [−cρ1 sρ2 cμ cσ − cρ2 sρ1 cσ + sσ sμ sρ2;  −sσ sμ sρ2 + cσ sμ sρ2;  −sρ2 sρ2 cμ + cρ2 cρ1]    (17)

cμ = (cρ3 cς − cρ2 cρ1) / sρ2²    (18)

μ(σ, ς) = arccos[(cρ3 cς − cρ2 cρ1) / sρ2²]    (19)
Hence, the elevation angles are related to the transmission angles that control the
configuration vector s; both sets of angles characterize the carton configuration. Figure 5
shows the end-point trajectories of the four configuration control vectors.
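Equations (8) and (15) are both instances of solving A cos θ + B sin θ + C = 0 for θ. The sketch below (Python with NumPy) evaluates the two elevation angles for given transmission angles; the link angles are placeholder values, and the solver is written in the standard atan2 form, which agrees with equations (8) and (15) up to branch and sign conventions:

import numpy as np

def solve_trig(A, B, C, branch=+1.0):
    # Solve A*cos(th) + B*sin(th) + C = 0; atan2 form of eqs. (8) and (15).
    R = np.hypot(A, B)
    return np.arctan2(B, A) + branch * np.arccos(np.clip(-C / R, -1.0, 1.0))

def elevation_angles(phi, xi, rho1, rho2, rho3):
    # Side (sigma) and main (zeta) elevation angles from the transmission
    # angles phi and xi, using the coefficients of eqs. (6)-(8) and (13)-(15).
    s, c = np.sin, np.cos
    sigma = solve_trig(s(rho1) * c(xi) * c(phi),              # A(phi, xi)
                       s(xi) * s(rho1),                       # B(phi, xi)
                       -s(phi) * c(rho1) * c(xi) - c(rho2))   # C(phi, xi)
    zeta = solve_trig(s(phi) * c(xi),                         # D
                      s(xi),                                  # E
                      -c(rho3))                               # F
    return sigma, zeta

# Placeholder angles in radians; real values come from the carton geometry.
print(elevation_angles(0.3, 0.2, np.pi/4, np.pi/4, np.pi/3))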
Fig. 6. Screw of the spherical five-bar mechanism
Since the system is spherical, the fixed point O can be selected as the representative point,
and the system has only three rotational components in Plücker coordinates. The
infinitesimal screws are given by

$51 = (0, 0, 1; 0), $12 = (−cσ sρ1, sσ sρ1, cρ1; 0), $45 = (1, 0, 0; 0), $34 = (0, sς, −cς; 0), $23 = (s; 0).
v = ς̇ $45 + κ̇ $34 = σ̇ $15 + μ̇ $21 + ωs $32    (20)
Thus, after cancelling the dual parts of the infinitesimal screws, the joint speeds can be
computed from

[−$34  $21  $32] [κ̇; μ̇; ωs] = −ς̇ $45 + σ̇ $15    (21)
With the application of Cramer's rule,

κ̇ = |−ς̇ b2 + σ̇ b1   t   p| / |t   p   s|    (22)

where |·| denotes the determinant of the matrix formed by the three column vectors.
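Equation (22) is a ratio of two 3 × 3 determinants. A direct transcription might read as follows (Python with NumPy; the crease vectors below are placeholders, since the real ones come from the current carton configuration):

import numpy as np

def kappa_dot(zeta_dot, sigma_dot, b1, b2, t, p, s):
    # Cramer's-rule solution of eq. (21) for the joint speed kappa_dot, eq. (22).
    rhs = -zeta_dot * b2 + sigma_dot * b1
    num = np.linalg.det(np.column_stack([rhs, t, p]))
    den = np.linalg.det(np.column_stack([t, p, s]))
    return num / den

# Placeholder unit vectors standing in for b1, b2, t, p and s.
b1, b2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
t = np.array([0.0, 0.0, 1.0])
p = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
s = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
print(kappa_dot(0.1, 0.2, b1, b2, t, p, s))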
G1 = (−sκ sς + sρ3 cκ cς + sς) / (cσ cς sρ1 sκ sρ3 cρ3 − cσ cς sρ1 cκ sς − sκ sς + sρ3 cκ cς + sς cρ1 − sς cς sρ1)    (23)
then,

v = ς̇ b2 + ς̇ G1 + σ̇ G2    (25)
where

[sκ sς − sρ3 cκ cς;  cκ sς − sρ3 sκ cρ3;  −cρ3 cς] ωs = [−cσ sρ1 1 0 0;  sσ sρ1 0 0 sς;  cρ1 0 1 −cς] [ωt; ωb1; ωb2; ωp]    (26)

and the 3 × 4 matrix [−cσ sρ1 1 0 0; sσ sρ1 0 0 sς; cρ1 0 1 −cς] is the Jacobian matrix for the control of the system.
Fig. 7. The four-fingered robotic system (Fingers 1-4)
[$s] = [J][ωi]    (27)

For corner 1, located at O1 in configuration space, the main parts of the screw are given by
equation (25). Expanding to the four-fingered robotic system, each finger is associated with
the manipulation of one corner, and the finger velocities are obtained from the twists of the
four five-bar control joints, giving
[ν1  ν2  ν3  ν4]    (28)
Then the Jacobian matrix of the robotic system is determined as $ij, where i denotes the
finger number and j the motor number, so that the motor rates follow from

ωij = [J]⁻¹ [νi]    (29)
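Equation (29) resolves each finger's required twist into motor rates through the inverse of that finger's Jacobian. A minimal sketch follows (Python with NumPy; the 3 × 3 Jacobians are random placeholders, and a pseudo-inverse is used so that non-square or near-singular cases do not break the computation):

import numpy as np

rng = np.random.default_rng(0)

# One Jacobian per finger (placeholder values; eq. (26) gives the real entries).
J = [rng.standard_normal((3, 3)) for _ in range(4)]

# Desired corner twists nu_1..nu_4 from the carton folding plan, eq. (28).
nu = [rng.standard_normal(3) for _ in range(4)]

# Motor rates omega_ij for finger i, motor j, eq. (29), via pseudo-inverse.
omega = [np.linalg.pinv(Ji) @ ni for Ji, ni in zip(J, nu)]
for i, w in enumerate(omega, start=1):
    print(f"finger {i}: motor rates {w}")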
6. Conclusion
This chapter has presented new approaches to carton modelling and carton folding analysis.
By investigating carton examples and analysing their equivalent mechanisms, the gusset
vertexes of the cartons - being a common structure of cartons - were extracted and analysed
on the basis of their equivalent spherical linkages and were identified as guiding linkages
that determine folding.
A reconfigurable robotic system was designed based on the algorithms for multi-fingered
manipulation and the principle of reconfigurability was demonstrated. The simulation
results for the folding of an origami carton were given to prove the theory of the multi-
fingered manipulation strategy and the concept of reconfigurable robotic systems. Test rigs
were developed to demonstrate the principles of the reconfigurable packaging technology.
7. References
Balkcom, D.J., Demaine, E.D. and Demaine, M.L. Folding paper shopping bags, In
Proceedings of the 14th Annual Fall Workshop on Computational Geometry, Cambridge,
Mass., Nov. 19-20. 2004.
Balkcom, D.J.and Mason, M.T. Introducing Robotic Origami Folding, Proc. 2004 IEEE
International Conference on Robotics and Automation, 2004.
Bayazit, O. B., Song, G. and Amato, N. M. Ligand binding with obprm and haptic user
input: Enhancing automatic motion planning with virtual touch, In Proc. IEEE Int.
Conf. Robot. Autom. (ICRA), 2001. To appear. This work will be presented as a poster
at RECOMB’01.
Dai, J. S. and Rees Jones, J. Interrelationship between screw systems and corresponding
reciprocal systems and applications, Mech. Mach. Theory, 36(3), pp. 633-651, 2001.
Dai, J. S. and Rees Jones, J., Kinematics and Mobility Analysis of Carton Folds in Packing
Manipulation Based on the Mechanism Equivalent, Journal of Mechanical Engineering
Science, Proc, IMechE, Part C, 216(10): 959-970, 2002.
Dai, J. S. and Rees Jones, J., Matrix Representation of Topological Configuration
Transformation of Metamorphic Mechanisms, Transactions of the ASME: Journal of
Mechanical Design, 127(4): 837-840, 2005
de Vin, L. J., de Vries, J., Streppel, A. H., Klaassen, E. J. W. and Kals, H. J. J., The generation
of bending sequences in a CAPP system for sheet metal components, J. Mater.
Processing Technol., vol. 41, pp. 331–339, 1994.
Duan, Y. and Kollman, P.A. Pathways to a protein folding intermediate observed in a 1-
microsecond simulation in aqueous solution, Science, 282:740–744, 1998.
Dubey, V. N., and Dai, J.S., A Packaging Robot for Complex Cartons, Industrial Robot: An
International Journal, ISSN: 0143-991X, 33(2): 82-87, 2006.
Gupta, S. K., Bourne, D. A., Kim, K. H. and Krishnan, S. S., Automated process planning for
sheet metal bending operations, J. Manufact. Syst., vol. 17, no. 5, pp. 338–360, 1998.
Inui, M., Kinosada, A., Suzuki, H., Kimura, F. and Sata, T., Automatic process planning for
sheet metal parts with bending simulation, in ASME Winter Annu. Meeting, Symp.
on Intelligent and Integrated Manufacturing Analysis and Synthesis, 1987, pp. 245–258.
Kavraki, L., Geometry and the discovery of new ligands, In Proc. Int. Workshop on Algorithmic
Foundations of Robotics (WAFR), pages 435–448, 1996.
Kavraki, L., Svestka, P., Latombe, J. C. and Overmars, M., Probabilistic roadmaps for path
planning in high-dimensional configuration spaces, IEEE Trans. Robot. Automat.,
12(4):566–580, August 1996.
Levitt, M., Protein folding by restrained energy minimization and molecular dynamics, J.
Mol. Bio., 170:723–764, 1983.
Liu, H. and Dai, J.S., Carton Manipulation Analysis Using Configuration Transformation,
Journal of Mechanical Engineering Science, Proc. IMechE, 216(C5), pp. 543-555, 2002.
Liu,H. and Dai, J.S., An Approach to Carton-Folding Trajectory Planning Using Dual
Robotic Fingers, Robotics and Autonomous Systems, 42(1), pp. 47-63, 2003.
Lu, L and Akella, S., Folding Cartons with Fixtures: A Motion Planning Approach, IEEE
Trans on Robotics and Automation, vol.16, no. 4, pp. 346-356, August 2000.
Lu, L. and Akella, S. Folding Cartons with Fixtures: A Motion Planning Approach, Proc of the
IEEE International Conference on Robotics and Automation, May 1999.
Radin, B., Shpitalni, M. and Hartman, I., Two-stage algorithm for determination of the
bending sequence in sheet metal products, ASME J. Mechan. Design, vol. 119, pp.
259–266, 1997.
Shpitalni, M. and Saddan, D., Automatic determination of bending sequence in sheet metal
products, Ann. CIRP, vol. 43, no. 1, pp. 23–26, 1994.
Singh, A.P., Latombe, J.C. and Brutlag, D.L. A motion planning approach to flexible ligand
binding, In 7th Int. Conf. on Intelligent Systems for Molecular Biology (ISMB), pages
252–261, 1999.
Song, G. and Amato, N. M., A motion planning approach to folding: From paper craft to
protein folding, In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pages 948–953, 2001.
Song, G. and Amato, N. M., Using motion planning to study protein folding pathways, In
Proc. Int. Conf. Comput. Molecular Biology (RECOMB), pages 287–296, 2001.
Wang, C.-H. and Bourne, D. A., Using symmetry to reduce planning complexity: A case
study in sheet-metal production, in Proc. 1997 ASME Design Engineering Technical
Conf., Sacramento, CA, Sept. 1997, DETC97DFM4327.
Wang, C.-H., Manufacturability-Driven Decomposition of Sheet Metal Products, Ph.D.
dissertation, The Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA, Robotics
Inst. Tech. Rep. CMU-RI-TR-97-35, Sept. 1997.
Yao, W. and Dai, J.S., Dexterous Manipulation of Origami Cartons with Robot Fingers in the
Interactive Configuration Space, Transactions of the ASME: Journal of Mechanical
Design, 130(2), pp. 022303_1-8, 2008.
Yao, W., Cannella, F. and Dai, J.S., Automatic Folding of Cartons Using a Reconfigurable
Robotic System, Robotics and Computer-Integrated Manufacturing, 27(3), pp. 604-613,
2011.
Yao, W., Dai, J.S., Medland, T. and Mullineux, G., A Reconfigurable Robotic Folding System
for Confectionery Industry, Industrial Robot: An International Journal, 37(6), pp. 542-
551, 2010.
8
Autonomous Anthropomorphic Robotic System
with Low-Cost Colour Sensors to Monitor
Plant Growth in a Laboratory
New Zealand
1. Introduction
An autonomous anthropomorphic robotic arm has been designed and fabricated to
automatically monitor plant tissue growth in a modified clonal micro-propagation system
which is being developed for the New Zealand Institute for Plant & Food Research Limited.
The custom-fabricated robotic arm uses a vertical linear ball shaft and high speed stepper
motors to provide the movements of the various joints, with the arm able to swivel 180
degrees horizontally. Sensors located at the end of the arm monitor plant growth and the
ambient growing environment. These include a compact colour zoom camera mounted on a
pan and tilt mechanism to capture high resolution images, RGB (red, green and blue) colour
sensors to monitor leaf colour as well as sensors to measure ambient atmospheric
temperature, relative humidity and carbon dioxide. The robotic arm can reach anywhere
over multiple trays (600mm x 600mm) of plantlets. Captured plant tissue images are
processed using innovative algorithms to determine tissue, or whole plant, growth rates
over specified time periods. Leaf colour sensors provide information on the health status of
tissue by comparing the output with predetermined optimum values. Custom software has
been developed to fully automate the operation of the robotic arm and capture data,
allowing the arm to return to specified sites (i.e. individual plantlets) at set time intervals to
identify subtle changes in growth rates and leaf colour. This will allow plant nutrient levels
and the immediate environment to be routinely adjusted in response to this continuous
sensing, resulting in optimised rapid growth of the plant with minimal human input.
These low cost colour sensors can be incorporated into a continuous automated system for
monitoring leaf colour of growing plants. Subtle colour changes can be an early indication of
stress from less than optimum nutrient concentrations. In this chapter we also detail the
calibration technique for a RGB sensor and compare it with a high end spectrophotometer.
Robots have been used for many years in industry, and it is only now, as robots become more sophisticated and reliable, that we are
beginning to see them used to undertake routine, often repetitive tasks, which are expensive
to do using a highly paid labour force. With rapid strides in technological advancement,
more and more applications have become possible. These include the development of a
robotic system for weed control (Slaughter, et al., 2008), a system for automatic harvesting of
numerous agri-science products such as cutting flowers grown in greenhouses (Kawollek &
Rath, 2008) and automating cucumber harvesting in greenhouses (van Henten, et al., 2002).
Advances in electronics have empowered engineers to build robots that are capable of
operating in unstructured environments (Garcia, et al., 2009). Camera-in-hand robotic
systems are becoming increasingly popular, wherein a camera is mounted on the robot,
usually at the hand, to provide an image of the objects located in the robot’s workspace
(Kelly, et al., 2000). Increasingly, robots are being used to sort, grade, pack and even pick
fruits. Fruits can be identified and classified on a continuously moving conveyer belt (Reyes
& Chiang, 2009). An autonomous wheeled robot has been developed to pick kiwifruit from
orchard vines (Scarfe, et al., 2009). Robotic techniques for production of seedlings have been
developed, identifying a need to add a machine vision system to detect irregularities in seed
trays and to provide supplementary sowing using a 5-arm robot (HonYong, et al., 1999).
Automation of micro propagation for the rapid multiplication of plants has been described
for the micro propagation of a grass species that replaces the costly and tedious manual
process (Otte, et al., 1996). A system has also been developed that combines plant
recognition and chemical micro-dosing using autonomous robotics (Sogaard & Lund, 2007).
Colour as a means of assessing quality is also gaining popularity amongst researchers. These
include evaluating bakery products using colour-based machine vision (Abdullah, et al.,
2000), monitoring tea during fermentation (Borah & Bhuyan, 2005), grading specific fruits
and vegetables (Kang & Sabarez, 2009; Miranda, et al., 2007; Omar & MatJafri, 2009) and in
the health sector to determine blood glucose concentrations (Raja & Sankaranarayanan,
2006). Near infrared (NIR) sensors are also gaining popularity as non-destructive means of
assessing fruit and plant material, including the measurements of plant nutrient status
(Menesatti, et al., 2010) as well as testing of fruit quality (Hu, et al., 2005; Nicola¨ı, et al.,
2007; Paz, et al., 2009).
Investigation into non-destructive methods to measure the health status of plants is a
relatively new area of research. Subtle leaf colour changes can be used as a measure of plant
health. Although limited work has been carried out in real time, a recent micro-propagation
based system used potato tissue images captured via a digital camera, to identify the colour
of selected pixels (Yadav, et al., 2010). Spectral reflectance, using a range of spectral bands,
has been used as a non-destructive measure of leaf chlorophyll content in a range of species
(Gitelson, et al., 2003). Alternative methods make use of spectroscopic systems using a fixed
light source to record colour reflectance of multiple samples (Yam & Papadakis, 2004).
The introduction of machine vision as a means of investigating the environment allows for
very complex systems to be developed. Over the years the conventional “robotic design
types” have remained more or less the same; however modified designs are increasingly
being employed for specific tasks. Designs of robotic arms have made huge progress in
recent years, as motor controllers, sensors and computers have become more sophisticated.
It is envisaged that as more sensors, such as NIR (near infra-red) and colour sensors, become
readily available, these will be integrated in the robotic arm. One such integrated system,
which is unique and different from off-the-shelf robots, is detailed in this chapter.
module. The received data is then sent to the camera and the servo controller board. The
user interface is shown in figure 2 and the completed robotic system, with plant trays, is
shown in figure 3.
Fig. 2. The Graphical User Interface (GUI) of the robot control system
Fig. 3. The completed robotic arm with camera, joystick and plant trays
Fig. 4. Block diagram showing connection between PC, servo controller board and joystick
In the first instance a simple program was written in Visual Basic, allowing each servo
motor to be controlled separately by clicking buttons. The motor parameters such as
stepping rate and acceleration interval could be entered through the program’s user
interface. The user interface of the test program is shown in figure 5. This program sent the
corresponding commands to the servo motor controller board. The next step was to control
the servo motor by implementing a slide bar (figure 2). This allowed the operator to slide
the bar, which incremented or decremented the position value, allowing simple movements
based on the position byte. On successfully implementing controls for one servo motor,
multiple servo motors could then be controlled in the same manner.
Fig. 8. (a) Completed robotic arm using stepper motors; (b) camera housing and RGB colour
sensors
One design aim was to make the links of the robotic arm extendable so that the robot could be flexible and
adapt to various workspaces.
gearing system was investigated, which made use of spur gears to increase torque ratios.
However, once constructed, a number of issues arose, including excessive weight (with the
links extended) and slippage between gears. To overcome these issues a rethink of the arm
design removed the ability to extend the link lengths, and a belt and pulley system was
integrated to alleviate slippage within the gears. However, each design iteration maintained
the overall original design concept. Figure 7 shows an initial version of the robotic arm in
various stages of assembly and figure 8 shows the final design, with the various aluminium
parts anodized. The completed robotic arm is shown in figure 8 (a). The close up view of the
camera housing mechanism and the RGB colour sensors is shown in figure 8(b).
3.2.2 Actuation and control of the robotic arm joints using stepper motors
To allow the various joints to move, the arm uses bipolar, high torque stepper motors, which
have varying amounts of torque, depending on the joint. The robotic arm uses five stepper
motors that are controlled through a motor controller (KTA-190) and micro-step driver
(M542) ("OceanControls," 2010). All five motors have a step angle of 1.8 degrees and
make use of a micro step driver that allows the user to select an even finer resolution (i.e.
increasing the number of steps per revolution). Both a pulse signal and a direction signal are
required for connecting a 4-wire stepper motor to the driver, with speed and torque
depending on the winding inductance.
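As a concrete illustration of the resolution arithmetic, a 1.8° motor gives 200 full steps per revolution, which the micro-step driver multiplies. The short sketch below (Python; the 8× microstep setting and the 15° target are made-up example values) converts a desired joint rotation into a step count:

STEP_ANGLE_DEG = 1.8   # full-step angle of the motors used here
MICROSTEP = 8          # driver setting (example value only)

steps_per_rev = int(360 / STEP_ANGLE_DEG) * MICROSTEP   # 200 * 8 = 1600

def steps_for(angle_deg):
    # Number of (micro)steps needed to rotate a joint by angle_deg.
    return round(angle_deg / 360 * steps_per_rev)

print(steps_per_rev)    # 1600 microsteps per revolution
print(steps_for(15.0))  # about 67 microsteps for a 15 degree move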
The KTA-190 motor controllers provide an interface between the computer and up to 4
stepper motors, with the ability to control each motor independently or collectively.
Utilizing an RS-232 9600, 8N1 ASCII serial communication protocol, up to 4 controller
boards can be linked, giving control of up to 16 stepper motors (figure 9). A motor is
controlled by a simple address, followed by the appropriate ASCII commands. The
controller has an interface to allow limit switches to be used to prevent the motors from
travelling out of range. With a total of 17 commands it is possible to tailor the operation and
movement of the motors. Commands include setting the position of the motors, returning
the current positions of the motors, moving the motors by a relative or absolute amount,
and acceleration settings. A status command returns a 12-bit binary representation of the
status of the controller board at any given time, providing information on the movement,
direction and status of the limit switches.
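Although the exact 17-command syntax is defined in the KTA-190 documentation and is not reproduced here, the address-then-command pattern described above can be sketched as follows (Python with pyserial; the port name and the command mnemonics 'RM' and 'ST' are placeholders, not the board's real commands):

import serial  # pyserial

# 9600 baud, 8 data bits, no parity, 1 stop bit, as the text specifies (8N1).
link = serial.Serial("/dev/ttyUSB0", baudrate=9600,
                     bytesize=serial.EIGHTBITS,
                     parity=serial.PARITY_NONE,
                     stopbits=serial.STOPBITS_ONE,
                     timeout=1.0)

def send(address, command):
    # Send one address-prefixed ASCII command and return the reply line.
    # 'command' is a placeholder string; see the KTA-190 manual for real syntax.
    link.write(f"@{address} {command}\r".encode("ascii"))
    return link.readline().decode("ascii").strip()

# Hypothetical usage: move motor 1 by a relative amount, then poll status.
print(send(1, "RM 400"))   # placeholder 'relative move' command
print(send(1, "ST"))       # placeholder 'status' command (12-bit field)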
A proximity sensor has been integrated into the system to ensure that the
colour readings are taken at a fixed height of 20 mm above the plant material.
4. Sensors
Sensors provide information on the surroundings, and the sensors chosen vary depending
on the intended application. A number of sensors have been integrated into the system to
provide information on plant growth non-destructively, with the intention of using low-
cost sensors that are easily integrated into the system. The use of a camera provides
information on how fast the plant is growing, by identifying the quantity of plant material
from a 2D view. Colour sensors provide information on the colour of an object. When colour
sensors are used to monitor plant leaves, subtle changes can be detected before the human
eye can identify them. This allows for media and nutrient levels to be adjusted accordingly.
A proximity sensor ensures colour readings are taken at a fixed height, while temperature,
humidity and CO2 sensors provide information on the ambient environment.
We detail the camera control and testing in section 4.1. Colour sensor selection, along with
calibration techniques and results, are presented in detail in section 4.2.
4.1 Camera
A Sony colour camera (model: FCB-IX11AP) was used. It features a ¼″ CCD (charge coupled
device) image sensor using PAL (Phase Alternating Line) encoding system. The camera has
a 40x zoom ratio (10x optical, 4x digital) that is controllable from a PC via Sony’s VISCA
(Video System Control Architecture) command protocol. Over 38,000 different command
combinations are possible for controlling the cameras features. The camera’s macro
capability allows it to capture images as close as 10mm from the subject and it can operate in
light conditions as low as 1.5 Lux. The electronic shutter speed is controllable from 1/10 to
1/10000 of a second allowing for clarity in photographs. In order for the camera’s functions
to be controlled, hexadecimal commands (as serial data) are sent to the camera. These serial
commands require 8 data bits, 1 start bit, 1 (or 2) stop bit and no parity. They have a
communication speed of 9.6, 19.2 or 38.4 kbps. The camera can be programmed and
controlled using a TTL or RS232C signal level serial interface. To test the camera features, it
was directly wired to the PC using the RS232C interface via a USB-to-RS232 converter as
shown in figure 10. The video output signal from the camera was fed to a frame
grabber/digitizer which is interfaced to the PC using USB. The image captured is displayed
on the application’s GUI (figure 12).
Fig. 10. Block diagram showing wired connection of PC to camera’s RS232 inputs
148 Robotic Systems – Applications, Control and Programming
To become familiar with the VISCA command structure and to test the various camera functions,
especially the programming commands for controlling the zoom, a standard communication
program (Terminal v1.9b) was used to send commands to the camera. To test the TTL
interface, the system shown in figure 11 was employed. An ST232 IC was used to convert the
RS232-level signals to 5V TTL.
Fig. 11. Block diagram showing wired connection of PC to camera’s TTL inputs
In the final design of the robotic system the camera was connected using the original RS-232
interface, with custom software created to control the features of the camera. Figure 12
shows the user interface which allows camera zoom, iris, gain, exposure settings and shutter
features to be manually controlled. It also displays the image captured by the camera. Figure
13 shows the test program depicting the magnified image of the object under observation,
with the camera set to 8x zoom.
Each of the controllable settings for the camera is controlled by sending arrays of
hexadecimal commands to the camera, making use of Sony’s VISCA protocol. Custom
created software allows the user to control a number of settings to customize the camera to
the user’s desire.
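As an illustration, the sketch below (Python with pyserial) sends the commonly documented VISCA "zoom direct" packet, 8x 01 04 47 0p 0q 0r 0s FF; the port name is a placeholder and the byte layout should be verified against the FCB-IX11AP technical manual before use:

import serial  # pyserial

cam = serial.Serial("/dev/ttyUSB0", baudrate=9600,
                    bytesize=8, parity="N", stopbits=1, timeout=1.0)

def visca_zoom_direct(address, position):
    # VISCA 'zoom direct' packet: 8x 01 04 47 0p 0q 0r 0s FF, where the
    # 16-bit zoom value is split into four nibbles p, q, r, s.
    # (Byte layout per the common VISCA spec; verify against the camera manual.)
    nibbles = [(position >> shift) & 0x0F for shift in (12, 8, 4, 0)]
    packet = bytes([0x80 | address, 0x01, 0x04, 0x47, *nibbles, 0xFF])
    cam.write(packet)
    return cam.read(16)   # ACK/completion bytes returned by the camera

visca_zoom_direct(1, 0x1000)   # example: a mid-range zoom position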
Fig. 13. Graphical User Interface (GUI) of the remote control application (camera zoom 8x)
Fig. 14. The Parallax ColorPAL sensor, showing the snorkel and connection pins (+5V, GND, Signal)
the object. The ColorPAL makes use of a TAOS (Texas Advanced Optoelectronic Solutions)
light-to-voltage chip. When light is reflected, the voltage, which is proportional to the light
reflected, is used to determine the sample’s R, G and B colour contents. The ColorPAL
requires the sample to be illuminated using each of the red, green and blue LEDs, with a
‘snorkel’ to shield possible interference from external light sources. This requires the
ColorPAL to be in direct contact with the object for an optimum reading without
interference.
Fig. 15. The Parallax TCS3200 colour sensor, showing the lens, white LEDs and connection pins (on the bottom of the board)
Fig. 16. Light absorbed from TCS3200 across the white LED light spectrum when the sensor
is positioned at 6 different heights
Since the TCS3200 is mounted 20 mm above the sample, and therefore not in direct contact
with the sample, it was more suited for our application than the full contact required by the
ColorPAL sensor. A Konica Minolta CM-700D Spectrophotometer (Konica Minolta Sensing
Americas, Inc., NJ, USA) was used to validate and calibrate the RGB sensors. For accurate
measurements, the CM-700D was calibrated by taking both white and black readings by
sampling a supplied white and black object respectively.
The CM-700D gives colour in the XYZ colour space, as well as L*a*b*, L*C*h, Hunter Lab,
Yxy and Munsell. A linear transformation matrix was required to transform data from the
XYZ colour space to the RGB colour space to enable comparisons and calibrations to the
Parallax sensor. The linear transformation equation to be solved (Juckett, 2010) is:
X R
Y = M × G (1)
Z B
x = X / (X + Y + Z)    (2)
y = Y / (X + Y + Z)    (3)
z = Z / (X + Y + Z)    (4)
Equations (2 – 4) combined with the standard 1931 xy chromaticity diagram provided the
foundation for the linear transformation (Eq. 1). This transformation converted the XYZ data
initially to sRGB colour. The chromaticity values of x, y and z are shown in Table 1
(Lindbloom, 2010).
Colour x y z
Red 0.64 0.33 0.212656
Green 0.30 0.60 0.715158
Blue 0.15 0.06 0.072186
Table 1. x, y, and z chromaticity values of red, green and blue converting xyz to sRGB
From the x, y and z chromaticity values (Table 1), the transformation matrix, M, is calculated
(Eq. 5)
[R; G; B] = M⁻¹ × [X; Y; Z]    (6)
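As a sketch of how such a matrix can be computed and applied (Python with NumPy), the construction below follows the common Lindbloom-style recipe of scaling each primary's column so that RGB white maps to a chosen reference white; the D65 white point is our assumption, and the Table 2 chromaticities can be substituted in place of Table 1 to obtain a CIE RGB matrix:

import numpy as np

# x, y chromaticities of the sRGB primaries, as in Table 1.
xy = {"R": (0.64, 0.33), "G": (0.30, 0.60), "B": (0.15, 0.06)}
white_XYZ = np.array([0.95047, 1.0, 1.08883])   # D65 reference white (assumed)

# Unscaled primary columns: (x/y, 1, (1 - x - y)/y) for each primary.
cols = []
for x, y in (xy["R"], xy["G"], xy["B"]):
    cols.append([x / y, 1.0, (1.0 - x - y) / y])
M0 = np.array(cols).T

# Scale each column so that RGB = (1, 1, 1) maps to the reference white.
S = np.linalg.solve(M0, white_XYZ)
M = M0 * S                   # XYZ = M @ RGB, eq. (1)
M_inv = np.linalg.inv(M)     # RGB = M_inv @ XYZ, eq. (6)

print(M_inv @ white_XYZ)     # approximately (1, 1, 1): white maps back to RGB white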
Colour x y z
Red 0.7350 0.2650 0.176204
Green 0.2740 0.7170 0.812985
Blue 0.1670 0.0090 0.010811
Table 2. x, y, and z chromaticity values of red, green and blue converting xyz to CIE RGB
The x, y and z chromaticity values shown in Table 2, are again used to solve the
transformation matrix, M (Eq. 8)
Testing showed that the TCS3200 produced light in the CIE RGB colour space; although the
colour sensor could be calibrated to the sRGB colour space, the results of such a calibration
would be incorrect. A colour transformation to the CIE RGB colour space was therefore
more appropriate than the sRGB colour space, and consequently a new linear
transformation was required.
R′N = (RN)^γ    (10)
where Rmax , Gmax , Bmax represent the maximum R, G and B value of a white object from the
TCS3200.
For each colour sample measured, the calibration factor was calculated and averaged using
a geometric mean (as opposed to the more common arithmetic mean) (Fleming & Wallace,
1986), thus providing the γ factor for R, G and B individually. The (desired) calibrated
values were then obtained using equation (13).
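The calibration step can be sketched as follows (Python with NumPy). The normalisation by the white-object maxima is our reading of the missing equations (11)-(12), the 0-255 output range of equation (13) is assumed, and the sample readings are invented for illustration:

import numpy as np

def fit_gamma(sensor, reference, max_white):
    # Per-channel gamma from paired readings of one channel.
    # Uses R'_N = R_N ** gamma (eq. 10) with R_N = R / R_max (assumed eqs. 11-12).
    s = np.asarray(sensor, float) / max_white    # normalised sensor values
    r = np.asarray(reference, float) / 255.0     # normalised reference values
    gammas = np.log(r) / np.log(s)               # per-sample gamma estimates
    return np.exp(np.mean(np.log(gammas)))       # geometric mean, as in the text

def calibrate(sensor, max_white, gamma):
    # Calibrated channel value (our reading of eq. 13): 255 * (R/Rmax)**gamma.
    return 255.0 * (np.asarray(sensor, float) / max_white) ** gamma

# Example with made-up readings for one (green) channel:
g = fit_gamma(sensor=[60, 120, 180], reference=[30, 90, 160], max_white=240)
print(g, calibrate([60, 120, 180], 240, g))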
For a range of fifteen colours, measurements were taken using the TCS3200 RGB sensor and
the CM-700D spectrophotometer (Table 3), from which the gamma calibration factors were calculated.
Fig. 17. TCS3200 sensor RGB readings, calibrated and un-calibrated, compared with the CM-
700D readings of: Red (A); Green (B); Blue (C) using a CIE RGB colour transformation.
(Colour samples are as given in Table 3)
An example of a green colour interpreted by the CM-700D and TCS3200 colour sensor
before and after calibration is shown in figure 18.
Fig. 18. Graphical representation of the colour differences between, calibrated and un-
calibrated TCS3200 colour sensor
Results showed when calibrating the CM-700D XYZ values to CIE RGB instead of sRGB, the
calibration results improved, as shown in Table 5, to have a much smaller error for R, G and
B. A colour representation can be seen in Figure 19.
Fig. 19. An example of a green colour interpreted by the TCS3200 colour sensor with no
calibration, compared with the CM-700D using both sRGB and CIE RGB transformations
5. Conclusion
An anthropomorphic robotic arm has been designed, fabricated and fully tested to meet
the requirements set out by The New Zealand Institute for Plant & Food Research
Limited. The robotic system is able to reach and monitor plantlets growing in trays on a
multi-level shelving unit. Custom software has been developed to fully automate the
control of the robotic arm. The position of the robotic arm can be controlled with great
precision using the microstepper controller to allow micro-motion of the stepper motors.
The robot can be programmed to autonomously position itself and take readings at
regular intervals.
Several sensors have been integrated in the robotic system, namely a high-end colour
camera for capturing high resolution images of plantlets; proximity sensor to position the
arm at a predetermined distance from the plant surface for taking measurements;
temperature, humidity and CO2 sensors to monitor the ambient atmospheric conditions and
a low-cost RGB sensor to measure the colour of plant leaves.
Two different RGB sensors have been evaluated. Experimental results show that the
Parallax TCS3200 RGB sensor is a useful low cost colour sensor, which when calibrated to an
industry standard spectrophotometer, can provide accurate RGB readings. It is therefore a
useful component for integrating into an automated system such as a robotic arm, with
various other sensors, for monitoring plants growing in a modified plant micro-propagation
system.
The system has the potential for not only monitoring plant material in a laboratory
environment but other applications as well where non-destructive measurements of colour
are required.
6. Acknowledgment
The hardware cost of this research has been funded by The New Zealand Institute for Plant
& Food Research. The authors greatly appreciate the financial and technical workshop
support given by the School of Engineering and Advanced Technology, Massey University
and The New Zealand Institute for Plant & Food Research Limited.
7. References
Abdullah, M. Z., Aziz, S. A., & Mohamed, A. M. D. (2000). Quality Inspection Of Bakery
Products Using A Color-Based Machine Vision System. Journal of Food Quality, 23
(1), 39-50.
Borah, S., & Bhuyan, M. (2005). A computer based system for matching colours during the
monitoring of tea fermentation. International Journal of Food Science and
Technology, 40(6), 675-682.
Lindbloom, B. (2010). Useful colour Equations. Retrieved 18/8/2010, from
http://www.brucelindbloom.com/
Fleming, P. J., & Wallace, J. J. (1986). How not to lie with statistics: The correct way to
summarize the benchmark results. Communications of the ACM, 29(3), 218-221
Garcia, G. J., Pomares, J., & Torres, F. (2009). Automatic robotic tasks in unstructured
environments using an image path tracker. Control Engineering Practice, 17(5), 597-
608.
Gitelson, A. A., Gritz, Y., & Merzlyak, M. N. (2003). Relationships between leaf chlorophyll
content and spectral reflectance and algorithms for non-destructive chlorophyll
assessment in higher plant leaves. Journal of Plant Physiology, 160(3), 271-282.
HonYong, W., QiXin, C., Masateru, N., & JianYue, B. (1999). Image processing and robotic
techniques in plug seedling production. Transactions of the Chinese Society of
Agricultural Machinery, 30, 57-62.
Hu, X., He, Y., Pereira, A. G., & Gómez, A. H. (2005). Nondestructive Determination Method
of Fruit Quantity Detection Based on Vis/NIR Spectroscopy Technique. 27th
Annual International Conference of the Engineering in Medicine and Biology
Society (IEEE-EMBS 2005), 1956-1959
Juckett, R. (2010). RGB color space conversion - Linear transformation of color. Retrieved
14/8/2010, from http://www.ryanjuckett.com
Kang, S. P., & Sabarez, H. T. (2009). Simple colour image segmentation of bicolour food
products for quality measurement. Journal of Food Engineering, 94, 21-25.
Kawollek, M., & Rath, T. (2007). Robotic Harvest of Cut Flowers Based on Image Processing
by Using Gerbera jamesonii as Model Plant. In S. DePascale, G. S. Mugnozza, A.
Maggio & E. Schettini (Eds.), Proceedings of the International Symposium on High
Technology for Greenhouse System Management (Greensys 2007), 557-563
Kelly, R., Carelli, R., Nasisi, O., Kuchen, B., & Reyes, F. (2000). Stable Visual Servoing of
Camera-in-Hand Robotic Systems. IEEE/ASME Transactions on Mechatronics, 5(1),
39 - 48.
Menesatti, P., Antonucci, F., Pallottino, F., Roccuzzo, G., Allegra, M., Stagno, F., et al. (2010).
Estimation of plant nutritional status by Vis–NIR spectrophotometric analysis on
orange leaves. Biosystems Engineering, 105(4), 448-454.
Miranda, C., Girard, T., & Lauri, P. E. (2007). Random sample estimates of tree mean for
fruit size and colour in apple. Scientia Horticulturae, 112 (1), 33-41.
Nicolaï, B. M., Beullens, K., Bobelyn, E., Peirs, A., Saeys, W., Theron, K. I., et al. (2007).
Nondestructive measurement of fruit and vegetable quality by means of NIR
spectroscopy: A review. Postharvest Biology and Technology, 46(2), 99-118.
OceanControls. (2010). Retrieved 12/04/2010, from www.oceancontrols.com.au
Omar, A. F. B., & MatJafri, M. Z. B. (2009). Optical Sensor in the Measurement of Fruits
Quality: A Review on an Innovative Approach. International Journal of Computer
and Electrical Engineering, 1(5), 557-561.
Otte, C., Schwanke, J., & Jensch, P. (1996). Automatic micropropagation of plants. Optics in
Agriculture, Forestry, and Biological Processing II, Proc. SPIE 2907, 80-87.
Parallax Home. (2010). Retrieved 05/07/2010, from www.Parallax.com
Paz, P., Sánchez, M.-T., Pérez-Marín, D., Guerrero, J.-E., & Garrido-Varo, A. (2009).
Evaluating NIR instruments for quantitative and qualitative assessment of intact
apple quality. Journal of the Science of Food and Agriculture, 89(5), 781-790.
Raja, A. S., & Sankaranarayanan, K. (2006). Use of RGB Color Sensor in Colorimeter for
better clinical measurements of blood Glucose. BIME Journal 6(1), 23 - 28.
Reyes, J., & Chiang, L. (2009). Location And Classification Of Moving Fruits In Real Time
With A Single Colour Camera. Chilean Journal Of Agricultural Research, 69, 179-
187.
Scarfe, A. J., Flemmer, R. C., Bakker, H. H., & Flemmer, C. L. (2009). Development of An
Autonomous Kiwifruit Picking Robot. 4th International Conference on
Autonomous Robots and Agents, 380-384.
Slaughter, D. C., Giles, D. K., & Downey, D. (2008). Autonomous robotic weed control
systems: A review. Computers and Electronics in Agriculture, 61(1), 63-78.
Sogaard, H. T., & Lund, I. (2007). Application accuracy of a machine vision-controlled
robotic micro-dosing system. Biosystems Engineering, 96(3), 315-322.
Sony Professional. (2010). Retrieved 05/06/2009, from www.pro.sony.eu
van Henten, E. J., Hemming, J., van Tuijl, B. A. J., Kornet, J. G., Meuleman, J., van Os, E. A.,
et al. (2002). An autonomous robot for harvesting cucumbers in greenhouses.
[Article]. Autonomous Robots, 13(3), 241-258.
Yadav, S. P., Ibaraki, Y., & Gupta, S. D. (2010). Estimation of the chlorophyll content of
micropropagated potato plants using RGB based image analysis. Plant Cell Tissue
and Organ Culture, 100(2), 183-188.
Yam, K. L., & Papadakis, S. E. (2004). A simple digital imaging method for measuring and
analyzing color of food surfaces. Journal of Food Engineering, 61, 137-142.
Part 2

CPG Implementations for Robot Locomotion: Analysis and Design

1. Introduction
The ability to efficiently move in a complex environment is a key property of animals. It
is central to their survival, i.e. to avoid predators, to look for food, and to find mates for
reproduction (Ijspeert, 2008). Nature has found different solutions for the problem of legged
locomotion. For example, the vertebrate animals have a spinal column and one or two pairs
of limbs that are used for walking. Arthropod animals are characterized by a segmented body covered by a jointed external skeleton (exoskeleton), with paired jointed limbs on each segment, and they can have a high number of limbs (Carbone & Ceccarelli, 2005). The biological mechanisms underlying locomotion have therefore been extensively
studied by neurobiologists, and in recent years there has been an increase in the use of
computer simulations for testing and investigating models of locomotor circuits based on
neurobiological observations (Ijspeert, 2001). However, the mechanisms generating the
complex motion patterns performed by animals are still not well understood (Manoonpong,
2007).
Animal locomotion, for instance, requires multi-dimensional coordinated rhythmic patterns
that need to be correctly tuned so as to satisfy multiple constraints: the capacity to generate
forward motion, with low energy, without falling over, while adapting to possibly complex
terrain (uneven ground, obstacles), and while allowing the modulation of speed and direction
(Ijspeert & Crespi, 2007). In vertebrate animals, an essential building block of the locomotion
controller is the Central Pattern Generator (CPG) located in the spinal cord. The CPG is a
neural circuit capable of producing coordinated patterns of rhythmic activity in open loop,
i.e. without any rhythmic inputs from sensory feedback or from higher control centers
(Delcomyn, 1980; Grillner, 1985). Interestingly, very simple input signals are sufficient to
modulate the produced patterns. Furthermore, CPGs can adapt to various environments by changing the periodic rhythmic patterns. For instance, cats and horses are able to change their locomotor patterns depending on the situation.
This relevance of locomotion both for biology and for robotics has led to multiple interesting
interactions between the two fields. The interactions have mainly been in one direction, with
robotics taking inspiration from biology in terms of morphologies, modes of locomotion,
and/or control mechanisms. In particular, many robot structures are directly inspired
by animal morphologies, from snake robots, quadruped robots, to humanoid robots.
Increasingly, robotics is now providing something back to biology, with robots being used
as scientific tools to test biological hypotheses (Ijspeert, 2008). Researchers have studied the
CPGs for decades and some principles have been derived to model their functionality and
structure. CPGs have been proposed as a mechanism to generate an efficient control strategy
for legged robots based on biological locomotion principles. Locomotion in legged robots is much more complex than in wheeled robots, since the former typically have between twelve and twenty degrees of freedom, which must be rhythmically coordinated to produce specific gaits. The
design of locomotion control systems of legged robots is a challenge that has been partially
solved. To develop bio-inspired solutions, detailed analyses of candidate biological neural
systems are essential as in the case of legged locomotion.
Models of CPGs have been used to control a variety of different types of robots and different
modes of locomotion. For instance, CPG models have been used with hexapod and octopod
robots inspired by insect locomotion, quadruped robots inspired by vertebrates such as horses, biped robots inspired by humans, and other kinds of robots inspired by reptiles such as snakes, as will be discussed in Section 2. CPGs can be modeled at different levels of abstraction; these aspects will be presented in Section 3. Additionally, CPGs present several
interesting properties including distributed control, the ability to deal with redundancies,
fast control loops, and allowing modulation of locomotion by simple control signals. These
properties, when transferred to mathematical models, make CPGs interesting building blocks
for locomotion controllers in robots (Fujii et al., 2002; Ijspeert, 2008), as shown in section 4.
In spite of its importance, the CPG-approach presents several disadvantages. One of the
main drawbacks of CPGs is related to the learning methodologies used to generate the rhythmic signals (Zielinska, 1996). For that purpose, a method to tune a CPG using genetic algorithms (GA) is proposed, whose details are provided in Section 5. Moreover, a methodology for designing CPGs to solve a particular locomotor problem is still missing. Another problem is that animals rarely perform steady-state locomotion for a long time, and tend to superpose, and switch between, multiple motor behaviors. A remaining open challenge is therefore to design control architectures capable of exhibiting such rich motor skills.
Engineering CPG-based control systems has been difficult since the simulation of even
rather simple neural network models requires a significant computational power that
exceeds the capacity of general embedded microprocessors. As a result, CPG dedicated
hardware implementations, both analog and digital, have received more attention (Nakada,
2003). On one hand, analog circuits have already been proposed; they are computation- and power-efficient, but they usually lack flexibility and dynamics and involve large design cycles. On the other hand, a Field-Programmable Gate Array (FPGA) substrate might provide real-time response, low power consumption and concurrent processing solutions for low-level locomotion issues (Barron-Zambrano et al., 2010b). So, a brief analysis of CPG implementations is presented in Section 6. Also, the integration of environment information might make it possible to produce behaviors by means of selection and adaptation of lower substrates. Under this scenario, legged robots may reconfigure their locomotion control strategies on-line according to sensory information so as to effectively handle dynamic changes in the real world. Here the FPGA-based approach fits well, since it could efficiently handle higher perception functions for CPG parameter adaptation, and provide the neural processing and coordination of a custom and adaptive CPG hardware module. For those reasons, Section 7
presents a hardware implementation for quadruped locomotion control based on FPGA. The
experimental results of hardware implementation and the future work are shown in sections
8 and 9, respectively. Finally, the conclusions of this work are presented in section 10.
Virtual model control has been used to control dynamic walking bipedal robots (Pratt et al., 2001) and an agile 3D hexapod in simulation (Torres, 1996).
It has been shown that connectivity alone is sufficient to produce an oscillatory output. The goal of these models is to explain how rhythmic activity is generated by network properties and how different oscillatory neural circuits get synchronized via interneuron connections (Ijspeert, 2008).
Other extensively used oscillators include phase oscillators (Buchli & Ijspeert, 2004;
Matsuoka, 1987). Most of the oscillators have a fixed waveform for a given frequency. In
some cases, closed-form solutions or specific regimes, as for example phase-locked regimes,
can be analytically derived but most systems are solved using numerical integration. Several
neuromechanical models have been developed. The addition of a biomechanical model of
the body and its interaction with the environment offers the possibility to study the effect of
sensory feedback on the CPG activity.
Finally, oscillator models are based on mathematical models of coupled nonlinear oscillators
to study population dynamics. In this case, an oscillator represents the activity of a complete
oscillatory center (instead of a single neuron or a small circuit). The purpose of these models is
not to explain rhythmogenesis (oscillatory mechanisms are assumed to exist) but to study how
inter-oscillator couplings and differences of intrinsic frequencies affect the synchronization
and the phase lags within a population of oscillatory centers. The motivation for this type of
modeling comes from the fact that the dynamics of populations of oscillatory centers depend
mainly on the type and topology of couplings rather than on the local mechanisms of rhythm
generation, something that is well established in dynamical systems theory (Ijspeert & Crespi,
2007).
Fig. 1. Configurations of typical gait patterns in quadruped locomotion and their relative
phases among the limbs.
$$\ddot{x} - \alpha\,(p^2 - x^2)\,\dot{x} + \omega^2 x = 0 \qquad (1)$$
where x is the output signal of the oscillator, and α, p and ω are the parameters that tune the properties of the oscillator. In general, α affects the shape of the waveform, while the amplitude of x depends largely on the parameter p. When the amplitude parameter p is fixed, the output frequency is highly dependent on the parameter ω. However, a variation of p can slightly change the frequency of the signal, and α can also influence the output frequency.
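A minimal MATLAB sketch of equation (1), integrated with ode45, can be used to explore how [α, p, ω] shape the output; the parameter values below are illustrative assumptions, not the tuned values reported later in this chapter.

```matlab
% Van der Pol oscillator of equation (1) in state-space form:
% x1 = x, x2 = dx/dt. Parameter values are illustrative assumptions.
alpha = 1.0; p = 1.0; omega = 2*pi;

vdp = @(t, x) [x(2);
               alpha*(p^2 - x(1)^2)*x(2) - omega^2*x(1)];

[t, x] = ode45(vdp, [0 10], [0.1; 0]);   % small nonzero initial state

plot(t, x(:,1));
xlabel('Time (s)'); ylabel('x');
title('Van der Pol oscillator output');
```

Sweeping α, p and ω in this script reproduces the qualitative dependencies described above.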
$$x_{a_i} = \sum_{j} w_{ij}\, x_j \qquad (3)$$
where $w_{ij}$ is the coupling weight that represents the strength of the coupling from the jth oscillator to the ith oscillator. The generation of the respective gaits depends on the values of the oscillator parameters and the connection weights among the oscillators.
The analysis of the behavior reported in (Barron-Zambrano & Torres-Huitzil, 2011) shows that the sign of the coupling weight determines the phase difference. For inhibitory connections, the phase lag is around 360/n for particular values of $w_{ij}$, where n is the number of coupled oscillators. For excitatory connections, the phase difference is equal to zero. Finally, the matrix built from the connection weights, $w_{ij}$, should be symmetrical and regular.
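A minimal sketch of a four-oscillator network follows. Since the coupled oscillator equation is not reproduced here, the sketch assumes that each Van der Pol unit receives the weighted sum $x_{a_i}$ of equation (3) in place of its own position state; the walk matrix values follow Table 1, and the remaining constants are taken from the text.

```matlab
% Four coupled Van der Pol oscillators; the coupled form assumed here is
% x_i'' - alpha*(p^2 - xa_i^2)*x_i' + omega^2*xa_i = 0, with xa = W*x.
alpha = 1.12; p = 1.05; omega = 4.59;
W = [ 1.0  -0.26 -0.26 -0.26;            % walk gait coupling (Table 1)
     -0.26  1.0  -0.26 -0.26;
     -0.26 -0.26  1.0  -0.26;
     -0.26 -0.26 -0.26  1.0 ];

% State y = [x1..x4; x1'..x4']
net = @(t, y) [y(5:8);
               alpha*(p^2 - (W*y(1:4)).^2).*y(5:8) - omega^2*(W*y(1:4))];

y0 = [2.44; 4.28; -1.84; 2.71; zeros(4,1)];  % initial x1..4 from the text
[t, y] = ode45(net, [0 50], y0);

plot(t, y(:,1:4)); legend('X1','X2','X3','X4');
xlabel('Time (s)'); ylabel('Amplitude');
```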
$$n = [\,\alpha \;\; p \;\; \omega\,] \qquad (4)$$
Each individual n is evaluated according to the following fitness function, which is minimized by the GA:
Fig. 2. Block diagram of the first stage. It estimates the oscillator parameter values, [α, p, ω], to produce a signal with a specific frequency and amplitude.
$$n = [\,w \;\; x_4 \;\; x_3 \;\; x_2 \;\; x_1\,] \qquad (6)$$
where the initial values and connection weights were represented by real numbers.
The GA only needs to estimate one value of w for one gait; by changing the algebraic sign it is possible to generate the rest of the gaits. The fitness function is based on the phase among the patterns. To estimate the phase among the signals, correlation was used. The oscillator labeled LF (X1) is the reference signal, with phase equal to zero, and the other signals are shifted with respect to it. The fitness function used to evaluate each individual is given by equation 7; here, the GA minimizes the value of the fitness function.
where phase(xi, xj) is the phase between the ith and jth oscillators. The simulation of x1..4 is run for around 50 seconds for each n, and only the last 10 seconds are considered for the phase estimation. Figure 3 shows a block diagram of the second stage.
Fig. 3. Block diagram of the second stage. It finds the values of the couplings, wij, and the initial values of x1..4.
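A minimal sketch of the correlation-based phase estimation is shown below; the 1 Hz test signals and the sampling rate are illustrative assumptions, and the lag that maximizes the circular cross-correlation is converted into a phase in degrees.

```matlab
% Phase of x2 relative to the reference x1 via circular cross-correlation.
fs = 100; f = 1;                          % sample rate (Hz), oscillation (Hz)
t = 0:1/fs:10-1/fs;                       % last 10 s of the simulation
x1 = sin(2*pi*f*t);                       % reference oscillator (LF, phase 0)
x2 = sin(2*pi*f*t - pi/2);                % candidate, 90 degrees behind

N = numel(t);
r = zeros(1, N);
for k = 0:N-1                             % circular cross-correlation
    r(k+1) = sum(x1 .* circshift(x2, [0 -k]));
end
[~, kmax] = max(r);
phase_deg = mod((kmax-1)/fs * f * 360, 360);
fprintf('Estimated phase: %.1f degrees\n', phase_deg);   % prints 90.0
```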
microprocessors, providing high accuracy and flexibility, but those systems consume high power and occupy a large area, restricting their utility in embedded applications. On the other hand, analog circuits have already been proposed; they are computation- and power-efficient, but they usually lack flexibility and dynamics and involve large design cycles.
Relatively few works have focused on adapting the hardware technology to fully practical embedded implementations. Examples of this adaptation using analog circuits are presented by (Kier et al., 2006; Lee et al., 2007; Lewis et al., 2001; Nakada, 2003). Nakada et al. present a neuromorphic analog CMOS controller for interlimb coordination in quadruped locomotion; the authors propose a CPG controller with analog CMOS circuits that is capable of producing various rhythmic patterns and changing these patterns promptly. Lewis et al. constructed an adaptive CPG in an analog VLSI (Very Large Scale Integration)
chip, and have used the chip to control a running robot leg. They show that adaptation
based on sensory feedback permits a stable gait even in an under actuated condition: the
leg can be driven using a hip actuator alone while the knee is purely passive. Their chip
has short-term on board memory devices that allow the continuous, real-time adaptation of
both center-of-stride and stride amplitude. In addition, they make use of integrate-and-fire
neurons for the output motor neurons. Finally, the authors report that their abstraction is
at a higher level than other reported work, which lends itself to easier implementation of
on-chip learning. Kier et al present a new implementation of an artificial CPG that can be
switched between multiple output patterns via brief transient inputs. The CPG is based
on continuous-time recurrent neural networks (CTRNNs) that contain multiple embedded
limit cycles. The authors designed and tested a four-neuron CPG chip in AMI’s 1.5 μm
CMOS process, where each neuron on the chip implements the CTRNN model and is
fully programmable. The authors report the measured results from the chip agree well
with simulation results, making it possible to develop multi-pattern CPGs using off-line
simulations without being concerned with implementation details. Lee et al. report a feasibility study of a central pattern generator-based analog controller for an autonomous robot, in which the operation of a neuronal circuit formed of electronic neurons based on Hindmarsh-Rose dynamics and first-order chemical synapses is modeled. The controller is based on a standard CMOS process with a 2 V supply voltage. Simulated results show that the CPG circuit with coordinate controller and command neuron is a viable way to build adaptive analog controllers for autonomous biomimetic underwater robots. Finally, a biologically inspired swimming
machine is presented by Arena in (Arena, 2001). The system used to generate locomotion is
an analog array of nonlinear systems known as a Cellular Neural Network (CNN). The CNN is defined as a two-dimensional array of M × N identical cells arranged in a rectangular grid, where each cell is a nonlinear first-order circuit. The author reports that the application to a service robot further underlines that CNNs can be used as powerful devices to implement biologically inspired locomotion and swimming.
Another solution for implementing the CPG is to use an FPGA as the hardware platform. The CPG is programmed in either VHDL or Verilog and downloaded onto the FPGA. This approach provides hardware efficiency together with software flexibility. In (Torres-Huitzil, 2008), Torres and Girau present an implementation of a CPG module based on the Amari-Hopfield structure in pure FPGA hardware. In their work, they chose to attach the CPG implementation to a MicroBlaze soft-core processor running uClinux. By doing this, they gain computation speed from having the CPG run the mathematical operations in the FPGA hardware, which makes it possible to achieve some degree of parallelization in the calculations, while leaving the soft-core CPU free to handle other high-level control algorithms that can change the control parameters of the CPG implementation. Barron et al. in (Barron-Zambrano et al., 2010b) present an FPGA implementation of a CPG-based controller to generate adaptive gait patterns for quadruped robots. The proposed implementation is based on a specific digital module for CPGs attached to a soft-core processor so as to provide an integrated and flexible embedded system. Experimental results show that the proposed implementation is able to generate suitable gait patterns, such as walking, trotting, and galloping.
Other CPG implementations use microprocessors. This gives the designer more freedom, because the design is not constrained by the difficulty of designing, e.g., differential analog circuits. These implementations use a scheme where the control is distributed through the robot body. The first implementation example is presented by (Inagaki et al., 2006). In that work, Inagaki et al. proposed a method for gait generation and walking speed control in an autonomous decentralized multi-legged robot using CPGs. The robot locomotion control is composed of subsystems; each subsystem controls a leg and has a microcomputer. The subsystems are connected mechanically and communicate with their neighbors. Each microcomputer calculates the dynamics of two oscillators, and sends and receives the dynamic results from neighboring subsystems. The gait generation and the walking speed control are achieved by controlling the virtual energy of the oscillators (the Hamiltonian). A real robot experiment showed the relationship between the Hamiltonian, the actual energy consumption and the walking speed, and the effectiveness of the proposed method was verified. The proposed controller can be generalized as a wave pattern controller, especially for robots that have homogeneous components. In (Crespi & Ijspeert, 2006), Crespi and Ijspeert proposed an amphibious snake robot designed for both serpentine locomotion (crawling) and swimming. It is controlled by an on-board CPG inspired by those found in vertebrates. The AmphiBot II robot is designed to be modular: it is constructed out of several identical segments, named elements. Each element contains three printed circuits (a power board, a proportional-derivative motor controller and a small water detector) connected with a flat cable, a DC motor with an integrated incremental encoder, and a set of gears. The elements are
connected (both mechanically and electrically) using a compliant connection piece fixed to the
output axis. The main contribution of this article is a detailed characterization of how the CPG
parameters (i.e., amplitude, frequency and wavelength) influence the locomotion speed of the
robot.
The related works present control schemes for robot locomotion based on biological systems on different implementation platforms. The schemes compute the walk pattern in a parallel way through either specialized systems or coupled subsystems. The CPG control can be implemented by different approaches (analog circuits, reconfigurable hardware and digital processors). The analog circuit implementations present a good trade-off between power and energy consumption, but they have a large design cycle; these approaches are therefore useful in robots with either limited energy storage or data processing under real-time constraints. With the second approach it is possible to implement CPG control with real-time constraints while sacrificing some energy performance. These implementations have a smaller design cycle than the analog circuit implementations, which makes the approach well suited to the early robot design stages. The last approach presents high flexibility because it is not constrained by design difficulty; the design cycle is shorter, but it has the worst energy performance. Furthermore, those systems occupy a large area, restricting their utility in embedded applications.
The analyzed implementations show several common features. The first is that they are reconfigurable. The second is that they are capable of adapting to different environments. Most of the implementations present a scheme where the control is distributed through the robot body, or they are implemented through the interaction of basic elements. Similar features are seen in animal locomotion tasks.
Fig. 4. (a) Digital hardware architecture for the Van Der Pol oscillator and (b) average error as
a function of the bit precision used in the basic blocks.
Figure 5 shows a simplified block diagram of the Van der Pol (VdP) network interfacing scheme to the bank registers and the MicroBlaze processor.
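In connection with Fig. 4(b), the following minimal sketch illustrates how fixed-point word length affects a digital oscillator: a double-precision Euler integration of equation (1) is compared against one whose state is quantized after every step. The parameters, step size and quantizer are illustrative assumptions, not the actual hardware architecture.

```matlab
% Average error of a fixed-point Van der Pol integration vs. double precision.
alpha = 1.0; p = 1.0; omega = 2*pi; h = 1e-3; steps = 20000;
for nbits = [8 12 16]
    q = @(v) round(v * 2^nbits) / 2^nbits;    % quantize to nbits fractional bits
    xd = [0.1; 0]; xq = q(xd); err = 0;
    for k = 1:steps
        fd = [xd(2); alpha*(p^2 - xd(1)^2)*xd(2) - omega^2*xd(1)];
        xd = xd + h*fd;                        % double-precision Euler step
        fq = [xq(2); alpha*(p^2 - xq(1)^2)*xq(2) - omega^2*xq(1)];
        xq = q(xq + h*fq);                     % quantized Euler step
        err = err + abs(xd(1) - xq(1));
    end
    fprintf('%2d fractional bits: average error %.2e\n', nbits, err/steps);
end
```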
8. Experimental results
8.1 GA-based method results
The proposed GA-based method was implemented in Matlab and its results were tested through the hardware implementation reported in (Barron-Zambrano et al., 2010b). For the implementation of the GA, the specialized toolbox of Matlab was used. The parameters used to estimate the variables of the oscillator were: a population of 30 individuals and 200 generations. The probabilities of crossover and mutation were set to 0.8 and 0.2, respectively. The method has an average error of ±0.03 Hz with respect to the desired frequency and ±0.15 with respect to the desired amplitude. Simulations show the state of the parameters at different generations: the first plot shows the simulation at the 2nd generation, where the generated signal is periodic but has low frequency and amplitude with respect to the desired values. The second plot shows a signal with periodic behavior, at the 7th generation, but with a frequency different from the desired one. Finally, the last plot shows the pattern with the desired frequency, 1 Hz, and amplitude, 2.5. The final parameters [α, p, ω] estimated by the method are [0.705, 0.956, 4.531].
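A minimal sketch of this first stage with the Global Optimization Toolbox (the "specialized toolbox" mentioned above) is given below. The fitness function body is a stand-in consistent with the description: it simulates the oscillator for a candidate n = [α p ω] and penalizes frequency and amplitude errors against the 1 Hz / 2.5 targets. The search bounds and the exact error form are assumptions.

```matlab
% First tuning stage: estimate n = [alpha p omega] with the GA toolbox.
fDesired = 1.0; aDesired = 2.5;            % targets from the text
opts = optimoptions('ga', ...
    'PopulationSize', 30, ...
    'MaxGenerations', 200, ...
    'CrossoverFraction', 0.8, ...
    'MutationFcn', {@mutationuniform, 0.2});
lb = [0.1 0.1 0.1]; ub = [5 5 10];         % assumed search bounds
nBest = ga(@(n) oscError(n, fDesired, aDesired), 3, ...
           [], [], [], [], lb, ub, [], opts);

function e = oscError(n, fDes, aDes)
    % Simulate the oscillator and measure steady-state frequency/amplitude.
    vdp = @(t, x) [x(2); n(1)*(n(2)^2 - x(1)^2)*x(2) - n(3)^2*x(1)];
    [t, x] = ode45(vdp, 0:0.01:50, [0.1; 0]);
    xs = x(t >= 40, 1);                    % last 10 s only
    a = (max(xs) - min(xs)) / 2;           % amplitude estimate
    zc = sum(xs(1:end-1) < 0 & xs(2:end) >= 0);  % rising zero crossings
    f = zc / 10;                           % cycles per second
    e = abs(f - fDes) + abs(a - aDes);
end
```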
Figure 6 shows the behavior of the fitness values at each generation for the oscillator
parameters, plot 6(a), and for the network weights and initial conditions of x1..4 , plot 6(b). In
both cases, the fitness function values decrease suddenly in the early stages of tuning. Also,
plots show that the method needs a reduced number of generations to tune the oscillator and
the network.
In a second step, the estimation of wij and the initial values of x1..4 is computed. The parameters used for this step were: a population of 40 individuals and 300 generations. The probabilities of crossover and mutation were set to 0.5 and 0.2, respectively. In the test, the desired pattern is a walk gait. This pattern has a phase of 90 degrees among the signals in a fixed order, X1 − X3 − X2 − X4. Only the walk matrix was estimated by the method. It is possible to generate the trot and gallop gaits with the walk matrix by only changing the initial conditions of x1..4, but it is impossible to switch from one gait to another once the gait
Fig. 6. (a) Fitness convergence plots for the first stage, oscillator parameters and (b) for the
second stage, network weights and initial conditions of x1..4 .
$$W_{\mathrm{walk}} = \begin{pmatrix} 1.0 & -0.26 & -0.26 & -0.26 \\ -0.26 & 1.0 & -0.26 & -0.26 \\ -0.26 & -0.26 & 1.0 & -0.26 \\ -0.26 & -0.26 & -0.26 & 1.0 \end{pmatrix} \quad
W_{\mathrm{trot}} = \begin{pmatrix} 1.0 & -0.26 & 0.26 & -0.26 \\ -0.26 & 1.0 & -0.26 & 0.26 \\ 0.26 & -0.26 & 1.0 & -0.26 \\ -0.26 & 0.26 & -0.26 & 1.0 \end{pmatrix} \quad
W_{\mathrm{gallop}} = \begin{pmatrix} 1.0 & 0.26 & -0.26 & -0.26 \\ -0.26 & 1.0 & 0.26 & -0.26 \\ -0.26 & -0.26 & 1.0 & 0.26 \\ 0.26 & -0.26 & -0.26 & 1.0 \end{pmatrix}$$

Table 1. Connection weight matrices for the walk, trot and gallop gaits.
is stable. Then, the remaining matrices, trot and gallop, were calculated using the behavior analysis presented in (Barron-Zambrano & Torres-Huitzil, 2011). The analysis shows that the combination of inhibitory and excitatory connections is able to modify the phase among the oscillators; thus, it is possible to generate different gaits from the walk matrix. To change the connections from inhibitory to excitatory type, and vice versa, it is only necessary to change the algebraic sign of the values in the gait matrix.
In the trot gait, the front limbs have the reverse phase and the limbs on a diagonal line move synchronously. Hence, to generate the trot gait it is necessary to change the algebraic sign in the matrix of weights for the limbs on a diagonal line. The gallop has similarities with the walk gait: the phase is equal to 90 degrees, but the order is different, X1 − X2 − X3 − X4. Therefore, the design of a new matrix for this gait is necessary. The gallop matrix is obtained only by changing the algebraic sign of one off-diagonal of connections, which produces a phase equal to 90 degrees among the limbs. Table 1 shows the matrix weights for the three basic gaits, walk, trot and gallop, with parameters α, p and ω equal to [1.12, 1.05, 4.59] and initial conditions x1..4 = [2.44, 4.28, −1.84, 2.71].
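The sign-flip construction can be written compactly; the following minimal sketch builds the trot and gallop matrices from the walk matrix, with the flipped index pairs read off Table 1 rather than derived independently.

```matlab
% Derive trot and gallop coupling matrices from the walk matrix (Table 1).
w = 0.26;
Wwalk = eye(4) - w*(ones(4) - eye(4));     % walk: all couplings inhibitory

Wtrot = Wwalk;                             % trot: excite diagonal limb pairs
for idx = [1 3; 3 1; 2 4; 4 2]'            % diagonal pairs assumed 1-3 and 2-4
    Wtrot(idx(1), idx(2)) = -Wtrot(idx(1), idx(2));
end

Wgallop = Wwalk;                           % gallop: excite one off-diagonal ring
for idx = [1 2; 2 3; 3 4; 4 1]'
    Wgallop(idx(1), idx(2)) = -Wgallop(idx(1), idx(2));
end

disp(Wtrot); disp(Wgallop);                % matches Table 1
```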
Figure 7 shows the evolution of the network weights and the initial conditions of x. Plot 7(a) shows the pattern generated when the second stage starts: a synchronized pattern where all signals have the same phase. The evolution of the population shows how the signals start to take different phases, figure 7(b). Figure 7(c) shows that the method found the correct weights and the initial values of x1..4 to generate the walk gait pattern. Finally, figures 7(d) and 7(e) show the patterns generated by the trot and gallop gaits with matrices estimated from the walk gait matrix.
Fig. 7. (a)-(c) Evolution of the pattern generated for the walk gait at different generations. (d)-(e) Patterns generated by the trot and gallop gaits with matrices estimated from the walk gait matrix.
[Figure: oscillator output waveforms (amplitude versus time) for the (a) walk, (b) trot and (c) gallop gaits.]
9. Future work
In general, the discussion of future work will focus on three goals: (a) to scale up the present approach to legged robots with several degrees of freedom in order to generate complex rhythmic movements and behaviors, (b) to integrate visual perception information to adapt the locomotion control in unknown environments, and (c) to incorporate feedback from the robot body to improve the generation of patterns.
Fig. 9. (a)-(h) Walking locomotion pattern snapshots for the Bioloid robot simulator. (i) Walk time graph for different joints. (j)-(k) Oscillation patterns for the right forelimb, hindlimb and knees. (l) Waveforms during the transition between walking and trotting.
does not exploit the fact that the walking machine will be presented with a similar situation
again and again.
Recent approaches consider eliminating the intermediate explicit model and creating a direct coupling of perception to action, with the mapping being adaptive and based on experience. Also, continuous visual input is not necessary for accurate stepping. Not all visual samples have the same potential for controlling limb movements: samples taken when the foot to be controlled is in stance are by far more effective in modulating gait. It has been suggested that during stepping visual information is used during the stance phase in a feed-forward manner to plan and initiate changes in the swing limb trajectory (Hollands & Marple-Horvat, 1996; Patla et al., 1996).
Taken together, this may indicate that gait is modulated at discrete intervals. This modulation may be a program that depends on a brief sampling of the visual environment to instantiate it (cf. Patla et al., 1991). This hypothesis is intriguing because it implies that after a
brief sample it is not necessary to store an internal representation of the world that needs to
be shifted and updated during movement. This shifting and updating is problematic for both
neural and traditional robotics models (Lewis & Simo, 1999).
10. Conclusions
The quadruped locomotion principles were studied and analyzed. It is feasible to generate locomotion patterns similar to those of the neural oscillators found in animals by coupling the signals of mathematically modeled neural oscillators. The GA-based approach takes advantage of the fact that the fitness function works directly with the oscillator and the network. This makes it possible to consider other restrictions, such as power consumption, and it does not need knowledge of the robot dynamics to generate the gait pattern. The GA-based approach uses a small population (a few tens of individuals) and a limited number of generations (a few hundred), which makes it suitable for processing on computers with reduced resources. The application only estimates the matrix for the walk gait; furthermore, it is possible to build the matrices for the gallop and trot gaits from the walk gait matrix. This reduces the number of operations necessary to estimate the matrix for each gait.
The hardware implementation exploits the distributed processing that can be carried out in FPGA devices. The presented examples show that the measured waveforms from the FPGA-based implementation agree with the numerical simulations. The architecture of the elemental Van der Pol oscillator was designed and attached as a co-processor to a MicroBlaze processor. The implementation provides the flexibility to generate different rhythmic patterns at runtime, suitable for adaptable locomotion, and it is scalable to larger networks. The MicroBlaze makes it possible to implement a strategy for both generation and control of the gaits, and it is suitable for exploring designs with dynamic reconfiguration in the FPGA. The coordination of the joints of a legged robot can be accomplished by a simple, extremely small CPG-based network suitable for special-purpose digital implementation. Also, the implementation takes advantage of its scalability and reconfigurability, and it can be used in robots with different numbers of limbs.
11. References
Amari, S.-I. (1988). Characteristics of random nets of analog neuron-like elements, pp. 55–69.
Arena, P. (2001). A mechatronic lamprey controlled by analog circuits, MED’01 9th
Mediterranean Conference on Control and Automation, IEEE.
Arena, P., Fortuna, L., Frasca, M. & Sicurella, G. (2004). An adaptive, self-organizing
dynamical system for hierarchical control of bio-inspired locomotion, IEEE
Transactions on Systems, Man, and Cybernetics, Part B 34(4): 1823–1837.
Barron-Zambrano, J. H. & Torres-Huitzil, C. (2011). Two-phase GA parameter tunning method
of CPGs for quadruped gaits, International Joint Conference on Neural Networks,
pp. 1767–1774.
Barron-Zambrano, J. H., Torres-Huitzil, C. & Girau, B. (2010a). FPGA-based circuit for
central pattern generator in quadruped locomotion, Australian Journal of Intelligent
Information Processing Systems 12(2).
Barron-Zambrano, J. H., Torres-Huitzil, C. & Girau, B. (2010b). Hardware implementation
of a CPG-based locomotion control for quadruped robots, International Conference on
Artificial Neural Networks, pp. 276–285.
Billard, A. & Ijspeert, A. J. (2000). Biologically inspired neural controllers for motor control in a quadruped robot, IEEE-INNS-ENNS International Joint Conference on Neural Networks, Vol. VI, IEEE Computer Society, pp. 637–641.
Buchli, J. & Ijspeert, A. J. (2004). Distributed central pattern generator model for robotics
application based on phase sensitivity analysis, In Proceedings BioADIT 2004, Springer
Verlag, pp. 333–349.
Buchli, J., Righetti, L. & Ijspeert, A. J. (2006). Engineering entrainment and adaptation in limit
cycle systems: From biological inspiration to applications in robotics, Biol. Cybern.
95: 645–664.
Carbone, G. & Ceccarelli, M. (2005). Legged Robotic Systems, Cutting Edge Robotics, Vedran
Kordic, Aleksandar Lazinica and Munir Merdan.
Chiel, H. J. & Beer, R. D. (1997). The brain has a body: adaptive behavior emerges
from interactions of nervous system, body and environment, Trends in Neurosciences
20(12): 553 – 557.
Crespi, A. & Ijspeert, A. J. (2006). AmphiBot II: An Amphibious Snake Robot that Crawls and
Swims using a Central Pattern Generator, Proceedings of the 9th International Conference
on Climbing and Walking Robots (CLAWAR 2006), pp. 19–27.
Dagg, A. (1973). Gaits in mammals, Mammal Review 3(4): 135–154.
Delcomyn, F. (1980). Neural basis of rhythmic behavior in animals, Science 210: 492–498.
Fujii, A., Saito, N., Nakahira, K., Ishiguro, A. & Eggenberger, P. (2002). Generation of an
adaptive controller CPG for a quadruped robot with neuromodulation mechanism,
IEEE/RSJ international conference on intelligent robots and systems pp. 2619–2624.
Fukuoka, Y., Kimura, H., Hada, Y. & Takase, K. (2003). Adaptive dynamic walking of a
quadruped robot ’tekken’ on irregular terrain using a neural system model, ICRA,
pp. 2037–2042.
Grillner, S. (1985). Neural control of vertebrate locomotion - central mechanisms and reflex
interaction with special reference to the cat, Feedback and motor control in invertebrates
and vertebrates pp. 35–56.
Hollands, M. A. & Marple-Horvat, D. E. (1996). Visually guided stepping under conditions
of step cycle-related denial of visual information, Experimental Brain Research
109: 343–356.
Hooper, S. L. (2000). Central pattern generators, Current Biology 10(2): 176–177.
Ijspeert, A. (2001). A connectionist central pattern generator for the aquatic and terrestrial
gaits of a simulated salamander, Biological Cybernetics 84(5): 331–348.
Ijspeert, A. (2008). Central pattern generators for locomotion control in animals and robots: A
review, Neural Networks 21(4): 642–653.
Ijspeert, A. J. & Crespi, A. (2007). Online trajectory generation in an amphibious snake robot
using a lamprey-like central pattern generator model, Proceedings of the 2007 IEEE
International Conference on Robotics and Automation (ICRA 2007), pp. 262–268.
Inagaki, S., Yuasa, H., Suzuki, T. & Arai, T. (2006). Wave CPG model for autonomous
decentralized multi-legged robot: Gait generation and walking speed control,
Robotics and Autonomous Systems 54(2): 118 – 126. Intelligent Autonomous Systems.
Jalal, A., Behzad, M. & Fariba, B. (2009). Modeling gait using CPG (central pattern generator)
and neural network, Proceedings of the 2009 joint COST 2101 and 2102 international
conference on Biometric ID management and multimodal communication, Springer-Verlag,
Berlin, Heidelberg, pp. 130–137.
Kier, R. J., Ames, J. C., Beer, R. D. & Harrison, R. R. (2006). Design and implementation of multipattern generators in analog VLSI.
Kimura, H., Fukuoka, Y., Hada, Y. & Takase, K. (2003). Adaptive dynamic walking of a
quadruped robot on irregular terrain using a neural system model, in R. Jarvis &
A. Zelinsky (eds), Robotics Research, Vol. 6 of Springer Tracts in Advanced Robotics,
Springer Berlin / Heidelberg, pp. 147–160.
Kimura, H., Shimoyama, I. & Miura, H. (2003). Dynamics in the dynamic walk of a quadruped robot on irregular terrain based on biological concepts, International Journal of Robotics Research 22(3–4): 187–202.
Lathion, C. (2006). Computer science master project: biped locomotion on the HOAP2 robot, Biologically Inspired Robotics Group.
Lee, Y. J., Lee, J., Kim, K. K., Kim, Y.-B. & Ayers, J. (2007). Low power CMOS electronic central pattern generator design for a biomimetic underwater robot, Neurocomputing 71: 284–296.
Lewis, M. A., Hartmann, M. J., Etienne-Cummings, R. & Cohen, A. H. (2001). Control of a
robot leg with an adaptive VLSI CPG chip, Neurocomputing 38-40: 1409 – 1421.
Lewis, M. A. & Simo, L. S. (1999). Elegant stepping: A model of visually triggered gait
adaptation.
Loeb, G. (2001). Learning from the spinal cord, The Journal of Physiology 533(1): 111–117.
Manoonpong, P. (2007). Neural Preprocessing and Control of Reactive Walking Machines: Towards
Versatile Artificial Perception-Action Systems (Cognitive Technologies), Springer-Verlag
New York, Inc., Secaucus, NJ, USA.
Matsuoka, K. (1987). Mechanisms of frequency and pattern control in the neural rhythm
generators, Biological Cybernetics 56(5): 345–353.
Murata, S. & Kurokawa, H. (2007). Toward a scalable modular robotic system, IEEE Robotics & Automation Magazine, pp. 56–63.
Mutambara, A. G. O. & Durrant-Whyte, H. F. (1994). Modular scalable robot control,
Proceedings of 1994 IEEE International Conference on MFI 94 Multisensor Fusion and
Integration for Intelligent Systems pp. 121–127.
Nakada, K. (2003). An analog cmos central pattern generator for interlimb coordination in
quadruped locomotion, IEEE Tran. on Neural Networks 14(5): 1356–1365.
Nakanishi, J., Morimoto, J., Endo, G., Cheng, G., Schaal, S. & Kawato, M. (2004). Learning
from demonstration and adaptation of biped locomotion, Robotics and Autonomous
Systems 47: 79–91.
Okada, M., Tatani, K. & Nakamura, Y. (2002). Polynomial design of the nonlinear dynamics for the brain-like information processing of whole body motion, Vol. 2, pp. 1410–1415.
Patla, A. E., Adkin, A., Martin, C., Holden, R. & Prentice, S. (1996). Characteristics of
voluntary visual sampling of the environment for safe locomotion over different
terrains, Experimental Brain Research 112: 513–522.
Patla, A. E., Prentice, S. D., Robinson, C. & Neufeld, J. (1991). Visual control of locomotion: Strategies for changing direction and for going over obstacles, Journal of Experimental Psychology: Human Perception and Performance 17: 603–634.
Picado, H., Lau, N., Reis, L. P. & Gestal, M. (2008). Biped locomotion methodologies applied
to humanoid robotics.
Pratt, J. E., Chew, C.-M., Torres, A., Dilworth, P. & Pratt, G. A. (2001). Virtual model control:
An intuitive approach for bipedal locomotion, I. J. Robotic Res. 20(2): 129–143.
Pribe, C., Grossberg, S. & Cohen, M. A. (n.d.).
Susanne, S. & Tilden, M. W. (1998). Controller for a four legged walking machine,
Neuromorphic Systems Engineering Silicon from Neurobiology, World Scientific, Singapore,
pp. 138–148.
Torres, A. L. (1996). Virtual model control of a hexapod walking robot, Technical report.
Torres-Huitzil, C. & Girau, B. (2008). Implementation of central pattern generator in an FPGA-based embedded system, ICANN (2), pp. 179–187.
Van der Pol, B. & Van der Mark, J. (1928). The heartbeat considered as a relaxation oscillation, and an electrical model of the heart, 6: 763–775.
Zielinska, T. (1996). Coupled oscillators utilised as gait rhythm generators of a two-legged
walking machine, Biological Cybernetics 74(3): 263–273.
10

Tracking Control in an Upper Arm Exoskeleton with Differential Flatness
1. Introduction
The exoskeleton is classified within a wider technology known as wearable robots, a group that describes a robotic field studying the interaction between the human body and robotics. In those systems, a mechatronic structure is attached to different members of the human body, and while the wearer commands the mechanical system using physical signals, like the electrical stimulation produced by the orders of the brain or the movements of the body generated by the muscles, the mechanical system does the hard work, like carrying heavier objects or assisting the movement of handicapped members of the body.
Since its conception in the sixties (Croshaw, 1969), the bibliography speaks of two models of wearable robots: prosthetic and orthotic systems (Fig. 1). The first group replaces lost body members, while the second one assists the body movements or, in other cases, extends the body capabilities.
Fig. 1. (Left) Orthotic robot for the upper limbs (Perry et al., 2007). (Right) Prosthetic robot for the upper limbs (Rehabilitation Institute of Chicago, 2006)
The exoskeleton is part of the orthotic wearable robots field. It is a robotic structure (actually it could be considered a mechatronic structure, owing to the integration of different electronic equipment like sensors, actuators and an intelligent controller into the mechanical system) fixed to the human body that follows the movements of the wearer. At the present time several models of exoskeletons have been presented: those attached to the legs are called lower-part exoskeletons (Kazerooni, 2006), and those attached to the arms are denominated upper-part exoskeletons (Perry et al., 2007). For both structures, the applications go from the medical field to military applications, with functions like helping handicapped patients by assisting the recovery of body-member movement (Gopura & Kiguchi, 2007) or helping soldiers to carry heavier loads (Kazerooni, 2005). There also exist prototypes that integrate both systems (Sankai, 2006); those mechanisms, like Kazerooni's structure, are used to help the wearer extend his body capabilities in order to carry weights that are beyond his normal possibilities, or to help patients make normal movements that a healthy body could perform.
Our work is focused on the study of the upper arm exoskeleton and an application of differential flatness in order to control the system so that it moves along a determined path (in this case a path generated by a polynomial). This document describes its development, initiating with a morphological analysis that leads to a mathematical model and later to a kinematic simulation. This model is useful for the derivation of the dynamic model, a study based on the mechanical structure and the arm movements that were selected. The dynamic model is then applied to the problem of trajectory tracking; in this part, the arm is controlled using differential flatness. This part is considered our contribution, and we therefore discuss its applications, benefits and the problems found. All the analysis was supported by code written in Matlab® that provides the necessary simulations for this work; in this way, graphics of the model behaviours were characterized and captured in order to detail the different parts of this work.
2. Forward kinematics
The first problem to solve is the characterization of a human arm model that could be implemented for an exoskeleton design. In this part, forward kinematics plays an important role, offering a mathematical model that allows the prediction of the system's behaviour and giving the necessary preliminary information about the structure and movement capabilities. Whether modelling the human arm for the study of human motion (Maurel & Thalman, 1999; Klopčar & Lenarčič, 2005), implementing a rehabilitation system for an arm (Culmer et al., 2005), or designing a master arm for a man-machine interface (Lee et al., 1999), all need the forward kinematic analysis in order to initiate a system's modelling.
The forward kinematic study initiates with a morphological analysis of the part of the body that will be attached to the mechanical structure. As said, the exoskeleton is a system that accompanies the arm in its movements; therefore the design of the structure must not interfere with them. In order to do this, the morphological analysis gives information about movements, lengths of the members, number of segments and their respective masses. Another justification is that biological inspiration permits the creation of mechanical systems that are more compact and reliable and also more energy-efficient (Kazerooni, 2006).
In nature, the biological designs of living organisms allow adaptation to the environment; the evolutionary mechanisms have transformed the body as the circumstances rule, hence the diversity of organic designs and also the extinction of those species that could not adapt to their environment. In conclusion, it is possible to say that there exist optimization mechanisms that transform our structure as a function of one goal: survival.
The human arm is a result of this optimization. Following the bones that conform it (Fig. 2), it is seen that the member is divided into three segments: the arm, the forearm and the hand, united among them by three joints: the shoulder, which unites the upper limb to the rest of the body; the elbow, which unites the arm with the forearm; and finally the wrist, which unites the forearm with the hand. The mechanical model of the exoskeleton responds to this configuration, conserving the system's general characteristics; this process of making a copy of nature's design is called bioimitation (Pons, 2008).
$$R_z(\beta) = \begin{pmatrix} \cos\beta & -\sin\beta & 0 & 0 \\ \sin\beta & \cos\beta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (1)$$

$$R_y(\alpha) = \begin{pmatrix} \cos\alpha & 0 & \sin\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\alpha & 0 & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (2)$$

$$R_x(\gamma) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma & 0 \\ 0 & \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (3)$$

$$T(d) = \begin{pmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (4)$$
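As a minimal sketch, the elementary transformations (1)-(4) can be written as MATLAB anonymous functions (angles in radians, d a 3-vector); these helper names are our own, introduced for the examples that follow.

```matlab
% Homogeneous rotation and translation matrices of equations (1)-(4).
Rz = @(b) [cos(b) -sin(b) 0 0; sin(b) cos(b) 0 0; 0 0 1 0; 0 0 0 1];
Ry = @(a) [cos(a) 0 sin(a) 0; 0 1 0 0; -sin(a) 0 cos(a) 0; 0 0 0 1];
Rx = @(g) [1 0 0 0; 0 cos(g) -sin(g) 0; 0 sin(g) cos(g) 0; 0 0 0 1];
T  = @(d) [eye(3) d(:); 0 0 0 1];        % translation by d = [dx dy dz]
Tz = @(l) T([0 0 l]);                    % translation along Z only
```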
In (4) it is necessary to define on which axis the translation movement will be effected. The kinematic model obeys an open kinematic chain, because there exists only one sequence of joints connecting the two endings of the chain (Siciliano et al., 2009). Fig. 3 shows the arm structure from the shoulder to the hand; each movement is associated with a Greek symbol and an orientation with respect to the inertial axes. In this way, all the flexion-extension movements occur about the Y axis, the rotation movements about the bone axis occur about the Z axis, and the others about the X axis. It is seen that in the shoulder there exists a special case where three rotations intersect at one point without the existence of any translation; this kind of configuration is called a spherical wrist. All the translations referring to the segment lengths (l1, l2 and l3) happen along the Z axis.
Equation (5) below defines T01 as the path from the origin to the point P1, T02 as the path from P0 to P2, and T03 as the path from P0 to P3.
$$\begin{aligned} T_1^0(\beta_1,\alpha_2,\gamma_3) &= R_y(\alpha_2)\, R_x(\gamma_3)\, R_z(\beta_1)\, T_z(l_1) \\ T_2^0(\beta_1,\alpha_2,\gamma_3,\alpha_4,\beta_5) &= T_1^0\, R_y(\alpha_4)\, R_z(\beta_5)\, T_z(l_2) \\ T_3^0(\beta_1,\alpha_2,\gamma_3,\alpha_4,\beta_5,\alpha_6,\gamma_7) &= T_2^0\, R_y(\alpha_6)\, R_x(\gamma_7) \end{aligned} \qquad (5)$$
The order of multiplication of each matrix is given according to an observation analysis; the orientation resulting from consecutive operations on the rotation matrices can produce variations in the final response, affecting both the visualization and the position of a point of the system. It is important to be clear about the order of the final equation (5), since this provides an analysis of what happens in the arm with the variation of each joint. This analysis is compatible with both arms, only the direction of the rotation changes, remembering that, according to the established parameters of the homogeneous transformation, every rotation made counter-clockwise is considered positive.
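A minimal usage sketch of equation (5), reusing the Rz/Ry/Rx/Tz helpers above, is shown below; the joint angles and segment lengths are illustrative assumptions.

```matlab
% Compose the transformations of equation (5) for one example pose.
b1 = 0; a2 = pi/6; g3 = 0; a4 = pi/4; b5 = 0; a6 = 0; g7 = 0;
l1 = 0.30; l2 = 0.25;                    % assumed segment lengths (m)

T01 = Ry(a2) * Rx(g3) * Rz(b1) * Tz(l1);
T02 = T01 * Ry(a4) * Rz(b5) * Tz(l2);
T03 = T02 * Ry(a6) * Rx(g7);

p2 = T02(1:3, 4);                        % wrist position in the base frame
fprintf('Wrist at [%.3f %.3f %.3f] m\n', p2);
```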
This mathematical model was implemented in MATLAB®, and the resulting graphics (Fig. 4) describe different positions of the arm structure, such as the anatomical position with all the rotations at 0°. The three segments represent the arm divisions, the triangle the body and the sphere the human head. All the graphics were made for an individual of 1.75 m height.
The model obtained from the kinematic analysis gives the researcher a map of the system behaviour; the restrictions on the joint movements and the extensions of the arm segments detail the workspace (Fig. 5), a graphical analysis that gives an understanding of the system capabilities. By analysing the workspace of the human arm, it is possible to obtain information for the design and control of the mechanism, and also to gain an understanding of the movement properties. In the case of the human arm, this study allows the planning and programming of rehabilitation processes by comparing the workspace of a normal patient with that of a person with a pathological deficiency; these differences can be used for diagnostic studies (Lenarčič & Umek, 1994).
The workspace is defined by the mobility of the articulations, whether of the human arm or of the mechanical system that supports the arm. Therefore, the workspace of a wearer that carries an exoskeleton could be affected by the mechanical restrictions that the system imposes; for this reason, better mechanical designs allow the performance of more compliant movements with a wider range.
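A minimal sketch of such a workspace map samples the joint ranges and plots the reachable wrist positions through equation (5); the joint limits and the reuse of the Ry/Rx/Tz helpers above are illustrative assumptions.

```matlab
% Sampled workspace of the wrist (shoulder flexion/abduction + elbow).
l1 = 0.30; l2 = 0.25;
pts = zeros(0, 3);
for a2 = linspace(0, pi, 15)             % shoulder flexion-extension
    for g3 = linspace(-pi/2, pi/2, 15)   % shoulder abduction-adduction
        for a4 = linspace(0, 3*pi/4, 15) % elbow flexion
            T01 = Ry(a2) * Rx(g3) * Tz(l1);   % beta1 = beta5 = 0 here
            T02 = T01 * Ry(a4) * Tz(l2);
            pts(end+1, :) = T02(1:3, 4)'; %#ok<SAGROW>
        end
    end
end
plot3(pts(:,1), pts(:,2), pts(:,3), '.');
axis equal; xlabel('X (m)'); ylabel('Y (m)'); zlabel('Z (m)');
```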
3. Dynamics
While the kinematic analysis gives an approach to the system, the dynamic model generates the necessary information about the forces needed to unbalance the segments and initiate the movement, and subsequently the behavior of the system across the different possible positions that the arm can reach.
Referring to other authors (Rosen et al., 2005), the conception of an upper arm exoskeleton approaches the behavior of a human arm due to the morphological design of the mechanism. In some designs, the weight of the system does not influence the dynamic analysis, and all the inertial forces are related to the human arm physiology (Carignan et al., 2005); in this case the center of mass is related to the extension of the arm, and its mass depends on the organic structure (Table 1 and Table 2). In other cases, the mechanical system is heavier than the human arm (Croshaw, 1969); in these models the structure must be included in the analysis, which causes parameters like the inertial center and the structure to change drastically. Nevertheless, with an adequate code these changes can be easily modified, because this information can be parameterized or changed beforehand in the simulation code.
Figure 6 illustrates the model of the arm; C1 and C2 are the positions of the centers of mass of the arm segments, and the rotations considered in the study are the shoulder movements and the flexion-extension of the forearm. The hand movements are not considered in this part of the study.
The initial evaluation is focused on the minimal forces necessary to move a human arm; in order to do that, it is necessary to know its composition. In this way, parameters like mass, length and position of the inertial center, which depend on the wearer's stature, are introduced into the model and simulated to obtain a further analysis.
$$M(q)\,\ddot{q} + C(q, \dot{q}) + K(q) = \tau \qquad (7)$$
The mass of each segment was concentrated at its geometrical center, and the form of the arm was approximated by a cylinder (Admiral et al., 2004; Soechting et al., 1995); this gives a mathematical relationship and avoids the use of tables or interpolations to find the inertial center of the arm. It was demonstrated (Veslin, 2010) that the presence of the inertial tensor does not significantly influence the system behaviour; this consideration reduces the model equation and converts the tensor into a single equation.
Each movement was controlled using a feedback rule (8), which gives the system the force needed to move the segments to the desired position, given by qnd. It is equivalent to a PD controller, whose applications are seen in industrial robots and lower-part exoskeletons (Carignan et al., 2005).
$$\tau_n = -\beta\, \dot{q}_n - \gamma\, (q_n - q_{nd}) + W_n(q_n) \qquad (8)$$
The gain β involves the arm velocities and the energy dissipation; in oscillatory movements it controls their amplitude and the duration of the movement. γ gives a force that depends on the distance between the actual position of the segment qn and the desired position; in this way the equation reduces the force while the arm is reaching the goal. Wn are the forces that the actuators have to support when the segments are in static equilibrium. The implementation of these controllers adds a problem called overshoot (a high input that occurs in a small portion of time) at the beginning of the simulation (Figure 6).
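A minimal sketch of the feedback rule (8) on a single segment, modeled as a uniform rod rotating about one end under gravity, is given below; the mass, length and gains are illustrative assumptions, and Wn is taken as the static gravity torque at the current angle, as described above.

```matlab
% PD control with gravity compensation, equation (8), for one segment.
m = 2.0; l = 0.30; g = 9.81;             % assumed segment mass and length
I = m*l^2/3;                             % uniform rod about its end
beta = 5; gamma = 300;                   % assumed gains
qd = deg2rad(90);                        % desired position

grav = @(q) m*g*(l/2)*cos(q);            % static torque, q from horizontal
tau  = @(q, dq) -beta*dq - gamma*(q - qd) + grav(q);
dyn  = @(t, s) [s(2); (tau(s(1), s(2)) - grav(s(1))) / I];

[t, s] = ode45(dyn, [0 5], [0; 0]);      % start at rest, q = 0
plot(t, rad2deg(s(:,1)));
xlabel('Time (s)'); ylabel('q (deg)');
```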
Fig. 9. Comparison between the movements generated by the controller in the shoulder and the elbow with respect to the desired path
The absolute error (Fig. 10) between the executed movements and the path is evaluated; the major deviation happens in the middle of the movement, around 8 seconds, and is larger in the shoulder, which presents a deviation of 2.5 degrees against 1.4 degrees in the elbow. The final position also shows a deviation; the error in this position is near 0.5 degrees for both movements.
Fig. 10. Difference between the shoulder and elbow movements and the desired path
To evaluate the controller, different movements were generated and the result is shown in Fig. 11; the forearm final position is evaluated because it gives the location of the wrist, which is the object of control. The graphic evaluates the final position of the forearm, initiating its movement from 0° to 140°, during an elevation movement of the arm (which moves from 0° to 180°); the major deviations happen when the final position of the forearm is 90°. The internal forces between the segments cause these variations, since the forearm in this position generates the biggest moment on the moving arm. The controller assures that they do not influence the arm by moving it to another position; the moments caused by the segment masses and the energy induced by the velocity are controlled by the proposed model, with a minimal error, as illustrated, that does not exceed 0.3° in the worst case.
Moreover, this error can be reduced by varying the parameter γ in the controller. Figs. 12 and 13 evaluate the errors in the final position for different values of γ in a range of 100 Nm/rad to 600 Nm/rad, for ascending (Fig. 12) and descending (Fig. 13) movements. Bigger values of the constant reduce the difference in the displacement generated by the internal forces of the arm; the controller supplies the necessary force, and consequently the displacement error changes according to the variation of the constant.
For these graphics the forearm was kept at a position of 0° while the arm was moving in a range of 0° to 180°.
The maximal forces generated by the controller are illustrated in Fig. 14. In these graphics two movements are compared: in the red line the movement begins from 0° and moves to different positions in a range of 0° to 180° with increments of 10°, always executed in a time of 20 seconds. The blue line effectuates the same movement in a time proportional to the space travelled, in other words, faster than the first movement. This movement generates bigger forces because it effectuates the same movement in less time; the increase of the velocities in the generalized coordinates generates an increase of the Coriolis forces between the segments, requiring from the controller a bigger force to counteract the impulse generated by the movement and the mass of the arm.
Fig. 11. Deviation of the final position of the wrist depending on the position of the forearm while the arm is rotating in a flexion movement
These forces also change depending on the height of the individual. As shown in (6) and Table 2, the mass of the arm differs depending on the stature of the individual and his BMI; in this case, the maximal forces in the arm (whose actuator has to support the weight of the complete system) are illustrated in Fig. 16a, where, over a range of heights from 1.5 m to 2 m, the forces necessary to raise the arm increase by almost 10 Nm.
Movements out of the plane, with a rotation of two degrees of freedom in the shoulder, are illustrated in Fig. 16b. The shoulder was commanded to move the arm in flexion and abduction while the forearm rotates from the rest position to 90°; panel (1) illustrates the movement according to the generalized coordinates, while (2), (3) and (4) illustrate the behaviour of the system in space.
Fig. 12. Reduction of the error for different values of γ in the controller, for an ascending movement
Fig. 13. Reduction of the error for different values of γ in the controller, for a descending movement
Fig. 14. Variation in the maximal forces for movements of the arm at different velocities
Fig. 15. Variation in the maximal forces for movements of the forearm at different velocities
Fig. 16a. Variation in the maximal forces for different individual heights
Fig. 16b. Movement in the generalized coordinates generated by the controller (1), trajectory in space described by the arm (2), and its projections onto the transversal plane (3) and the sagittal plane (4)
The influence of the gravitational forces in the arm model became evident in the increase found in the maximal forces exerted at each generalized coordinate; this is caused by the increase in segment mass and by the position of the forearm with respect to the arm, while an increase in velocity also raises the forces required to execute the movement. The implementation of the PD controller with gravity compensation at the arm position made the preceding force analysis possible. The constants play an important role in the behaviour of the system, adding stability and reducing the error in the positioning of the wrist.
4. Differential flatness
Introduced by Fliess (Fliess et al., 1994), differential flatness is a concept of dynamic control in which a system is said to be flat if it has a set of outputs, known as flat outputs (9), such that these outputs and their derivatives describe both the system inputs (which are equal in number to the flat outputs) and the states of the system (10-11).
$$z = h\left(x, u, \dot{u}, \ldots, u^{(r)}\right) \qquad (9)$$
$$x = \varphi\left(z, \dot{z}, \ldots, z^{(q)}\right) \qquad (10)$$
$$u = \psi\left(z, \dot{z}, \ldots, z^{(q)}\right) \qquad (11)$$
Fliess calls this property a diffeomorphism; it is very useful for controlling systems that must achieve a desired behaviour. To do so, the flat outputs are parameterized as functions of time and then substituted into the system; the result is a nonlinear equation system that is transformed into a polynomial one, so the solution of the problem is simplified significantly because no integration or numerical solution is necessary.
The focus of this analysis is to establish that a system is differentially flat, which is not easy to achieve; in fact, not all systems have this property, and flatness is considered a characteristic of the system itself. Many systems have been shown to be flat, and some authors have studied the implications (Murray et al., 1995). Like Murray's text, these works focus on demonstrating the flatness property and its application in systems such as mobile robots and cars with n trailers.
Trajectory tracking for a manipulator consists in determining the inputs (forces) required at each generalized coordinate of the system so that the model moves from one point to another along a defined path. In this kind of problem, the first step is the specification of a sequence of points in the manipulator workspace; these points become the desired positions along the path and are then connected using an interpolation function (which is typically polynomial). Different techniques exist for trajectory planning: (i) when only the initial and final positions are given, one speaks of point-to-point motion; (ii) when working with a finite sequence of points, one speaks of motion through a sequence of points. Both techniques yield a time function that describes the desired behaviour (Siciliano, 2009; van Nieuwstadt, 1997a).
The second step is to implement this function in the dynamical model of the system and verify that trajectory tracking is actually achieved. At this point a decision must be made: whether to work in the operational space or at each joint.
Because of its generality, the first option can give rise to singularity and redundancy problems caused by the number of degrees of freedom at each joint, and nonlinear effects can make it difficult to predict. A parameterization of the path in joint positions proves more suitable, because it allows each element to be treated individually, considering factors such as movement restrictions and the degrees of freedom of each joint.
Every joint-parameterized trajectory is defined by a function q(t). This function is the source of the desired joint positions for the manipulator's movement over time. Then, using control techniques such as PD control with gravity compensation, IDC (Inverse Dynamics Control) or adaptive control, it is possible to make the manipulator move along the defined path with minimal deviation (Marguitu, 2009; Spong, 1992; Wang, 2009).
It has been demonstrated that any robotic manipulator (if the upper arm exoskeleton is regarded as one) is flat if the system is fully actuated (Lévine, 2009), that is, if every generalized coordinate has an actuator. In this case, taking the system model (7), with the positions of the generalized coordinates q as outputs and the forces τ that move each coordinate as inputs, the system is differentially flat, and the position of each generalized coordinate and its derivatives (the velocity and the acceleration) are the flat outputs. It must be noted that not all the system outputs are flat, so it is important to distinguish the flat outputs from the ordinary ones by the letter z (Van Nieuwstadt, 1997b). Thus equation (7), remembering that the joints are considered ideal, can be represented as:
$$M(q)\ddot{q} + n(q, \dot{q}) = \tau \qquad (12)$$
The trajectory-tracking problem presents no complications in a flat system. Since the inputs and the states are defined as functions of the flat outputs, it is possible to create time-varying paths that define a behaviour for the outputs and, through them and their respective derivatives, to determine the inputs the system needs in order to generate a response that approaches the one desired by the path. This path can be polynomial, or a function of class C∞ such as trigonometric or exponential functions (Rotella & Zambettakis, 2008).
Applying the flatness theory transforms the differential equation system (12) into a polynomial system of order n, simplifying the solution of the problem and turning it into a completely algebraic one.
The next step is to apply the differential flatness theory to the model; the general objective is to control the behaviour of the system according to a desired action. This behaviour is parameterized as a trajectory; with knowledge of the flat outputs of the system, these trajectories can be implemented easily. The model in (12) shows that it is possible to obtain the system inputs from the behaviour of its flat outputs up to the second derivative; it is therefore necessary to introduce these functions and their derivatives into the model. For example, if the arm, working in a planar movement, is required to rise from the origin with a gradual variation of position, a behaviour can be commanded using a cubic function in the generalized coordinates of the shoulder and the elbow, that is:
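Assuming the lowest-order cubic satisfying the boundary conditions stated below (null initial and final velocities, a 10-second duration, and the quoted endpoint angles), the commanded trajectories take the form:
$$q_{shoulder}(t) = 18\left[3\left(\tfrac{t}{10}\right)^2 - 2\left(\tfrac{t}{10}\right)^3\right]\ \text{deg} \qquad (13)$$
$$q_{elbow}(t) = 30\left[3\left(\tfrac{t}{10}\right)^2 - 2\left(\tfrac{t}{10}\right)^3\right]\ \text{deg} \qquad (14)$$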
Equation (13) takes the arm from the origin at 0° to a position of 18°; with equation (14) the forearm moves from 0° to 30° (Fig. 4). Both segments have null velocities at the beginning and at the end, executing the motion in a time of 10 seconds. Equations (13) and (14) and their respective derivatives are introduced into the dynamic model (12); to determine the inputs τ it is only necessary to solve the system at a given time. The response in Fig. 17 is the set of inputs that, applied to the system, should displace it according to the behaviour imposed by equations (13) and (14) for each segment.
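As a concrete illustration of this feedforward computation, the sketch below evaluates the inverse dynamics of a generic planar two-link arm along the cubic trajectories. The masses, lengths, and inertias are placeholder values, not the chapter's Table 2 parameters, and the dynamics follow the standard two-link model rather than the chapter's exact equation (12):

```python
import numpy as np

# Placeholder segment parameters (illustrative, not the chapter's Table 2 values)
m1, m2 = 2.0, 1.5            # segment masses [kg]
l1 = 0.30                    # upper-arm length [m]
lc1, lc2 = 0.15, 0.17        # distances to segment centers of mass [m]
I1, I2 = 0.02, 0.015         # segment inertias about their centers of mass [kg m^2]
GRAV = 9.81

def cubic(t, T, qf):
    """Cubic from 0 to qf with null boundary velocities (the form of Eqs. 13-14)."""
    s = t / T
    q = qf * (3 * s**2 - 2 * s**3)
    qd = qf * (6 * s - 6 * s**2) / T
    qdd = qf * (6 - 12 * s) / T**2
    return q, qd, qdd

def feedforward_torque(q, qd, qdd):
    """Inverse dynamics of a planar two-link arm: tau = M(q) qdd + C(q, qd) qd + g(q)."""
    q1, q2 = q
    c2, s2 = np.cos(q2), np.sin(q2)
    M = np.array([[m1 * lc1**2 + I1 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2) + I2,
                   m2 * (lc2**2 + l1 * lc2 * c2) + I2],
                  [m2 * (lc2**2 + l1 * lc2 * c2) + I2,
                   m2 * lc2**2 + I2]])
    h = -m2 * l1 * lc2 * s2                       # Coriolis/centrifugal coefficient
    C = np.array([[h * qd[1], h * (qd[0] + qd[1])],
                  [-h * qd[0], 0.0]])
    g = np.array([(m1 * lc1 + m2 * l1) * GRAV * np.cos(q1)
                  + m2 * lc2 * GRAV * np.cos(q1 + q2),
                  m2 * lc2 * GRAV * np.cos(q1 + q2)])
    return M @ qdd + C @ qd + g

T = 10.0                                          # movement duration [s]
for t in np.linspace(0.0, T, 6):
    qs = cubic(t, T, np.deg2rad(18.0))            # shoulder, Eq. (13)
    qe = cubic(t, T, np.deg2rad(30.0))            # elbow, Eq. (14)
    tau = feedforward_torque(np.array([qs[0], qe[0]]),
                             np.array([qs[1], qe[1]]),
                             np.array([qs[2], qe[2]]))
    print(f"t = {t:5.2f} s   tau = {tau}")
```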
This response is evaluated with the dynamic model (8), and the system response is verified in Fig. 18, where the trajectory traced by the generalized coordinates is superposed on the desired path defined by the cubic equations. Owing to the closeness of the curves, Fig. 19 evaluates the error between them, showing an oscillatory trend in the error variation; the maximum difference is of the order of 10⁻².
Fig. 17a. Set of forces obtained through equation (8) for each segment
system, as evidenced in Fig. 19, where both segments are commanded to move from 0° to the 90° position.
Fig. 19. Trajectory deviation for each generalized coordinate for larger movements
This does not mean that flat-systems theory cannot be applied at these positions; rather, for such solutions, flatness alone does not guarantee tracking and the necessary stability when applied in open loop, as in the examples seen before. It is then necessary to change the approach in order to take advantage of the benefits that flatness offers: a scheme implementing a controller that feeds back the real state, compares it with the desired output, and generates from this difference a control action that finally allows the system to track the desired trajectory with minimal error.
In the so-called two-degree-of-freedom control (Fig. 20), the closed-loop system contains a controller and a trajectory generator. The controller modifies the control input as a function of the error between the actual output and the nominal desired output (Van Nieuwstadt, 1997b).
The system used here actually has three levels: a high level that generates the output depending on the desired paths (output trajectory), an intermediate level that generates inputs based on these outputs, and finally a low level that stabilizes the system around the nominal trajectory. The uncertainties are variations caused by gravitational and friction forces and by nonlinearities in the models.
Fig. 20. Two-degree-of-freedom control system (Van Nieuwstadt, 1997b)
The solution gives an input trajectory that can be used in the real model. This assumption works in theory; however, disturbances can separate the real behaviour from the desired one. It is therefore necessary to apply PD feedback control to each generalized coordinate, making it possible to force the manipulator to exhibit the desired performance.
For the controller in Fig. 20, the following feedback law is established (Franch & Agrawal, 2003):
$$u = u_n + k_p \left(z_n - x_n\right) + k_d \left(\dot{z}_n - \dot{x}_n\right) \qquad (15)$$
Equation (15) ensures that the nominal state x_n follows the desired path z_n given a correct choice of the constants k_p and k_d. The constant k_p generates an action that modifies the inputs from the flatness-based control in proportional form, while the derivative constant k_d removes the variations (oscillations) caused by the corrections of the proportional part. In Fig. 19, jumps in position ranging from 100° to 800° are visible in a small period of time, so abrupt changes in the proportional control action, and consequently oscillations in the model, are to be expected. Each segment has its own control loop, with constants k_p = 25 and k_d = 0.5 for the arm segment and k_p = 2.5 and k_d = 0.5 for the forearm. The results of the implementation of these values can be seen in Fig. 21: the obtained paths follow the desired paths very closely.
Fig. 22 shows the performance of the arm according to the inputs.
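A minimal sketch of the feedback law (15) is given below, assuming the gains quoted above; the function name and array layout are illustrative:

```python
import numpy as np

# Gains quoted in the text: arm kp = 25, kd = 0.5; forearm kp = 2.5, kd = 0.5.
KP = np.array([25.0, 2.5])
KD = np.array([0.5, 0.5])

def two_dof_control(u_nominal, z_des, zdot_des, x, xdot):
    """Eq. (15): flatness-based nominal input plus a PD correction per coordinate."""
    return u_nominal + KP * (z_des - x) + KD * (zdot_des - xdot)
```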
With this control it is easier to move the system along other trajectories: any position within the manipulator workspace can be parameterized by a polynomial and then, through the theory of differentially flat systems, given appropriate inputs. For example, if the arm is required to oscillate following a cosine function while the forearm rises to the 90° position, following the same methodology as for the previous path, the behaviour described in Fig. 23 is obtained.
Fig. 23. Force behaviour obtained with the cosine function implemented in the arm
Fig. 25. System behaviour using the inputs defined by differential flatness
The concept is also applicable to systems with more degrees of freedom: although the equations of motion are more complex, path tracking and desired behaviours can still be obtained. This implementation presents difficulties, however; the input equations as functions of four outputs are extensive and prevent an acceptable result over long time spans. Precisely this error forces the use of a controller: the open-loop system presented difficulties during tracking, while the controller makes imperceptible modifications to the inputs that are nevertheless sufficient to produce a stable trajectory. Fig. 26 shows the behaviour of one of these desired trajectories for a system that controls all four generalized coordinates.
The ability to track trajectories using differentially flat systems theory on robotic manipulators depends on the quality of the input values. It was found that small time increments in these computations offer better performance, proving that the resolution of the inputs is an important part of the results.
In the tests carried out previously, the models were analyzed over a time period in increments of 0.01 seconds. Tests with larger increments do not provide appropriate behaviour, as can be seen in Fig. 27; this makes it difficult to determine the values of the controller constants, or worse, such values may not exist. This restriction to short time steps makes the calculation of the inputs difficult owing to the high consumption of computing resources; a strategy to resolve this situation is therefore needed.
Fig. 26. Behaviour of the system for each trajectory; the dashed line describes the actual movement while the continuous line describes the desired trajectory
Fig. 27. Behaviour of the model with a larger time step; a difficulty in tracking the desired output is evident
5. Conclusions
This study provides the initial phases in the development of trajectory control for an upper arm exoskeleton. With the forward kinematics study it is possible to generate a first view of the system: this mathematical model, obtained by means of a morphological study of the human arm, gives a first picture of the system's behaviour, its workspace being a tool to predict the system's capabilities as a function of the mechanical restrictions and of human movements. This is an important application of the code developed.
The dynamical analysis then provides information about the system requirements as a function of the wearer's stature. The influence of the mechanical model was not considered in the dynamics; the movements were analyzed considering the human arm as the object being actuated and the prototype as the element acting on the arm. Every aspect of the physical model of the system must be taken into account in the dynamical design. This dynamical analysis generates a mathematical model that is implemented in the trajectory-tracking problem for the system. It was demonstrated that the arm is differentially flat, and with this property it was possible to generate any behaviour to be imposed on the system, that is, a path for each generalized coordinate. To ensure correct tracking, a PD controller was added to each generalized coordinate; the simulations show that the model of the upper arm follows the desired path closely, and with that it is possible to impose more specific behaviours, defined simply by polynomial functions.
6. References
Carignan, C. & Liszka, M. (2005). Design of an Arm Exoskeleton with Scapula Motion for Shoulder Rehabilitation. Proceedings of the 12th International Conference on Advanced Robotics, 2005, pp. 524-531.
Croshaw, P.F. (1969). Hardiman I Arm Test - Hardiman I Prototype Project. General Electric Co., Schenectady, NY. Specialty Materials Handling Products Operation.
Culmer, P., Jackson, A., Levesley, M.C., Savage, J., Richardson, R., Cozens, J.A. & Bhakta, B.B. (2005). An Admittance Control Scheme for a Robotic Upper-Limb Stroke Rehabilitation System. Proceedings of the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 5081-5084.
Fliess, M.; Lévine, J.; Martin, P. & Rouchon, P. (1994). Flatness and defect of nonlinear systems: Introduction, theory and examples. International Journal of Control, Vol. 61, No. 6, pp. 1327-1361.
Franch, J. & Agrawal, S.K. (2003). Design of Differentially Flat Planar Space Robots: A Step Forward in their Planning and Control. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3053-3058.
Gerald, L.G.; Song, Q.; Almeida, G.L.; Hong, D. & Corcos, D. (1997). Directional Control of Planar Human Arm Movement. The Journal of Neurophysiology, Vol. 78, No. 6 (December 1997), pp. 2985-2998.
Gopura, R.A.R.C. & Kiguchi, K. (2007). Development of an Exoskeleton Robot for Human Wrist and Forearm Motion Assist. Proceedings of the Second International Conference on Industrial and Information Systems, ICIIS 2007, pp. 535-540, August 2007, Sri Lanka.
Gowitzke, B. & Milner, M. (2000). Scientific Base of Human Movements. Ed. Paidotribo, ISBN 84-8019-418-9, Barcelona, Spain.
Kazerooni, H. (2005). Exoskeletons for Human Power Augmentation. Proceedings of the
IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3120-3125.
August 2005.
Kazerooni, H. (2006). The Berkeley Lower Extremity Exoskeleton Project. In: Experimental Robotics IX, Ang, M.H. & Khatib, O. (Eds.), pp. 291-301, Springer-Verlag, Berlin Heidelberg.
Klopčar, N. & Lenarčič, J. (2005). Kinematic Model for Determination of Human Arm Reachable Workspace. Meccanica, Vol. 40, No. 2, pp. 203-219, Springer.
Lee, S., Lee, J., Kim, M. & Lee, C. (1999). A New Masterarm for Man-Machine Interface. Proceedings of the 1999 IEEE International Conference on Systems, Man and Cybernetics, Vol. 4, pp. 1038-1043.
Lenarčič, J. & Umek, A. (1994). Simple Model of Human Arm Reachable Workspace. IEEE Transactions on Systems, Man and Cybernetics, Vol. 24, No. 8, pp. 1239-1246.
Lévine, J. (2009). Analysis and Control of Nonlinear Systems: A Flatness-Based Approach. Springer-Verlag, Berlin Heidelberg, ISBN 978-3-642-00838-2, Germany.
Marguitu, D.B. (2009). Mechanics and Robot Analysis with MATLAB. Springer-Verlag London Limited, ISBN 978-1-84800-390-3, London, England, pp. 183-204.
Maurel, W. & Thalmann, D. (1999). A Case Study on Human Arm Upper Limb Modelling for Dynamic Simulation. Computer Methods in Biomechanics and Biomedical Engineering, Vol. 2, No. 1, pp. 65-82.
Murray, R.M.; Rathinam, M. & Sluis, W. (1995). Differential Flatness of Mechanical Control Systems: A Catalog of Prototype Systems. ASME International Mechanical Engineering Congress and Exposition.
Perry, J.C.; Rosen, J. & Burns, S. (2007). Upper-Limb Powered Exoskeleton Design. IEEE/ASME Transactions on Mechatronics, Vol. 12, No. 4, pp. 408-417.
Pons, J.L. (2008). Wearable Robots: Biomechatronic Exoskeletons. John Wiley & Sons, Ltd, ISBN 978-0-470-51294-4, England.
Rocon, E. & Pons, J.L. (2005). Case study: Study of tremor characteristics based on a biomechanical model of the upper limb. In: Pons, J.L. (2008). Wearable Robots: Biomechatronic Exoskeletons. John Wiley & Sons, Ltd, ISBN 978-0-470-51294-4, England.
Rosen, J.; Perry, J.C.; Manning, N.; Burns, S. & Hannaford, B. (2005). The Human Arm Kinematics and Dynamics during Daily Activities - Toward a 7 DOF Upper Limb Powered Exoskeleton. Proceedings of the 12th International Conference on Advanced Robotics, pp. 532-539.
Rotella, F. & Zambettakis, I. (2008). Commande des Systèmes par Platitude. Available: http://www.techniquesingenieur.fr/book/s7450/commande-des-systemes-par-platitude.html. Accessed: August 2008, pp. 5-6.
Sankai, Y. (2006). Leading Edge of Cybernics: Robot Suit HAL. Proceedings of the SICE-ICASE International Joint Conference, October 2006, Busan, Korea.
Siciliano, B.; Sciavicco, L.; Villani, L. & Oriolo, G. (2009). Robotics: Modelling, Planning and Control. Springer-Verlag London Limited, ISBN 978-1-84628-641-4, England.
Spong, M.W. (1992). On the Robust Control of Robot Manipulators. IEEE Transactions on Automatic Control, Vol. 37, No. 11, pp. 1782-1786.
Tözeren, A. (2000). Human Body Dynamics: Classical Mechanics and Human Movements. Springer-Verlag, Berlin Heidelberg, Germany.
Van Nieuwstadt, M.J. (1997a). Trajectory Generation for Nonlinear Control Systems. Technical Report CS 96-011, Department of Mechanical Engineering, California Institute of Technology, Pasadena, California, pp. 54-57.
Van Nieuwstadt, M.J. (1997b). Trajectory Generation for Nonlinear Control Systems. Ph.D. dissertation, California Institute of Technology, Pasadena, California, USA.
Veslin, E. (2010). Implementação dos sistemas diferencialmente planos para o controle de um manipulador robótico tipo braço humano. M.Sc. Dissertation, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ, Brazil.
Wang, H. & Xie, Y. (2009). Adaptive Inverse Dynamics Control of Robots with Uncertain Kinematics and Dynamics. Automatica, Vol. 45, pp. 2114-2119.
Real-Time Control in Robotic Systems
1. Introduction
Robotic systems are beginning to interact with humans and are increasing in dimensionality,
control complexity, task difficulty, and dynamic capability. It is therefore becoming
increasingly imperative that control be performed in real-time (i.e. on or faster than the
timescale of the system dynamics), intelligently making decisions on demand, and then
executing them in a dependable fashion. In addition, high-dimensional bio-inspired systems
are becoming more common (Butterfab & et al., 2001; Movellan et al., 2005; Simpkins
et al., 2011; Todorov et al., 2010; Vandeweghe et al., 2004; Wilkinson et al., 2003), and
are based upon a different strategy for control (and therefore control system design) than
traditional engineered systems - coordinated, compliant, and often coupled control rather
than orthogonal, stiff, and independent control of joints. As a result, issues related to
real-time capability, more critical now than ever, must be addressed with a coherent, measured
approach.
In order to deal with this challenge appropriately, a number of sub-problems must be
addressed and will be discussed in this chapter. These are timing, dimensionality,
computational capabilities, system bandwidth, and safety. There are fields devoted to each
problem independently, and the background of each will be given as each one is discussed.
Rarely does a researcher or practitioner in one field work in any of the other fields, let alone
all of them. However, ignoring any of these sub-problems will likely lead to a failure of a
complex real-time system. Therefore, familiarization with these issues and methods will help
the researcher and practitioner design, implement, and troubleshoot a real-time system most
effectively. This chapter will present current solution methods for each component of the
problem and show how they can be integrated together effectively.
• This unifies and expands upon approaches in the fields of real-time control systems,
control engineering, mechatronics, dynamics, robotics, embedded systems, and computer
science to address new design challenges.
2. The sub-problems
By defining the sub-problems we can begin to address the overall issue of real-time control
in robotic systems. The timing problem is centered around what is defined as real-time for
a particular application. For example, an energy-producing chemical process may occur
on the order of several hours, or the coordination of joints required for locomotion may
occur on the order of milliseconds. Microprocessors are digital devices, and commands
to the actuators are updated periodically. Thus timing issues become guaranteeing that
sample time is short enough, that delays in the system are minimal (or properly addressed),
that samples are not missed (or if sample time is variable, this is accounted for), and that
information bottlenecks are avoided or part of the control strategy. Timing leads to issues of
computational capability required versus what is available. This defines what dimensionality
and control complexity is possible, and limits the overall system bandwidth. Traditionally
designers have considered processing power versus requirements as an afterthought, with
little concern over the criticality. This is due to the fact that, for simple problems, most
modern processors are more than powerful enough. In complex systems, however, this
quickly becomes a poor assumption. When one considers the actual realities of controlling
a fifty degree of freedom (DOF) system, for example, where each DOF possesses multiple
sensors and actuators, the required information flow bandwidth becomes large quickly. A
system with such limitations can effectively lose control of some or all DOFs under dynamic
load, though it may be guaranteed to be controllable in theory from a dynamical systems
perspective. When considering these issues, and that in the future robots will be interacting
directly with humans more often in everyday activities, safety becomes a major consideration.
A robot which can exert large forces, is not back-drivable, and/or possesses significant mass is
inherently unsafe for human interaction. Making the robot small and weak is not the only or
best solution as this still does not guarantee that the system will not be dangerous. There are
ways to minimize risk of human injury in the case of loss of control, even during a catastrophic
failure of the entire primary control system.
For example, when setting the sample rate for the control algorithm, one possible constraint could be that it must not create a need to communicate data more rapidly than the data bandwidth allows.
$$\frac{dx(t)}{dt} = Ax(t) + Bu(t - \tau) \qquad (1)$$
$$y(t) = Cx(t) \qquad (2)$$
then the sampled-data system becomes
$$\tau = (d - 1)h + \tau', \qquad 0 < \tau' \leq h \qquad (8)$$
Variable delays can be dealt with by simply computing time varying system matrices. Then
the model described in Equation 3 is still valid. As long as the variation in sampling is less
than a delay in duration, the model described is still valid, as the control signal does not
vary with the delay. If the delay varies longer than a sample period, then a different model
may be derived which causes the control signal to not be held constant, but rather to vary
along with the delay. This allows the control signal to be scaled appropriately, and mitigates
instabilities. Excessively large variations in sampling period can lead to unavoidable (or rather
’very difficult to avoid’) instability due to violation of controllability guarantees. In other
words, when the system finally processes the delayed sample, too much time may have passed
to react properly, and a large error may already have been incurred from the previous control
action. In the case that this type of problem is likely, it is possible to create robust controls,
which guarantee certain bounded behaviors even in the event of large errors due to sample
time variability, which may be lumped into disturbances. Another way to effectively alter
Fig. 1. For complex control systems, it is effective to split time-critical aspects of control from
non-critical aspects. In this way a complex algorithm can be computed in near real-time, but
may not be guaranteed, while the critical aspects which require such guarantees can be
designed at the low level to handle variable updates to its reference. L stands for ’Low level’
while H stands for ’High level.’ y represents output, and u represents control input.
control parameters to match the delay is to include the delay as a parameter of the control
equation (Åström & Wittenmark, 1990).
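As a sketch of the kind of model this produces, the following discretizes dx/dt = Ax + Bu(t − τ) with a zero-order hold and an input delay shorter than one sample period, in the spirit of Åström & Wittenmark; the function name and the augmented-matrix integration trick are the author's illustration, not code from that reference:

```python
import numpy as np
from scipy.linalg import expm

def sample_with_delay(A, B, h, tau):
    """ZOH discretization of dx/dt = A x + B u(t - tau), with 0 <= tau < h.
    Returns (Phi, Gamma0, Gamma1) for the sampled-data model
        x[k+1] = Phi x[k] + Gamma0 u[k] + Gamma1 u[k-1]."""
    n = A.shape[0]

    def int_expm(t):
        # Van Loan trick: the top-right block of expm([[A, I], [0, 0]] t)
        # equals the integral of expm(A s) ds from 0 to t.
        M = np.zeros((2 * n, 2 * n))
        M[:n, :n] = A
        M[:n, n:] = np.eye(n)
        return expm(M * t)[:n, n:]

    Phi = expm(A * h)
    Gamma1 = expm(A * (h - tau)) @ int_expm(tau) @ B   # u[k-1] acts during [0, tau)
    Gamma0 = int_expm(h - tau) @ B                     # u[k] acts during [tau, h)
    return Phi, Gamma0, Gamma1
```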
Thus we can attempt to model delays by measurement and then compute an appropriate control, but that does not tell us everything that might happen; and how do we, in the context of complex systems, measure every possible permutation? Additionally, it may not be trivial to measure some of the features that need to be measured. That leads us to the next section regarding computational capabilities.
compared to cluster computing) and highly effective. This approach is useful not only to run
single-thread algorithms, but also to run multiple model predictive algorithms in parallel. In
this way, several potential control decisions or trajectories can be explored very rapidly, and
the one with the best outcome selected.
Measuring execution time for all the computations required, including communication delays,
and delays from peripherals is an important ability to have. This is not a trivial problem,
however.
Two effective approaches which can be used on complex real-time robotic control systems, of
the techniques discussed are either to split the time-critical components from the non-critical
components, or to use a measurement technique which will not necessarily guarantee perfect
timing, but may provide fairly good estimates on bounds of timing. In safety-critical
applications it is suggested that having a low level which has timing guarantees is more
appropriate than bounded estimates, in combination with a watchdog (discussed in Section
3.5). In some experimental robotic systems it may be sufficient to simply design controllers
which deal with temporal variability without instability, but write code which minimizes
delays. This latter method may be effective for rapid prototyping of new designs and testing
experimental theories, while the hierarchical structure with some components real-time may
be a more effective general strategy, as long as communication between levels is designed
carefully, such as in (Simpkins et al., 2011). Standard timing analysis tools should be used at
the programming stage to test algorithms and ensure their real-time performance. Even in the
best cases, there are always limitations, and in complex high dimensional systems we must
consider carefully what is the maximum capability available, and how much is required.
Fig. 2. First four principal components of postural movement data for humans undergoing a
’variability maximizing experiment,’ scaled appropriately for joint angles and applied such
that the size of the component for a joint determines its deviation from a neutral posture. It is
interesting to note that the components, which represent the dimensions that were most
significant in terms of accounting for the variability in the data, and the direction of
variability, comprise certain basic components of manipulation - gripping with all fingers,
using a wavelike pattern of all fingers or using the first few primary fingers and thumb,
precision manipulation with all fingers in fine patterns, and finally the thumb dominating
the movement. This may suggest that combining these synergies, or using subsets of fingers,
many manipulations can be performed.
the subspace is quite dynamic, and changes depending on the task. In other words, joints are coupled together in a fairly optimal fashion for each task (Todorov & Ghahramani, 2003; 2004). In fact, the author has performed a series of studies attempting to maximize the dimensionality of hand movement control, and there is still a notion of synergies (Simpkins, 2009). The maximum dimensionality is approximately ten DOFs. The obvious advantage is that the fewer individual trajectories that must be computed (or, more generally, control decisions that must be individually made), the faster those decisions can happen, and the more complexity can go into computing what exactly those decisions will be. So, where possible, couple joints together into synergies such as those in Figure 2, and compute a global trajectory for them to follow, such as open vs. closed hand (see Figure 3); this can be done quite quickly and efficiently computationally. A number of approaches can be taken to build control
systems upon this general methodology, but optimal control has been found to effectively
model much biological movement control data, and provides an intuitive means of building
control systems. Optimal control (Stengel, 1986) is a methodology which makes decisions
based upon a notion of optimality for a task. This notion comes in the form of a cost (or
reward, hereafter referred to only as cost for simplicity) function. Behaviors are created by
specifying the dynamics of the system as constraints, then actions are chosen which minimize
the cost computed by the cost function. There is evidence that the sensorimotor system ignores
task-irrelevant deviations (Todorov, 2004; Todorov & Jordan, 2003), which tends to reduce
dynamic load and uses less system bandwidth to compute controls than actively controlling
everything individually.
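A hedged sketch of how such synergies can be extracted and used, via principal component analysis of recorded postures, is shown below; the function names and the choice of a plain SVD are assumptions, not the experimental pipeline behind Figure 2:

```python
import numpy as np

def postural_synergies(q_data, k=4):
    """Extract k postural synergies (principal components) from joint-angle data.
    q_data: (num_samples, num_joints) array of recorded postures."""
    mean = q_data.mean(axis=0)
    centered = q_data - mean
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]                   # synergy basis, shape (k, num_joints)

def synergy_to_posture(mean, basis, c):
    """Map a low-dimensional synergy activation c (length k) to full joint angles."""
    return mean + c @ basis
```

Controlling the k activation variables instead of every joint reduces both the number of trajectories to compute and the data bandwidth the controller consumes.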
Bandwidth is a term which is used in many fields for different purposes. In control theory it
relates the ability to track a reference signal (Franklin & et al., 1994); in communications and
Fig. 3. Images of a hand gripping an object. Though there are many degrees of freedom in the
hand, it is clear that there is a low dimensional synergy in this case. We can control the entire
hand from this synergy for this task (open-close). In that way, more complex algorithms can
be computed in real-time if needed. A sliding complexity (in terms of dimensionality and
dynamics) can be used to guarantee real-time control even for a complex system.
Fig. 4. The bandwidth cutoff for a system tracking a sinusoidal input is considered to be the
point at which the amplitude of the output drops below -3dB, along with a corresponding
phase shift. This is the Bode plot for G(s) = 1/(s + 1).
networking it refers to how much data throughput is possible. However, the general concept is very similar in all instances. We will use the following as the definition of bandwidth:
Definition: Bandwidth shall be defined as the rate at which a system, open-loop or closed-loop, may
successfully track a reference. The boundary between successful tracking and unsuccessful tracking is
the point at which the amplitude of the output drops below -3 Decibels (dB) relative to the input signal
in the frequency domain, such as Figure 4.
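As a quick numerical check of this definition, using the G(s) = 1/(s + 1) example from Fig. 4 (a sketch; the frequency grid and threshold handling are arbitrary choices):

```python
import numpy as np

# Frequency response of G(s) = 1/(s+1): |G(jw)| = 1/sqrt(1 + w^2).
w = np.logspace(-2, 2, 2000)
mag_db = 20 * np.log10(1.0 / np.sqrt(1.0 + w**2))

# Bandwidth = first frequency where the magnitude drops below -3 dB.
w_bw = w[np.argmax(mag_db < -3.0)]
print(f"-3 dB bandwidth ~ {w_bw:.2f} rad/s")   # ~1 rad/s for this system
```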
The bandwidth requirement for a system is also reduced by synergistic movement control. It
is also significant that a designer consider not what the maximum possible bandwidth of the
overall system can be, but more what the maximum bandwidth should be. In other words,
what is required (along with a factor of safety) to succeed at all tasks the system is designed
to accomplish? Too high of a (dynamic) mechanical bandwidth may lead to instability, high
forces, and other problems affecting system safety. Too high of a data bandwidth leads to
higher probabilities of lost samples, delays, and related issues. It is also possible to have
a sliding bandwidth which varies depending on computational complexity versus dynamic
timing of a particular task. In fact, this appears to be closer to what biological systems do -
Fig. 5. Block diagram of hierarchical control scheme. This approach breaks a single complex
problem into several smaller and more tractable problems.
use some subset of features of the complex system for any given task, but have the capability
for a variety of tasks (Todorov & Jordan, 2002).
Fig. 6. (a) A modular bio-mimetic robotic system developed by the author at the University
of California, San Diego and the University of Washington in the Movement Control
Laboratory. (b) A simplified 2-dimensional representation of the robot and object, used in the
algorithm described in this section.
partner, and the human being as the other. The human acts on the keys by applying forces
to the keychain, the keychain changes state, and by nature of the contacts with the human’s
hand, alters the state of the human, who gains information about the state of the keys. This
also helps simplify the control problem in various ways, as the human hand conforms to the
shape of the key, rather than the hand attempting to move in spite of the key.
Creating a robot which interacts with the world in this fashion involves solving a number
of new design challenges without compromising on any one specification. For example, in
order to dynamically manipulate objects, a robotic finger must be able to move rapidly, be
completely compliant and back-drivable, communicate at a high rate with a host processor,
be capable of implementing complex control in real-time, be sensitive, measure appropriate
variables (such as force, velocity, and position), and it must be sufficiently robust. This is to
the author’s knowledge the first design which solves all these constraints and incorporates
modularity (each finger is completely self-contained and has wireless capabilities, with
provisions to be attached to any base with the appropriate hole pattern). Controlling such
a device to perform dynamic manipulation in the way previously described is a new type of
control problem, and requires a significantly new approach.
The goal is to apply forces to an object to cause it to behave in a desired fashion. This can be
tracking a trajectory, or any other goal of control. An important note is that the two problems
(manipulation and locomotion) are related in that we either have a small object relative to
manipulators such as a hand, or the ground, an object of infinite size (in either case, it is often
that one side of the problem is significantly larger than the other side - manipulators or object
manipulated), as in Figure 7.
Forces are applied to the object by manipulators, described as linkages, as shown in Figure
6(b), which are a simplified model of the physical robots developed by the author, shown in
Figure 6(a) and described in detail in (Simpkins et al., 2011). Torques (τi ) are applied to either
Fig. 7. Image sequences showing manipulation of an object (top) and locomotion (bottom).
Note the parallels between the two. In fact, the two can be thought of as essentially the same
problem - this is a significant insight.
of the two links shown for a particular manipulator in order to generate an output force at
the endpoint. This is similar to human joints in terms of the way tendons pull on joints to
cause an output force. However, here for design simplicity and robustness, the complex inner
musculoskeletal structure is reduced to a four bar linkage model. The motors which produce
the torque are coupled to the joints through a cable drive system which has ultra-low friction
and little if any backlash - essential components for back-drivability. The cables act as the
tendons would in biological systems in the sense of a pull-pull system, as well as providing
some spring characteristics. As much mass as possible is kept at the base, and decoupled
from the output, which is another reason for the linkage design. This helps keep inertia small
relative to the objects being manipulated, and relative to the output force capabilities of the
finger. This is significant because the finger needs to be capable of dynamic movement (rapid
accelerations and decelerations), and must avoid wasting most of the force output capability
of the motors in overcoming the structure’s inertia (imagine going through your day with an
extra 10kg of weight strapped to your wrists; it would indeed be very difficult to perform
simple tasks because nearly all of your strength would be used to overcome the weight and
inertia of the additional mass). Large inertia also is very detrimental to back-drivability.
The concept of these devices is not to attempt to duplicate the structure of a biological
system. Rather, the concept is to capture the capabilities which are significant for developing
controllers that solve the problems of manipulation and locomotion similarly to biological
systems. Each finger is capable of producing a few Newtons of peak output force in each
axis, and requires a fraction of that to be backdriven (frictional and inertial loads are small),
enough for fine dynamic manipulation. Each can perform closed loop control at over 1kHz,
communicating between all fingers and a higher level processor (such as a personal computer)
all relevant data (position, velocity, force, actuator states, etc).
Consider for simplicity the manipulation or locomotion problem in a 2D context. Objects are represented by their center-of-mass position and angle, mass (m), inertia (J), and a boundary (d(·)), which may be described by splines, allowing any shape, convex or concave, to be represented. The sums of forces and moments on the object (Fx, Fy, and M) are given by
$$\begin{Bmatrix} \sum F_{x,o} \\ \sum F_{y,o} \\ \sum M_o \end{Bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -d_y(\theta) & d_x(\theta) \end{bmatrix} \begin{Bmatrix} \sum_i f_{x_i} \\ \sum_i f_{y_i} \end{Bmatrix} - \begin{bmatrix} m_o & 0 & 0 \\ 0 & m_o & 0 \\ 0 & 0 & J \end{bmatrix} \begin{Bmatrix} a_{x,o} \\ a_{y,o} + g \\ \ddot{\theta}_o \end{Bmatrix} \qquad (10)$$
where f_i represents the force applied by manipulator i at the surface location, given relative to the object center of mass as a function of the angle θ (relative to horizontal in the global coordinate system) through d(θ); g represents gravitational acceleration; a represents the acceleration of either manipulator i's endpoint or the manipulated object, depending on the subscript; and (·)_o means 'with respect to the object being manipulated.' Time derivatives are represented by an over-dot; for example, the time derivative of x is written ẋ.
Contact dynamics can be complex to simulate accurately, and can bring about bottlenecks for
otherwise lightning fast algorithms (important in real-time contexts). Though with the rigid
body assumption it is true that accurate contact dynamics appear difficult to ’get right,’ real
objects are not infinitely rigid. This provides a means by which contacts are smooth, soft,
and damped. The reader is encouraged to consider his or her hands as an example- note the
flexibility of the tissue. In general we, as human beings, do very little manipulation of rigid
objects with rigid objects. When was the last time you attempted to pick up and manipulate
a coffee cup or other fairly rigid body while wearing rigid steel armor which has no padding
over your hand? It is unlikely that you ever have, or ever will. In fact, it is important to have
damped interactions to avoid damaging the manipulators and objects - consider shoes, which
prevent joint and limb damage due to shocks over time. This makes the problem significantly
simpler, and we can make strong assumptions which do not necessarily reproduce contact
dynamics of rigid bodies perfectly, but instead assume that objects interacting with each other
do so with damping. In addition, the concepts we are presenting here do not necessarily
depend upon the contact algorithm, being a modular method, and so the physics and contact
model can easily be switched for others, even on the fly. And it makes sense to have a more
flexible strategy for simulating dynamic interactions than making one assumption and basing
the entire methodology upon this such that the algorithm becomes ’brittle’ to simple changes,
or becomes computationally infeasible due to complexity.
Given this discussion we create a constraint for the form of the algorithm presented here
(Simpkins & Todorov, 2011). We assume that there is no slip during contacts, effectively
creating a sticky contact algorithm. The advantage here is that it allows a constraint on
acceleration of the manipulators relative to the object,
$$\begin{Bmatrix} a_{x,i} \\ a_{y,i} \end{Bmatrix} = \begin{bmatrix} 1 & 0 & d_x(\theta_i) \\ 0 & 1 & d_y(\theta_i) \end{bmatrix} \begin{Bmatrix} a_{x,o} \\ a_{y,o} \\ \ddot{\theta}_o \end{Bmatrix} \qquad (11)$$
where a represents the acceleration of the object o or point mass i, and θ̈o is the angular
acceleration of the object. This does create an instantaneous change of velocity when a contact
occurs, which can be modeled as an impulsive force that causes the change in velocity which
is required. There is also an assumption that the point masses are small enough in mass
relative to the object that their contribution to the momentum of the system when in contact
is insignificant.
When not in contact, the point masses experience accelerations due to a force which is applied
by the robot fingers. The entire robot finger assembly is modeled as a point mass with external
forces applied to the point mass in order to cause accelerations. This assumption is made because the rotor inertias of the motors dominate the overall finger inertia. It also makes the system simpler to model and control than one with complex nonlinear dynamics, though the latter would be possible as well and is a trivial extension. The affine point mass dynamics for mass i are given by
$$\begin{Bmatrix} a_{x,i} \\ a_{y,i} \\ 1 \end{Bmatrix} = \begin{bmatrix} \frac{1}{m_i} & 0 & 0 \\ 0 & \frac{1}{m_i} & -g \\ 0 & 0 & 1 \end{bmatrix} \begin{Bmatrix} b_{x,i} \\ b_{y,i} \\ 1 \end{Bmatrix} \qquad (12)$$
and the object dynamics, after rearranging the terms in Equation 10 are given by
$$\begin{Bmatrix} \sum_i f_{x,i} \\ \sum_i f_{y,i} \\ -\sum_i f_{x,i}\, d_y(\theta_{i,p}) + \sum_i f_{y,i}\, d_x(\theta_{i,p}) \end{Bmatrix} - \begin{Bmatrix} m_o\, a_{x,o} \\ m_o\, a_{y,o} \\ J \ddot{\theta}_o \end{Bmatrix} = \begin{Bmatrix} 0 \\ m_o\, g \\ 0 \end{Bmatrix} \qquad (13)$$
and the end effector dynamics when in contact with the object can be similarly derived to give
$$\begin{Bmatrix} f_{x,i} \\ f_{y,i} \end{Bmatrix} + m_i \begin{Bmatrix} a_{x,o} \\ a_{y,o} \end{Bmatrix} + m_i \ddot{\theta}_o \begin{Bmatrix} d_x(\theta_i) \\ d_y(\theta_i) \end{Bmatrix} = \begin{Bmatrix} b_{x,i} \\ b_{y,i} + m_i g \end{Bmatrix}. \qquad (14)$$
We combine the object and end effector dynamics into a single equation, which allows us to
solve for the unknown forces and accelerations in one operation
$$A w = b, \qquad (15)$$
where A is given by
$$A = \begin{bmatrix} 1 & 0 & -m_o & 0 & 0 \\ 0 & 1 & 0 & -m_o & 0 \\ 0 & 0 & 0 & 0 & -J \\ I & 0 & m_i & 0 & m_i \ddot{\theta}_o\, d_x(\theta_i) \\ 0 & I & 0 & m_i & m_i \ddot{\theta}_o\, d_y(\theta_i) \end{bmatrix} \qquad (16)$$
w is given by
$$w = \left\{ f_{x,i},\; f_{y,i},\; a_{x,o},\; a_{y,o},\; \ddot{\theta}_o \right\}^T \qquad (17)$$
and b is given by
$$b = \left\{ 0,\; g,\; 0,\; b_{x,i},\; b_{y,i} + m_i g \right\}^T. \qquad (18)$$
Then, we can compute a solution, or an approximation to the solution quite quickly using the
solution method of choice. One method that works well is simply to use the pseudoinverse of
A, which will yield the forces the point masses apply on the object and thus the forces on the
point masses in response, and the acceleration of the object.
$$w = A^{+} b \qquad (19)$$
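In code, this solve is a one-liner; a minimal sketch (names illustrative):

```python
import numpy as np

def solve_contact_dynamics(A, b):
    """Eq. (19): least-squares solution w = A+ b of the combined dynamics A w = b.
    w stacks contact forces, object linear accelerations, and the angular
    acceleration, in the order of Eq. (17)."""
    return np.linalg.pinv(A) @ b   # np.linalg.lstsq(A, b, rcond=None)[0] also works
```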
Compute the optimal forces and contact locations to track the reference if in contact, and if not, compute
the trajectories for those manipulators not in contact in order to move them to new contact points, or
into a useful position (if, for example, the object moves out of the particular manipulator’s workspace).
These two trajectory classes form a pattern, which could be referred to as a gait for
manipulation or locomotion.
There are several sub-problems to consider in order to not only solve the above overall
problem, but to do so in a useful way - for example, manipulate objects without dropping
them, make a robot walk without falling over. Additionally, it should be noted that most
manipulation problems involve redundancy - usually one has more fingers than one needs
to for any given task, so how does one decide what to do with each finger? There are many
solutions (Simpkins & Todorov, 2011). Here we will state that this can be formulated within
the optimal control framework, and solved using the method described below. In that way
the solution which ’best’ solves the problem at hand is selected from the set. It remains to
further describe what ’best’ means, and how the set of possible actions is generated.
It is possible to formulate this problem within the optimal control framework as a single-level problem; however, it is nonlinear, high dimensional, and stochastic. We are interested
in real-time solutions, and it is difficult to solve these problems with an unprescribed length
of time and computational capabilities, let alone limited resources and time. There is also
support for the notion that the human brain solves such problems in a hierarchical but
interconnected fashion. Thus we draw from nature to address this problem - break the
one complex problem down into several more simple and tractable problems, organizing
challenging aspects of the problems into groups which sum to the solution of the more difficult
problem.
The three levels are the high level, which assumes one can apply an ideal force to the object
directly at its center of mass, a mid-level, which takes as input the requirement for the
ideal force, and decides how to break the force up into individual forces to apply with the
manipulators, and where these forces should be applied, and finally a low level which deals
with mapping endpoint forces to apply with manipulators to joint torques. The mid-level
also, in computing optimal configurations, weighs out the difference between remaining in
the same contact posture and breaking contact to create a new grip.
3.4.2 High-level
Another advantage of this approach is that the high level can employ more complex
algorithms, as it will be lower dimensional overall. In fact, predicting individual trajectories of
the manipulators would be prohibitively expensive computationally; however, implemented in this way, it is not.
To implement this high level controller, one can implement any desired strategy: optimal
(Stengel, 1986), robust (Zhou & Doyle, 1998), risk-sensitive, model-predictive (Erez et al., 2011;
Li & Todorov, 2004), classical (Franklin & et al., 1994), and adaptive (Krstic et al., 1995), to list
a small subset of possibilities.
A simpler alternative is to implement classical control using the ideal forces; the lower-level controls may likewise be implemented as, for example, Proportional-Derivative (PD) control, which in its discrete form is given below, with ΔT the time increment, k the current time-step, K_d the derivative gain, K_p the proportional gain, e the error, and y the output state:
$$u_k^{PD} = K_p e_k + \frac{K_d}{\Delta T}\left(e_k - e_{k-1}\right) \qquad (20)$$
where the error is given by
$$e_k = r_k - y_k \qquad (21)$$
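A minimal sketch of Eqs. (20)-(21) as a stateful controller (class and method names are illustrative):

```python
class DiscretePD:
    """Discrete PD controller of Eqs. (20)-(21)."""
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.e_prev = 0.0

    def step(self, r_k, y_k):
        e_k = r_k - y_k                                              # Eq. (21)
        u = self.kp * e_k + (self.kd / self.dt) * (e_k - self.e_prev)  # Eq. (20)
        self.e_prev = e_k
        return u
```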
3.4.3 Mid-level
Given the required force to apply to the object, the mid-level splits this force into (given
the number of manipulators) what forces each manipulator should apply and where they
should apply them. This level deals also with workspace limitations, what to do if the object
moves outside the manipulator workspace, and contact breaking is handled as well, all in one
equation.
This is done differently, depending on which of the manipulators are in contact and which
are not. Given a particular contact grouping, the overall ideal force is distributed among the
in-contact fingers with a fast quadratic constrained optimization (forces constrained to only
have a negative surface normal component).
The forces acting upon the object (at the locations specified by d(θ)) are expressed in the x, y, and angular θ coordinates as linear forces and the resulting moments, given in matrix form (at timestep k) for manipulator i acting on the object as
$$f_{t_i,k} = \begin{Bmatrix} f_{x_i} \\ f_{y_i} \\ f_{y_i}\, d_x(\theta_i) - f_{x_i}\, d_y(\theta_i) \end{Bmatrix} \qquad (22)$$
The objective J ( f i )k is to minimize the error between the ideal force and the sum of total forces
acting on the object, which tends to keep any force from being excessively large,
$$J(f_i)_k = \left\| f_{r,k} - \sum_i f_{t_i,k} \right\|^2 + \sum_{(x,y),i} f_{i,k}^2 \qquad (23)$$
We can pose this as a constrained optimization problem by stating (with n representing the
normal to the surface of the object at the point of contact),
$$f_i^{opt} = \underset{f_i}{\operatorname{argmin}}\; J(f_i)_k, \quad \text{s.t. } f \cdot n < 0, \qquad (24)$$
So given the manipulators in contact, we can extremely quickly determine a set of forces which
sum (as closely as possible) to an equivalent of the ideal desired force.
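The chapter does not specify the quadratic solver, so the sketch below stands in with projected gradient descent on Eq. (23), projecting out any positive surface-normal component to respect Eq. (24); names and step sizes are illustrative:

```python
import numpy as np

def distribute_forces(f_ref, d, n, iters=200, lr=0.05):
    """Split an ideal object wrench f_ref = (Fx, Fy, M) among k contacts.
    d: (k, 2) contact locations relative to the object COM.
    n: (k, 2) outward unit surface normals at the contacts."""
    k = d.shape[0]
    f = np.zeros((k, 2))
    # T_i maps a contact force (fx, fy) to its object wrench (fx, fy, moment).
    T = np.stack([np.array([[1.0, 0.0],
                            [0.0, 1.0],
                            [-d[i, 1], d[i, 0]]]) for i in range(k)])
    for _ in range(iters):
        resid = f_ref - sum(T[i] @ f[i] for i in range(k))
        for i in range(k):
            grad = -2.0 * T[i].T @ resid + 2.0 * f[i]   # gradient of Eq. (23)
            f[i] -= lr * grad
            push = f[i] @ n[i]
            if push > 0.0:                              # enforce f . n < 0 (Eq. 24)
                f[i] -= push * n[i]
    return f
```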
The behavior of the fingers that are not in contact (and the contact breaking behavior) can be
encoded in a field equation which combines several different elements to create a pattern of
action. This is easier to build from scratch or data than a cost function, since it is more direct
and actions can be easily shaped. This is similar to directly constructing a policy - imagine
a virtual force acting on a finger, so when you hold a coffee cup in your hand and rotate it
continuously in one direction, there is a moment where you feel an increasing urge to change
your grip (in such a way that you do not drop your cup). A virtual force field translates
well to constructing these types of behaviors from observations. In the case presented here it
is desirable to change grip when a manipulator is nearing the boundary of its workspace (i.e.
Fig. 8. Virtual force field rendering. Here one sees the forces acting on the manipulator
endpoint. Note how the object (white) affects the field as it enters the workspace - the further
the object is from the center of the workspace, the narrower the contact region becomes. This
is a fast way of generating trajectories for the endpoint, and is very flexible for a variety of
behaviors or adapting behaviors, as it is simply a matter of altering parameters of an
equation.
apply a force which pulls the finger away from the object near the boundary of the workspace),
and find a new contact which is closer to the middle of the workspace, in order to maximize
potential for varied actions. This force should fall off near the line between the center of the
workspace and the center of the object.
$$F_e = K_e \frac{(x - x_c)}{1 + \| x - x_w \|^2} \qquad (25)$$
Equation 25 pulls the manipulator toward the object, but falls off the further the object is from the center of the workspace; in other words, if the object is too far away, the term which pulls the manipulator towards the center of the workspace (Equation 26, the spring element) dominates. Here x_w represents the center of the workspace, x_c the center of the object, x the endpoint of the manipulator, K_e and K_s are gains, a is a constant determining the fall-off of Equation 26 as a function of distance from the object, and n is the surface normal:
$$F_s = K_s\, n\, \| x - x_w \|\, e^{-a(x - x_c)^2} \qquad (26)$$
$$F_t = F_e + F_s \qquad (27)$$
Outside of the workspace, biological system tendons and joint assemblies act as springs,
pulling toward the center of the workspace, as in Equation 28.
$$F_t = K_k (x - x_w) \qquad (28)$$
These equations can be visualized as in Figure 8. Here it is clear that as the object enters the
workspace, the virtual force field acting on the manipulator endpoint correspondingly alters.
In the case just described we use (for clarity) circular objects and workspaces, but different
shapes can be used by replacing the simple subtraction by a function of any shape.
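A minimal sketch of the field equations (25)-(28) follows; the circular-workspace test, gain values, and sign conventions are assumptions layered on the printed equations:

```python
import numpy as np

def virtual_force(x, x_w, x_c, n, K_e=1.0, K_s=1.0, K_k=1.0, a=1.0, r_w=1.0):
    """Virtual force on a manipulator endpoint, after Eqs. (25)-(28).
    x: endpoint position, x_w: workspace center, x_c: object center,
    n: surface normal at the nearest contact, r_w: workspace radius.
    Signs follow the printed equations; gain signs set push vs. pull."""
    if np.linalg.norm(x - x_w) > r_w:
        # Outside the workspace: spring pull relative to the center (Eq. 28).
        return K_k * (x - x_w)
    # Attraction term, fading with distance from the workspace center (Eq. 25).
    F_e = K_e * (x - x_c) / (1.0 + np.linalg.norm(x - x_w) ** 2)
    # Spring element that dominates when the object is far away (Eq. 26).
    F_s = (K_s * n * np.linalg.norm(x - x_w)
           * np.exp(-a * np.linalg.norm(x - x_c) ** 2))
    return F_e + F_s                                    # Eq. (27)
```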
3.4.4 Low-level
There are advantages to working in endpoint coordinates (Simpkins & Todorov, 2011), one of the most notable being flexibility with respect to the actual structure for which the control problem is being solved. In addition, the more challenging parts of the problem are solved for dynamics which are linear.
We here implement a bounded Newton’s method (Ferziger, 1998) to compute the optimal
feasible torques to apply which map as closely as possible to the desired forces at the
endpoints. Each state is initialized with the current state rather than randomly in order to be
near the goal initially, and this is quite effective and efficient in practice. When implemented
on a physical system, accuracy is further improved by implementing a low level PD controller
to track these joint torques more closely, and deal with issues of friction and un-modeled
dynamics of the structure or system.
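A hedged sketch of such a bounded (projected) Newton iteration is shown below; it assumes a square, invertible endpoint Jacobian J mapping torques to endpoint forces via F = J^{-T} τ, and its initialization and clipping scheme are the author's illustration rather than the published implementation:

```python
import numpy as np

def bounded_newton_torques(J, f_des, tau_min, tau_max, iters=20):
    """Find bounded joint torques whose endpoint force J^{-T} tau best
    matches f_des, by Newton steps on ||W tau - f_des||^2 with clipping."""
    W = np.linalg.inv(J.T)                 # maps torques to endpoint forces
    H = 2.0 * W.T @ W                      # Hessian of the quadratic objective
    tau = np.clip(J.T @ f_des, tau_min, tau_max)   # start near the goal
    for _ in range(iters):
        grad = 2.0 * W.T @ (W @ tau - f_des)
        step = np.linalg.solve(H, grad)    # Newton direction
        tau = np.clip(tau - step, tau_min, tau_max)
    return tau
```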
Fig. 9. (a) A typical engineered control system. It consists of a mechanical structure, sensors, actuators, power input/supply, and some computer system. (b)-(e) Some examples of failures of the typical system, and how they can be classified into one of the four cases (power supply omitted for clarity). (b) Case 1: sensor failure - a sensor stops functioning properly. (c) Case 2: actuator failure - in this case an actuator breaks, but it could be that it has an internal electrical failure. (d) Case 3: mechanical failure - here we see that a main mechanical component has dislocated completely. (e) Case 4: computer failure - here it seems that the computer has been destroyed, but this type of failure can also be a software crash.
Case 1: Sensor failure
A sensor may cease to function or may function abnormally (a wire breaks, it wears out, shock
destroys it, a bad ground, etc.). The robot control algorithm assumes it is still being fed valid
sensor information, and acts accordingly. This will likely cause a disastrous acceleration until
a boundary is struck with maximum force, and the user may be injured.
In the case of a haptic system, it is better to have a motor turn off rather than accelerate
uncontrollably; however, how does one integrate available information to react properly?
There actually is information available to tell us that there is a hardware failure, even if the
sensor failure itself is literally undetectable.
Case 2: Actuator failure
There are a number of ways an actuator can fail (Figure 9(c)). An electrical actuator may
fail such that it becomes passive, unresponsive, sluggish, or lacking in power output. It may
also lock or jam due to some component failure. The internal circuit of the actuator may fail
such that it shorts (which tends to make the actuator act as a viscous brake) or has an open
circuit, which leads to a non-responsive, passive actuator. A hydraulic actuator may lock, or it
may leak fluid which leads to pressure loss and either stiffness or loss of power, depending on
where the leak occurs. It may also suffer a compressor failure (which may lead to a total failure
of the actuator) or component wear (which may lead to inefficiency or lack of functionality as
well). Pneumatic actuators may experience a lock-up, but due to gas compressibility this
is less common. Often the failure of pneumatics leads to rapid functionality loss, as the gas
pressure loss may be more rapid than for a fluid-based system. Many pneumatic, electrical,
and hydraulic systems include integrated sensors which may fail, as previously discussed in
Case 1.
Fig. 10. One possible safety configuration for a watchdog system. In this system there is a
simple microprocessor overseeing all safety-critical functions of the main device - sensors,
actuators, and boundaries. If anything is incorrect, it can disable the motors and enable a
brake directly.
Case 3: Mechanical failure
A mechanical failure is defined by the author as a failure in one or more mechanical aspects
of the system. This can be a component breaking or yielding (deforming such that it does
not return to its original shape), a mechanical jam, or a physical disconnection which is
undesirable, as in Figure 9(d).
Case 4: Computer or electronic failure
A computer failure may occur through a software crash, electrical short or disconnection, a
software bug or error, memory corruption, or computational insufficiency, as in Figure 9(e).
Any of these can lead to the control system not acting as designed.
Achieving the goal regardless
Ultimately, many failures can be detected and ’handled’ in such a way that potentially
devastating effects are mitigated. It may be convenient to include as a safety device a
redundant low level microprocessor-based ’watchdog’ which is independent of the standard
system, as in Figure 10. It is connected to all the sensor information and actuators, and has
a programmed set of boundaries to maintain. Its job is to oversee all the components of the
system and ensure they are functioning. The watchdog and main computer system ping each
other frequently in a simple way such as with a pulse train. If there is a failure in either system
component (main or watchdog) the system, depending on its function, may be shut down. The
watchdog may be connected to the actuators such that it may disable the power output and
engage some braking action if desired. It is also generally a high bandwidth system (since it
is not performing many computations). If the system is mission-critical, such as in a military
application or in a flight control or other situation where it may be dangerous for the system to
be suddenly disabled entirely, it may be more effective to determine the safest course of action
which still achieves the objective. In the case of a walking robot, where a Case 1-4 failure
has occurred in the leg, it may still be possible to walk. Consider humans with an injured
joint - they can often offload much of the dynamic load to another leg or joint. It is simply a
matter of an adaptive control algorithm which recomputes a solution for a different system
configuration. Consider sending a robot into a situation where it will be gradually melting - a
radioactive disaster or extreme conditions. The robot needs to be capable of adapting to loss
of actuators, sensors, and some computational capabilities. An algorithm such as discussed
in Section 3.4.1 can easily handle these sorts of challenges.
Fig. 11. Suggested flowchart for real-time system design. Note that safety is to be part of each
stage of the design process. In addition, bandwidth, dimensionality, and timing are
somewhat interdependent, which is the reason that feedback is incorporated in this chart.
The first thing to start with as a question is ’what are the dynamics of interest?’ This leads to
the computations outlined in Section 3.1. Second, one must determine just how many
degrees of freedom the system needs to perform all its tasks (Section 3.3), and how complex
the control algorithms need to be (and can be, depending upon hardware available). This
determines dimensionality, which affects the timing. Bandwidth limitations impose
constraints on both dimensionality and timing (slower timing allows for higher
dimensionality, and vice versa for faster timing).
Identification of the failure is a
challenge. This can be achieved through the integration of the watchdog. A watchdog can
be used to compare measurements with expected outputs. For example, a zero output from a
sensor that normally measures the velocity of a motor, while that motor is being commanded
to a high velocity, is not drawing very much current (meaning it probably is not stalled), and
is generating significant back-emf, can be a flag for a sensor or actuator failure.
dynamics of a system express themselves through all the sensors and actuators. Expected
values can be integrated into the watchdog in the form of a classifier which can output error
states very quickly and efficiently, if well-trained and carefully coded. The watchdog and
computer compare error signals, and can eliminate (from the control algorithm), freeze, or
otherwise remove from or adjust the dynamics as much as possible to minimize failure impact.
Then the control algorithm is simply updated with a new number of actuators or in the case
of something similar to locking an injured joint, a new number of degrees of freedom, and it
continues to solve the problem in the optimal fashion given the new system. Since everything
is solved online this is possible.
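The velocity-sensor example above can be expressed directly as a watchdog rule. The sketch below is a hand-coded stand-in for the trained classifier mentioned in the text, and every threshold value is an illustrative assumption.

```python
def sensor_or_actuator_fault(v_meas, v_cmd, current, back_emf,
                             v_eps=0.05, v_cmd_min=1.0,
                             i_stall=2.0, emf_min=0.5):
    """Flag the inconsistency described in the text: the velocity sensor
    reads ~zero while a high velocity is commanded, the motor is not drawing
    stall-level current, and significant back-emf shows the motor is actually
    turning. All thresholds are illustrative assumptions."""
    return (abs(v_meas) < v_eps          # sensor claims the motor is still
            and abs(v_cmd) > v_cmd_min   # but fast motion was commanded
            and current < i_stall        # probably not stalled
            and abs(back_emf) > emf_min) # and the rotor is clearly moving
```

A trained classifier would learn such thresholds from logged data, but the principle - cross-checking redundant physical signals against each other - is the same.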
Fig. 12. Animation sequence of three manipulators tracking a rotating stationary trajectory
(frames advance starting from top-left in sequence to the right, in the same fashion as normal
English text). Note the multiple contact breaks in order to allow the object to be continuously
rotated.
While some design decisions depend upon others, Figure 11 suggests a sequence which is typically effective. Each
component is interdependent with each other component, however an effective starting point
is to define the problem, or class of problems first. This gives a sense of the requirements
for timing. Then the designer can compute required bandwidth and dimensionality of the
system (as well as constraining the design by the available computational power), or generate
something like the sensorimotor system of a human, with a high mechanical dimensionality,
but limitations on overall bandwidth and dimensionality of control, leading to optimal actions
for a particular task. At each stage of the process, safety engineers or the primary designer
can design checks such as a secondary safety system and impose safety-based constraints.
Real-time control itself, in order to be most effective, should not be decoupled from the
design of the actual system. Otherwise the system may be unnecessarily limited due to
designed-in bottlenecks. However, being aware of the concepts presented in this chapter, a
control engineer can effectively maximize the capabilities of a system which already exists
and must merely be controlled.
1 Application of force at the output can bring about a change of state at the input of a back-drivable system.
[Figure 13 appears here: panel (a) plots reference vs. actual joint trajectories (ϕ1, ϕ2); panel (b) plots loop time in msec against sample number.]
Fig. 13. (a) The robot, controlled hierarchically from a PC running Matlab and
communicating over bluetooth, with a local low level PID controller, is capable of tracking a
cyclical motion very rapidly. The timing of the loop for this plot is approximately 1msec,
with some variability in timing. (b) In this plot, the loop (which here includes contact
dynamics and a more complex simulation) was deliberately executed in a non-optimal
fashion in order to vary the timing of the sampling and updates, for demonstration purposes.
When such variability is handled properly, as described in this chapter, the system can be
built to remain stable.
A summary of the requirements is a 1kHz maximum bandwidth for the control of five fingers through one
bluetooth or USB connection, ability to apply approximately 1N of endpoint force, minimal
friction and endpoint inertia, resolution above one-tenth of a degree, and measures of force in
addition to position. These requirements pose a challenge as communicating position, force,
and current for each joint of each finger for five fingers at 1kHz becomes 9000 variables per
second, and at 12-bits of resolution, for five fingers, this becomes 540kbps. If the robot is
receiving, for example, no more than three variables for command of either position, velocity,
or force output, then the most data must be sent from the low to the higher levels, as the serial
interface used can simultaneously send and receive data.
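The arithmetic behind these figures can be made explicit. The sketch below assumes three joints per finger, which matches the nine values per finger quoted for the robot in Figure 14, and simply reproduces the 540 kbps total.

```python
# Bandwidth estimate for five fingers at 1 kHz with 12-bit values.
quantities = 3   # position, force, current per joint
joints = 3       # assumed joints per finger (9 values/finger, cf. Fig. 14)
rate_hz = 1000
fingers = 5
bits = 12

vars_per_finger_per_sec = quantities * joints * rate_hz   # 9000
total_bps = vars_per_finger_per_sec * bits * fingers      # 540000
print(total_bps / 1000, "kbps")                           # 540.0 kbps
```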
There have been a few biologically-based robots developed recently; however, they all suffer
from some limitation imposed by the lack of an integrative real-time approach such as discussed
in this chapter. It is not unusual for a well-designed hand mechanism to have a maximum
bandwidth of one-tenth of a Hz to move from open-to-closed. Consider attempting most daily
tasks where every motion could be performed in no shorter time than ten seconds. Dynamic
manipulation would indeed be impossible! We omit the discussion of the design itself here,
shown in Figure 6(a), instead focusing on how to maximize the real-time capabilities.
Figure 12 demonstrates a simulation based upon the real-time concepts discussed in previous
sections. This is an implementation of a real-time hierarchical optimal control of a number
of manipulators tracking a reference trajectory, and can be applied to the physical robots.
In this case the high level is a simple proportional-derivative controller tracking a ’rotate
continuously’ command. It is clear that the algorithm is capable of multiple contact breaks
required in order to allow continuous rotation.2
2 Further results and animations can be found at the author’s home page,
http://casimpkinsjr.radiantdolphinpress.com/pages/papersv2.html. In addition, this is the location
for all the author’s publications, with most available for download.
Fig. 14. Image sequence of a bio-mimetic robotic finger designed by the author undergoing
real-time control as described in this chapter. The finger is tracking a cyclical shape in
3-Dimensional space at approximately a 10Hz frequency. The robot is controlled wirelessly,
through bluetooth, and is communicating 9x12bit values for position, velocity, and coil
current at 1kHz, and assuming it is receiving the same amount of data as it is sending (as a
worst-case), required bandwidth is 9x12x1000x2 = 216,000 bits per second, or 216kbps. The
bandwidth of the connection is 2e6bps, or 2Mbps, which is nearly 10 times greater. In this
case the algorithm controlling the finger is not computing any contacts, and so executes in an
average of less than 1msec at the PC side.
This simulation runs entirely in real-time, with no offline components. With code executing
in Matlab R2008b for Macintosh upon an Intel Core 2 Duo processor at 2.33GHz with 2GB of
RAM, each loop takes on average 30msec, as measured from within Matlab (See Matlab help
on tic-toc), which is fast enough for real-time manipulation. Human beings have an update
rate perceptually of approximately 30Hz, and dynamically of only a few Hz (Nigg & Herzog,
1994). Additionally, it would be straightforward to optimize this code to execute far faster by
implementing it in C, for example.
The fully implemented control algorithm including hardware executes in approximately the
same time, as the hardware is provided the command computed by the simulation, and
the actual object states are fed back into the system to adjust for the difference between
simulation and reality. Essentially including the hardware in the loop in parallel with the
model adds a model reference control element to the hierarchical optimal control strategy. The
majority of the processor time is spent on the contact dynamics computations, and could be
further improved to minimize some calculations without changing the algorithm significantly.
Therefore when some manipulators are not in contact (as in Figure 14), computation time
is reduced dramatically. The 30Hz or even a 200Hz sample rate are well within the 1kHz
bandwidth of the robot manipulator and wireless communication system. It is notable that,
in Figure 13(b), the variability in timing is less than one sample, and so the state space
representations described in Section 3.1 hold. In the case that the manipulator is tracking a
trajectory, as in Figure 13(a), we see that this hierarchical strategy is capable of smooth rapid
control of the manipulator. If the current Matlab implementation were to be used, and the
system were required to perform manipulation at a faster rate than 30Hz, the concepts in
Section 3.3 could be more explicitly included in the control strategy of Section 3.4.1 by adding
a cost on the number of simultaneous manipulator control inputs. A lower dimensionality
means the algorithm will execute faster and be capable of a higher bandwidth, while still
being able to increase dimensionality on demand.
6. Conclusion
Modern robotic systems are becoming more and more complex, and require far more
sophisticated controllers to be designed and implemented than ever before in order to deal
with the demand for them to perform more complex tasks. These changes have raised a
new challenge for engineers and roboticists. This chapter has presented a first step toward
a unifying theory which combines and expands upon modern real-time design control
techniques. It broke the problem down into sub-problems which each have a significant
body of literature, while describing the fact that these seemingly separate fields are in fact
intertwined, are part of a whole, and complement each other. By being at least aware
of this method of organizing the problem, and the potential pitfalls associated with the
sub-problems, it is possible to improve the end result of a design by making revisions and
requirements which are more appropriate far earlier in the process than otherwise possible.
A number of results were presented for the control strategy discussed, as well as an
implementation on a real dynamic bio-mimetic robot. Currently, many of the Matlab functions
associated with this robot are being implemented in C and made available as MEX files which
Matlab can still make use of. This should significantly improve the execution time of the
simulation, as well as all other aspects of the computations. These techniques will be part
of the development of new skills for these dynamic manipulators. The two dimensional
representation has been extended to three dimensions, and will be part of a series of papers to
be released soon by the author.
These new challenges require a different overall strategy for problem solving than ever before
in engineered systems, but have the potential to truly revolutionize all aspects of human
experience. By developing and refining methodologies, we will transition from individual
designs which take years of iterative slow improvements with many dead ends to clear paths
with fewer roadblocks, and new tools to illuminate the way.
7. References
Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, John Wiley and Sons.
Åström, K. & Wittenmark, B. (1990). Computer controlled systems - theory and design, 2nd edn,
Prentice Hall, Englewood cliffs, NJ.
Bernstein, N. (1967). The coordination and regulation of movements, Pergamon, London.
Butterfaß, J. & et al. (2001). DLR-Hand II: Next generation of a dextrous robot hand, Proc. of the
IEEE International Conference on Robotics and Automation pp. 109–114.
Erez, T., Todorov, E. & Tassa, Y. (2011). Fast model predictive control for complex robotic
behavior, Robotics Science and Systems (RSS’11) .
Ermedahl, A. (2003). A modular tool architecture for worst-case execution time analysis, PhD thesis,
Uppsala University.
Ferziger, J. (1998). Numerical Methods for Engineering Application, 2nd edn, John Wiley
and Sons, New York, NY.
Franklin, G. F. & et al. (1994). Feedback control of dynamic systems, 3rd edn, Addison Wesley,
Reading, MA.
Groß, H.-G. (2001). A prediction system for evolutionary testability applied to dynamic
execution time analysis, Information and Software Technology (43): 855–862.
Hansen, J., Hissam, S. & Moreno, G. (2009). Statistical-based wcet estimation and validation,
9th International Workshop on Worst-Case Execution Time (WCET) Analysis .
Krstic, M., Kanellakopoulos, I. & Kokotovic, P. (1995). Nonlinear and Adaptive Control Design,
John Wiley and Sons.
Li, W. & Todorov, E. (2004). Iterative linear quadratic regulator design for nonlinear bio-logical
movement systems, Proc. of the 1st International Conference on Informatics in Control,
Automation and Robotics 1: 222–229.
Mardia, K., Kent, J. & Bibby, J. (1992). Multivariate Analysis, Academic Press.
Movellan, J., Tanaka, F., Fortenberry, B. & Aisaka, K. (2005). The rubi/qrio project: Origins,
principles, and first steps, IEEE International Conference on Development and Learning
1: 80 – 86.
Nigg, B. & Herzog, W. (1994). Biomechanics, John Wiley and Sons, New York.
Nilsson, J. (1998). Real-time control systems with delays, PhD thesis, Dept. of automatic control,
Lund institute of technology, Sweden.
Puschner, P. & Nossal, R. (1998). Testing the results of static worst-case execution-time
analysis, Proceedings of the 19th IEEE Real-Time Systems Symposium (RTSS98) Madrid.
Santello, M., Flanders, M. & Soechting, J. (1998). Postural hand synergies for tool use, Journal
of Neuroscience (18): 10105–15.
Simpkins, A. (2009). Exploratory studies of sensorimotor control and learning with system
identification and stochastic optimal control, PhD thesis, University of California, San
Diego, La Jolla, CA.
Simpkins, A., Kelley, M. & Todorov, E. (2011). Modular bio-mimetic robots that can interact
with the world the way we do, Proceedings of the IEEE International Conference on
Robotics and Automation .
Simpkins, A. & Todorov, E. (2011). Complex object manipulation with hierarchical optimal
control, IEEE Symposium of Adaptive Dynamic Programming and Reinforcement Learning,
IEEE Computer Society, Paris, France.
Stengel, R. (1986). Stochastic Optimal Control: Theory and Application, John Wiley and Sons.
Todorov, E. (2004). Optimality principles in sensorimotor control, Nature Neuroscience
7: 907–915.
Todorov, E. & Ghahramani, Z. (2003). Unsupervised learning of sensory-motor primitives,
IEEE, In Proceedings of the 25th Annual International Conference of the IEEE Engineering
in Medicine and Biology Society.
Todorov, E. & Ghahramani, Z. (2004). Analysis of synergies underlying complex hand
manipulation, IEEE, Proceedings of the 26th Annual International Conference of the IEEE
EMBS, San Francisco, CA, USA.
Todorov, E., Hu, C., Simpkins, A. & Movellan, J. (2010). Identification and control of a
pneumatic robot, Proc. of the fourth IEEE RAS / EMBS International Conference on
Biomedical Robotics and Biomechatronics .
Todorov, E. & Jordan, M. (2002). Optimal feedback control as a theory of motor coordination,
Nature Neuroscience 5(11): 1226–1235.
Todorov, E. & Jordan, M. (2003). A minimal intervention principle for coordinated movement,
Advances in Neural Information Processing Systems 15: 27–34.
Vandeweghe, J. M., M.Rogers, M.Weissert & Matsuoka, Y. (2004). The act hand: Design of the
skeletal structure, Proc. of the IEEE International Conference on Robotics and Automation.
Wilhelm, R., Engblom, J., Ermedahl, A., Holsti, N., Thesing, S., Whalley, D., Bernat, G.,
Ferdinand, C., Heckmann, R., Mitra, T., Mueller, F., Puaut, I., Puschner, P., Staschulat,
Robot Learning from Demonstration Using Predictive Sequence Learning
1. Introduction
In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and
evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from
a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive
controller for robots. PSL has previously been used as a method for LFD (Billing et al., 2010;
2011) but suffered from combinatorial explosion when applied to data with many dimensions,
such as high dimensional sensor and motor data. A new version of PSL, referred to as
Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL
is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast
to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a
significant performance improvement in comparison to the discrete version of the algorithm.
Applied to an LFD task in a simulated apartment environment, the robot is able to learn to
navigate to a specific location, starting from an unknown position in the apartment.
Learning from Demonstration is a well-established technique for teaching robots new
behaviors. One of the greatest challenges in LFD is to implement a learning algorithm
that allows the robot pupil to generalize a sequence of actions demonstrated by the teacher
such that the robot is able to perform the desired behavior in a dynamic environment. A
behavior may be any complex sequence of actions executed in relation to sensor data (Billing
& Hellström, 2010).
The LFD problem is often formulated as four questions, what-to-imitate, how-to-imitate,
when-to-imitate and who-to-imitate, which lead up to the larger problem of how to evaluate
an imitation (Alissandrakis et al., 2002). Large parts of the literature approach the learning
problem by trying to find the common features within a set of demonstrations of the same
behavior. A skill is generalized by exploiting statistical regularities among the demonstrations
(e.g. Calinon, 2009). This is reflected in the what-to-imitate question, originally introduced in a
classical work by Nehaniv & Dautenhahn (2000) and is in a longer form described as:
An action or sequence of actions is a successful component of imitation of a
particular action if it achieves the same subgoal as that action. An entire sequence
of actions is successful if it successively achieves each of a sequence of abstracted
subgoals.
The problem is difficult since a certain behavior can be imitated on many different abstraction
levels. Byrne & Russon (1998) identified two levels; the action-level imitation copying the
surface of the behavior and a program-level imitation copying the structure of the behavior. A
third level, the effect-level imitation, was introduced by Nehaniv & Dautenhahn (2001) in order
to better describe imitation between agents with dissimilar body structures. Demiris & Hayes
(1997) proposed three slightly different levels: 1) basic imitation with strong similarities to
the notion of action-level imitation, 2) functional imitation that best corresponds to effect-level
imitation and 3) abstract imitation that represents coordination based on the presumed internal
state of the agent rather than the observed behavior. Demiris and Hayes give the example of
making a sad face when someone is crying.
The necessity to consider the level of imitation in LFD becomes apparent when considering
two demonstrations that look very different considered as sequences of data, but that we as
humans still interpret as examples of the same behavior since they achieve similar results on
an abstract level. This would correspond to a functional or program-level imitation. In these
situations it is very difficult to find similarities between the demonstrations without providing
high level knowledge about the behavior, often leading to specialized systems directed to LFD
in limited domains.
A related problem is that two demonstrations of the same behavior may not have the same
length. If one demonstration takes a longer time than another, they cannot be directly compared
in order to find common features. Researchers have therefore used techniques to determine
the temporal alignment of demonstrations. One common technique is dynamic time warping
(Myers & Rabiner, 1981), which can be used to compensate for temporal differences in the data.
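As an illustration of this alignment step, the sketch below gives the classic dynamic programming formulation of dynamic time warping; it is the generic textbook algorithm, not a component of any specific system discussed in this chapter.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between sequences a and b: the minimum
    cumulative frame-to-frame distance over all monotone alignments, which
    compensates for demonstrations of different lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.atleast_1d(a[i - 1]) - np.atleast_1d(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```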
Behaviors can be demonstrated to a robot in many different ways. Argall et al. (2009)
outline four types of demonstrations: A direct recording of sensor stimuli, joint angles, etc.,
is referred to as an identity record mapping. In this case, the robot is often used during the
demonstration and controlled via teleoperation or by physically moving the robot’s limbs
(kinesthetic teaching). An external observation, e.g. a video recording of the teacher, is called
a non-identity record mapping. This type of demonstration poses a difficult sensing problem
of detecting how the teacher has moved, but also allows a much more flexible demonstration
setting. The teacher may have a body identical to that of the pupil (identity embodiment) or a
body with a different structure (non-identity embodiment). In the latter case, the demonstration
has to be transformed into corresponding actions using the body of the pupil, a difficult
problem known as the correspondence problem (Nehaniv & Dautenhahn, 2001). In this work
we focus on LFD via teleoperation. Sensor data and motor commands are in this setting
recorded while a human teacher demonstrates the desired behavior by tele-operating the
robot, producing demonstrations with identity in both record mapping and embodiment.
encoded relative object displacement and the relevant aspects of the behavior were in this way
extracted as the common features in the demonstration set. Calinon et al. (2007) developed
this approach by encoding the demonstration set using a mixture of Gaussian/Bernoulli
distributions. The Gaussian Mixture Model approach is attractive since the behavior is
divided into several distributions with different covariance, and different metrics can in this
way be selected for different parts of the demonstrated behavior. More recently, similar
encoding strategies have been evaluated for learning of a robot navigation task (de Rengervé
et al., 2010).
signal for each module. Based on the bottom-up signal, a top-down responsibility signal or
confidence value is computed and propagated to each module. The output of the system is
a combination of the actions produced by each inverse model, proportional to their current
responsibility. The responsibility signal also controls the learning rate of each module, such
that modules are only updated when their responsibility is high. In this way, modules are
tuned to a specific behavior or parts of a behavior. Since the prediction error of the forward
model is used as a measure of how well the specific module fits present circumstances, it can
be seen as a metric of imitation that is learnt together with the controller. The architecture
can be composed into a hierarchical system where modules are organized in layers, with the
lowest layer interacting with sensors and actuators. The bottom-up signal constitutes sensor
input for the layer above and actions produced by higher levels constitutes the top-down
responsibility signal.
One motivation for this architecture lies in an efficient division of labor between different
parts of the system. Each module can be said to operate with a specific temporal resolution.
Modules at the bottom layer are given the highest resolution while modules higher up in the
hierarchy have decreasing temporal resolution. State variables that change slowly compared
to a specific module’s resolution are ignored by that module and are instead handled by
modules higher up in the hierarchy. Slowly changing states that lead to high responsibility for
the module are referred to as the module’s context. In a similar fashion, variables that change
fast in comparison to the temporal resolution are handled lower in the hierarchy. This allows
each module to implement a controller where the behavior depends on relatively recent states.
Long temporal dependencies are modeled by switching between modules, which removes
the requirement for each model to capture these dependencies. Furthermore, updates of a
single behavior or parts of a behavior will only require updates of a few modules and will
not propagate changes to other modules. See Billing (2009) for a longer discussion on these
aspects of hierarchical architectures.
The HAMMER and MOSAIC architectures make few restrictions on what kind of controllers
each module should implement. We argue however, that modules should be semi-reactive,
meaning that action selection and predictions of sensor events should be based on recent
sensor and motor events. Strictly reactive modules are not desirable since each module must
be able to model any dependency shorter than the temporal resolution of modules in the layer
directly above.
The division of behavior into modules does however also produce a number of drawbacks.
The possibility for the system to share knowledge between behaviors is limited. Moreover,
the system has to combine actions produced by different modules, which may be difficult in
cases when more than one module receives high responsibility.
One architecture with similarities to HAMMER and MOSAIC able to share knowledge
between different behaviors is RNNPB (Tani et al., 2004). RNNPB is a recurrent neural network
with parametric bias (PB). Both input and output layer of the network contains sensor and
motor nodes as well as nodes with recurrent connections. In addition, the input layer is given
a set of extra nodes, representing the PB vector. The network is trained to minimize prediction
error, both by back-propagation and by changing the PB vector. The PB vector is however
updated slowly, such that it organizes into what could be seen as a context layer for the rest
of the network. In addition to giving the network the ability to represent different behaviors
that share knowledge, the PB vector can be used for behavior recognition.
Another architecture known as Brain Emulating Cognition and Control Architecture (BECCA)
(Rohrer & Hulet, 2006) heavily influenced our early work on the PSL algorithm. The focus
of BECCA is to capture the discrete episodic nature of many types of human motor behavior,
without introducing a priori knowledge into the system. BECCA was presented as a very
general reinforcement learning system, applicable to many types of learning and control
problems. One of the core elements of BECCA is the temporal difference (TD) algorithm
Sequence Learning (SL) (Rohrer, 2007). SL builds sequences of past events which are used
to predict future events, and can in contrast to other TD algorithms base its predictions on
many previous states.
Inspired by BECCA and specifically SL, we developed the PSL algorithm as a method for LFD
(Billing et al., 2010; 2011). PSL has many interesting properties seen as a learning algorithm
for robots. It is model free, meaning that it introduces very few assumptions into learning and
does not need any task specific configuration. PSL can be seen as a variable-order Markov
model. Starting out from a reactive (first order) model, PSL estimates transition probabilities
between discrete sensor and motor states. For states that do not show the Markov property, the
order is increased and PSL models the transition probability based on several past events.
In this way, PSL will progressively gain memory for parts of the behavior that cannot be
modeled in a reactive way. In theory, there is no limitation to the order of the state and hence
the length of the memory, but PSL is in practice unable to capture long temporal dependencies
due to combinatorial explosion.
PSL has been evaluated both as a controller (Billing et al., 2011) and as a method for
behavior recognition (Billing et al., 2010). Even though the evaluation overall generated
good results, PSL is subject to combinatorial explosion both when the number of sensors
and actuators increase, and when the demonstrated behavior requires modeling of long
temporal dependencies. PSL can however efficiently model short temporal dependencies in a
semi-reactive way and should thus be a good platform for implementing a hierarchical system
similar to the HAMMER and MOSAIC architectures.
In this chapter, we present and evaluate a new version of PSL based on Fuzzy Logic. While
keeping the core idea of the original PSL algorithm, the new version can handle continuous
and multi dimensional data in a better way. To distinguish between the two, the new
fuzzy version of the algorithm is denoted FPSL, whereas the previous discrete version is
denoted DPSL. A detailed description of FPSL is given in Section 2. An evaluation with
comparisons between the two algorithms is presented in Section 3, followed by a discussion
and conclusions in Section 4.
Υi is the event variable and Eh (e) is a fuzzy membership function returning a membership
value for a specific e. The right hand side Ēh is a membership function comprising expected
events at time t + 1. | h| denotes the length of h, i.e., the number of left-hand-side conditions
of the rule. Both E and Ē are implemented as standard cone membership functions with base
width ε (e.g. Klir & Yuan, 1995).
A set of hypotheses can be used to compute a prediction êt+1 given a sequence of past
sensory-motor events η, defined up to the current time t:
\eta = (e_1, e_2, \ldots, e_t) \tag{2}
The process of matching hypothesis to data is described in Section 2.1. The PSL learning
process, where hypotheses are generated from a sequence of data, is described in Section 2.2.
Finally, a discussion about parameters and configuration is found in Section 2.3.
The degree to which a hypothesis h matches the event sequence at time t is given by
\alpha_t(h) = \bigwedge_{i=0}^{|h|-1} E_i^h(e_{t-i}) \tag{3}
where ∧ is implemented as a min-function.
Hypotheses are grouped in fuzzy sets C whose membership value C (h) describes the
confidence of h at time t:
C(h) = \frac{\sum_{k=t_h}^{t} \alpha_k(h) \, \bar{E}^h(e_{k+1})}{\sum_{k=t_h}^{t} \alpha_k(h)} \tag{4}
where t_h is the creation time of h, or 1 if h existed prior to training. Each C represents a context and can
be used to implement a specific behavior or part of a behavior. The responsibility signal λt (C )
is used to control which behavior that is active at a specific time. The combined confidence
value C̃t (h) is a weighted sum over all C:
\tilde{C}_t(h) = \frac{\sum_C C(h) \, \lambda_t(C)}{\sum_C \lambda_t(C)} \tag{5}
C̃t can be seen as a fuzzy set representing the active context at time t. Hypotheses contribute
to a prediction in proportion to their membership in C̃ and the match set M̂. M̂ is defined in
three steps. First, the best matching hypotheses for each Ē are selected:
M = \{ h \mid \alpha(h) \geq \alpha(h') \text{ for all } h' \text{ with } \bar{E}^h = \bar{E}^{h'} \} \tag{6}
The longest h ∈ M for each RHS is selected:
\tilde{M} = \{ h \in M \mid |h| \geq |h'| \text{ for all } h' \in M \text{ with } \bar{E}^h = \bar{E}^{h'} \} \tag{7}
Finally, the match set M̂ is defined as:
\hat{M}(h) = \begin{cases} \alpha(h) \, \tilde{C}(h) & h \in \tilde{M} \\ 0 & \text{otherwise} \end{cases} \tag{8}
The aggregated prediction Ê(e_{t+1}) is then computed from M̂ using the Larsen method (e.g. Fullér, 1995).
The amount of entropy in M̂ also bears information about how reliable a specific prediction
is, referred to as the trust ĉ:
\hat{c}(\hat{M}) = \begin{cases} 0 & \hat{M} = \emptyset \\ \exp\left( \sum_h \hat{M}(h) \log_2 \hat{M}(h) \right) & \text{otherwise} \end{cases} \tag{11}
The trust is important since it allows a controller to evaluate when to rely on PSL, and when
to choose an alternate control method. The proportion of time steps in η for which ĉ > 0 and
PSL is able to produce a prediction is referred to as the coverage φ (η ):
\varphi(\eta) = \frac{1}{t} \sum_{i=1}^{t} \begin{cases} 1 & \hat{c}_i > 0 \\ 0 & \text{otherwise} \end{cases} \tag{12}
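To make the matching machinery concrete, the sketch below implements the cone membership function, the match of Equation 3, the trust of Equation 11, and the coverage of Equation 12. The hypothesis representation used here (a list of per-position membership functions, condition for the most recent event first) is an assumption made for illustration, not the chapter's data structure.

```python
import numpy as np

def cone(center, eps):
    """Standard cone membership function with base width eps."""
    return lambda e: max(0.0, 1.0 - abs(e - center) / (eps / 2.0))

def alpha(h, eta):
    """Eq. 3: min over the LHS conditions of h, applied to the most recent
    events of eta (h is assumed to be a list of membership functions,
    condition for time t first)."""
    return min(E(eta[-1 - i]) for i, E in enumerate(h))

def trust(M_hat):
    """Eq. 11: exp of the negative entropy of the match set; 0 if empty.
    M_hat maps hypotheses to their match-set membership."""
    if not M_hat:
        return 0.0
    return float(np.exp(sum(m * np.log2(m) for m in M_hat.values() if m > 0)))

def coverage(trusts):
    """Eq. 12: fraction of time steps for which a prediction was produced."""
    return sum(1 for c in trusts if c > 0) / len(trusts)
```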
3. Evaluation
Two tests were performed to evaluate the performance of FPSL and compare it to the previous
version. A simulated Robosoft Kompai robot (Robosoft, 2011) was used in the Microsoft RDS
simulation environment (Microsoft, 2011). The 270 degree laser scanner of the Kompai was
used as sensor data and the robot was controlled by setting linear and angular speeds.
Demonstrations were performed via tele-operation using a joypad, while sensor and motor
data were recorded with a temporal resolution of 20 Hz. The dimensionality of the laser
scanner was reduced to 20 dimensions using an average filter. Angular and linear speeds
were however fed directly into PSL.
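This preprocessing step is compact enough to sketch. The binning below reduces a raw scan to 20 averaged values; the exact beam count and bin boundaries used in the experiments are assumptions.

```python
import numpy as np

def reduce_scan(scan, n_out=20):
    """Average-filter a laser scan down to n_out values by averaging
    consecutive, roughly equal-sized groups of beams."""
    return np.array([b.mean() for b in np.array_split(np.asarray(scan, float), n_out)])

# e.g. a 270-degree scan with 541 beams -> 20 averaged range readings
# reduced = reduce_scan(raw_scan)
```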
The first test (Section 3.1) was designed to compare FPSL and DPSL as prediction algorithms,
using sensor data from the simulated robot. The second test (Section 3.2) demonstrates the
use of FPSL as a method for LFD.
Fig. 1. Simulation Environment (Microsoft, 2011) used for evaluations. Blue stars and yellow
dots represent starting positions used for demonstrations and test runs, respectively. The
green area marked with a G represents the target position. The white area under star 10 is the
robot.
For DPSL, laser data was discretized over 0.8 m for the finest resolution, up to 8 m for the
lowest resolution. Motor data was discretized over 0.06 m/s for the finest resolution up to
0.6 m/s for the lowest resolution. Similarly, 10 ε values were chosen for FPSL, corresponding
to a cone base ranging from 0.8 m to 8 m for laser data, and 0.06 m/s up to 0.6 m/s for motor
data. α̂ was given a constant value of 0.9, corresponding to an error tolerance of 10% of ε.
10 data files were used, each containing a demonstration where the teacher directed the robot
from a position in the apartment to the TV, see Figure 1. A rotating comparison was used,
where PSL was tested on one demonstration at a time and the other nine demonstrations
were used as training data. Prediction performance was measured in meters on laser range
data.
3.1.1 Results
The results from the evaluation are illustrated in Figure 2. While the ε value of FPSL cannot
directly be compared to the discretization level used for DPSL, the two parameters have
similar effect on coverage. Prediction error is only calculated on the proportion of the data for
which predictions are produced, and consequently, prediction error increases with coverage.
Fig. 2. Results from the prediction evaluation (Section 3.1). Upper plot shows prediction
errors for FPSL (solid line) and DPSL (dashed line). Lower plot shows coverage, i.e. the
proportion of samples for which the algorithm generated predictions, see Equation 12.
Vertical bars represent standard deviation.
Preliminary tests indicated that DPSL could not compete with FPSL in a more realistic setting. We therefore chose to not make a direct
controller comparison, but rather show the behavior of FPSL in an applied and reproducible
setting.
FPSL was trained on 10 demonstrations showing how to get to the TV from various places in
the apartment, see Figure 1. The performance of FPSL as a method for LFD was evaluated
by testing how often the robot reached the target position in front of the TV starting out
from 10 positions different from the ones used during training. FPSL controlled the robot
by continuously predicting the next sensory-motor event based on the sequence of passed
events. The motor part of the predicted element was sent to the robot controller. A standard
reactive obstacle avoidance controller was used as fallback in cases where FPSL did not find
any match with observed data. The task was considered successfully executed if the target
position was reached without hitting any walls or obstacles. The experiment was repeated ten
times, producing a total of 100 test runs.
3.2.1 Results
FPSL successfully reached the target position in front of the TV (the green area in Figure 1) in
79% of the test runs. In 68 runs, it stopped in front of the TV as demonstrated, but in 11 runs it
failed to stop even though it reached the target position. The distribution over the 10 starting
positions is illustrated in Figure 3.
4. Discussion
Applied as a robot controller, PSL is a semi-reactive generative model that produces both
actions and expected observations, based on recent sensory-motor events. We believe that this
approach to robot learning has great potential since the behavior can be learnt progressively
and previous knowledge contributes to the interpretation of new events. It is also general
in the sense that very little domain specific knowledge is introduced. Memories are stored
as sequences of sensory-motor events that in principle can represent any behavior. While
PSL can efficiently represent behaviors with short temporal dependencies, it is subject to
combinatorial explosion when the behavior requires representations over longer time spans.
We argue that the gradually extending memory of PSL, from being purely reactive to
containing representations over longer time when needed, provides a good bias in learning.
It will however make learning of behaviors that do require long temporal dependencies slow.
The fuzzy version of PSL presented in this work does not directly provide a solution to this
problem, but is one step towards integrating PSL in a hierarchical structure as discussed in
Section 1.2.
The seeds of FPSL came from the observation that a lot of training was required in order
to cover the state space of DPSL with satisfying resolution. A better tradeoff between high
precision in prediction and coverage would make PSL a more competitive alternative for real
world LFD scenarios. FPSL was designed without sacrificing the strong attributes of the
original PSL algorithm, such as the model free design, few parameters, and progressively
growing representations.
Expressing PSL with Fuzzy Logic is in many ways a natural extension and generalization
of the discrete algorithm. By using a discrete uniform membership function E and a max
operator for defuzzification, FPSL becomes very similar to DPSL. Even though the processing
of continuous values does add significant processing requirements in comparison to DPSL,
FPSL can still be efficiently implemented as a fuzzy rule controller.
Fig. 3. Results from test runs in simulated environment (Section 3.2). Each bar corresponds to
one starting position, see Figure 1. The green part of the bar represents the number of successful
runs where the robot reached and stopped at the target position in front of the TV. The
yellow part represents runs when the robot successfully reached the target, but did not stop.
The test was executed 10 times from each starting position.
The evaluation shows that FPSL produces significantly smaller prediction errors in relation to
the coverage than DPSL (Section 3.1). This was expected since FPSL can be trained to produce
small prediction errors by keeping a high precision constant α̂, while the coverage is still kept
high by using a large ε. In contrast, when using DPSL, one must choose between a small
prediction error with low coverage or a high coverage at the price of an increased prediction
error. As can be seen in Figure 2, FPSL is also affected by the precision/coverage tradeoff, but
not nearly as much as DPSL. Furthermore, the number of generated hypotheses will increase
with α̂, which also has a positive effect on coverage for multidimensional data.
While FPSL performs much better than DPSL on large and multidimensional state spaces, it
should not be seen as a general solution to the dimensionality problem. The increased number
of hypotheses results in increased processing and memory requirements. Furthermore, FPSL
is still not able to ignore uncorrelated dimensions in data, making it subject to the curse
of dimensionality. One potential solution is to modify the size of membership functions in
relation to the information content of the dimension. However, initial tests did not produce
satisfying results and further experiments in this direction were postponed to future work.
We found the results from the controller evaluation very promising (Section 3.2). The
application environment has been scaled up significantly in comparison to previous work
(Billing et al., 2011) and we are now able to perform learning in a fairly realistic setting.
When observing these results one should remember that PSL does not solve a spatial task.
There is no position sensor or internal map of the environment and the robot is still able to
navigate from almost any position in the environment, to a specific target location. The goal
is better described as an attractor in a dynamic system, where the robot in interaction with the
environment finally reaches a stable state in front of the TV (Figure 1).
An interesting observation is that it is often difficult to predict how well PSL will be able
to handle a specific situation. For example, starting position 6 was not passed during any
demonstration, but PSL still managed to control the robot such that it reached the target in 7
out of 10 test runs. On the other hand, positions 5 and 10 produced worse results than expected.
Even though these starting positions were spatially close to several positions passed during
the demonstrations, the directions at which the robot reached these positions were different,
producing different laser scans, and PSL could consequently not find a suitable match. In
some of the cases, inappropriate matches were found and the robot turned in the wrong
direction. In other cases, no match at all was found causing the robot to fall back on the
reactive controller for a longer period and usually getting stuck in a corner.
The amount of training data used in this evaluation was fairly small. Only one demonstration
from each starting position was performed. One reason why FPSL is able to solve the task
despite the small amount of training is that all data potentially contribute to every action
selection, independently of where in the demonstration it originally took place. Techniques
that represent the whole behavior as a sequence with variations, typically require more
training since information from the beginning of the demonstration does not contribute to
action selection in other parts of the behavior. PSL does not rely on common features within
a set of demonstrations and consequently does not require that demonstrations are compared
or temporally aligned, see Section 1. In its current design, PSL is of course unable to perform
a program-level imitation since it always relies on sensory-motor events, but it does not suffer
from a large diversity in the demonstration set as long as the recent sensory-motor events bear
the necessary information to select a suitable action.
5. References
Alissandrakis, A., Nehaniv, C. L. & Dautenhahn, K. (2002). Imitation With ALICE: Learning
to Imitate Corresponding Actions Across Dissimilar Embodiments, IEEE Transactions
on Systems, Man and Cybernetics, Part A: Systems and Humans 32: 482–496.
Alissandrakis, A., Nehaniv, C. L., Dautenhahn, K. & Saunders, J. (2005). An Approach
for Programming Robots by Demonstration: Generalization Across Different Initial
Configurations of Manipulated Objects, Proceedings of 2005 International Symposium on
Computational Intelligence in Robotics and Automation, IEEE, Espoo, Finland, pp. 61–66.
Argall, B. D., Chernova, S., Veloso, M. & Browning, B. (2009). A survey of robot learning from
demonstration, Robotics and Autonomous Systems 57(5): 469–483.
Arkin, R. C. (1998). Behaviour-Based Robotics, MIT Press.
Billard, A., Epars, Y., Cheng, G. & Schaal, S. (2003). Discovering imitation strategies through
categorization of multi-dimensional data, Proceedings of the 2003 IEEE/RSJ Intl.
Conference on Intelligent Robots and Systems, Vol. 3, Las Vegas, Nevada, pp. 2398–2403
vol.3.
Billing, E. A. (2009). Cognition Reversed - Robot Learning from Demonstration, Lic. thesis, Umeå
University, Department of Computing Science, Umeå, Sweden.
Billing, E. A. (2011). www.cognitionreversed.com.
Billing, E. A. & Hellström, T. (2010). A Formalism for Learning from Demonstration, Paladyn:
Journal of Behavioral Robotics 1(1): 1–13.
Billing, E. A., Hellström, T. & Janlert, L. E. (2010). Behavior Recognition for Learning
from Demonstration, Proceedings of IEEE International Conference on Robotics and
Automation, Anchorage, Alaska.
Billing, E. A., Hellström, T. & Janlert, L. E. (2011). Predictive learning from demonstration, in
J. Filipe, A. Fred & B. Sharp (eds), Agents and Artificial Intelligence, Springer Verlag,
Berlin, pp. 186–200.
Brass, M., Bekkering, H., Wohlschläger, A. & Prinz, W. (2000). Compatibility between
observed and executed finger movements: comparing symbolic, spatial, and
imitative cues., Brain and cognition 44(2): 124–43.
Brooks, R. A. (1986). A Robust Layered Control System For A Mobile Robot, IEEE Journal of
Robotics and Automation 2(1): 14–23.
Brooks, R. A. (1991). New Approaches to Robotics, Science 253(13): 1227–1232.
Byrne, R. W. & Russon, A. E. (1998). Learning by Imitation: a Hierarchical Approach, The
Journal of Behavioral and Brain Sciences 16(3).
Calinon, S. (2009). Robot Programming by Demonstration - A Probabilistic Approach, EFPL Press.
Calinon, S., Guenter, F. & Billard, A. (2007). On Learning, Representing and Generalizing a
Task in a Humanoid Robot, IEEE Transactions on Systems, Man and Cybernetics, Part B.
Special issue on robot learning by observation, demonstration and imitation 37(2): 286–298.
de Rengervé, A., D’halluin, F., Andry, P., Gaussier, P. & Billard, A. (2010). A study
of two complementary encoding strategies based on learning by demonstration
for autonomous navigation task, Proceedings of the Tenth International Conference on
Epigenetic Robotics, Lund, Sweden.
Demiris, J. & Hayes, G. (1997). Do robots ape?, Proceedings of the AAAI Fall Symposium on
Socially Intelligent Agents, pp. 28–31.
Demiris, J. & Hayes, G. R. (2002). Imitation as a dual-route process featuring predictive and learning
components: a biologically plausible computational model, MIT Press, pp. 327–361.
Demiris, Y. (1999). Movement Imitation Mechanisms in Robots and Humans, PhD thesis,
University of Edinburgh.
1. Introduction
In biology, biarticular muscles are muscles that pass two joints. One of the biarticular
muscles that passes the knee and the ankle joints is the gastrocnemius muscle (see Fig.
1). Functions of the biarticular muscles during human movement have been studied
extensively by many researchers but these functions still remain unclear. One of such
functions is the transportation of mechanical energy from proximal to distal points (Ingen
Schenau, 1989). It is believed that this transportation causes an effective transformation of
rotational motion of body segments into translation of the body centre of gravity. The
mechanism by which this is achieved is the timely activation of biarticular muscles before
the end of the explosive phase of the movement. The activation of the biarticular muscle
prior to the end of the explosive phase of the motion enables the transportation of the power
generated by the proximal joint extensors from the proximal joint to the distal joint. This
transfer of mechanical energy can be explained using the following example of the
gastrocnemius muscle activation. During the push-off phase of the human jump, the knee
joint is rapidly extended as a result of the positive work done by the knee extensor muscles.
If the biarticular gastrocnemius muscle contracts isometrically (its length does not change),
the additional mechanical work is done at the ankle joint because of the gastrocnemius
muscle, which contributes no mechanical work by itself. A part of the energy generated by
the knee extensors appears as mechanical work at the ankle joint and the height of the jump
is significantly increased. This is because, as the jump proceeds and the knee straightens, the
angular position changes of the knee have progressively less effect on vertical velocity of the
jumper centre of gravity. By gastrocnemius muscle activation, a rapid extension of the foot is
produced. This extension has a greater effect on the vertical velocity than the extension of
the almost straightened knee. The energy is more effectively translated into vertical velocity
and a greater height of the jump is achieved. However, the timing of the gastrocnemius
muscle activation is critical to obtain a maximum effect. This was demonstrated by an
articulated physical model of the vertical jump by Bobbert et al. (1986).
Besides biarticularity, the gastrocnemius muscle has one more interesting feature. It is
connected to the foot by an elastic tendon (see Fig. 1). The elasticity in the muscle fibres and
tendons plays an important role in enhancing the effectiveness and the efficiency of human
performance. An enhanced performance of human motion has been most effectively
demonstrated for jumping and running (Cavagna, 1970; Bobbert et al., 1996; Shorten, 1985;
Hobara, 2008). An important feature of elastic tissues is the ability to store elastic energy
when stretched and to return this energy afterwards as mechanical work (Asmussen and
Bonde-Petersen, 1974). Beside this feature, oscillatory movements performed at the natural
frequency of the muscle-tendon complex could maximize the performance. A countermovement
vertical jump can be treated as one period of an oscillatory movement, and from this point of
view the natural frequency, as well as the parameters that define the natural frequency, can be
determined.
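Under the common simplifying assumption that the muscle-tendon complex acts as a linear spring of stiffness k loaded by a moving mass m (a textbook model, not a result of this chapter), this natural frequency takes the familiar form

f_n = \frac{1}{2\pi} \sqrt{\frac{k}{m}}

so the stiffness values obtained from free-vibration measurements, as used later in this section, directly determine the oscillation frequency at which a countermovement jump would be expected to be most effective.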
Lower extremities of today’s humanoid robots are mostly serial mechanisms with simple
rotational joints that are driven directly or indirectly by electrical servo drives. Such a design
of the humanoid robot mechanism allows only rotational motion in the joints to occur. This means
that translations of the robot’s centre of gravity are solely a result of the transformation of
rotations in joints into translations of the robot centre of gravity. Especially in ballistic
movements such as fast running or jumping where the robot centre of gravity is to be
accelerated from low or zero velocity to a velocity as high as possible, this transformation is
handicapped. The transfer of the angular motion of the lower extremity segments to the
desired translational motion of the robot centre of gravity is less effective the more the joints
are extended. When the joint is fully extended, the effect of this joint on the translational
motion of the robot centre of gravity in a certain direction equals zero. Besides, the motion
of the segments should decelerate to zero prior to the full extension to prevent a possible
damaging hyperextension. Where relatively large segments which may contain considerable
amounts of rotational energy are involved, high power is necessary to decelerate the angular
motion.
Fig. 1. The biarticular gastrocnemius muscle spanning the knee and ankle joints. It acts as a
knee flexor and an ankle extensor. The gastrocnemius muscle is connected to the foot by an
elastic tendon
The purpose of this chapter is, first, to make an extensive review of the research on the role
of biarticular muscles in human motion and to present this biomechanical body of
knowledge from an engineering perspective, and second, to describe the research results
where biarticular actuation was used to achieve energy-efficient and high-performance
motion of a humanoid robotic leg. The chapter is based mostly on the author's previous
journal publications and on as yet unpublished research results.
The prime criterion for vertical jump efficiency is the height of the jump, which depends on
the speed of the jumper's centre of gravity (COG) at the moment when the feet detach from
the ground. Besides maintaining the balance, the task of the muscles during the push-off
phase of the jump is to accelerate the body's COG up in the vertical direction to the extended
body position. During the push-off phase of the jump, the jumper's centre of gravity must be
above the supporting polygon that is formed by the feet (Babič et al., 2001). In contrast to
humans, today's humanoid robots are mostly unable to perform fast movements such as the
vertical jump. They can mostly perform only slow and statically stable movements that do
not imitate human motion. Besides, these slow and statically stable movements are energy
inefficient. An understanding of the anatomy and biomechanics of the human body reveals
that, apart from the shape, the majority of today's humanoid robots and human bodies do
not have many properties in common. To achieve a better imitation of human motion and
the ability to perform fast movements such as the vertical jump or running, other properties
and particularities, besides the shape of the body, should be considered in the design of
humanoid robots.
In this section we will first analyse the human vertical jump and show that for each and
every subject there exists an optimal triceps surae muscle-tendon complex stiffness that
ensures the maximal possible height of the vertical jump. We define the influence of the m.
gastrocnemius activation timing and the m. gastrocnemius and Achilles tendon stiffness on
the height of the vertical jump and establish the methodology for analysis and evaluation of
the vertical jump. We monitored kinematics, dynamics and m. gastrocnemius electrical
activity during the maximum height countermovement jump of human subjects and
measured viscoelastic properties of the m. gastrocnemius and Achilles tendon using the
free-vibration technique. Based on the findings of the biomechanical study of the human
vertical jump we performed a simulation study of the humanoid robot vertical jump. As a
result we propose a new human inspired structure of the lower extremity mechanism by
which a humanoid robot would be able to efficiently perform fast movements such as
running and jumping.
The contact between the foot and the ground is modelled as a rotational joint between the
tip of the foot and the ground. A model whose foot is connected to the ground by a
rotational joint is applicable only for the push-off and landing phases of the
vertical jump and is not applicable for the flight. As the motion of the COG during the flight
is simply described and depends only on the speed vector of the COG just prior to the take-
off, this simplification does not present any limitations.
Fig. 2 shows the planar model of the musculoskeletal system, composed of four rigid bodies
that represent the foot, shank, thigh and trunk together with the head and both upper
extremities. The origin of the base coordinate system is in the centre of the virtual joint that
connects the foot with the ground.
$$\mathbf{f} = k \cdot (\mathbf{BC} - \mathbf{BC}_0) - b \cdot \dot{\mathbf{BC}} \,, \qquad (1)$$
where k represents the stiffness of the m. gastrocnemius and Achilles tendon connected in
series, b is the damping coefficient, BC is the vector between the insertions of the biarticular
link on the foot and thigh, and BC0 is the vector BC at the moment of the gastrocnemius
muscle activation. The force f in the biarticular link causes a torque in the ankle joint
$$\mathbf{Q}_{bl,2} = -\mathbf{r}_B \times \mathbf{f} \,, \qquad (2)$$
where rB is the moment arm vector from the centre of the ankle joint to the insertion of the
biarticular link on the foot, and a torque in the knee joint
$$\mathbf{Q}_{bl,3} = \mathbf{r}_C \times \mathbf{f} \,, \qquad (3)$$
where rC is the moment arm vector from the centre of the knee joint to the insertion of the
biarticular link on the thigh.
Motion of the musculoskeletal system is described by a system of dynamic equations of motion
$$\mathbf{H}(q)\,\ddot{q} + \mathbf{h}(q,\dot{q}) + \mathbf{G}(q) = \mathbf{Q}_{mov} + \mathbf{Q}_{pas} + \mathbf{Q}_{bl} \,, \qquad (4)$$
where Qpas is the vector of joint torques caused by the passive constraints in the joints and
Qbl is the vector of joint torques caused by the biarticular link. Qmov is the vector of joint
torques caused by muscles and represents the input to the direct dynamic model of the
musculoskeletal system. The output from the direct dynamic model is the vector of joint
displacements q. We determined the parameters $\mathbf{H}(q)$, $\mathbf{h}(q,\dot{q})$ and $\mathbf{G}(q)$ of (4) using the
equations published by Asada and Slotine (1986). The simulation diagram of the direct
dynamic model of the musculoskeletal system is shown in the shaded rectangle of Fig. 3.
1. Motion controller assures that the perpendicular projection of the body's COG on the
ground remains at the origin of the base coordinate system:
$$x_T^d(t) = 0 \,, \qquad (5)$$
where $x_T^d(t)$ is the desired distance of the perpendicular projection of the body's COG
on the ground from the origin of the base coordinate system at time t.
2. Motion controller assures the desired vertical movement of the body's COG relative to
the ankle $y_{TA}^d(t)$. By controlling the movement of the body's COG relative to the ankle,
we excluded the influence of the biarticular link on the motion $y_{TA}^d(t)$. Therefore the
parameters of the biarticular link can be varied and optimized for a certain desired
motion of the body's COG relative to the ankle.
3. Motion controller assures a constant angle of the foot relative to the ground q1 before the
biarticular link activation occurs. Thus the number of degrees of freedom of the model
remains constant during the push-off phase of the vertical jump.
4. At the moment when the biarticular link activates, the motion controller sets the torque in
the ankle joint Q2 to zero and thus enables a free motion of the foot relative to the
ground. By setting the torque in the ankle joint to zero, the motion in the ankle joint is
only a function of the motion in the knee joint that is transmitted to the ankle by the
biarticular link.
The motion controller that considers requirement (5) and enables the desired vertical motion
of the COG relative to the ankle $y_{TA}^d(t)$ in the base coordinate system is
$$\ddot{\mathbf{x}}_{Tc} = \begin{bmatrix} 0 \\ \ddot{y}_{TA}^d + \ddot{y}_A \end{bmatrix} + k_p \left( \begin{bmatrix} 0 \\ y_{TA}^d + y_A \end{bmatrix} - \mathbf{x}_T \right) + k_d \left( \begin{bmatrix} 0 \\ \dot{y}_{TA}^d + \dot{y}_A \end{bmatrix} - \dot{\mathbf{x}}_T \right) \,, \qquad (6)$$
where kp and kd are the coefficients of the PD controller, $\ddot{\mathbf{x}}_{Tc}$ is the vector of the control
acceleration of the COG in the base coordinate system, yA is the current height of the ankle
joint relative to the ground, and $\dot{\mathbf{x}}_T$, $\mathbf{x}_T$ are the vectors of the current speed and position
of the COG in the base coordinate system.
The relation between the vector of the control speed of the COG in the base coordinate
system $\dot{\mathbf{x}}_{Tc}$ and the vector of the control angular velocities in the joints $\dot{\mathbf{q}}_c$ is
$$\dot{\mathbf{x}}_{Tc} = \mathbf{J}_T\,\dot{\mathbf{q}}_c \,, \qquad (7)$$
where JT is the Jacobian matrix of the COG speed in the base coordinate system. Equation (7)
represents an underdetermined system of equations. From the requirement that the motion
controller assures a constant angle of the foot relative to the ground q1 before the biarticular
link activation occurs, follows the condition
$$\dot{q}_{c1} = 0 \,. \qquad (8)$$
An additional condition that removes the underdetermination of (7) is the relationship
between the knee and hip joint angles
$$\dot{q}_{c4} = n \cdot \dot{q}_{c3} \,. \qquad (9)$$
Considering conditions (8) and (9), equation (7) can be rewritten as
$$\dot{\mathbf{x}}_{Tc} = \mathbf{J}'_T\,\dot{\mathbf{q}}'_c \,, \qquad (10)$$
where $\mathbf{J}'_T$ is a new Jacobian matrix of the centre of gravity speed in the base coordinate
system. Differentiation of (10) with respect to time yields
$$\ddot{\mathbf{q}}'_c = \begin{bmatrix} \ddot{q}_{c2} \\ \ddot{q}_{c3} \end{bmatrix} = \mathbf{J}'^{-1}_T \left( \ddot{\mathbf{x}}_{Tc} - \dot{\mathbf{J}}'_T\,\dot{\mathbf{q}}' \right) \,, \qquad (11)$$
where
$$\dot{\mathbf{q}}' = \begin{bmatrix} \dot{q}_2 \\ \dot{q}_3 \end{bmatrix} \,. \qquad (12)$$
On the basis of conditions (8) and (9) and relation (11) we define the control angular
accelerations in all four joints
$$\ddot{\mathbf{q}}_c = \begin{bmatrix} 0 \\ \ddot{q}_{c2} \\ \ddot{q}_{c3} \\ n\,\ddot{q}_{c3} \end{bmatrix} \,. \qquad (13)$$
By substituting (13) into the system of dynamic equations of motion (4) we get the control
torques in the joints Qmov that are needed to control the model of the musculoskeletal system
$$\mathbf{Q}_{mov} = \mathbf{H}(q)\,\ddot{\mathbf{q}}_c + \mathbf{h}(q,\dot{q}) + \mathbf{G}(q) \,. \qquad (14)$$
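As an illustration of one control step implementing (6)–(14), consider the following minimal Python sketch; the gains, the reduced Jacobian and the dynamics terms are hypothetical placeholders assumed to be given, not values from the study.

```python
import numpy as np

def control_step(y_d, x_T, xdot_T, Jp, Jp_dot, qdot_p, H, h, G,
                 kp=100.0, kd=20.0, n=1.0):
    """One step of the vertical-jump motion controller, eqs. (6)-(14).

    y_d     : (y, ydot, yddot) -- desired y_TA^d + y_A and its derivatives
    x_T     : current COG position in the base frame (2-vector)
    Jp      : reduced 2x2 Jacobian J'_T; Jp_dot is its time derivative
    qdot_p  : reduced joint velocities (dq2, dq3), eq. (12)
    H, h, G : dynamics terms of eq. (4); kp, kd, n are example values
    """
    y, ydot, yddot = y_d
    # Eq. (6): control acceleration of the COG; the x-component of the
    # desired COG position is held at zero, following requirement (5)
    xddot_Tc = (np.array([0.0, yddot])
                + kp * (np.array([0.0, y]) - x_T)
                + kd * (np.array([0.0, ydot]) - xdot_T))
    # Eq. (11): accelerations of the two independent joints q2, q3
    qddot_p = np.linalg.solve(Jp, xddot_Tc - Jp_dot @ qdot_p)
    # Eq. (13): full control acceleration vector via conditions (8) and (9)
    qddot_c = np.array([0.0, qddot_p[0], qddot_p[1], n * qddot_p[1]])
    # Eq. (14): control torques from the inverse dynamics
    return H @ qddot_c + h + G
```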
The direct dynamic model of the musculoskeletal system together with the motion controller
composes the biorobotic model of the vertical jump. The simulation diagram for the simulation
of the vertical jump is shown in Fig. 3. Inputs into the simulation diagram are the desired
trajectory of the COG relative to the ankle $y_{TA}^d$ and a signal for biarticular link activation a.
Output from the simulation diagram is the vector of the body's COG position xT.
Fig. 3. Simulation diagram for the simulation of the vertical jump. Inputs into the simulation
diagram are the desired trajectory of the COG relative to the ankle and a signal for
biarticular link activation. Output from the simulation diagram is the vector of the jumper's
COG position
$$y_T^m(t) = \frac{1}{m}\int_0^t\!\!\int_0^t F(t)\, dt\, dt + y_T^m(0) \,, \qquad (15)$$
where m is the body mass of the subject and $y_T^m(0)$ is the initial height of the body's COG
relative to the ground. To determine the vertical position of the body's COG relative to the
ankle at time t, $y_{TA}^m(t)$, we measured the motion of the ankle during the vertical jump by
means of the contactless motion capture system (eMotion Smart). The vertical position of the
body's COG relative to the ankle is therefore
$$y_{TA}^m(t) = y_T^m(t) - y_A^m(t) \,. \qquad (16)$$
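Numerically, (15) and (16) amount to a cumulative double integration of the measured force; a minimal sketch, assuming F already holds the net vertical force (ground reaction minus gravity) in newtons:

```python
import numpy as np

def cog_height(F, m, y0, fs):
    """Eq. (15): COG height from the net vertical force F(t) sampled at
    fs (Hz), with body mass m (kg) and initial COG height y0 (m)."""
    dt = 1.0 / fs
    v = np.cumsum(F / m) * dt        # first time integral: vertical velocity
    return np.cumsum(v) * dt + y0    # second time integral: height

# Eq. (16): subtract the measured ankle trajectory y_A(t), sample by sample
# y_TA = cog_height(F, m, y0, fs) - y_A
```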
2.2.4 Electromyography
The activity of the m. gastrocnemius was recorded using a pair of surface electrodes placed
over the medial head of the m. gastrocnemius. Analogue EMG signals were amplified and
filtered with a band-pass filter with cut-off frequencies at 1 Hz and 200 Hz. The signals were
then digitized at a 1000 Hz sampling frequency and full-wave rectified. To reduce the
variability of the sampled EMG signal, the signal was then smoothed with a digital low-pass
Butterworth filter. Finally, the EMG signal was normalized with respect to the maximum
value attained during the vertical jump.
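The described conditioning chain can be sketched as follows; the filter orders and the 5 Hz smoothing cut-off are illustrative assumptions, while the 1–200 Hz band-pass and the 1000 Hz sampling rate follow the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def process_emg(emg, fs=1000.0):
    """Band-pass filter, rectify, smooth and normalize one raw EMG record."""
    b, a = butter(2, [1.0, 200.0], btype='bandpass', fs=fs)
    x = filtfilt(b, a, emg)              # 1-200 Hz band-pass, zero phase
    x = np.abs(x)                        # full-wave rectification
    b, a = butter(2, 5.0, btype='lowpass', fs=fs)
    x = filtfilt(b, a, x)                # low-pass Butterworth smoothing
    return x / np.max(x)                 # normalize to the jump maximum
```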
2.2.7 Results
To determine the optimal timing of the biarticular link activation that results in the highest
vertical jump, a series of countermovement vertical jump simulations was performed for
each subject. Each simulation was performed with a different timing of the biarticular link
activation.
All subjects activated their m. gastrocnemius slightly before the optimal moment determined
by means of simulations. On average, the difference between the optimal and measured
knee angle at the moment the m. gastrocnemius was activated was 6.4 ± 2.22°. Because the
dynamic model of the musculoskeletal system does not include the monoarticular muscle
soleus, the measured heights of the jumps were higher than the jump heights determined
with the simulations by 4.3 ± 1.12 % on average. The primary motive for omitting the m.
soleus from the modelling is that we wanted to control the motion of the body's COG
relative to the ankle so that the parameters of the biarticular link could be varied and
optimized for a certain measured motion of the subject's COG relative to the ankle. If the
dynamic model of the musculoskeletal system included the m. soleus, the motion of
the ankle would be fully determined by the force of the m. soleus and we would not be able
to control it with regard to the desired motion of the body's COG relative to the ankle.
Moreover, if the dynamic model of the musculoskeletal system included the m. soleus, the
force produced by the m. soleus would be another input into the biomechanical model of the
vertical jump and we would have to accurately measure the force produced by the m. soleus
of subjects performing the vertical jump. An additional cause for the differences between the
measured heights of the jumps and the jump heights determined by means of simulations
can be the simplified model of the foot, which we modelled as one rigid body. The arch of
the foot is linked up by elastic ligaments that can store elastic energy when deformed and
later reutilize it as mechanical work (Alexander, 1988). Ker et al. (1987) measured the
deformation of the foot during running and determined the amount of energy stored
during the deformation. They showed that the elasticity of the foot significantly contributes
to the efficiency of human movement. To be able to compare the measurements and the
simulation results, we corrected the simulation results by adding the contribution of the m.
soleus to the height of the vertical jump. Such corrected heights of the vertical jumps at the
optimal moments of m. gastrocnemius activation are insignificantly larger than the
measured heights of the vertical jumps for all subjects. On average the height difference is
only 1.6 ± 0.74 cm.
To determine the optimal stiffness of the Achilles tendon with regard to the height of the
vertical jump, a series of countermovement vertical jump simulations was performed, each
with a different stiffness of the Achilles tendon. The optimal timing of the biarticular link
was determined for each stiffness of the Achilles tendon as described in the previous
paragraph. The measured values of the Achilles tendon stiffness for all subjects were always
higher than the optimal values determined by means of simulations. By considering the
elastic properties of the arch of the foot, we can assume that the optimal values of the
Achilles tendon stiffness would increase and therefore the differences between the optimal
and measured values would decrease.
Results of the measurements, simulations and optimizations of the human vertical jumps
are presented in Fig. 4. Subjects are arranged with regard to the ratio between the optimal
stiffness of the Achilles tendon determined by means of simulations and the measured
stiffness of the Achilles tendon. If we want to evaluate the contribution of the viscoelastic
properties to the height of the jump, ranking the subjects with regard to the height of the
jump is not appropriate, because the main parameter that influences the height of the
vertical jump is the power generated by the muscles during the push-off phase of the jump.
The elasticity of the Achilles tendon has a secondary influence on the height of the jump.
Results show that for the same power generated by an individual subject during the push-
off phase of the jump, the height of the vertical jump varies with the Achilles tendon
stiffness. Therefore the appropriate criterion for ranking the subjects has to consider the
elasticity of the Achilles tendon.
Fig. 4. Results of the measurements, simulations and optimizations of the vertical jumps.
Whole bars represent the maximal heights of the vertical jumps achieved with the optimal
values of the Achilles tendon stiffness and determined by means of simulations. The dark
shaded parts of the bars represent the measured heights of the vertical jumps, while the light
shaded tops represent the differences between the maximal and measured heights of the
vertical jumps. The differences are also shown as percentages relative to the measured
heights. The bold line represents the ratio between the optimal stiffness of the Achilles
tendon determined by means of simulations and the measured stiffness of the Achilles
tendon for each subject
system. Besides, we describe the controller of the robot and show the results of the vertical
jump experiments performed with the hardware prototype of the robot.
3.1.1 Construction
The real robotic system was constructed on the basis of the CAD model built in I-DEAS. The
real robotic system and its comparison with the CAD model are shown in Fig. 5. All
mechanical parts are made of aluminium except the gears and axes, which are made of
titanium. The total weight of the robot is approximately 4.4 kg and its height is 972 mm.
To better understand the biarticular actuation mechanism, consider a jumping motion
starting from a squat position. Until the instant of activation, the biarticular actuation
mechanism has no effect on the system, essentially acting as a passive prismatic joint. When
the biarticular actuation mechanism is activated, the robot essentially becomes redundantly
actuated by the biarticular force. Immediately after the robot pushes off the ground, the
biarticular actuator is deactivated. This sequence must be executed over a short time in an
optimal fashion to maximize jump performance.
The contact between the foot and the ground is modeled as a rotational hinge joint between
the foot tip and the ground. With the assumption that the foot tip does not slip and does not
bounce, the robot has four degrees of freedom during stance. While in the air, the robot has
two more degrees of freedom. The generalized coordinates that describe the state of the
robot and its motion are denoted by qi. The dynamic model of the robot is
$$\mathbf{H}(q)\,\ddot{q} + \mathbf{C}(q,\dot{q}) + \mathbf{g}(q) = \tau \,, \qquad (17)$$
where H, C and g denote the inertia matrix, the vector of Coriolis and centrifugal forces
and the vector of gravity forces, respectively. τ is the vector of the joint torques and q is
the vector of the joint positions. The dynamic equation does not consider the friction in
the gears; to compensate for it, an additional controller was implemented in the control
strategy.
in the controller. The output of the inverse dynamics block depends on whether the robot is
in contact with the ground or in the air.
To get the requested output of the block, the following SD/FAST functions were used: sdpos
for the position, sdrel2cart to calculate the Jacobian matrix, and sdmassmat, sdequivht and
sdfrcmat to calculate the H, C and g matrices.
Fig. 6. Inverse and direct dynamic model blocks in the simulation environment Simulink
Special attention should be paid to switching between the dynamic models. Usually all
movements start with the foot touching the ground. In the case of vertical jumping the
switch occurs when the ground reaction force exerted by the robot in the vertical direction
changes direction from downwards to upwards. At the moment of the switch all joint states
have to maintain their values. Hence continuity and dynamic consistency are achieved.
On the other hand, when the robot lands, a force impact occurs and the joint velocities
change their values. The joint velocities after the impact have to be calculated with regard to
the robot configuration and the joint velocities before the impact using the following
equation (Zheng & Hemami, 1985):
$$\Delta \dot{q} = \mathbf{H}_{air}^{-1}\,\mathbf{S}^T \left( \mathbf{S}\,\mathbf{H}_{air}^{-1}\,\mathbf{S}^T \right)^{-1} \Delta v \,, \qquad (18)$$
where Hair represents the H matrix of the system in the air, Δv represents the velocity
change at the moment of the impact and $\Delta \dot{q}$ represents the resulting changes of the joint
velocities at the impact. Matrix S defines the constraints of the impact and is equal to
$$\mathbf{S} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \,. \qquad (19)$$
To get the ground reaction force and the position of the toe, the following functions are
used: sdreac to calculate the reaction force and sdpos to calculate the position.
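A direct numerical transcription of (18) and (19), assuming a 6-DOF in-air model whose first two generalized coordinates are the toe translations, could look as follows:

```python
import numpy as np

def impact_velocity_change(H_air, dv):
    """Eq. (18): change of the joint velocities at touch-down.
    H_air : 6x6 inertia matrix of the robot in the air
    dv    : velocity change of the toe contact point (2-vector)"""
    S = np.zeros((2, 6))
    S[0, 0] = S[1, 1] = 1.0                   # constraint matrix, eq. (19)
    Hinv_St = np.linalg.solve(H_air, S.T)     # H_air^-1 * S^T
    return Hinv_St @ np.linalg.solve(S @ Hinv_St, dv)
```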
$$\begin{bmatrix} \mathbf{H}_{11} & \mathbf{H}_{12} \\ \mathbf{H}_{21} & \mathbf{H}_{22} \end{bmatrix} \begin{bmatrix} \ddot{q}_1 \\ \ddot{q}_2 \end{bmatrix} + \begin{bmatrix} \mathbf{C}_1 \\ \mathbf{C}_2 \end{bmatrix} + \begin{bmatrix} \mathbf{g}_1 \\ \mathbf{g}_2 \end{bmatrix} = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix} \,, \qquad (20)$$
where τ1 represents the torques of the degrees of freedom that cannot be actuated and is
therefore zero. This condition leads to
$$\mathbf{H}_{11}\ddot{q}_1 + \mathbf{H}_{12}\ddot{q}_2 + \mathbf{C}_1 + \mathbf{g}_1 = 0 \,, \qquad \mathbf{H}_{21}\ddot{q}_1 + \mathbf{H}_{22}\ddot{q}_2 + \mathbf{C}_2 + \mathbf{g}_2 = \tau_2 \,. \qquad (21)$$
From the first equation of (21) we express
$$\ddot{q}_1 = -\mathbf{H}_{11}^{-1}\left( \mathbf{H}_{12}\ddot{q}_2 + \mathbf{C}_1 + \mathbf{g}_1 \right) \qquad (22)$$
and
$$-\mathbf{H}_{21}\mathbf{H}_{11}^{-1}\left( \mathbf{H}_{12}\ddot{q}_2 + \mathbf{C}_1 + \mathbf{g}_1 \right) + \mathbf{H}_{22}\ddot{q}_2 + \mathbf{C}_2 + \mathbf{g}_2 = \tau_2 \,. \qquad (23)$$
Finally we get
$$\underbrace{\left( -\mathbf{H}_{21}\mathbf{H}_{11}^{-1}\mathbf{H}_{12} + \mathbf{H}_{22} \right)}_{\mathbf{H}_{new}} \ddot{q}_2 + \underbrace{\left( -\mathbf{H}_{21}\mathbf{H}_{11}^{-1}\mathbf{C}_1 + \mathbf{C}_2 \right)}_{\mathbf{C}_{new}} + \underbrace{\left( -\mathbf{H}_{21}\mathbf{H}_{11}^{-1}\mathbf{g}_1 + \mathbf{g}_2 \right)}_{\mathbf{g}_{new}} = \tau_2 \,. \qquad (24)$$
We have acquired a new dynamic model with three degrees of freedom. The other degrees
of freedom cannot be controlled directly; however they can be controlled indirectly using
other conditions such as COM and ZMP.
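The elimination (20)–(24) is a Schur-complement reduction and can be written compactly; in this sketch the passive coordinates are assumed to come first in the coordinate vector.

```python
import numpy as np

def reduce_dynamics(H, C, g, n_u=1):
    """Eqs. (20)-(24): eliminate the first n_u unactuated coordinates
    (tau_1 = 0) and return H_new, C_new and g_new of the reduced model."""
    H11, H12 = H[:n_u, :n_u], H[:n_u, n_u:]
    H21, H22 = H[n_u:, :n_u], H[n_u:, n_u:]
    A = H21 @ np.linalg.inv(H11)      # common factor H21 * H11^-1
    H_new = H22 - A @ H12             # underbraced terms of eq. (24)
    C_new = C[n_u:] - A @ C[:n_u]
    g_new = g[n_u:] - A @ g[:n_u]
    return H_new, C_new, g_new
```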
To control the vertical jump we used the ground model of the robot in the former case and
the air model of the robot in the latter case. Switching between the models was performed
by the controller on the basis of the foot switch.
In the push-off phase of the jump we controlled the position of the COM and ZMP in such a
way that the robot performed a vertical jump. The task of the controller in the flight phase of
the jump was to stop the motion in all joints and to prevent mechanical damage by over-
extension of the joints. At this point we did not consider any strategies for the landing phase
of the jump.
To control all joints we used the controller described by
$$\mathbf{H}_{new}\,\ddot{q}_{2c} + \mathbf{C}_{new} + \mathbf{g}_{new} = \tau_{2c} \,. \qquad (25)$$
Here, the control acceleration $\ddot{q}_{2c}$ is formed from q2d and q2, the desired and the actual
positions of the joints, respectively. The desired joint trajectory during the jump is defined
with regard to the COM and ZMP as described in (Babič et al., 2006).
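A minimal sketch of the resulting computed-torque law, assuming a standard PD structure for the control acceleration (the PD form and the gains are our assumption, not the original controller):

```python
import numpy as np

def jump_controller(q2d, dq2d, ddq2d, q2, dq2, H_new, C_new, g_new,
                    Kp=200.0, Kd=30.0):
    """Eq. (25): tau_2c = H_new * ddq_2c + C_new + g_new, where ddq_2c is
    formed by an assumed PD law around the desired trajectory."""
    ddq2c = ddq2d + Kd * (dq2d - dq2) + Kp * (q2d - q2)
    return H_new @ ddq2c + C_new + g_new
```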
Fig. 7 schematically shows the complete system consisting of the controller and the direct
dynamic model of the robot. The controller includes both ground and air controllers of the
vertical jump. These two controllers are merged together as shown in Fig. 8.
Fig. 7. Block diagram of robotic system consisting of controller and direct dynamic model of
robot
Fig. 8. Block diagram of vertical jump controller made of two separate controllers for push-
off and flight phases of vertical jump
Fig. 9. A sequence of the biarticular legged robot performing a vertical jump experiment
4. Acknowledgement
This investigation was supported by the Slovenian Ministry of Education, Science and Sport.
Thanks to Borut Lenart, Jožef Stefan Institute for designing the robot prototype and to
Damir Omrčen, Jožef Stefan Institute for designing the vertical jump controller and for the
help with the experiments.
5. References
Alexander, R.McN. (1988). Elastic mechanisms in animal movement, Cambridge University
Press, Cambridge
Asada, H. & Slotine, J.J.E. (1986). Robot analysis and control, John Wiley and sons, New York
Asmussen, E. & Bonde-Petersen, F. (1974). Storage of elastic energy in skeletal muscles in
man. Acta Physiologica Scandinavica, 91, 385-392
Audu, M.L. & Davy, D.T. (1985). The influence of muscle model complexity in
musculoskeletal motion modeling. Journal of Biomechanical Engineering, 107, 147-157
Babič, J. & Lenarčič, J. (2004). In vivo determination of triceps surae muscle-tendon complex
viscoelastic properties. European Journal of Applied Physiology, 92, 477-484
Babič, J., Omrčen, D. & Lenarčič, J. (2006). Balance and control of human inspired jumping
robot, Advances in Robot Kinematics, J. Lenarčič, B. Roth (Eds.), pp. 147-156, Springer
Babič, J.; Karčnik, T. & Bajd, T. (2001). Stability analysis of four-point walking, Gait &
Posture, 14, 56-60
Bobbert, M.F.; Gerritsen, K.G.M.; Litjens, M.C.A. & Soest, A.J. van (1996). Why is
countermovement jump height greater than squat jump height? Medicine & Science
in Sports & Exercise, 28, 1402-1412
Bobbert, M.F.; Hoed, E.; Schenau, G.J. van; Sargeant, A.J. & Schreurs, A.W. (1986). A model
to demonstrate the power transporting role of bi-articular muscles, Journal of
Physiology, 387, 24
Bogert, A.J. van den; Hartman, W.; Schamhardt, H.C. & Sauren, A.A. (1989). In vivo
relationship between force, EMG and length change in the deep digital flexor
muscle in the horse, In Biomechanics XI-A, G. de Groot; A.P. Hollander; P.A.
Huijing; G.J. van Ingen Schenau (Eds.), pp. 68-74, Free University Press,
Amsterdam
Brand, R.A.; Crowninshield, R.D.; Wittstock, C.E.; Pederson, D.R.; Clark, C.R. & Krieken,
F.M. van (1982). A model of lower extremity muscular anatomy. Journal of
Biomechanics, 104, 304-310
Cavagna, G.A. (1970). Elastic bounce of the body, Journal of Applied Physiology, 29, 279-282
Davy, D.T. & Audu, M.L. (1987). A dynamic optimization technique for predicting muscle
forces in the swing phase of gait. Journal of Biomechanics, 20, 187-201
Delp, S.L. (1990). Surgery simulation: A computer graphics system to analyze and design
musculoskeletal reconstructions of the lower limb, Doctoral Dissertation, Stanford
University, Palo Alto
De Man, H., Lefeber, F., Daerden, F. & Faignet, E. (1996). Simulation of a new control
algorithm for a one-legged hopping robot (using the multibody code mechanics
motion), In Proceedings International Workshop on Advanced Robotics and Intelligent
Machines, pp.1-13
Hyon, S., Emura, T. & Mita, T. (2003). Dynamics-based control of a one-legged hopping
robot, Journal of Systems and Control Engineering, 217, 83–98
Hobara, H. (2008). Spring-like leg behavior and stiffness regulation in human movements,
Doctoral Dissertation, Waseda University, Japan
Iida, F., Minekawa, Y., Rummel, J., Seyfarth, A. (2009). Toward a human-like biped robot
with compliant legs. Robotics and Autonomous Systems, 57, 139-144
Ker, R.F.; Bennett, M.B.; Bibby, S.R.; Kester, R.C. & Alexander, R.McN. (1987). The spring in
the arch of the human foot. Nature, 325, 147-149
Legreneur, P.; Morlon, B. & Hoecke, J. van (1997). Joined effects of pennation angle and
tendon compliance on fibre length in isometric contractions: A simulation study.
Archives of Physiology and Biochemistry, 105, 450-455
Leva, P. de (1996). Adjustments to Zatsiorsky-Seluyanov’s segment inertia parameters.
Journal of Biomechanics, 29, 1223-1230
Raibert, M. (1986). Legged Robots That Balance, MIT Press.
Schenau, G.J. van. (1989). From rotation to translation: constraints of multi-joint movements
and the unique action of bi-articular muscles, Human Movement Science, 8, 301-337
Shorten, M.R. (1985). Mechanical energy changes and elastic energy storage during
treadmill running, In: Biomechanics IXB, D.A. Winter; R.W. Norman; R. Wells; K.C.
Hayes; A. Patla (Eds.), pp. 65-70, Human Kinetics Publ, Champaign
Zatsiorsky, V. & Seluyanov, V. (1983). The mass and inertia characteristics of the main
segments of the human body, In Biomechanics VIII-B, H. Matsui; K. Kobayashi
(Eds.), pp. 1152-1159, Human Kinetics, Illinois
Zheng, Y.F. & Hemami, H. (1985). Mathematical modeling of a robot collision with its
environment, Journal of Robotic Systems, 2, 289-307
14
Optimization and Synthesis of a Robot Fish Motion
1. Introduction
An inverse method algorithm for the investigation of mechatronic systems in vibration
technology is used for the synthesis of robotic system motion control. The main difference
of this method in comparison with the simple analysis method is that, before the synthesis
of a real system, the optimal control task for an abstract initial subsystem is solved [1 - 4].
As a result of the calculations the optimal control law is found, which allows series of
structural schemes for real systems to be synthesized on the basis of the initial subsystem.
It is shown that, near the optimal control excitation, new structural schemes may be found
among three kinds of strongly non-linear systems:
- systems with excitation as a time function;
- systems with excitation as a function of phase coordinates only;
- systems with both excitations mixed [2 – 4].
Two types of vibration devices are considered. The first one is a vibration translation
machine with constant liquid or air flow excitation. The main idea is to find the optimal
control law for the variation of the additional surface area of the machine working head
interacting with the water or air medium. The criterion of optimization is the time required
to move the working head of the machine from the initial position to the end position. The
second object of the theoretical study is a fin type propulsive device of a robotic fish moving
in water. In that case the aim is to find the optimal control law for the variation of the
additional area of the vibrating tail, acting like a horizontal pendulum, which ensures the
maximal positive impulse of the motion forces acting on the tail. Both problems have been
solved by using Pontryagin's maximum principle. It is shown that the optimal control action
corresponds to the case of boundary values of the area. One real prototype was investigated
in a linear water tank.
$$S_1 \le S(t) \le S_2 \,, \qquad (1)$$
where S1 is the lower level of the additional area of mass m, S2 is the upper level of the
additional area of mass m, and t is time.
The criterion of optimization is the time T required to move the object from the initial
position to the end position.
Then the differential equation of motion for large water velocity $V_0 \ge \dot{x}$ is
$$m\ddot{x} = -c\,x - b\,\dot{x} - u(t)\cdot(V_0 + \dot{x})^2 \,. \qquad (2)$$
Fig. 1, 2. Schemes of the vibrating mass with stiffness c, damping b and variable additional
area S1, S2 in a flow with velocity V
Assuming t0 = 0 and t1 = T, we have K = T.
From the system (2), with the substitution x1 = x, $\dot{x}_1 = x_2$, we obtain
$$\dot{x}_1 = x_2 \,; \qquad \dot{x}_2 = \frac{1}{m}\left( -c\,x_1 - b\,x_2 - u(t)\cdot(V_0 + x_2)^2 \right)$$
and we have the Hamiltonian (3):
$$H = \psi_0 + \psi_1\,x_2 + \psi_2\,\frac{1}{m}\left( -c\,x_1 - b\,x_2 - u(t)\cdot(V_0 + x_2)^2 \right) \,, \qquad (3)$$
$$\boldsymbol{\psi} = \begin{bmatrix} \psi_0 \\ \psi_1 \\ \psi_2 \end{bmatrix} \,; \qquad \dot{\mathbf{X}} = \begin{bmatrix} \dot{x}_0 \\ \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} \,. \qquad (4)$$
The scalar product of the vector functions ψ and $\dot{\mathbf{X}}$ at any time (the function H) must be
maximal. To attain this maximum, the control action u(t) must be at one of its limits,
$u(t) = u_1$ or $u(t) = u_2$, depending only on the sign of the function ψ2 (5):
$$H = \max H \,, \quad \text{if} \quad \psi_2 \cdot \left( -u(t)\cdot(V_0 + x_2)^2 \right) = \max \,. \qquad (5)$$
Fig. 4. Example of optimal control with two switch points in the system with damping
Fig. 5. Example of optimal control with three switch points in the task of system excitation
For the synthesized system with the boundary areas A1 and A2 switched by the sign of the
velocity, the equation of motion is
$$m\ddot{x} = -c\,x - b\,\dot{x} - k\,(V_0 + \dot{x})^2 A_1 \left( 0.5 - 0.5\,\frac{\dot{x}}{|\dot{x}|} \right) + k\,(V_0 + \dot{x})^2 A_2 \left( 0.5 + 0.5\,\frac{\dot{x}}{|\dot{x}|} \right) \,. \qquad (6)$$
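Equation (6) can be integrated directly; the sketch below uses purely illustrative parameter values, not those of the experimental device, and switches the area with the sign of the velocity as in (6).

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, b, k, V0 = 1.0, 100.0, 0.5, 1.0, 1.0    # illustrative values
A1, A2 = 0.03, 0.06                            # boundary areas

def rhs(t, s):
    """Right-hand side of eq. (6)."""
    x, v = s
    sgn = np.sign(v) if v != 0.0 else 1.0
    F = k * (V0 + v) ** 2 * (A2 * (0.5 + 0.5 * sgn) - A1 * (0.5 - 0.5 * sgn))
    return [v, (-c * x - b * v + F) / m]

sol = solve_ivp(rhs, (0.0, 4.0), [0.02, 0.0], max_step=1e-3)
# sol.y[0] (displacement) and sol.y[1] (velocity) trace the limit cycle
```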
Fig. 6. Full control action in time domain
Fig. 7. Displacement in time domain
Fig. 8. Motion in phase plane starting from inside of limit cycle
A search for cases with more than one limit cycle was performed for a very complicated
system with linear and cubic restoring force, linear and cubic resistance force and dry
friction. It was found that for a system with non-periodic excitation (like a constant velocity
of water or air flow) more than one limit cycle exists (Fig. 10, 11).
Fig. 9. Motion with initial conditions outside of limit cycle
Fig. 10, 11. Motion in the phase plane for the initial conditions x0 = 0.09, v0 = 0 and
x0 = 9, v0 = 0: two different limit cycles exist
Fig. 13. Area change function in time domain when blades rotate with constant angular
velocity ω
The investigation shows that the system is very stable because the air excitation and
damping forces depend on the velocity squared.
Fig. 14. Motion in phase plane for parameters close to the first resonance
2.4 Synthesis of a real system inside a tube with variable hole area A
Adaptive control was analyzed using the following area change function f1(A) (Fig. 15, 16):
$$f_1(A) = A_2 \cdot \left( 1 + a \cdot \frac{v \cdot x}{|v \cdot x|} \right) \,.$$
Fig. 15. Scheme of the system inside the tube (mass m2, stiffness c, damping b1, area A2)
Fig. 16. Motion in phase plane
It is shown that adaptive systems are also very stable because the water or air excitation and
damping forces depend on the velocity squared.
In order to get the values of the parameters in the equations, experiments were made in a
wind tunnel. It should be noted that air flow is similar to water flow, with one main
difference in the density. It is therefore convenient to support the synthesis algorithm with
experiments done with air flow in the wind tunnel. Experimental results obtained in the
wind tunnel may to some extent be used to analyze water flow excitation in robot fish
prototypes. Using the “Armfield” subsonic wind tunnel several additional results were
obtained. For example, the drag forces acting on different shapes of bodies were measured
and the drag coefficients were calculated by the formula $F_d = C_d \cdot S \cdot \frac{\rho V^2}{2}$, where ρ is the
density, V the flow velocity, S the specific area and Cd the drag coefficient. These coefficients
depend on the body's geometry, its orientation relative to the flow, and the non-dimensional
Reynolds number.
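For instance, inverting the drag formula gives the coefficient from a measured force; the numbers below are illustrative, not measured data.

```python
def drag_coefficient(Fd, S, V, rho=1.2):
    """Cd from Fd = Cd * S * rho * V**2 / 2 (rho defaults to air, kg/m^3)."""
    return 2.0 * Fd / (rho * S * V ** 2)

# e.g. a 0.01 m^2 plate at 15 m/s with 1.35 N of measured drag:
# drag_coefficient(1.35, 0.01, 15.0) -> Cd = 1.0
```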
Fig. 17. Subsonic wind tunnel used for experiments at Mechanics Institute of Riga TU
All experiments were done at constant air flow velocity in the range of 10 – 20 m/s. This
corresponds to Re values of about 40 000.
The drag coefficient for a plate perpendicular to the flow depends on the plate's length and
chord ratio. For standard plates Cd is about 1.2. For infinitely long plates Cd reaches the
value of 2. For a cone Cd = 0.5, for a cylinder along the flow about 0.81, and for both cylinder
and cone in one direction 0.32. All these forms can be used for designing the main body
form of the robot fish (together with the tail) in real robot fish models (see part 3.4, where a
cone and a cylinder were used). Depth control of the prototypes was provided by two front
control wings (see part 3.4).
The parameters of the subsonic wind tunnel were as follows: length 2.98 m, width 0.8 m,
height 1.83 m. A variable speed motor drive unit downstream of the working section
permits stepless control of the airspeed between 0 and 26 m/s. Experiments prove that air
flow excitation is very efficient.
Fig. 18. Scheme of the fin type propulsive device: tail with additional area S1, S2, the spring
of the pendulum, two stops, the pivot reactions Ax and Ay, and the tail angle φ
$$B_1 \le B(t) \le B_2 \,, \qquad (7)$$
where B1 is the lower level of the additional area of the tail, B2 is the upper level of the
additional area of the tail, and t is time.
The criterion of optimization is the full impulse of the reaction Ax* in the pivot, acting on
the hull (the indirect reaction to the tail), which is required to move the object from one
stop, the initial position ($\varphi_{max}$, $\dot{\varphi} = 0$), to another stop, the end position ($\varphi_{min}$, $\dot{\varphi} = 0$), in
time T (8):
$$K = -\int_0^T A_x^*\,dt \,. \qquad (8)$$
The differential equation of motion of the system with one degree of freedom (including the
change of angular momentum around the fixed point A) is
$$J_A\,\ddot{\varphi} = M(t,\varphi,\dot{\varphi}) - c\,\varphi - k_t\,B\,\mathrm{sign}(\dot{\varphi}) \int_0^L (\dot{\varphi}\,\xi)^2\,\xi\,d\xi \,. \qquad (9)$$
Here $J_A$ is the moment of inertia of the tail around the pivot point; $\ddot{\varphi}$ is the angular
acceleration of the rigid straight tail; $M(t,\varphi,\dot{\varphi})$ is the excitation moment of the drive; c is
the angular stiffness; $k_t$ is a constant coefficient; B is the area; $\int_0^L (\dot{\varphi}\,\xi)^2\,\xi\,d\xi$ is the
component of the moment of the resistance force expressed as an integral along the tail
direction; $\dot{\varphi}$ is the angular velocity; and L is the length of the tail.
From the principle of the motion of the mass center C it follows that
$$m\left( \dot{\varphi}^2\,\frac{L}{2}\cos\varphi + \ddot{\varphi}\,\frac{L}{2}\sin\varphi \right) = A_x - k_t\,B\,\sin(\varphi)\,\mathrm{sign}(\varphi\,\dot{\varphi}) \int_0^L (\dot{\varphi}\,\xi)^2\,d\xi \,. \qquad (10)$$
After integration, from equations (9) and (10) we have (11), (12):
$$\ddot{\varphi} = \frac{1}{J_A}\left[ M(t,\varphi,\dot{\varphi}) - c\,\varphi - k_t\,B\,\mathrm{sign}(\dot{\varphi})\,\dot{\varphi}^2\,\frac{L^4}{4} \right] \,. \qquad (11)$$
$$A_x = m\left\{ \dot{\varphi}^2\,\frac{L}{2}\cos\varphi + \frac{1}{J_A}\left[ M(t,\varphi,\dot{\varphi}) - c\,\varphi - k_t\,B\,\mathrm{sign}(\dot{\varphi})\,\dot{\varphi}^2\,\frac{L^4}{4} \right]\frac{L}{2}\sin\varphi \right\} + k_t\,B\,\sin(\varphi)\,\mathrm{sign}(\varphi\,\dot{\varphi})\,\dot{\varphi}^2\,\frac{L^3}{3} \,. \qquad (12)$$
The criterion (8) then takes the form
$$K = -\int_0^T \left( m\left\{ \dot{\varphi}^2\,\frac{L}{2}\cos\varphi + \frac{1}{J_A}\left[ M(t,\varphi,\dot{\varphi}) - c\,\varphi - k_t\,B\,\mathrm{sign}(\dot{\varphi})\,\dot{\varphi}^2\,\frac{L^4}{4} \right]\frac{L}{2}\sin\varphi \right\} + k_t\,B\,\sin(\varphi)\,\mathrm{sign}(\varphi\,\dot{\varphi})\,\dot{\varphi}^2\,\frac{L^3}{3} \right) dt \,, \qquad (13)$$
i.e. $K = -\int_0^T A_x\,dt$.
2. Transformation of equation (9) and equation (13) into three first-order equations with the
new variables φ0, φ1 and φ2 (phase coordinates):
$$\dot{\varphi}_0 = (-1)\cdot\left( m\left\{ \varphi_2^2\,\frac{L}{2}\cos\varphi_1 + \frac{1}{J_A}\left[ M(t,\varphi_1,\varphi_2) - c\,\varphi_1 - k_t\,B\,\mathrm{sign}(\varphi_2)\,\varphi_2^2\,\frac{L^4}{4} \right]\frac{L}{2}\sin\varphi_1 \right\} + k_t\,B\,\sin(\varphi_1)\,\mathrm{sign}(\varphi_1\varphi_2)\,\varphi_2^2\,\frac{L^3}{3} \right) ;$$
$$\dot{\varphi}_1 = \varphi_2 \,; \qquad \dot{\varphi}_2 = \frac{1}{J_A}\left[ M(t,\varphi_1,\varphi_2) - c\,\varphi_1 - k_t\,B\,\mathrm{sign}(\varphi_2)\,\varphi_2^2\,\frac{L^4}{4} \right] \,. \qquad (14)$$
The Hamiltonian is then
$$H = \psi_0\,(-1)\cdot\left( m\left\{ \varphi_2^2\,\frac{L}{2}\cos\varphi_1 + \frac{1}{J_A}\left[ M(t,\varphi_1,\varphi_2) - c\,\varphi_1 - k_t\,B\,\mathrm{sign}(\varphi_2)\,\varphi_2^2\,\frac{L^4}{4} \right]\frac{L}{2}\sin\varphi_1 \right\} + k_t\,B\,\sin(\varphi_1)\,\mathrm{sign}(\varphi_1\varphi_2)\,\varphi_2^2\,\frac{L^3}{3} \right) + \psi_1\,\varphi_2 + \psi_2\,\frac{1}{J_A}\left[ M(t,\varphi_1,\varphi_2) - c\,\varphi_1 - k_t\,B\,\mathrm{sign}(\varphi_2)\,\varphi_2^2\,\frac{L^4}{4} \right] \,. \qquad (15)$$
$$\boldsymbol{\psi} = \begin{bmatrix} \psi_0 \\ \psi_1 \\ \psi_2 \end{bmatrix} \,; \qquad (16)$$
$$\boldsymbol{\Phi} = \begin{bmatrix} \Phi_0 \\ \Phi_1 \\ \Phi_2 \end{bmatrix} \,, \qquad (17)$$
$$\dot{\Phi}_0 = -\frac{\partial H}{\partial \varphi_0} \,; \quad \dot{\Phi}_1 = -\frac{\partial H}{\partial \varphi_1} \,; \quad \dot{\Phi}_2 = -\frac{\partial H}{\partial \varphi_2} \,, \qquad (18)$$
where the left side is the derivative with respect to time t: $\dot{\Phi}_{0,1,2} = d\Phi_{0,1,2}/dt$.
From equations (15) and (18) we get a nonlinear system of differential equations for finding
the functions ψ0, ψ1, ψ2. The solution of such a system is outside the scope of this report
because it depends on the unknown moment M(t,φ,ω). But some conclusions and
recommendations may be drawn from the Hamiltonian if the excitation moment M(t,φ,ω)
does not depend on the phase coordinates φ = φ1, ω = φ2:
M = M(t).
In this case the scalar product of the two last vector functions ψ and Φ at any time (the
Hamiltonian H [11]) must be maximal (a supremum in this B-linear case) [8 – 12]. To attain
such a maximum (supremum), the control action B(t) must be at its limits, B(t) = B1 or
B(t) = B2, depending only on the sign of the function in (19) or (20):
$$\mathrm{sign}\left\{ \psi_0\,\frac{m}{J_A}\left[ k_t\,\mathrm{sign}(\varphi_2)\,\frac{L^5}{8}\,\sin\varphi_1 + k_t\,\sin(\varphi_1)\,\mathrm{sign}(\varphi_1\varphi_2)\,\frac{L^3}{3} \right] + \psi_2\,\frac{1}{J_A}\left( -k_t\,\mathrm{sign}(\varphi_2)\,\frac{L^4}{4} \right) \right\} \ge 0 \;\Rightarrow\; B = B_2 \,; \qquad (19)$$
$$\mathrm{sign}\left\{ \psi_0\,\frac{m}{J_A}\left[ k_t\,\mathrm{sign}(\varphi_2)\,\frac{L^5}{8}\,\sin\varphi_1 + k_t\,\sin(\varphi_1)\,\mathrm{sign}(\varphi_1\varphi_2)\,\frac{L^3}{3} \right] + \psi_2\,\frac{1}{J_A}\left( -k_t\,\mathrm{sign}(\varphi_2)\,\frac{L^4}{4} \right) \right\} \le 0 \;\Rightarrow\; B = B_1 \,. \qquad (20)$$
From inequalities (19) and (20), the following quasi-optimal control action, which switches
the area B between its limit values as a function of the phase coordinates, may be
recommended for real system synthesis (21).
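In essence this is a bang-bang switching of the tail area between its bounds; a minimal sketch, assuming the switching function reduces to the sign of the product φ·ω that appears in (19) and (20):

```python
def tail_area(phi, omega, B1, B2):
    """Quasi-optimal area control: switch between the bounds B1 and B2 on
    the sign of phi*omega (an assumed simplification of (19)-(20))."""
    return B2 if phi * omega >= 0.0 else B1
```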
3.2 Synthesis of a mixed system with time-harmonic excitation and adaptive area
control
In the case of time-harmonic excitation, the moment M in the time domain is (see equations
(9) and (11)):
M(t) = M0 · sin(k·t).
Results of modeling are shown in Figs. 20 – 25, with comments given below each figure.
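Equation (11) with this harmonic moment can be integrated numerically to reproduce the qualitative behavior of Figs. 20 – 22; all parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

JA, c, kt, L = 0.05, 2.0, 50.0, 0.3      # illustrative tail parameters
M0, kf, B1, B2 = 1.0, 2.0, 0.01, 0.02    # excitation and area bounds

def tail_rhs(t, s):
    """Eq. (11) with M(t) = M0*sin(kf*t) and sign-based area control."""
    phi, omega = s
    B = B2 if phi * omega >= 0.0 else B1
    M = M0 * np.sin(kf * t)
    domega = (M - c * phi
              - kt * B * np.sign(omega) * omega ** 2 * L ** 4 / 4.0) / JA
    return [omega, domega]

sol = solve_ivp(tail_rhs, (0.0, 10.0), [0.2, 0.0], max_step=1e-3)
```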
Fig. 20. Tail angular motion in the phase plane – angular velocity as a function of angle
Fig. 21. Angular acceleration of the tail as a function of angle. At the end of the transition
process the graph is symmetric
Fig. 22. Angular acceleration $\varepsilon = \ddot{\varphi}$ of the tail in the time domain (see equation (9)). The
angular acceleration of the tail practically reaches the steady-state cycle after one oscillation
Fig. 23. Impulse Ax(t) in the time domain (see equation (12)). The impulse is non-symmetric
about the zero level (the non-symmetry is negative)
Fig. 24. Impulse Ax as a function of the angle φ
Fig. 25. Negative mean value of Ax(t) in the time domain: the force of the pivot that pushes
the hull (necessary for the robotic fish motion)
3.3 Synthesis of a system with adaptive excitation and adaptive area control
In the case of adaptive excitation, the moment M may be used as a function of the angular
velocity in the form [3, 4]:
M(t) = M0 · sign(ω).
Results of modeling are shown in Figs. 26 – 31, with comments given below each figure.
Fig. 26. Tail angular motion in the phase plane – angular velocity as a function of angle. Due
to the large water resistance the transition process is very short. Adaptive excitation is more
efficient than harmonic excitation (see Fig. 20)
Fig. 27. Angular acceleration of the tail as a function of angle. At the end of the transition
process the graph is more symmetric than the graph shown in Fig. 22
Fig. 28. Angular acceleration $\varepsilon = \ddot{\varphi}$ of the tail in the time domain (see equation (9)). The
angular acceleration of the tail typically reaches the steady-state cycle after half an oscillation
Fig. 29. Impulse Ax(t) in the time domain (see equation (12)). The impulse is non-symmetric
about the zero level (the non-symmetry is negative)
Fig. 30. Impulse Ax as a function of the angle φ
Fig. 31. Negative mean value of Ax(t) in the time domain: the force of the pivot that pushes
the fish hull to the right (in order to have the robotic fish move to the right). This graph
shows that adaptive excitation of the moment is about four times more efficient than
harmonic excitation (see Fig. 25)
Fig. 34. The prototype was investigated in a large storage lake with an autonomous power
pack and a remote control system, moving in real underwater conditions with waves
4. Conclusion
The vibrational motion of robotic systems in simplified interaction with water or air flow
can be described by rather simple equations for motion analysis. This allows the
mathematical problem of area control optimization to be solved and provides information
for the synthesis of new systems. Control (or change) of the object area under water or in air
allows very efficient mechatronic systems to be created. For the realization of such systems,
adapters and controllers must be used. Very simple control actions therefore have solutions
that use sign functions. Examples of the synthesis of real mechatronic systems are given.
One example of synthesis is a system with time-harmonic moment excitation of the tail in
the pivot. The second example of synthesis is a system with adaptive force moment
excitation as a function of the phase coordinates. In both systems the area change (from
maximal to minimal values) has a control action that is a function of the phase coordinates.
It is shown that the vibration motion of real controlled systems is very stable.
5. References
[1] E. Lavendelis: Synthesis of optimal vibro machines. Zinatne, Riga, (1970). (in Russian).
[2] E. Lavendelis and J. Viba: Individuality of Optimal Synthesis of vibro impact systems.
Book: Vibrotechnics. Kaunas. Vol. 3 (20), (1973). (in Russian).
[3] J. Viba: Optimization and synthesis of vibro impact systems. Zinatne, Riga, (1988). (in
Russian).
[4] E. Lavendelis and J. Viba: Methods of optimal synthesis of strongly non – linear (impact)
systems. Scientific Proceedings of Riga Technical University. Mechanics. Volume
24. Riga (2007).
[5] http://en.wikipedia.org/wiki/Lev_Pontryagin. (2008).
Optimization and Synthesis of a Robot Fish Motion 291
[6] http://en.wikipedia.org/wiki/Pontryagin%27s_maximum_principle.(February
(2008).
[7] http://en.wikipedia.org/wiki/Hamiltonian_%28control_theory%29. (2007).
[8] V.G. Boltyanskii, R.V. Gamkrelidze and L. S. Pontryagin: On the Theory of Optimum
Processes (In Russian), Dokl. AN SSSR, 110, No. 1, 7-10 (1956).
[9] L. S. Pontryagin, V.G. Boltyanskii, R.V.Gamkrelidze and E.F. Mischenko: (Fizmatgiz). The
Mathematical Theory of Optimal Processes, Moscow (1961).
[10] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, and E.F. Mishchenko: The
Mathematical Theory of Optimal Processes, Wiley-Interscience, New York. (1962).
[11]V. G. Boltyanskii: Mathematical Methods of Optimal Control, Nauka, Moscow. (1966).
[12] V.G. Boltyanskii: Mathematical methods of optimal control: Authorized translation
form the Russian. Translator K.N. Trirogoff. Balskrishnan-Neustadt series. New
York. Holt, Rinehart and Winston. (1971).
[13] E.B. Lee and L. Markus: Foundations of Optimal Control Theory, Moscow: Nauka, (1972).
(in Russian); L. Sonneborn and F. Van Vleck (1965): The Bang-Bang Principle for
Linear Control Systems, SIAM J. Control 2, 151-159.
[14] http://en.wikipedia.org/wiki/Bang-bang_control. (2008).
[15] Patent LV 13928, Republic of Latvia, Int.Cl. B25 D9/00, B25 D11/00. Method for control
of operation condition of one-mass vibromachine on elastic suspension / J. Viba, V.
Beresnevich, M. Eiduks, L. Shtals, E. Kovals, G. Kulikovskis. – Applied on
03.03.2009, application P-09-38; published on 20.09.2009 // Patenti un preču zīmes,
2009, No. 9, p. 1209 – 1210.
[16] Patent LV 14033, Republic of Latvia, Int.Cl. B63 H 1/00. Method for tractive force
forming in fin propulsive device / J. Viba, V. Beresnevich, S. Cifanskis, G.
Kulikovskis, M. Kruusmaa, W. Megill, J.-G. Fontaine, P. Fiorini. – Applied on
04.09.2009, application P-09-151; published on 20.02.2010 // Patenti un preču
zīmes, 2010, No. 2, p. 203.
[17] Patent LV 14034, Republic of Latvia, Int.Cl. B63 H1/00. Hydrodynamic fin-type
vibration propulsive device / S. Cifanskis, J. Viba, V. Jakushevich, O. Kononova, J.-
G. Fontaine, W. Megill. – Applied on 29.09.2009, application P-09-160; published
20.02.2010 // Patenti un preču zīmes, 2010, No. 2, p. 203.
[18] Patent LV 14054, Republic of Latvia, Int.Cl. B63 H1/00. Fin of hydrodynamic vibration
propulsive device / S. Cifanskis, J. Viba, V. Jakushevich, V. Beresnevich, O.
Kononova, G. Kulikovskis. – Applied on 15.10.2009, application P-09-174;
published 20.03.2010 // Patenti un preču zīmes, 2010, No. 3, p. 428.
[19] Patent LV 14055, Republic of Latvia, Int.Cl. B63 H1/00. Method for control of operating
condition of hydrodynamic fin-type vibration propulsive device / J. Viba, S.
Cifanskis, V. Jakushevich, O. Kononova, J.-G. Fontaine, E. Kovals. – Applied on
02.11.2009, application P-09-191; published 20.03.2010 // Patenti un preču zīmes,
2010, No. 3, p. 428.
[20] Patent LV 14076, Republic of Latvia, Int.Cl. B63 H1/00. Hydrodynamic fin-type
vibration propulsive device / S. Cifanskis, J. Viba, V. Jakushevich, O. Kononova, M.
Kruusmaa, G. Kulikovskis. – Applied on 24.11.2009, application P-09-205;
published 20.04.2010 // Patenti un preču zīmes, 2010, No. 4, p. 631.
292 Robotic Systems – Applications, Control and Programming
[21] Patent LV 14077, Republic of Latvia, Int.Cl. B63 H 1/00. Method for adjustment of
operation condition of fin-type vibration propulsive device / S. Cifanskis, V.
Beresnevich, J. Viba, V. Yakushevich, J.-G. Fontaine, G. Kulikovskis. – Applied on
10.12.2009, application P-09-219; published on 20.04.2010 // Patenti un preču
zīmes, 2010, No. 4, p. 631.
[22] Patent LV 14175, Republic of Latvia, Int.Cl. B63 H 1/00. Method for adjustment of
operation condition of fin-type vibration propulsive device / S. Cifanskis, J. Viba,
V.Beresnevich, V. Jakushevich, M. Listak, T. Salumäe. – Applied on 28.04.2010,
application P-10-63; published 20.10.2010 // Patenti un preču zīmes, 2010, No. 10,
p. 1515 – 1516.
[23] Patent LV 14237, Republic of Latvia, Int.Cl. B63 H1/00. Hydrodynamic fin-type
vibration propulsive device / S. Cifanskis, J. Viba, V. Beresnevich, V. Jakushevich,
J.-G. Fontaine, W. Megill. – Applied on 07.10.2010, application P-10-141; published
20.02.2011 // Patenti un preču zīmes, 2011, No. 2, p. 188.
15
Modeling of Elastic Robot Joints with Nonlinear Damping and Hysteresis
1. Introduction
Elastic robot joints gain in importance since robotics nowadays tends towards lightweight
structures. The lightweight metals and composite materials deployed not only in the links
but also in the joint assemblies affect the overall stiffness of the robotic system. The elastic
joints provide the loaded robot motion with additional compliance and can lead to
significant control errors and vibrations in the joint as well as in the operational space. A
better understanding of the compliant joint behavior can help not only to analyze and
simulate robotic systems but also to improve their control performance.
Elastic robot joints are often denoted as flexible joints or compliant joints as well. The earliest
modeling approaches aimed at describing the dynamic behavior of elastic robot joints lead
back to Spong (1987). Spong extended the general motion equation of a rigid robotic
manipulator to the case of joint elasticity captured by a linear connecting spring. Remember
that the general motion equation derived either from the Lagrange or Newton-Euler
formalism for a class of rigid robotic manipulators (Sciavicco & Siciliano (2000)) can be
expressed as
$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = u \,. \qquad (1)$$
Here, q ∈ R n is the vector of Lagrangian (also joint) coordinates and u ∈ R n is the vector
of generalized input forces. The configuration dependent matrices M ∈ R n×n , C ∈ R n×n ,
and G ∈ R n constitute the inertia, Coriolis-centrifugal, and gravity terms respectively.
The model introduced by Spong (1987) has been widely used in later works which deal with
compliance control (Zollo et al. (2005)) and passivity-based impedance control (Ott et al.
(2008)) of robots with joint elasticities. Despite its good acceptance for control applications,
this modeling strategy misses the damping factor related to the connecting spring. In this
regard, the approach proposed by Ferretti et al. (2004), which provides two masses
connected by a linear spring and damper arranged in parallel, is more consistent with the
physical joint structure. Also the inverse dynamic model used for vibration control of elastic
joint robots (Thummel et al. (2005)) incorporates a torsional spring with both stiffness and
damping characteristics. From a slightly different point of view, the modeling of elastic
robot joints was significantly influenced by the studies of harmonic drive gear transmission
performed in the late nineties and the last decade. Harmonic drives are widely spread in
robotic systems due to their compact size, high reduction ratios, high torque capacity, and
low (nearly zero) backlash. However, a specific mechanical assembly provides the harmonic
drive gears with additional elasticities which are strongly coupled with frictional effects. A
structure-oriented dissipative model of the harmonic drive torque transmission was
proposed by Taghirad & Bélanger (1998). The model comprises multiple component-related
frictional elements and an additional structural damping attributed to the flexspline.
Another notable modeling approach, provided by Dhaouadi et al. (2003), represents the
torsional torque in harmonic drives as an integro-differential hysteresis function of both the
angular displacement and the angular velocity across the flexspline. Later, Tjahjowidodo et
al. (2006) described the dynamics in harmonic drives using nonlinear stiffness characteristics
combined with distributed Maxwell-slip elements that capture the hysteresis behavior. A
complex phenomenological model of elastic robot joints with coupled hysteresis, friction
and backlash nonlinearities was proposed by Ruderman et al. (2009). However, in view of
the complexity of decomposing and identifying the single nonlinearities, a simplified
nonlinear dynamic model was later introduced in Ruderman et al. (2010).
The leitmotif of this Chapter is to incorporate a combined physical as well as
phenomenological view when modeling the joint transmission with elasticities. Unlike the
classical approaches, which rather operate with linear stiffness and damping elements
arranged either in series or in parallel, the structure-oriented effects are emphasized here.
The inherently nonlinear compliance and damping of elastic robot joints are tackled from
the cause-and-effect point of view. It is important to note that such approaches can rapidly
increase the number of free parameters to be identified, so that the model complexity has to
be guarded carefully. The Chapter is organized as follows. Section 2 introduces the robot
joint topology, which captures equally the dynamic behavior of both rigid and elastic
revolute joints. Three closed subsystems are described in terms of their eigenbehavior and
their feedforward and feedback interactions within the overall joint structure. Section 3
provides the reader with a description of the developed joint model, capable of capturing
the two main nonlinearities acting in the joint transmission. First, the dynamic joint friction
is addressed. Secondly, the nonlinear stiffness combined with a hysteresis map is described
in detail. An experimental case study provided in Section 4 shows some characteristic
observations obtained on a laboratory setup of the joint with elasticities and gives some
identification results in this relation. The main conclusions are drawn in Section 5.
joint transmission. Although such hardware solutions are often not technically or (and)
economically profitable, a prototypic laboratory measurement performed on the load side
of robotic joints can yield adequate data sufficient for analysis and identification. Here, one
can think of highly accurate static as well as dynamic measurements of the joint output
position performed by means of laser interferometry or laser triangulation. The load
torques can also be determined externally, either by applying an appropriate accelerometer
or by using locked-load mechanisms equipped with torque (load) cells. However, the latter
solution is restricted to quasi-static experiments with a constrained output motion.
Now let us consider the topology of an elastic robot joint as shown in Fig. 1. In terms of the
input-output behavior the proposed structure does not substantially differ from a simple
fourth-order linear dynamic model of two connected masses. An external exciting torque u
constitutes the input value, and the relative position of the second moving mass θ
constitutes the output value we are interested in.
Fig. 1. Topology of an elastic robot joint: joint actuator, gear transmission and joint load in
series, with input torque u, actuator displacement q, transmitted torque τ and output
displacement θ
Going into the input-output representation,
let us subdivide the robotic joint into three closed subsystems connected by the feedforward
and feedback actions realized by appropriate physical states. The joint actuator loaded by
the feedback torque τ provides the output angular displacement q of a rigid shaft. This
value is an inherent determinant of the relative motion entering the gear transmission and is
usually measurable before it. Since the gear transmission captures the intrinsic joint
elasticity, the angular output displacement of the joint load constitutes the second feedback
state. Assuming that the gear with elasticities behaves like a torsion spring with a certain
stiffness capacity, its output value represents the transmitted torque which drives the joint
load. When cutting free the forward and feedback paths, the joint model decomposes into
three stand-alone submodels, each one describing the specified physical subsystem. Note
that from the energy conversion point of view we obtain three dissipative mappings with
two different sets of input values. The first input set constitutes the forward propagation of
the energy fed to the system. The second input set represents the system reaction with a
negative or positive energy feedback, depending on the instantaneous operation state. The
superposition of the feed-in and reactive joint torque provides the internal angular motion
(u − τ) → q, damped by the friction. The constrained relative displacement across the gear
transmission provides the elastic tension in the joint, (q − θ) → τ, damped by its structure.
Finally, the transmitted torque provides the output relative motion τ → θ, whose damping
depends on the topology of the subsequent load.
The actual joint topology represents a general case which covers both elastic and rigid
transmission. The actuator submodel can be equally used for rigid joint modeling, where τ
will appear as an input disturbance fed back directly from the joint load and acting upon
the overall torque balance. The stand-alone gear transmission modeling is however
meaningful only when a relative displacement occurs, i.e. q ≠ θ. When considering a rigid
robotic joint, the transmission submodel will describe a particular case with an unlimited
stiffness. In this case, each infinitesimal change in q will lead to an immediate excitation of
the load part, so that the eigendynamics of the joint transmission disappears.
The following analytical example of two equivalent linear joint models explains the above-
mentioned ideas in more detail. For instance, consider a simple motion problem of two
connected masses m and M with two damping factors d and D, once with a rigid and once
with an elastic joint. Assume that the latter is represented by a spring with the stiffness K.
The first case can be described by the simple differential equation
(m + M) q̈ + (d + D) q̇ = u . (2)
It is evident that the system (2) provides a single dynamic state, and the mass and damping
factors appear conjointly, since q = θ. The more sophisticated second case with elasticity
requires two differential equations
m q̈ + d q̇ + K (q − θ) = u ,
M θ̈ + D θ̇ − K (q − θ) = 0 , (3)
which constitute a fourth-order dynamic system. For both systems, let us analyze the step response in the time domain and the amplitude response in the frequency domain, shown in Fig. 2 (a) and (b). Here, H1(s) denotes the transfer function H(s) = θ̇(s)/U(s) of Eq. (2), and H2(s) as well as H3(s) denote that of Eq. (3). The stiffness K is set to 10³ for H2(s) and to 10⁸ for H3(s). The case described by H3(s) should approximate a rigid system, for which the stiffness tends towards infinity. Note that the remaining parameters of Eqs. (2) and (3) are kept constant. It is easy to recognize that the step response of H3(s) coincides well with that of the absolutely rigid joint H1(s). When analyzing the frequency response function it can be seen that the resonance peak of H3(s) is shifted far to the right compared with H2(s). Up to the resonance range the frequency response function of H3(s) coincides exactly with that of H1(s). Note that the shifted resonance of H3(s) provides the same peak value as H2(s). Hence, upon an exceptional excitation exactly at the resonance frequency the oscillatory output will arise again. From the practical point of view, however, such a high-frequency excitation is hardly realizable in a mechanical system. Even so, when approaching the resonance range, the steep roll-off of the frequency response function provides strong damping within the overall system response.
Fig. 2. Comparison of linear joint approaches; step response (a), frequency response (b)
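This comparison can be reproduced numerically with a few lines of Python. The following is a minimal sketch in which the mass and damping values m, M, d and D are illustrative assumptions, only the two stiffness values 10³ and 10⁸ being taken from the text; the elastic transfer function follows from eliminating q in Eq. (3).

import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# Illustrative parameters (assumed); only K = 1e3 and 1e8 follow the text.
m, M, d, D = 1.0, 10.0, 1.0, 10.0

def H_rigid():
    # Eq. (2): (m+M) q'' + (d+D) q' = u; output is the joint velocity.
    return signal.TransferFunction([1.0], [m + M, d + D])

def H_elastic(K):
    # thetadot(s)/U(s) obtained by eliminating q from Eq. (3):
    # K / (mM s^3 + (mD + dM) s^2 + (dD + K(m+M)) s + K(d+D))
    return signal.TransferFunction(
        [K], [m * M, m * D + d * M, d * D + K * (m + M), K * (d + D)])

systems = [("H1 (rigid)", H_rigid()),
           ("H2 (K=1e3)", H_elastic(1e3)),
           ("H3 (K=1e8)", H_elastic(1e8))]

for name, sys in systems:
    t, y = signal.step(sys, T=np.linspace(0.0, 0.05, 2000))
    w, mag, _ = signal.bode(sys, w=np.logspace(0, 6, 500))
    plt.subplot(1, 2, 1); plt.plot(t, y, label=name)        # step response
    plt.subplot(1, 2, 2); plt.semilogx(w, mag, label=name)  # amplitude response
plt.legend(); plt.show()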
This qualitative example demonstrates the universal character of the proposed joint topology. The principal mechanisms governing the joint dynamics are independent
of the complexity of a particular robotic system. Depending on the gear and bearing structure, an optimal level of modeling detail can be determined using a top-down approach. Starting from the simplest rigid case with a single actuator damping, additional compliant and frictional elements can be included hierarchically in order to particularize the case-specific system behavior.
In the following, we consider the single modeling steps required to achieve an adequate description of each subsystem included in the topology of a nonlinear elastic joint. The joint actuator can be described by the motion equation
m q̈ + f(q̇) + τ = u , (4)
in which the input torque is assumed to be linear in the regulated motor current i. The fast current control loops of common servo motors operate at sampling rates of 10 kHz or more. Hence, the transient behavior of the current control loop can safely be neglected in view of the time constants of the mechanical system part, which amount to several tens up to a few hundred milliseconds. With an appropriate current regulation, an electrical servo motor can be considered as a nearly reactionless power source which provides the joint actuator with the driving input torque u(t) ≅ k_m i(t).
The friction f(·) acting in the bearings is largely load-independent. However, since a particular joint actuator can be mounted in a robotic manipulator with multiple degrees of freedom (DOF), its frictional characteristics can vary depending on the actual robot configuration. In particular, the orientation of the supported motor shaft relative to the gravity field can influence the breakaway and sliding friction properties to a certain degree. Apart from the dependency on the robot configuration, the most significant frictional variations are, as is well known, due to thermal effects. Both the external environment temperature and the internal temperature of the contacting elements play a decisive role, the latter usually increasing with operating time and intensity. However, the thermal frictional effects can be attributed to a slow time-variant process and handled by a corresponding adaptation of the model parameters. For reasons of clarity, an explicit temperature dependency is omitted here, just as in most known approaches to modeling robot dynamics. In the following, we also assume that no significant eccentricities are present in the system, so that almost no periodic torque ripples occur over a shaft revolution. Otherwise, the harmonic disturbances partially attributed to the position-dependent friction can be efficiently estimated and compensated as proposed e.g. by De Wit & Praly (2000).
The complexity of the obtained actuator model can differ in the number of free parameters to be identified, depending mostly on the selected mapping of the friction behavior. The lumped
moment of inertia can easily be estimated with the load decoupled, using large-step and/or free run-out experiments. In doing so, however, a linear damping has to be assumed. Since the motion is essentially damped by nonlinear friction, it appears more reasonable to identify the actuator inertia together with the corresponding friction parameters. For this purpose, more elaborate and specially designed experiments, either in the time or frequency domain, can be required (see e.g. Ruderman & Bertram (2011a)). Often it is advantageous to identify the actuator friction together with the friction acting in the gear transmission. The friction effects in the actuator and gear assembly are strongly coupled with each other, since no significant elasticities appear between them. When identifying the dynamic friction one has to keep in mind that the captured torque value, mostly computed from the measured motor current, represents a persistent interplay between the system inertia and the frictional damping.
An idealized rigid gear with a transmission ratio N provides the angular motion reduction
θ = q / N (5)
and the corresponding torque gain
T = N τ . (6)
Due to the gear teeth meshing, disturbing torsional compliance and backlash can occur during loaded motion. Assuming rotationally symmetric gear mechanisms, i.e. excluding any rotary cam structures, the cumulative effects of the gear teeth meshing can be
represented as shown schematically in Fig. 3. Here in (a), the angular relative displacement
δ = N q − θ (7)
is additionally subject to a backlash. The latter can, however, be neglected when the play size stays below the resolution of the q and θ measurements. Case (b) then represents a purely elastic deflection of the teeth meshing, where the corresponding compliance is not necessarily linear. In fact, it is rather expected that the teeth interaction exhibits a hardening stiffness property. Due to internal frictional mechanisms within the teeth engagement area, the overall torsional joint compliance can behave piecewise as elasto-plastic and thus give rise to substantial hysteresis effects. The backlash, even when marginal, is coupled with internal teeth friction and thus provides a damped end-stop motion. Due to the mutual interaction between the mentioned nonlinear phenomena, the resulting hysteresis is hardly decomposable into proper frictional, structural and backlash elements. Hence, it must rather be considered as a compound input-output nonlinearity, while keeping in mind the nature of its internal mechanisms.
The overall hysteresis torque can be considered as a general nonlinear function
T(t) = h( δ(t), δ(0) ) (8)
of the relative displacement, subject to the initial condition δ(0). The latter is poorly known when the absolute angular position is captured neither on the input nor on the output side of the joint, which is the most typical case in robotic practice. In this regard, it can be necessary to first saturate the transmission load in order to determine a proper hysteresis state, which exhibits a well-known memory effect. Here, the hysteresis memory manifests itself in the transmitted torque value, which depends not only on the current relative displacement between the gear input and output but equally on the history of the previous values. Thus, a dynamic hysteresis map has to replace the static stiffness characteristic curve of the gear transmission. The latter nevertheless remains a suitable approximation, sufficient for numerous practical applications. Importantly, the hysteretic torque transmission includes an inherent damping which is characterized by the energy dissipation over a closed load-release cycle. The enclosed hysteresis loop area provides a measure of the corresponding structural losses, where the damping ratio is both amplitude- and frequency-dependent. Thus, the structural hysteresis damping differs from linear viscous damping and can lead to limit cycles and multiple equilibrium states.
The following numerical example demonstrates the damping characteristics of the joint transmission in a more illustrative way. A single mass with one DOF is connected to the ground, once using a linear spring with linear viscous damping, and once using a nonlinear hysteretic spring. The linear stiffness is selected so as to coincide with the average value of the nonlinear stiffness map included in the hysteresis model. When the system is excited by a Dirac impulse (at time t = 1), the eigenmotion behaves as a damped oscillation, shown in Fig. 4. The linear case (a) provides a typical second-order oscillatory response which dies out towards the zero equilibrium state with an exponential envelope. In case (b), the nonlinear hysteretic spring is more strongly damped at the beginning, but does not reach a final equilibrium state. Instead, the motion enters a stable limit cycle below a certain amplitude. The latter indicates the cancelation of hysteresis close to zero displacement, which is however case-specific and can vary depending
Fig. 4. Damped response of the excited eigenmotion when applying linear and nonlinear
hysteretic spring
on the selected hysteresis map. From a physical point of view, an autonomous mechanical system will surely converge to a static idle state due to additional damping mechanisms not necessarily captured by the hysteresis. Theoretically, however, the hysteresis damping does not guarantee the full decay of all eigenmotions. An interesting and more realistic case (c) represents a combination of linear and nonlinear hysteretic damping. Due to the additional velocity-dependent damping term, the eigenmotion does not stay inside a limit cycle and converges towards a non-zero idle state, thus providing the system response with memory. The characteristic non-zero equilibrium states represent the forces remaining in the gear transmission after the input and output loads drop out.
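A minimal sketch of case (c) is given below: a unit mass on a hysteretic spring with a small additional viscous damper. A Bouc-Wen map is used here as one possible hysteresis model (a map of this type is introduced in Eq. (17) below); all parameter values are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

m, c, k, mu = 1.0, 0.05, 1.0, 0.5        # mass, viscous damping, stiffness, elastic share
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0   # assumed Bouc-Wen shape parameters

def rhs(t, s):
    q, v, x = s
    F = mu * k * q + (1.0 - mu) * k * x  # elastic plus plastic restoring force
    xdot = v * (A - (beta * np.sign(x * v) + gamma) * abs(x) ** n)
    return [v, -(c * v + F) / m, xdot]

# Impulse-like excitation: nonzero initial velocity, as in Fig. 4.
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 1.0, 0.0], max_step=0.01)
print("idle-state displacement:", sol.y[0, -1])   # may settle away from zero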
(Fig. 5 schematic: joint load modeled (a) as a single concentrated mass M with stiffness k and damping d, and (b) as distributed masses L1 … Ln with stiffness-damper pairs k1 … kn, d1 … dn.)
For reasons of clarity, let us analyze here the case of a concentrated mass, shown in Fig. 5 (a). It is important to note that the load model with distributed masses shown in Fig. 5 (b) can be more accurate and capture more complex vibration modes. At the same time, the identification of the distributed mass-stiffness-damper parameters can rapidly exceed the technically available measurements of the system. Besides, the achievable gain in accuracy of capturing the vibration-related reactive load is not necessarily high. Recall that the primary objective here is to determine the reactive torque fed back from the oscillating elastic link and thus to improve the prediction of θ. An accurate computation of the link end-position due to link elasticities is surely also an important task in robotics. However, this falls beyond the scope of the current work, whose aim is to describe the robot joint transmission with
nonlinearities. Introducing the state vector x = (θ, θ̇, θ_L, θ̇_L)^T and the vector of nonlinearities h = (sin(θ), sgn(θ̇), 0, 0)^T, one obtains the state-space model of the concentrated joint load given by

$$\dot{x} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -k/M & -(d_f+d)/M & k/M & d/M \\ 0 & 0 & 0 & 1 \\ k/L & d/L & -k/L & -d/L \end{pmatrix} x + \begin{pmatrix} 0 & 0 & 0 & 0 \\ -lg/M & -F_c/M & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} h + \begin{pmatrix} 0 \\ N/M \\ 0 \\ 0 \end{pmatrix} \tau \,. \qquad (10)$$
The 4×4 matrices in Eq. (10) constitute the system matrix of the linear part and the coupling matrix of the nonlinearities, respectively. For all physically reasonable parameter values the system proves to be controllable, while the input torque τ is always lagged owing to the preceding dynamics of the joint actuator and gear transmission.
The following numerical example demonstrates the disturbing link vibrations during a torque step whose unavoidable lag is approximated by a first-order low-pass filter with a cut-off frequency of 1000 rad/s. The parameter values are selected so as to provide the inertia ratio M/L = 0.1 and the damping ratio d_f/d = 10000. Further, two relative stiffness values, 1k and 10k, are considered. Since the gravity term is not directly involved in the vibrational characteristics, it can be omitted at this stage. In the same manner, the Coulomb friction nonlinearity, which constitutes a constant torque disturbance during a unidirectional motion, is also excluded from the computation. The response of the simulated, thus far linear, joint load is shown in Fig. 6. The two stiffness values, differing by an order of magnitude, lead to the oscillating link deflection depicted in Fig. 6 (b) and (c). Apart from the differing eigenfrequencies, the principal shape of the envelope appears quite similar for both stiffness values.
Fig. 6. Response of the joint load with elasticities to the lagged step of the input torque
Most crucial, however, is the fact that the amplitudes differ by the same order of magnitude, so that a lower stiffness provides a definitely higher level of link vibrations. Considering the corresponding velocity response, once in the joint output and once in the link deflection coordinates, depicted in Fig. 6 (a), the impact of the link vibrations becomes evident. Here, the output joint velocity θ̇ is subject to substantial oscillations at the beginning of the relative motion. As a consequence, the oscillating behavior of the joint load is propagated back to the joint transmission, thus increasing the complexity of the overall joint behavior.
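This numerical example can be reproduced from the linear part of Eq. (10). In the sketch below, the gravity and Coulomb terms are omitted as in the text, the ratios M/L = 0.1 and d_f/d = 10000 follow the text, and the absolute parameter values are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

M, L = 0.1, 1.0                            # inertia ratio M/L = 0.1
d_f, d = 10.0, 1.0e-3                      # damping ratio d_f/d = 10000
k, N, tau0, wc = 100.0, 1.0, 1.0, 1000.0   # stiffness, gear ratio, step, lag cut-off

A = np.array([[0.0,     1.0,            0.0,    0.0],
              [-k / M, -(d_f + d) / M,  k / M,  d / M],
              [0.0,     0.0,            0.0,    1.0],
              [k / L,   d / L,         -k / L, -d / L]])
b = np.array([0.0, N / M, 0.0, 0.0])

def rhs(t, s):
    x, tau = s[:4], s[4]
    # First-order lag of the torque step approximates the preceding joint dynamics.
    return np.concatenate((A @ x + b * tau, [wc * (tau0 - tau)]))

sol = solve_ivp(rhs, (0.0, 0.6), np.zeros(5), max_step=1.0e-4)
theta_dot, theta_L_dot = sol.y[1], sol.y[3]   # joint output and link velocities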
Apart from the hardening stiffness and hysteresis properties, the rotary spring can also include the backlash nonlinearities mentioned before in Section 2.2. Note that the degree of complexity of the nonlinearities involved in the transmitting rotary spring can vary. This depends on the required model accuracy as well as on the significance of each particular phenomenon observable when measuring the joint behavior.
Since the hysteresis map captures the frequency-independent structural damping, an additional linear damping term D is connected in parallel to the rotary spring. The introduced linear damping comprises all viscous (velocity-dependent) dissipative effects arising from the internal teeth and slider interactions mentioned before in Section 2.2. Here, one must be aware that the velocity-dependent damping map can likewise possess nonlinear characteristics. However, for reasons of clarity and more straightforward identification, a simple linear damping is assumed in the following. As shown previously in Fig. 4 (c), the superposition of both damping mechanisms provides the system dynamics with multiple non-zero equilibrium states. Apart from the curvature and area of the underlying hysteresis, which determine the dissipative map, the magnitude of the linear damping determines how fast the oscillatory joint motion settles.
The moving output joint part is necessarily supported by some rotary-type bearing, so that the output friction torque F has to be included. A proper decomposition of the input and output joint friction appears as one of the most challenging identification tasks when modeling a joint with elasticities. When no sufficient input and output torque measurements are available, certain assumptions about the friction distribution across the joint transmission have to be made. One feasible way to decompose a coupled joint friction is to first identify the overall friction behavior under conditions in which the impact of the joint elasticities is either negligible or constitutes a constant bias term. Here, one can think of drive experiments with a reduced (decoupled) or stationary handled joint load. Steady-state velocity drive experiments can constitute the basis for such an identification procedure. Once the overall joint friction is identified, it can be decomposed by introducing weighting functions and finding the appropriate weighting rates within the overall joint dynamics. However, this heuristic method requires good prior knowledge of the acting joint elasticities; that is, the nonlinear spring and damping models have to be identified beforehand.
A large body of work has been devoted to friction effects in robotic joints and machine tools (see Armstrong-Helouvry et al. (1994), Bona & Indri (2005), Al-Bender & Swevers (2008) for an overview).
Generally, the existing approaches can be subdivided into static and dynamic mappings of kinetic friction. Note that in terms of kinetics, static friction signifies the friction forces acting during a relative motion at constant or slowly changing velocities, and not the sticking forces of stationary bodies. Here, the classical constant Coulomb friction and the linear velocity-dependent viscous friction can be used to describe the frictional forces between two contacting surfaces which slide upon each other. Moreover, the well-known Stribeck effect can be included to describe the nonlinear transition between the break-away friction at zero velocity and the linear viscous friction in a more accurate way. Most dynamic friction models capture both the pre-sliding and sliding regimes of friction and provide a smooth transition through zero velocity, without the discontinuities typical of static friction models. The most advanced dynamic friction models are also capable of describing such significant frictional phenomena as position-dependent pre-sliding hysteresis and frictional lag in the transient behavior. An extensive comparative analysis of several dynamic friction models, among them the Dahl, LuGre, Leuven, and GMS models, can be found in Al-Bender & Swevers (2008) and Armstrong-Helouvry & Chen (2008).
Apart from the pre-sliding frictional mechanisms, the main differences between static and dynamic friction modeling can be explained when visualized in the velocity-force (w, F) coordinates as shown in Fig. 8. The static friction map provides a discontinuity at zero velocity, which can lead to significant observation errors and control problems when the system operates with frequent motion reversals. However, when steady-state operation modes are predominant, the static map can be accurate enough to capture the main frictional phenomena. Further, it is easy to recognize that the closer the relative velocity is to zero, the larger is the impact of the nonlinear Stribeck effect. The peak value at zero velocity indicates the maximal force, also called the break-away force, required to bring the system from sticking to the sliding state of continuous motion. Comparing the Stribeck static map with the dynamic
Fig. 8. Static (a) and dynamic (b) friction map in velocity-force coordinates
one shown in Fig. 8 (b), it can be seen that the principal shape of the velocity-force curves remains the same. However, the transient frictional behavior when passing through zero velocity differs considerably depending on the input frequency ω. Here, the overall friction response to a cyclic input velocity with increasing frequency is shown. The higher the frequency at which the motion direction changes, the wider the stretch of the resulting frictional hysteresis. This phenomenon is also known as frictional lag (see Al-Bender & Swevers (2008)), since the friction force lags behind the changing relative velocity. Roughly speaking, the frictional lag occurs due to the time required to modify either the hydrodynamic lubricant properties during viscous sliding, or the adhesion contact properties during dry sliding. With increasing relative velocity after the transient response, the friction
force is attracted towards the static characteristic curve, which represents the steady-state friction behavior. The dynamic friction response depicted in Fig. 8 (b) is obtained using the 2SEP (two-state with elasto-plasticity) model, which is briefly described in the rest of this section.
The 2SEP dynamic friction model (Ruderman & Bertram (2010), Ruderman & Bertram (2011a)) maintains the well-established properties of dynamic friction, which can be summarized as i) the Stribeck effect at steady-state velocity-dependent sliding, ii) position-dependent pre-sliding hysteresis, and iii) friction lag in the transient response. The model describes the dynamic friction behavior using a linear combination of the pre-sliding and transient friction states, which converges towards the velocity-dependent steady state. One independent and one dependent state variable are intuitively related to the frictional phenomena of pre-sliding hysteresis and friction lag. The weighted superposition of both yields the total friction force as
F = A z1 + B | w | z2 . (11)
Note that the instantaneous friction value is determined simultaneously by the relative displacement and the relative velocity, without switching functions and/or thresholds to separate the pre-sliding and sliding regimes. The pre-sliding friction behavior, predominantly captured by z1, is described using the Modified Maxwell-Slip (MMS) approximation (Ruderman & Bertram (2011b)). The MMS structure can be imagined as similar to a single Maxwell-slip element, but with the pivotal difference of how the connecting spring is represented. In the MMS model, the irreversible nonlinear spring exhibits a saturating elasto-plastic deflection until a motion reversal occurs. Assuming an exponentially decreasing stiffness with increasing relative displacement after a reversal, the overall hysteretic state converges towards a constant tangential force. It is assumed that the deflection of the elasto-plastic asperity contacts saturates at the Coulomb friction level, so that the weighting factor A can often be set to one. A saturated friction state characterizes the plastic sliding at which the deflected asperities slip upon each other and provide an approximately constant tangential force at non-zero velocity. The state dynamics is defined by the first-order nonlinear differential equation
ż₁ = |Ω| w K exp(−K |q_r|) , (12)
with
Ω = sgn(w) F_c − F , (13)
evaluated when the velocity sign changes. The average stiffness capacity Ω of the interacting asperities memorizes the last motion reversal, at which the asperity junctions are released and reloaded again in the opposite direction. The integrated relative displacement q_r is reset to zero whenever the motion direction changes. Only two parameters, the Coulomb friction F_c and the initial stiffness K, determine the overall hysteresis map of the pre-sliding friction.
The transient friction state
ż₂ = ( S(w) − F ) / |S(w)| (14)
behaves as a first-order time-delay element that transfers the velocity-dependent dynamic friction and attracts it towards the steady state S(w) = s(w) + σ w. The factor σ constitutes the linear viscous friction term, and the nonlinear characteristic curve
s(w) = sgn(w) [ F_c + (F_s − F_c) exp( −|w/V_s|^δ ) ] (15)
describes the well-known Stribeck effect. The velocity-dependent Stribeck map is upper-bounded by the static friction (also called stiction) force F_s and lower-bounded by the Coulomb friction force. The factors V_s and δ denote the Stribeck velocity and the Stribeck shape factor, respectively. In Eq. (11), the attraction gain B determines how fast the overall friction force converges towards the steady state. The attraction gain is additionally scaled by the velocity magnitude. On the one hand, this is done in order to reduce the impact of the transient behavior in the vicinity of zero velocity, where the pre-sliding frictional mechanisms predominate. On the other hand, the velocity-dependent attraction gain ensures the steady-state convergence as |w| increases.
Overall, seven free parameters are required to describe the dynamic friction using the 2SEP model, five of which are already needed for the standard Stribeck curve. It is important to note that other dynamic friction models which capture the basic frictional phenomena can be applied in an equal manner when describing the overall joint behavior. The most significant aspects in the choice of a particular friction model are the ease of implementation, generality, and practicability in terms of the identification to be performed under the circumstances of a real operation mode.
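As an illustration, the 2SEP equations (11)-(15) can be integrated with a simple fixed-step scheme for a cyclic velocity input, which reproduces the qualitative frictional-lag behavior of Fig. 8 (b). The following is a rough sketch: all parameter values are illustrative assumptions, and the discrete handling of motion reversals merely follows the verbal description above.

import numpy as np

Fc, Fs, Vs, delta_s = 1.0, 1.5, 0.1, 1.0   # assumed Stribeck parameters
Kp, A, B, sigma = 100.0, 1.0, 10.0, 0.1    # assumed stiffness, gains, viscosity

def S(w):
    # Steady-state map, Eqs. (14)-(15)
    return np.sign(w) * (Fc + (Fs - Fc) * np.exp(-abs(w / Vs) ** delta_s)) + sigma * w

dt = 1.0e-4
t = np.arange(0.0, 4.0, dt)
w = 0.5 * np.sin(2.0 * np.pi * t)          # cyclic relative velocity input
z1, z2, qr, F, Omega = 0.0, 0.0, 0.0, 0.0, Fc
F_out = np.empty_like(t)
for i in range(len(t)):
    wi = w[i]
    if i > 0 and np.sign(wi) != np.sign(w[i - 1]):
        Omega = np.sign(wi) * Fc - F       # Eq. (13), evaluated at a motion reversal
        qr = 0.0                           # reset integrated displacement
    qr += wi * dt
    z1 += abs(Omega) * wi * Kp * np.exp(-Kp * abs(qr)) * dt   # Eq. (12)
    z2 += (S(wi) - F) / max(abs(S(wi)), 1e-9) * dt            # Eq. (14)
    F = A * z1 + B * abs(wi) * z2                             # Eq. (11)
    F_out[i] = F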
The restoring torque arising in a hysteretic spring is subdivided into an elastic and a plastic (irreversible) part, related to each other by a weighting factor. When using the polynomial coefficients k_1, …, k_i to describe the characteristic stiffness curve, the total restoring torque can be expressed as the superposition
Γ(δ, x) = Σ_i [ μ k_i sgn(δ) |δ|^i + (1 − μ) k_i sgn(x) |x|^i ] . (16)
Here, the weighting coefficient 0 < μ < 1 provides the transition between a purely elastic (μ = 1) and a purely hysteretic, further also called plastic, (μ = 0) deflection. The dynamic state variable x, which captures the elasto-plastic deflection and is therefore responsible for the arising hysteresis, is described by a Bouc-Wen-type differential equation of the form
ẋ = δ̇ ( A − [ β sgn(x δ̇) + γ ] |x|^n ) . (17)
The amplitude and shape of the hysteresis are determined by the parameters A, β, and γ. The factor n ≥ 1 provides the smoothness of the transition between an elastic and a plastic response.
Fig. 9. Hysteretic restoring torque for purely elastic (a), elasto-plastic (b), and plastic (c) torsional deflection
A more extensive parameter analysis of Bouc-Wen-like differential hysteresis models can be found in Ma et al. (2004). In Fig. 9, an example of the restoring
torque over a closed load-release cycle is shown in dependence on the factor μ, with the remaining model parameters kept at the same values. It is evident that no hysteresis losses occur for μ = 1; this case, represented in Fig. 9 (a), provides a simple nonlinear characteristic stiffness curve with hardening properties. The larger the plastic contribution to the overall restoring torque, the larger the hysteresis area in Fig. 9 (b) and (c), and the larger the corresponding structural damping acting in the joint transmission.
With respect to the joint structure derived in Section 3.1, the overall transmitted joint torque is given by
T = Γ(δ) + D δ̇ . (18)
Note that the transmitted joint torque already includes the nominal gear ratio N captured by δ. When the nonlinear function Γ(·) is replaced by a linear spring described by Kδ, a simple joint transmission model with linear stiffness and damping is recovered. However, when accounting for hysteresis motion losses, the use of a hysteresis stiffness map becomes indispensable and has to be included in the overall joint dynamics.
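A brief sketch of Eqs. (16)-(18) is given below, computing the transmitted torque over closed load-release cycles for μ = 1, 0.5 and 0, analogous to Fig. 9. The cubic stiffness polynomial and all coefficients are illustrative assumptions.

import numpy as np

k1, k3 = 1.0, 5.0                            # assumed stiffness polynomial
A, beta, gamma, n, D = 1.0, 2.0, 2.0, 1.0, 0.05

def transmitted_torque(mu, cycles=2, steps=4000):
    t = np.linspace(0.0, cycles * 2.0 * np.pi, steps)
    delta = np.sin(t)                        # torsion over load-release cycles
    ddelta = np.gradient(delta, t)           # torsion rate for the D term
    x, T = 0.0, np.empty(steps)
    for i in range(steps):
        dd = delta[i] - delta[i - 1] if i > 0 else 0.0
        # Bouc-Wen state, Eq. (17)
        x += dd * (A - (beta * np.sign(x * dd) + gamma) * abs(x) ** n)
        # Restoring torque, Eq. (16), plus the viscous term of Eq. (18)
        Gamma = sum(k * (mu * np.sign(delta[i]) * abs(delta[i]) ** p
                         + (1.0 - mu) * np.sign(x) * abs(x) ** p)
                    for k, p in ((k1, 1), (k3, 3)))
        T[i] = Gamma + D * ddelta[i]
    return delta, T

for mu in (1.0, 0.5, 0.0):                   # elastic, elasto-plastic, plastic
    torsion, torque = transmitted_torque(mu)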
(Figure: experimental joint setup consisting of a BLDC motor, gear unit, and joint load, together with the measured FRF amplitudes W(jω)/U(jω) for free, bounded prestressed, and bounded relaxed motion, and W(jω)/T(jω) for free motion.)
The prestressed FRF exhibits a slight resonance increase due to the raised joint elasticity compared with the bounded relaxed and free FRFs. At the same time, the high structural and frictional damping impedes the occurrence of significant resonances which could be related to the joint elasticities. The transfer from the joint torque T(jω) to the angular velocity W(jω) illustrates the backward propagation of the reactive restoring torque coming from the joint load. The torque excitation is realized by an external shock applied mechanically to the load part of the joint. The W(jω)/T(jω) FRF coincides qualitatively with the W(jω)/U(jω) FRF of free motion, despite a high level of process and measurement noise.
Further, let us examine the identification of the joint actuator, including the nonlinear friction and the total inertia of the motor and gear assembly. Using the available motor current and joint torque as inputs and the angular actuator velocity as output, the identification problem can easily be solved in the least-squares sense. The system is excited by a down-chirp signal covering the frequency range 30–1 Hz. The measured and identified velocity responses shown in Fig. 12 exhibit some resonant and anti-resonant phases related to the overall system dynamics.
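The following sketch illustrates such a least-squares identification under the simplifying assumption of a Coulomb plus viscous friction structure; the function name and the torque constant k_m are placeholders for illustration.

import numpy as np

def identify_actuator(i_motor, tau_joint, w, dt, k_m=1.0):
    """Fit m, Fc, sigma in  m*dw/dt + Fc*sgn(w) + sigma*w = k_m*i - tau  (cf. Eq. (4))."""
    dw = np.gradient(w, dt)                      # numerical acceleration
    Phi = np.column_stack((dw, np.sign(w), w))   # regressor matrix
    y = k_m * i_motor - tau_joint                # right-hand side from measurements
    params, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return params                                # [inertia, Coulomb, viscous]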
Fig. 12. Measured and identified velocity response to the down-chirp excitation
In a similar manner, the nonlinear joint transmission can be identified using the joint torque and torsion measurements obtained from the same experiment. The time series of the measured and identified joint torque is depicted in Fig. 13 (a). It is easy to recognize that both curves coincide fairly well with each other, thus proving the transmission model capable of describing the dynamic joint behavior. Note that the visible discrepancies occur rather at the low frequencies, at t > 19 s. Here, the joint output vibrations originating from the non-smoothly moving load part influence the torque and angular measurements in a non-congruent way. The corresponding torsion-torque hysteresis depicted in Fig. 13 (b) likewise confirms good agreement between the measured and modeled responses, and this for a large set of nested hysteresis loops.
Fig. 13. Measured and identified joint torque, (a) time series, (b) torsion-torque hysteresis
Finally, the diagrams provided in Fig. 14 visualize some identification results related to the joint load. The gravity term can be determined quite simply using quasi-static motion experiments. The joint link is driven slowly across the overall operation space using the lowest possible constant input torque. The obtained measurements of the joint torque and output angular position are used to fit the gravity term, as shown in Fig. 14 (a). The measured and predicted velocity responses of the identified joint load are shown in Fig. 14 (b) for the applied step excitation. The peak velocity of about 60 deg/s lies in the upper operation range of the experimental joint setup. A substantial velocity decrease occurs due to the increasing impact of gravity. Note that the depicted (measured) joint torque constitutes the total torque balance behind the gear, affected simultaneously by the transmitted actuator torque and the reactive torque of the joint load. A clearly visible oscillation pattern indicates the impact of the output joint shaft and link vibrations.
Fig. 14. Measured and identified gravity torque (a), measured and predicted velocity
response of the joint load together with the measured applied and transmitted torque (b)
5. Conclusions
The presented methodology aims at analyzing and modeling the behavior of elastic robot joints with nonlinearities such as hysteresis and friction, which affect the stiffness and damping characteristics. Independent of whether the joint is considered rigid or elastic, a general topology of revolute robot joints has been presented as a starting point for a subsequent, more detailed system modeling. The defined forward and feedback paths allow the robot joint to be decomposed into closed physical subsystems with their own eigenbehavior. This permits the entire joint to be cut free, and its subsystems to be analyzed and identified stand-alone, under the condition of fully or partially measurable states. For a large class of rotary joint transmissions, a sophisticated but nevertheless manageable model has been proposed. In addition to numerical examples, an experimental case study has been presented to validate the proposed methods.
6. References
Al-Bender, F. & Swevers, J. (2008). Characterization of friction force dynamics, IEEE Control
Systems Magazine 28(6): 64–81.
Albu-Schaffer, A., Eiberger, O., Grebenstein, M., Haddadin, S., Ott, C., Wimbock, T., Wolf, S.
& Hirzinger, G. (2008). Soft robotics, IEEE Robotics Automation Magazine 15(3): 20–30.
Armstrong-Helouvry, B. & Chen, Q. (2008). The Z-properties chart, IEEE Control Systems
Magazine 28(5): 79–89.
Wen, Y. K. (1976). Method for random vibration of hysteretic systems, ASCE Journal of the
Engineering Mechanics Division 102(2): 249–263.
Zollo, L., Siciliano, B., Luca, A. D., Guglielmelli, E. & Dario, P. (2005). Compliance control
for an anthropomorphic robot with elastic joints: Theory and experiments, Journal of
Dynamic Systems, Measurement, and Control 127(3): 321–328.
16
Gravity-Independent Locomotion: Dynamics and Position-Based Control of Robots on Asteroid Surfaces
1. Introduction
In recent years, the scientific community has had an increased interest in exploring
the asteroids of the solar system (JAXA/ISAS, 2003; JHU/APL, 1996; NASA/JPL, 2007).
Technological advances have enabled mankind for the first time to take a closer look at these
small solar system objects through sensors and instruments of robotic deep space probes.
However, most of these space probe missions have focused on the reconnaissance of the
asteroids’ surfaces and their compositional analysis from a distance. Little attention has been
given to locomotion on their surfaces with a mobile robotic system, due to the challenging
gravity conditions found in these small solar system bodies.
In small bodies like asteroids, the gravitational fields are substantially weaker than those of
Earth or Mars, therefore the likelihood of a robot’s unintentional collision with the surface
while attempting a movement is substantially higher. In one of the latest missions, the
Japanese Hayabusa spacecraft carried onboard a small robot named MINERVA (Yoshimitsu
et al., 2001) to be deployed and used to explore the asteroid surface. The robot was designed
with a single reaction wheel, located inside of it, to produce the necessary inertial reaction to move. However, with this system the final location of the robot once the motion is damped out is very difficult to predict or control. Consequently, in order to maximize the scientific return from any given mission on an asteroid's surface, future missions must have the ability to conduct stable mobility and accurate positioning on rough terrain.
In the robotics field, limbed locomotion is broadly recognized as superior in its capability to
traverse terrains with irregularities such as obstacles, cliffs and slants. Exotic types of wheeled
rovers (Bares et al., 1999; Wilcox & Jones, 2000) can only drive over obstacles of heights that
are at best a fraction of the vehicle’s body length. Thus, some terrains are not accessible to
wheeled vehicles. Conversely, legged or limbed locomotion makes it possible to provoke only minimal reactions on the asteroid surface, reactions which could otherwise push the robot with sufficient force to reach escape velocity and drift into space. It also facilitates the achievement of desired goal configurations when dealing with new complex situations, ensuring that the robot's behavior does not deviate from a stable condition.
In this chapter, the focus is on gravity-independent locomotion approaches, technologies and challenges of robotic mobility on asteroids. Recommendations and methods to perform stable mobility on asteroid surfaces are also discussed.
Fig. 1. Mosaic of Eros ©Applied Physics Lab/JHU and Asteroid 25143 Itokawa ©JAXA/ISAS
2. Asteroid exploration
Orbiting between 2.1 and 3.2 AU¹ from the Sun, primitive asteroids in the main asteroid belt represent key bodies for research on the origin and evolution of the early planetary system. Asteroids could provide clues about the birth and growth of our planetary system, but without any in-situ observation we are not able to link measurements of asteroid material with a corresponding catalog of meteorite groups on Earth. The in-situ study of asteroids can lead to important scientific findings in the effort to map the main asteroid belt. Mapping the belt by spectral classes, combined with knowledge of which regions on Earth the meteorites have landed in, can provide key clues about the origin and evolution of our solar system, including the geological history of our planet Earth (Fujiwara et al., 2006).
Asteroids' physical characteristics provide a very hostile environment distinguished by the absence of (almost any) gravity. The effects of the microgravity environment can be approximated for convenience as being on the order of 10⁻⁶ g (Scheeres, 2004) (where g is the acceleration due to gravity on Earth). In such an environment, objects basically do not fall but remain orbiting, unless they reach the low escape velocity of the asteroid, on the order of 20 cm/s (Scheeres, 2004), as in the case of the asteroid 25143 Itokawa (Fig. 1, right). To attain stable mobility on these bodies, it is critical to consider the interaction forces between a robot and the asteroid's surface in such microgravity environments.
Relatively little attention from planetary scientists and planetary robotics engineers has been
focused on surface mobility on asteroids. As a result, there exists some risk that premature
conclusions about the feasibility of stable mobility on asteroid surfaces may be drawn without
thorough consideration of all possible alternatives. However, it is clear that in order to increase
1 AU stands for astronomical unit and is based on the mean distance from Earth to the Sun, 1.5×108 [km].
any scientific return from a mission operating on an asteroid, movement on the surface would
require a closer look at stability control against the forces interacting between bodies in a
microgravity environment.
The following section presents recent developments in robotics which may pose the most
feasible solutions to asteroid surface exploration challenges.
3. Mobility in microgravity
Weak gravitational fields (micro-g to milli-g) characteristic of asteroids make it difficult to
achieve normal forces usually required for stable surface locomotion. Various approaches to
mobility in weak gravity domains of asteroids are discussed below.
Another hopping vehicle intended for asteroids is the Asteroid Surface Probe (Cottingham et
al., 2009), an 8 kg battery-powered (100 hours) spherical body of 30 cm diameter that uses
thrusters to hop. When stationary, the sphere opens up using 3 petals to expose a science
instrument payload. The petals also provide the means to self-right the probe (Ebbets et al.,
2007). Similar, in principle, is a 12 kg thruster-propelled ballistic free-flyer concept designed
by the German Aerospace Center (DLR) as part of a European Space Agency study (Richter,
1998). Other hopping robots proposed for asteroid exploration include a pyramid-shaped, 533 g prototype with four single degree-of-freedom (DOF) flippers at its base to enable hopping, plus a lever arm for self-righting (Yoshida, 1999), and a spherical 1 kg robot with an internal iron ball actuated by electromagnets to induce hopping (Nakamura et al., 2000).
A recent study concluded that wheeled and hopping locomotion modes in low gravity are
comparable in locomotion speed (Kubota et al., 2009). The study considered ideal conditions
(e.g., flat terrain and no loss of contact between wheels and terrain). A similar conclusion
regarding energy consumption was reached in a comparative study of wheeled and hopping
rovers for Mars gravity (Schell et al., 2001).
contact while crawling or climbing, based on examples of technology proposed for space and planetary rovers.
NASA's Jet Propulsion Laboratory developed a four-limbed planetary rover, LEMUR IIb, for which several types of climbing end-effectors have been investigated (Kennedy et al., 2005). The locomotion functionality of the LEMUR-class robots evolved (kinematically) from 6-limbed robots for walking on space structures in orbit to 4-limbed free-climbing on steep terrain. Technologies addressed during the development of free-climbing capabilities for LEMUR IIb (e.g., gripping end-effectors, force control, and stability-based motion planning) should be useful for gravity-independent locomotion on asteroids as well.
Recent work at Tohoku University spearheaded limbed locomotion solutions and prototypes
that are more specific to asteroid surface mobility and explored the feasibility of statically
stable grapple-motion in microgravity (Chacin & Yoshida, 2005). The work is motivated
by a desire to achieve finer and more deterministic control of robot motion and position.
The focus thus far has been a 6-legged rover with 4 DOF per leg and spiked/gripping
end-effectors for grasping the asteroid surface. Motion control complexities are handled using
a behavior-based control approach in addition to bio-inspired central pattern generators for
rhythmic motion and sensor-driven reflexes. Dynamic simulation results showed that static
locomotion is feasible when grasping forces on the surface can be achieved (Chacin & Yoshida,
2005). A 2.5 kg prototype of the Tohoku asteroid rover was built using a piercing spike at the
tip of each limb to serve as momentary anchors in soft regolith or as contact points of a static
grip on hard surfaces when used in combination (Chacin et al., 2006). Crawling gaits feasible
for locomotion in microgravity environments using this system are analyzed in (Chacin &
Yoshida, 2006) for stability (in the sense that they hold the rover to the asteroid surface).
The next section focuses on this robotic system as an example of an asteroid mobility solution
and control approach. It considers issues related to the microgravity environment and its
effect on dynamics of robotic systems on asteroids.
Fig. 2. Robot during early stages of development (top) and its simulated model (bottom).
1995). During the execution of a compliant motion, the trajectory of the end tip is adjusted according to the sensed forces arising from the contacts.
In this context, the problem of specifying a compliant motion command is similar to the problem of planning using pure position control to orient the end tip. Compliant motion control (Klein & Briggs, 1980) allows specification of the forces and velocities to be maintained in the motion frame until a set of termination conditions is detected.
Throughout the following analysis, to simplify the discussion, it is assumed that:
a1 ) The object is a rigid body in contact with a rigid link of the robot;
a2 ) Accurate models of the limbs and object are given;
a3 ) Interference between limbs is ignored;
a4 ) Each limb has only one frictional contact point at a fixed location;
a5 ) The z direction of the contact point is always inward of the surface normal;
a6 ) Contact points are known and the mass of each link of the robot is negligible;
a7 ) Dynamic and static frictional coefficients are not distinguished from each other;
a8 ) The motion is quasi-static² to suppress any dynamic effect.
Assumption a4 allows us to consider only forces at the contact points. In this way, while
executing a compliant command, the robot controller can interpret the sensed forces to
automatically generate the corrective actions needed to comply with the task while preserving
contact during motion.
Static friction occurs when the relative tangential velocity at a contact point is zero; otherwise,
the friction is called dynamic friction. Assumptions a7 and a8 allow us to consider a
“first-order" (or quasi-static) world where forces and velocities are related and also has static
friction but no dynamic friction.
where,
H : inertia matrix of the robot
xb : position/orientation of the base
φ : articulated joint angles
cb , cm : velocity/gravity dependent non-linear terms
Fb : forces/moments directly applied on the base
τ : joint articulated torque
JT : Jacobian matrix
F ex : external forces/moments on the end points.
And the kinematic relationship around the end points is expressed as follows:
ẍ_ex = J_m φ̈ + J̇_m φ̇ + J_b ẍ_b + J̇_b ẋ_b (3)
where J_b and J_m denote the Jacobian of the base (main) body and the Jacobian of a given manipulator (limb), respectively.
The magnitude of the forces is determined by the friction of the contact surface. Let us consider the i-th limb of the robot. Vectors p_ij ∈ R^{3×1}, f_ij ∈ R^{3×1} and τ_ij ∈ R^{3×1} denote the contact position, the contact force and the joint torque vectors, respectively; i and j express the i-th limb and the j-th contact point. Let τ_i^j be the torque due to the contact force f_i^j. The relationship between the contact force and the joint torque is given by
τ_i^j = J_ij^T f_i^j (4)
where J_ij^T denotes the transpose of the Jacobian matrix that maps the contact force into the joint torque. Then, using the principle of superposition for the relationship between τ_i^j and f_i, we have:
τ_i = Σ_{j=1}^{k_i} τ_i^j (5)
τ_i = Σ_{j=1}^{k_i} J_ij^T f_i^j (6)
where k_i is the number of contact points of the i-th limb. Since it is assumed that a limb can only have one contact point, Equation 6 can be rewritten in the following form:
τ_i = J_i^T f_i (7)
where τ_i = [τ_i1^T, … , τ_im^T]^T, f_i = [f_i1^T, … , f_im^T]^T and J_i^T = [J_i1^T, … , J_im^T].
Mathematically, both arms (upper limbs) and legs (lower limbs) are linked multi-body
systems. Forces and moments yielded by the contact with the ground, or planetary surface,
govern the motion of the system.
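As a minimal numerical illustration of Eq. (7), the joint torques of a limb follow from the contact force through the transposed contact Jacobian; the 3×3 Jacobian used below is an arbitrary assumed example.

import numpy as np

def contact_torques(J_i, f_i):
    # tau_i = J_i^T f_i, Eq. (7)
    return J_i.T @ f_i

J = np.array([[0.3, 0.2, 0.1],   # assumed 3x3 contact Jacobian of one limb
              [0.0, 0.1, 0.1],
              [1.0, 0.0, 0.0]])
f = np.array([0.0, 0.0, -2.0])   # contact force pressing into the surface
print(contact_torques(J, f))     # joint torques holding this contact force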
Modeling each limb as a serial chain of elastic joints, the joint torques and angular deflections can be related by
τ = K Δφ (8)
where
K = diag( k_1, … , k_N ) . (9)
The relation between the force f_i and the corresponding spatial deformation Δp is determined from the following relationship:
Δp = J Δφ . (10)
Solving Equation 8 for Δφ,
Δφ = K^{−1} τ . (11)
And substituting into Equation 10,
Δp = J K^{−1} τ . (12)
Finally, from Equation 7 and Equation 12,
Δp = J K^{−1} J^T f_i , (13)
so that the compliance matrix of the limb can be defined as
C = J K^{−1} J^T . (14)
The impact phase can be divided into two stages: compression and restitution. During the
compression stage, the elastic energy is absorbed by the deformation of the contact surfaces
of the impacting bodies. In the restitution stage the elastic energy stored in the compression
stage is released back to the bodies making the relative velocity greater than zero.
The robot can be modeled as a mass-spring system (Fig. 3) with a purely elastic contact at its end tips (Chacin & Yoshida, 2008). As a result, the following relation between the mass, the velocity and the force should hold:
2 m v = F Δt . (15)
Isolating the reaction force, we have:
F = 2 m v / Δt . (16)
Now, considering the time of contact as a function of the mass of the system m and the stiffness coefficient of the limbs K, we have:
Δt = π √(m / K) . (17)
Finally, from Equation 16 and Equation 17,
F_contact = (2/π) √(m K) v (18)
with
F_contact = Σ_{i=1}^{N} f_i (19)
where N is the number of limbs in contact at landing.
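Equations (16)-(18) condense into a one-line estimate of the landing force; in the sketch below the numbers (a 2.5 kg robot, cf. the prototype mentioned above, with an assumed limb stiffness) are for illustration only.

import math

def contact_force(m, K, v):
    # Eq. (17): contact duration of the mass-spring model
    dt = math.pi * math.sqrt(m / K)
    # Eq. (16), equivalent to Eq. (18): F = (2/pi) * sqrt(m*K) * v
    return 2.0 * m * v / dt

# e.g. a 2.5 kg robot with assumed limb stiffness 500 N/m at 2 cm/s approach
print(contact_force(2.5, 500.0, 0.02))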
Fig. 4. Decomposition of the contact forces.
μ > 2 m v / ( Δt tanθ ) . (24)
And substituting Equation 17,
( π μ tanθ / (2m) ) √(m/K) > v . (25)
Equation 25 shows that the considered contact stability will strictly depend on the approach
velocity of the robot.
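Taking Eq. (25) as reconstructed above, the admissible approach velocity can be bounded as in the following sketch; the friction coefficient, contact angle, mass and stiffness values are illustrative assumptions.

import math

def max_approach_velocity(mu, theta, m, K):
    # v must satisfy Eq. (25):  v < pi * mu * tan(theta) * sqrt(m/K) / (2*m)
    return math.pi * mu * math.tan(theta) * math.sqrt(m / K) / (2.0 * m)

# e.g. a 2.5 kg robot, limb stiffness 500 N/m, mu = 0.6, 30 deg contact angle
print(max_approach_velocity(0.6, math.radians(30.0), 2.5, 500.0))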
Quasi-static stability is a more general stability criterion than the one used in the previous discussion. Under this condition, inertia forces are included but the limb dynamics are not separately considered; the masses of the limbs are lumped with that of the body. In the previous argument, quasi-static stability is partly assumed, provided that all the normal components of the contact forces f are positive. Since a contact point cannot support a negative normal force (as shown in Fig. 4), the appearance of a negative force indicates that the given limb will lift off and, since it cannot provide the required moment about the center of mass, the robot will jump in a microgravity environment, like MINERVA (Yoshimitsu et al., 2001).
f_n ≥ 0 (28)
f_t ≤ μ f_n . (29)
The support polygon is the projection of these polyhedrons onto the global coordinate system. The body attitude also needs to be controlled in order for the robot to maintain static balance. Assuming that the robot is well balanced in the lateral plane, the body attitude at any given moment is an important determinant of the acceleration experienced by the center of mass (COM) in the sagittal plane. If the desired value of the acceleration output is known at all times throughout the gait cycle, then a corrective action can be generated which will maintain the robot in a stable state. Provided that the body attitude can be increased or decreased depending on the error signal, and that the desired acceleration in the sagittal plane can be determined for all times throughout the gait cycle, a continuous control system can be implemented.
Ṗ = Σ f_exi + M g (30)
For a limbed robot on the surface, the gravity forces exerted on it can be neglected; the non-linear term in Equation 1 then becomes c_b = Ḣ_b ẋ_b + Ḣ_bm φ̇. Integrating its upper set with respect to time, we obtain the total momentum of the system as:
L = ∫ J_b^T F_ex dt = H_b ẋ_b + H_bm φ̇ . (32)
Going back to Equation 2, and eliminating φ̇, we can obtain the following equation:
L = ( H_b − H_m J_s^{−1} ) ẋ_b + H_m J_s^{−1} ẋ_ex . (33)
In this way, if the system does depart from static stability, then the control system can identify
this condition and bring the robot back to the statically stable condition.
(Flow diagram of the position-based controller: a timeline generator and state selector drive the loop — follow the trajectory; check the contact forces f_i > 0; approximate each contact force by the friction cone basis f_i* = Σ_l λ_i^l υ_i^l with V_i = [υ_i^1, …, υ_i^L] and f_i* = V_i λ_i; adjust x to keep the force vectors bounded within the friction cones; release supporting contacts and apply force at the new contact point; move the body; repeat until the goal position X̄ is achieved, with X̃ = X̄ − x.)
5. Considering the friction cone estimation (Chacin, 2007) and the contact stability conditions shown by Equation 25, determine the robot base motion ẋ_b by
$$\begin{pmatrix} \ddot{x}_b \\ \ddot{\phi} \end{pmatrix} = H^{-1}\left( \begin{pmatrix} F_b \\ \tau \end{pmatrix} + J^T F_{ex} - \begin{pmatrix} c_b \\ c_m \end{pmatrix} \right)$$
6. Calculate the joint velocity of the limbs φ̇ by Equation 2, using ẋ_b and ẋ_ex while considering Equation 29; change the state of the control system. If necessary, adjust x to maintain the force vectors bounded within the friction cones.
7. Adopt the new contact configuration to release the support contacts and apply a permissible contact force at the new contact points. Dynamic exploration can be applied to reduce the surface position/orientation uncertainties. Change the state of the control system.
8. Control the joints along with the solution from step 6 to move the body. Verify whether the goal position X̄ has been reached; if it has not, then repeat.
One difference of this algorithm with respect to conventional ones is the consideration of the momentum of the robot in step 3. Without this step, the obtained joint motion may deviate from the originally planned end point trajectory and thus may not satisfy the stability condition. Conventionally, a feedback control may be employed to correct these errors. But using Equation 33 in step 3, the error can be compensated in advance.
Fig. 7. The experimental setup: robot mounted on a PA10 manipulator (left) and over an
uneven surface (right).
[m] dv/dt = q − [K] Δx − [C] v (34)
where q is the external force and torque applied to the manipulator's end tip, Δx is the displacement of the end tip relative to the reference and v is its velocity. [m] ∈ R⁶ is the virtual mass, [K] ∈ R⁶ is the virtual spring constant and [C] ∈ R⁶ is the damping constant. Equation 34 can be transformed into:
v = (1/[m]) ∫ ( q − [K] Δx − [C] v ) dt (35)
which can be represented as the following finite difference equation:
v_n = (Δt/[m]) ( q_{n−1} − [K] Δx_{n−1} ) + ( [I] − (Δt/[m]) [C] ) v_{n−1} (36)
where Δt is the time step and [I] is the identity matrix. Equation 36 describes the velocity at the current sampling instant based upon values from the previous time step. Based on this equation,
Fig. 8. Composite velocity profile of the manipulator’s end tip.
and knowing the forces and torques as well as the displacement, the arm can be controlled in real time using Equation 36 and Equation 1 to simulate the attached robot and to compensate the input of the F/T sensor on its tip. In short, the control policy uses the relationship between force and velocity to find the position and then calculates a velocity command as input to the system. The behavior of the system can be altered by changing the impedance constants [m], [K] and [C] as defined in Subsec. 4.3 and Subsec. 4.4, or they can be fine-tuned in order to exhibit the desired properties.
³ In reality [C] is chosen to be 0.1 to avoid problems in the computational solution of Equation 36 and to satisfy the condition.
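A scalar sketch of the discrete impedance update of Eq. (36) is given below; the virtual constants and time step are illustrative assumptions rather than the values used in the experiments.

m_v, K_v, C_v, dt = 1.0, 50.0, 10.0, 0.002   # assumed virtual impedance constants

def impedance_step(q, dx_prev, v_prev):
    # One evaluation of Eq. (36): the new end-tip velocity from the measured
    # force/torque q and the previous displacement and velocity.
    return (dt / m_v) * (q - K_v * dx_prev) + (1.0 - (dt / m_v) * C_v) * v_prev

v, dx = 0.0, 0.0
for _ in range(2000):            # constant 1 N push on the end tip
    v = impedance_step(1.0, dx, v)
    dx += v * dt                 # integrate displacement for the next step
print(dx)                        # approaches q / K_v = 0.02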
Fig. 9. Force profile during motion without control feedback.
required in order to achieve the same velocity and position displacement would have increased accordingly. Similar experiments have been carried out for rotational and multi-translational movements, but their description is omitted here, since these results merely illustrate the basic principle behind the selection of proper parameter values.
After achieving a satisfactory response from the control system, ASTRO was attached to the
end tip of the manipulator and was suspended above a table that serves as analog to the
asteroid surface.
⁴ Sandpaper of different grit sizes was used in the experiments to simulate the potential roughness of an asteroid surface.
5 The term geometry is used, when referring to robot control and gait planning, to mean that a robot is
controlled based on fixed relationships between models of the robot and models of the world in which
the robot operates.
Fig. 10. Force profile during motion with control feedback.
The results of this experiment are shown in Fig. 10. It can be noted that the force values are
higher, but more importantly the robot is resting balanced on the limbs in contact with the
surface. With the feedback from the force sensors, the control system was able to determine
an appropriate corrective measure for the robot to take in order to maintain the stability of
the contact while keeping its attitude (expressed in roll-pitch-yaw angles) mostly unchanged
during motion. For the case where the robot is under a known environment and the desired
forces to be exerted on the surface are known in advance, any deviation from this state should
cause an appropriate corrective action to be taken.
5. Challenges
A rich set of challenges are encountered during development and evaluation of prospective
solutions for gravity-independent locomotion on asteroids. The experiments reported here
are indicative of a few, but several additional key challenges deserve early attention by
researchers. One of them is the mechanics of controlled ballistic hopping on rotating asteroids and in the non-uniform gravity fields that arise from their irregular shapes. Bellerose et al
(Bellerose et al., 2008; Bellerose & Scheeres, 2008) modeled the dynamics of hopping vehicles to
enable hops covering designated distances by computing and controlling initial hop velocity.
The model accounts for distance covered by residual bounces as the vehicle comes to rest
(considering surface friction coefficient and restitution). A particularly challenging aspect
to consider is that some asteroid shapes may have surface locations where a vehicle could
stay in equilibrium, thus affecting vehicle dynamics on the surface (Bellerose et al., 2008;
Bellerose & Scheeres, 2008). Conceivably, a hopping rover could be perturbed away from
predicted ballistic trajectories by such equilibria. This can affect exploration objectives by
constraining the total area that a rover can safely or reliably traverse to on an asteroid surface
when stable and unstable equilibrium locations happen to coincide with surface regions of
scientific interest. Purely hopping vehicles, which operate largely at the mercy of small-body physics, may have limited access to such surface regions. Bellerose's model also provides
insight into the effects of non-uniform gravity fields and how centripetal and Coriolis forces
due to asteroid rotation may assist or hinder hop performance (Bellerose & Scheeres, 2008).
Another key challenge is achieving the ability to land after hopping in such a way as to avoid
rebound. Control and robotics techniques can be used to address this challenge. One robot
concept employs a spring and linear actuators with horizontal velocity control to achieve this
(Shimoda et al., 2003), while other research is experimenting with active grappling of the
surface upon landing (Chacin, 2007; Chacin & Yoshida, 2008; 2009). A related challenge, central to gravity-independent locomotion, is maintaining grip or temporary anchoring while controlling forces for closure and compliance. The work presented herein and in (Chacin
Fig. 11. Movement over inclined frictional surface with friction feedback gait control.
& Yoshida, 2009) examines the motion/force control and dynamic modeling germane to
the problem of stable crawling and force closure needed to maintain contact/grip with an
asteroid surface under microgravity conditions. Experiments with ASTRO reveal the utility
of force feedback for maintaining contact during execution of compliant motion. Kennedy
et al (Kennedy et al., 2005) address active force control to achieve anchoring associated with
stable free-climbing motion control. Tactile sensing and related motion planning algorithms
(Bretl et al., 2003) have been implemented on the LEMUR IIb robot.
The low gravity environment and its effect on surface vehicles present a key challenge for
rover localization whether hopping, crawling, or climbing. Determining, updating and
maintaining knowledge of rover position and orientation on an asteroid surface can be
important for recording spatial context for surface science measurements and for certain
mission concepts of operation. Localization approaches for hopping robots have been
proposed with some reliance on range measurements to an orbiting or station-keeping mother
spacecraft (Yoshimitsu et al., 2001) and via use of more general approaches such as particle
filters (Martinez, 2004), Kalman Filters with landmark geo-referencing (Fiorini et al., 2005),
and optical flow as well as visual odometry without continuous terrain feature tracking while
tumbling (So et al., 2008; 2009). During local navigation across the terrain, existing localization
approaches for rolling or walking robots may apply. These may be based on use of extended
Kalman Filters on fused celestial sensor data and optical-flow measurements (Baumgartner et
al., 1998).
Finally, a key challenge is the testing and verification of gravity-independent locomotion
systems to ensure confidence in their technology readiness for asteroid missions. This
is always a challenge for space systems and particularly those intended for operation in
microgravity domains. The testbed described in the previous section and its means of
emulating reduced gravity are representative solutions for addressing the challenge using
relatively affordable technology. Other testbed approaches to emulating reduced gravity in
the laboratory include the use of overhead gantry systems with frictionless air-bearing pulleys
from which to suspend prototype rovers, and the use of prototype rovers on flat air-tables or
mounted on a mobile base with integrated air bearings. Beyond the fundamental feasibility of
controlled surface mobility in low gravity fields of asteroids, additional challenges of high
relevance and importance remain to be addressed by advanced research and technology
development.
6. Conclusions
It was shown that there exists a solution that satisfies the force conditions for any system with friction. The approach works by reacting to the current locations of contact points and estimating the force-closure condition for stable motion; such a mechanism is central to the control structure.
The control methods proposed in this research are useful for improving the operational performance and efficiency of robots capable of position-based controlled motion on an asteroid. They demonstrated that proper knowledge of the force cone interaction with the surface plays a significant role in the development of control procedures that can allow next-generation surface robots to attain proper mobility in a microgravity environment.
7. Acknowledgments
The research reported in this chapter was partly sponsored by the Japanese Ministry of
Education, Culture, Sports, Science and Technology and conducted at the Space Robotics
Laboratory in Tohoku University, Sendai, Japan. Partial support of the JHU/APL Civilian
Space Business Area is acknowledged.
8. References
ATI Industrial Automation (2005). Force Torque Transducer TWE F/T Installation and operation manual. Component data sheet, ATI Industrial Automation. 2005.
Bares, J., Hebert, M., Kanade, T., Krotkov, E., Mitchell, T., Simmons, R. & Whittaker, W. (1999).
Ambler: An Autonomous Rover for Planetary Exploration. IEEE Computer, pp. 18-26,
Vol. 22, No 6, 1999.
Brach, R. M. (1991). Mechanical Impact Dynamics: Rigid Body Collisions. John Wiley & Sons.
1991.
Baumgartner, E.T., Wilcox, B.H., Welch, R.V. & Jones, R.M. (1998). Mobility performance of a
small-body rover. Proceedings 7th International Symposium on Robotics and Applications,
World Automation Congress, Anchorage, Alaska. 1998.
Behar, A., Bekey, G., Friedman, G., & Desai, R. (1997). Sub-kilogram intelligent tele-robots
(SKIT) for asteroid exploration and exploitation. Proceedings SSI/Princeton Conference
on Space Manufacturing, Space Studies Institute, pp. 65-83, May, Princeton, NJ. 1997.
Bellerose, J., Girard, A. & Scheeres, D.J. (2008). Dynamics and control of surface exploration
robots on asteroids. Proceedings 8th International Conference on Cooperative Control and
Optimization, pp. 135-150, January, Gainesville, FL. 2008.
Bellerose, J. & Scheeres, D.J. (2008). Dynamics and control for surface exploration of small
bodies. Proceedings AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Paper
6251, Honolulu, Hawaii, 2008.
Bombardelli, C., Broschart, M. & Menon, C. (2007). Bio-inspired landing and attachment
system for miniaturised surface modules. Proceedings 58th International Astronautical
Congress Hyderabad, India. 2007.
Borenstein, J. (1995). Control and Kinematic Design of Multi-degree-of-freedom Mobile
Robots with Compliant Linkage. IEEE Transactions on Robotics and Automation, pp.
21-35, Vol. 11, No. 1, February, 1995.
Bretl, T. & Rock, S. & Latombe, J.C. (2003). Motion planning for a three-limbed climbing robot
in vertical natural terrain. IEEE International Conference on Robotics and Automation,
pp. 2947-2953, Taipei, Taiwan, 2003.
Chacin, M. & Yoshida, K. (2005). Multi-limbed rover for asteroid surface exploration
using static locomotion. Proceedings of the 8th International Symposium on Artificial
Intelligence, Robotics and Automation in Space, Munich, Germany. September 2005.
Chacin, M. & Yoshida, K. (2006). Stability and Adaptability Analysis for Legged Robots
Intended for Asteroid Exploration. Proceedings of the 2006 IEEE International
Conference on Intelligent Robots and Systems (IROS2006) Beijing, China. 2006.
Chacin, M. & Yoshida, K. (2006). Evolving Legged Rovers for Minor Body Exploration
Missions. Proceedings of the 1st IEEE / RAS-EMBS International Conference on Biomedical
Robotics and Biomechatronics, BioRob2006 Pisa, Italy. 2006.
Chacin, M., Nagatani, K. & Yoshida, K. (2006). Next-Generation Rover Development for
Asteroid Surface Exploration: System Description. Proceedings of the 25th International
Symposium on Space Technology and Science and 19th International Symposium on Space
Flight Dynamics Kanazawa, Japan. June 2006.
Chacin, M. (2007). Landing Stability and Motion Control of Multi-Limbed Robots for Asteroid
Exploration Missions. Ph.D. dissertation, Dept. of Aerospace Engineering, Tohoku
University, Tohoku, Japan, 2007.
Chacin, M. & Yoshida, K. (2008). A Microgravity Emulation Testbed for Asteroid Exploration
Robots. Proceedings of International Symposium on Artificial Intelligence, Robotics,
Automation in Space (i-SAIRAS08) Los Angeles, CA, February 2008.
Chacin, M. & Yoshida K., (2009). Motion control of multi-limbed robots for asteroid
exploration missions, Proceedings 2009 IEEE International Conference on Robotics and
Automation, Kobe, Japan, May 2009.
Cottingham, C.M., Deininger, W.D., Dissly, R.W., Epstein, K.W., Waller, D.M. & Scheeres, D.J.
(2009). Asteroid Surface Probes: A low-cost approach for the in situ exploration of
small solar system objects. Proceedings IEEE Aerospace Conference, Big Sky, MT, 2009.
Dalilsafaei, S. (2007). Dynamic analyze of snake robot. Proceedings World Academy of Science,
Engineering and Technology, pp. 305-310, Issue 29, Berlin, Germany. May, 2007.
Der Stappen, A. V., Wentink, C. & Overmars, M. (1999). Computing form-closure
configurations. Proceedings of the IEEE International Conference on Robotics and
Automation, ICRA pp. 1837-1842, USA. 1999.
Ebbets, D., Reinert, R. & Dissly, R. (2007). Small landing probes for in-situ characterization of asteroids and comets. Proceedings 38th Lunar and Planetary Science Conference, League City, TX. 2007.
Fiorini, P., Cosma, C. & Confente, M. (2005). Localization and sensing for hopping robots.
Autonomous Robots, pp. 185-200, Vol. 18, 2005.
Fujiwara A., Kawaguchi J., Yeomans D. K. et al. (2006). The Rubble Pile Asteroid Itokawa as
observed by Hayabusa. Report: Hayabusa at asteroid Itokawa. Science, pp. 1330-1334,
Vol. 312, No. 5778, June, 2006.
Ge, L., Sethi, S., Ci, L., Ajayan, P.M. & Dhinojwala, A. (2007). Carbon nanotube-based synthetic gecko tapes. Proceedings National Academy of Sciences, pp. 10792-10795, Vol. 104, No. 26, 2007.
Gilardi, G. & Sharf, I. (2002). Literature Survey of Contact Dynamics Modeling. Mechanism and Machine Theory, pp. 1213-1239, Vol. 37, 2002.
Hatton, R.L. & Choset, H. (2010). Generating gaits for snake robots: annealed chain fitting and
keyframe wave extraction. Autonomous Robots, pp. 271-281, Vol. 28, 2010.
Hirabayashi, H., Sugimoto, K., Enomoto, A. & Ishimaru, I. (2000). Robot Manipulation Using
Virtual Compliance Control. Journal of Robotics and Mechatronics, pp. 567-575, Vol. 12,
No.5, 2000.
Hokamoto, S. & Ochi, M. (2001). Dynamic behavior of a multi-legged planetary rover
of isotropic shape. Proceedings 6th International Symposium on Artificial Intelligence,
Robotics, and Automation in Space, June, St-Hubert, Quebec, Canada, 2001.
Inoue, K., Arai, T., Mae, Y., Takahashi, Y., Yoshida, H. & Koyachi, N. (2001). Mobile
Manipulation of Limbed Robots -Proposal on Mechanism and Control. Preprints of
IFAC Workshop on Mobile Robot Technology, pp. 104-109, 2001.
Inoue, K., Mae, Y., Arai, T. & Koyachi, N. (2002). Sensor-based Walking of Limb Mechanism on
Rough Terrain. Proceedings of the 5th International Conference on Climbing and Walking
Robots (CLAWAR), 2002.
JAXA / Institute of Space and Astronautical Science (2003), Asteroid Explorer HAYABUSA,
http://hayabusa.jaxa.jp/, 2003.
Johns Hopkins University Applied Physics Laboratory (1996), Near Earth Asteroid
Rendezvous - Shoemaker Mission, http://near.jhuapl.edu, 1996.
Jones, R. et al. (1998). NASA/ISAS collaboration on the ISAS MUSES C asteroid sample return
mission. Proceedings of 3rd IAA International Conference on Low-Cost Planetary Missions,
Pasadena, CA. 1998.
Kajita, S., Kanehiro, F., Kaneko, K., Fujiwara, K., Harada, K., Yokoi, K. & Hirukawa, H. (2003).
Resolved Momentum Control: Humanoid Motion Planning based on the Linear and
Angular Momentum. Proceedings of the IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS’03), pp. 1644-1650, 2003.
Kawaguchi, J., Uesugi, K. & Fujiwara, A. (2003). The MUSES-C mission for the sample and return: its technology development status and readiness. Acta Astronautica, pp. 117-123, Vol. 52, 2003.
Keller, J. B. (1986). Impact With Friction. ASME Journal of Applied Mechanics, Vol. 53, No. 1,
1986.
Kennedy, B., Okon, A., Aghazarian, H., Badescu, M., Bao, X. & Bar-Cohen, Y. et al. (2005).
LEMUR IIb: A robotic system for steep terrain access. Proceedings 8th International
Conference on Climbing and Walking Robots, pp. 595-695, London, UK, 2005.
Klein, D.P.C. & Briggs, R. (1980). Use of compliance in the control of legged vehicles. IEEE
Transactions on Systems, Man and Cybernetics, pp. 393-400, Vol. 10, No. 7, 1980.
Kubota, T., Takahashi, K., Shimoda, S., Yoshimitsu, T. & Nakatani, I. (2009). Locomotion
mechanism of intelligent unmanned explorer for deep space exploration. Intelligent
Unmanned Systems: Theory and Applications, pp. 11-26, Vol. 192, Springer, Berlin, 2009.
Martinez-Cantin, R. (2004). Bio-inspired multi-robot behavior for exploration in low gravity
environments. Proceedings 55th International Astronautical Congress, Vancouver,
Canada, 2004.
Menon, C., Murphy, M., Sitti, M. & Lan, N. (2007). Space exploration - Towards bio-inspired climbing robots. Bioinspiration and Robotics: Walking and Climbing Robots, pp. 261-278, M.K. Habib, Ed., Vienna, Austria: I-Tech Education and Publishing, 2007.
Nakamura, Y., Shimoda, S. & Shoji, S. (2000). Mobility of a microgravity rover using internal
electro-magnetic levitation. Proceedings IEEE/RSJ International Conference on Intelligent
Robots and Systems, pp. 1639-1645, Takamatsu, Japan, 2000.
NASA / Jet Propulsion Laboratory (2007), Dawn Mission, http://dawn.jpl.nasa.gov/, September
2007.
Richter, L. (1998). Principles for robotic mobility on minor solar system bodies, Robotics and
Autonomous Systems, pp. 117-124, Vol. 23, 1998.
Schell, S., Tretten, A., Burdick, J., Fuller, S.B. & Fiorini, P. (2001). Hopper on wheels: Evolving
the hopping robot concept. Proceedings International Conference on Field and Service
Robotics, pp. 379-384, Helsinki, Finland, 2001.
Scheeres, D. (2004). Dynamical Environment About Asteroid 25143 Itokawa. University of
Michigan, Department of Aerospace Engineering, USA, 2004.
Scheeres, D., Broschart, S., Ostro, S.J. & Benner, L.A. (2004). The Dynamical Environment
About Asteroid 25143 Itokawa. Proceedings of the 24th International Symposium on Space
Technology and Science, pp. 456-461, Miyazaki, Japan, 2004.
Shimoda, S., Wingart, A., Takahashi, K., Kubota, T. & Nakatani, I. (2003). Microgravity hopping robot with controlled hopping and landing capability. Proceedings IEEE/RSJ Intl. Conference on Intelligent Robots and Systems, pp. 2571-2576, October, Las Vegas, NV, 2003.
Silva, M.F. & Tenreiro Machado, J.A. (2008). New technologies for climbing robots adhesion
to surfaces. Proceedings International Workshop on New Trends in Science and Technology,
November, Ankara, Turkey, 2008.
So, E.W.Y., Yoshimitsu, T. & Kubota, T. (2008). Relative localization of a hopping rover on an
asteroid surface using optical flow. Proceedings Intl. Conf. on Instrumentation, Control,
and Information Technology, SICE Annual Conference, pp. 1727-1732, August, Tokyo,
Japan. 2008.
So, E.W.Y., Yoshimitsu, T. & Kubota, T. (2009). Hopping odometry: Motion estimation with
selective vision. Proceedings IEEE/RSJ International Conference on Intelligent Robots and
Systems, pp. 3808-3813, October, St. Louis, MO. 2009.
Tunstel, E. (1999). Evolution of autonomous self-righting behaviors for articulated nanorovers.
Proceedings 5th International Symposium on Artificial Intelligence, Robotics & Automation
in Space, pp. 341-346, Noordwijk, The Netherlands, June, 1999.
Transeth, A.A., Pettersen, K.Y. & Liljebck, P. (2009). A survey on snake robot modeling and
locomotion. Robotica, pp. 999-1015, Vol. 27, No. 7, December. 2009.
Vukobratovic, M., Frank, A. & Juricic, D. (1970). On the Stability of Biped Locomotion. IEEE
Transactions on Biomedical Engineering, pp. 25-36, Vol.17, No.1, 1970.
Wagner, R. & Lane, H. (2007). Lessons learned on the AWIMR project. Proceedings of the IEEE
International Conference Robotics and Automation, Space Robotics Workshop, Rome, Italy.
2007.
Wilcox, B.H. & Jones, R.M. (2000). The MUSES-CN nanorover mission and related technology.
IEEE Aerospace Conference, Big Sky, MT, USA, pp. 287-295, 2000.
Wilcox, B.H. & Jones, R.M. (2001). A 1 kilogram asteroid exploration rover. Proceedings IEEE
International Conference on Robotics and Automation (ICRA), San Francisco, CA. 2001.
Wilcox, B., Litwin, T., Biesiadecki, J., Matthews, J., Heverly, M., Morrison, J., Townsend,
J., Ahmad, N., Sirota, A. & Cooper, B. (2007). ATHLETE: A cargo handling and
manipulation robot for the moon. Journal of Field Robotics, pp. 421-434, Volume 24,
Issue 5, May 2007.
Yano, H., Kubota, T., Miyamoto, H. & Yoshida, K. et al. (2006). Touchdown of the Hayabusa Spacecraft at the Muses Sea on Itokawa. Report: Hayabusa at asteroid Itokawa. Science, Vol. 312, No. 5778, June, 2006.
1. Introduction
Parallel manipulators have been proposed to overcome the end-effector positioning accuracy problems exhibited by serial manipulators (Stewart, 1965)(Reboulet, 1988)(Merlet, 2000). These parallel robots are primarily used in applications where the considered processes require a high degree of accuracy, high speeds or high accelerations. Aircraft simulators (Stewart, 1965), machining tools (Neugebauer et al., 1998)(Poignet et al., 2002) and various medical applications (Merlet, 2002)(Leroy et al., 2003)(Plitea et al., 2008) constitute some of the many possible applications of parallel robots.
The computation of the inverse dynamic model is essential for an effective robot control. In
the field of parallel robots, many approaches have been developed for efficient computation
of the inverse dynamics. The formalism of d’Alembert has been used to obtain an analytical
expression of the dynamics model (Fichter, 1986)(Nakamura & Ghodoussi, 1989). The
principle of virtual works has been applied in (Tsai, 2000) for solving the inverse dynamics
of the Gough-Stewart platform and in (Zhu et al., 2005) for a Tau parallel robot. Lagrangian
formalism is applied in (Leroy et al., 2003) for the dynamics modeling of a parallel robot used
as a haptic interface for a surgical simulator. These various approaches do not seem effective
for robot dynamic control under real-time constraints. A better computational efficiency
can be achieved by the development of approaches using recursive schemes, in particular,
based on the Newton-Euler formulation. Gosselin (Gosselin, 1996) proposed an approach
for the computation of the inverse dynamic model of planar and spatial parallel robots, in
which all the masses and inertias are taken into account. This proposed method is difficult
to generalize to all parallel architectures. Dasgupta et al (Dasgupta & Choudhury, 1999)
applied this method to several parallel manipulators. Khan (Khan, 2005) has developed a
recursive algorithm for the inverse dynamics. This method is applied to a 3R planar parallel
robot. Bi et al (Bi & Lang, 2006) use the Newton-Euler recursive scheme for the computation of
the articular forces of a tripod system. Khalil et al (Khalil & Guegan, 2004) proposed a general
method for the inverse and direct dynamic model computation of parallel robots, which is
applied to several parallel manipulators (Khalil & Ibrahim, 2007).
Despite the large number of contributions in this field, there is still a need to improve the computational efficiency of the inverse kinematic and dynamic model calculations for real-time control. In this paper, a parallel robot is considered as a multi-robot system with
k serial robots (the k parallel robot segments) moving a common load (the mobile platform).
The proposed approach uses the methodology developed by Khalil et al (Khalil & Guegan,
2004)(Khalil & Ibrahim, 2007).
In this paper, we first review the proposed approaches for the computation of the kinematics and inverse dynamics of a C5 joint parallel robot. One of the interesting aspects of parallel robots is that, unlike serial robots, it is easy and efficient to directly compute the inverse of the Jacobian matrix; however, there seems to be no proposed approach for the direct computation of the Jacobian matrix itself. We propose a new method which allows the derivation of the Jacobian
matrix in a factored form, i.e., as a product of two highly sparse matrices. This factored form
enables a very fast computation and application of the Jacobian matrix, which is also needed
for the inverse dynamics computation. Another issue for inverse dynamics computation is
the determination of joint accelerations given the acceleration of the mobile platform. We
propose a new scheme that, by using projection matrices, enables a fast and direct calculation
of the joint accelerations. Since this calculation is needed in any inverse dynamics formalism,
our proposed new method improves the efficiency of the computation. For calculation of the
inverse dynamics, we consider the formalism developed by Khalil et al (Khalil & Guegan,
2004)(Khalil & Ibrahim, 2007). In this approach, since the inverse of Jacobian is used, the
calculation of the joint forces would require a linear system solution. We show that, by using
our factorized form of the Jacobian, our proposed scheme not only eliminates the need for
a linear system solution, but it also does not require the explicit computation of either the Jacobian
or its inverse. As a result, a significantly better efficiency in the computation can be achieved.
This paper is organized as follows. In Section 2, we present some preliminaries and the notation used in our approaches. The C5 parallel robot is presented in Section 3. The proposed
methodologies for computation of the inverse kinematics and the inverse Jacobian matrix are
reviewed in Sections 4 and 5. The new methodology for derivation of the Jacobian matrix
in a factored form is presented in Section 6. In Section 7, we present a fast and direct
scheme for calculation of joint accelerations, given the desired acceleration of the mobile
platform. The formalism for computation of the inverse dynamics, developed by Khalil et
al (Khalil & Guegan, 2004)(Khalil & Ibrahim, 2007), is discussed in Section 8 and it is shown
how the new scheme for calculation of joint accelerations as well as the use of factored form
of the Jacobian matrix can significantly improve the computational efficiency. A simulation
of the proposed scheme for computation of the inverse dynamics is provided in Section 9, validating the proposed approach. Finally, some concluding remarks are presented in Section
10.
2. Preliminaries
In this section, the required notation for a serial chain is presented (see also Fig. 1).
The spatial inertia of body j about its center of mass, G_j, is denoted by I_{G_j} and is given by:

I_{G_j} = \begin{bmatrix} J_{G_j} & 0 \\ 0 & m_j U \end{bmatrix} \in \mathbb{R}^{6 \times 6}

where J_{G_j} is the second moment of mass of link j about its center of mass and m_j is the mass of link j. The spatial inertia I_j can be calculated by:

I_j = \hat{S}_j I_{G_j} \hat{S}_j^t
\mathcal{P} = \begin{bmatrix} U & 0 & \cdots & 0 & 0 \\ -\hat{P}_{N-1} & U & 0 & \cdots & 0 \\ 0 & -\hat{P}_{N-2} & U & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & -\hat{P}_1 & U \end{bmatrix}

The inverse of matrix \mathcal{P} is a block triangular matrix given by:

\mathcal{P}^{-1} = \begin{bmatrix} U & 0 & \cdots & 0 \\ \hat{P}_{N,N-1} & U & 0 & \cdots \\ \hat{P}_{N,N-2} & \hat{P}_{N-1,N-2} & U & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \hat{P}_{N,1} & \hat{P}_{N-1,1} & \cdots \; \hat{P}_{2,1} & U \end{bmatrix}
By using the matrix \mathcal{P}, Eq. (1) can be expressed in a global form as:

\mathcal{P}^t V = H \dot{Q} \qquad (2)

thus:

V = (\mathcal{P}^t)^{-1} H \dot{Q} \qquad (3)

The end-effector spatial velocity V_{N+1} is obtained by the following relation:

V_{N+1} - \hat{P}_N^t V_N = 0 \qquad (4)

thus:

V_{N+1} = \hat{P}_N^t V_N \qquad (5)

Let \beta \in \mathbb{R}^{6 \times 6N} be the matrix defined by \beta = [\hat{P}_N^t \; 0 \; \cdots \; 0]; Eq. (5) becomes:

V_{N+1} = \beta V \qquad (6)
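As a numerical illustration of the global form above, the following sketch assembles \mathcal{P} from arbitrary 6x6 stand-ins for the \hat{P} blocks (an assumption for demonstration only) and checks that its inverse has the stated block lower triangular structure:

```python
import numpy as np

# Assemble the block-bidiagonal matrix P from arbitrary 6x6 stand-ins for the
# spatial propagation blocks P_hat, then verify the block structure of P^{-1}.
N = 4
U = np.eye(6)
rng = np.random.default_rng(0)
P_hat = [rng.standard_normal((6, 6)) for _ in range(N - 1)]  # P_hat_1..P_hat_{N-1}

P = np.zeros((6 * N, 6 * N))
for i in range(N):
    P[6*i:6*(i+1), 6*i:6*(i+1)] = U                  # identity blocks on the diagonal
for i in range(1, N):
    P[6*i:6*(i+1), 6*(i-1):6*i] = -P_hat[N - 1 - i]  # -P_hat blocks below the diagonal

P_inv = np.linalg.inv(P)
assert np.allclose(P_inv[:6, 6:], 0.0)   # block lower triangular: zero above diagonal
assert np.allclose(P_inv[:6, :6], U)     # identity blocks on the diagonal
```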
3. C5 parallel robot
The C5 joint parallel robot (Dafaoui et al., 1998) consists of a static and a mobile part connected together by six actuated segments (Fig. 2 and 3). Each segment is connected to the static part at point A_i and linked to the mobile part through a C5 passive joint (3 DOF in rotation and 2 DOF in translation) at point B_i. Each C5 joint consists of a spherical joint tied to two crossed sliding plates (Fig. 4). Each segment is equipped with a ball-screw linear actuator driven by a DC motor.
• B_i is the center of the rotational joint between the segment i and the mobile part:

C_{B_i/R_p} = \begin{bmatrix} b_i^x & b_i^y & b_i^z \end{bmatrix}^t
• R is the rotation matrix, with elements rij (using the RPY formalism), expressing the
orientation of the R p coordinate system with respect to the Rb coordinate system. The
expression for this matrix is given by:
R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \qquad (15)
where:
r11 = cos β cos γ
r12 = − cos β sin γ
r13 = sin β
r21 = sin γ cos α + cos γ sin β sin α
r22 = cos α cos γ − sin α sin β sin γ
L = \frac{\| A_i A_{i+1} \|}{2} \quad \text{for } i = 1, 3 \text{ and } 5

For the C5 joint parallel robot, the actuators are equidistant from point O (Fig. 6).
n_i = \frac{A_i B_i}{Q_{i1}} \qquad (19)
and inserting Eq. (17) into Eq. (18), we obtain the following expression. The inverse Jacobian matrix of the C5 parallel robot is then given by the following relation:
J^{-1} = \begin{bmatrix} (n_6 \times B_6C)^t & n_6^t \\ (n_5 \times B_5C)^t & n_5^t \\ (n_4 \times B_4C)^t & n_4^t \\ (n_3 \times B_3C)^t & n_3^t \\ (n_2 \times B_2C)^t & n_2^t \\ (n_1 \times B_1C)^t & n_1^t \end{bmatrix} = \begin{bmatrix} (n_6 \times P_{62})^t & n_6^t \\ \vdots & \vdots \\ (n_1 \times P_{12})^t & n_1^t \end{bmatrix} \qquad (22)
As stated before, for parallel robots, unlike serial robots, the inverse of the Jacobian matrix can be obtained directly and efficiently. In fact, the cost of computing J^{-1} from Eq. (22) is (18m + 30a), where m and a denote the costs of multiplication and addition, respectively.
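The row-by-row assembly of Eq. (22) can be sketched as follows, assuming the unit segment vectors n_i and the lever arms B_iC are already available (random stand-ins below). The (18m + 30a) count quoted above additionally exploits zeros specific to the C5 geometry, which this generic sketch does not:

```python
import numpy as np

# A sketch of assembling the inverse Jacobian of Eq. (22) row by row;
# n_i and B_iC are random stand-ins for illustration only.
def inverse_jacobian(n, BC):
    """n, BC: six 3-vectors each; returns the 6x6 inverse Jacobian of Eq. (22)."""
    rows = []
    for i in reversed(range(6)):                 # segment 6 first, as in Eq. (22)
        rows.append(np.hstack((np.cross(n[i], BC[i]), n[i])))
    return np.vstack(rows)

rng = np.random.default_rng(1)
n = [v / np.linalg.norm(v) for v in rng.standard_normal((6, 3))]
BC = list(rng.standard_normal((6, 3)))
J_inv = inverse_jacobian(n, BC)
```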
Therefore, a factorized expression of the parallel robot Jacobian matrix is obtained as:

J = \beta_i (\mathcal{P}_i^t)^{-1} H_i \Pi_i \qquad (25)

J = J_i \Pi_i \qquad (26)
The computation of the matrix \Pi_i depends on the structure of the considered parallel robot. In the following, we present the computation of this matrix for the C5 parallel robot.
y_1 = y_2 = y_c, \qquad z_3 = z_4 = z_c, \qquad x_5 = x_6 = x_c \qquad (28)
The global vector of articular coordinate velocities of the leg i is given by:

\dot{Q}_i = \begin{bmatrix} \dot{w}_{pi} & \dot{u}_{pi} & \dot{\gamma}_{pi} & \dot{\beta}_{pi} & \dot{\alpha}_{pi} & \dot{Q}_{i1} \end{bmatrix}^t \qquad (29)
where u̇ pi and ẇ pi are translation velocities due to the crossed sliding plates.
The elements {}^i\pi_{jk} of the matrix \Pi_i are computed by using Eq. (7). This equation holds for i = 1 to 6, thus:

\beta_i (\mathcal{P}_i^t)^{-1} H_i \dot{Q}_i = \beta_j (\mathcal{P}_j^t)^{-1} H_j \dot{Q}_j \qquad (31)
for i and j = 1 to 6. From Eq. (31), we can show that for all i, j = 1, ..., 6, the following relations hold:

\dot{\alpha}_{pi} = \dot{\alpha}_{pj}, \qquad \dot{\beta}_{pi} = \dot{\beta}_{pj}, \qquad \dot{\gamma}_{pi} = \dot{\gamma}_{pj} \qquad (32)
After some manipulations on relation (31), we obtain:
• For i = 1 and j = 2
Q̇11 = (z2 − z1 ) β̇ pi + (y1 − y2 ) γ̇ pi + Q̇21 (33)
• For i = 3 and j = 4:
Q̇31 = (z3 − z4 ) α̇ pi + ( x4 − x3 ) γ̇ pi + Q̇41 (34)
• For i = 5 and j = 6:
Q̇51 = (y6 − y5 ) α̇ pi + ( x5 − x6 ) β̇ pi + Q̇61 (35)
• For i = 1 and j = 3:
u̇ p1 = (z1 − z3 ) α̇ pi + ( x3 − x1 ) γ̇ pi + Q̇31 (36)
• For i = 1 and j = 5:
ẇ p1 = (y5 − y1 ) α̇ pi + ( x1 − x5 ) β̇ pi + Q̇51 (37)
From Eq. (28), we have y1 = y2 , z3 = z4 and x5 = x6 . Thus, the Eqs. (33, 34, and 35) can be
written as:
Note the highly sparse structure of the matrix J_1. In fact, if the matrix \Pi_1 is already computed, then the computation of the matrix J_1 does not require any operation. However, if the explicit computation of J is needed, it can be computed as J = J_1 \Pi_1. Exploiting the sparse structure of the matrices J_1 and \Pi_1, this computation can be performed at a cost of (29m + 37a).
In the following, we propose a new and more efficient approach for computation of Q̈i . From
the propagation of acceleration given in Eq. (9), we can derive the following relations:
\dot{V}_{N+1} = \hat{P}_{i2}^t \dot{V}_{i2} \qquad (43)

\dot{V}_{i2} = H_{i2} \ddot{Q}_{i2} + \hat{P}_{i1}^t \dot{V}_{i1} + \frac{d}{dt}\left(\hat{P}_{i1}^t\right) V_{i1} \qquad (44)

\dot{V}_{i1} = H_{i1} \ddot{Q}_{i1} \qquad (45)
Considering the orthogonality properties of the projection matrices H_{ij} and W_{ij} given in Eq. (12), by multiplying both sides of Eq. (44) by W_{i2}^t we get:

W_{i2}^t \dot{V}_{i2} = \underbrace{W_{i2}^t H_{i2}}_{0} \ddot{Q}_{i2} + W_{i2}^t \hat{P}_{i1}^t \dot{V}_{i1} + \underbrace{W_{i2}^t \frac{d}{dt}\left(\hat{P}_{i1}^t\right) H_{i1}}_{0} \dot{Q}_{i1} = W_{i2}^t \hat{P}_{i1}^t \dot{V}_{i1} \qquad (46)
Note that the above projection indeed eliminates the term Q̈i2 from the equation. From
Eqs. (45) and (46), we then obtain:
\ddot{Q}_{i1} = \left( W_{i2}^t \hat{P}_{i1}^t H_{i1} \right)^{-1} W_{i2}^t \dot{V}_{i2} \qquad (47)

where the term W_{i2}^t \hat{P}_{i1}^t H_{i1} is a scalar.
Again, considering the properties given by Eq. (14) and multiplying both sides of Eq. (44) by H_{i2}^t, we get:

\ddot{Q}_{i2} = H_{i2}^t \dot{V}_{i2} - H_{i2}^t \hat{P}_{i1}^t \dot{V}_{i1} - \underbrace{H_{i2}^t \frac{d}{dt}\left(\hat{P}_{i1}^t\right) H_{i1}}_{0} \dot{Q}_{i1} = H_{i2}^t \dot{V}_{i2} - H_{i2}^t \hat{P}_{i1}^t \dot{V}_{i1} \qquad (48)
Note that H_{i2}^t \frac{d}{dt}\left(\hat{P}_{i1}^t\right) H_{i1} = 0, since \frac{d}{dt}\hat{P}_{i1}^t is along H_{i1} and, as can be seen from the description of H_{i1} and H_{i2} in Section 3, H_{i2}^t H_{i1} = 0.
The joint accelerations of the segment i are then computed in four steps as follows (a numerical sketch is given after the list):
1. Compute V̇i2 from Eq. (43)
2. Compute Q̈i1 from Eq. (47)
3. Compute V̇i1 from Eq. (45)
4. Compute Q̈i2 from Eq. (48)
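The following is a minimal numerical sketch of these four steps, using simple axis-aligned stand-ins for the projection matrices H_{i1}, H_{i2}, W_{i2} and random propagation matrices; the actual C5 matrices differ, so this only illustrates the order of operations.

```python
import numpy as np

# Stand-ins: an actuated prismatic axis (H_i1), the five free modes of the C5
# passive joint (H_i2), and the complement W_i2 with W_i2^t H_i2 = 0.
e = np.eye(6)
H_i1 = e[:, [5]]                 # actuated joint axis of segment i (6x1)
H_i2 = e[:, :5]                  # free modes of the passive joint (6x5)
W_i2 = e[:, [5]]                 # complement of H_i2
rng = np.random.default_rng(2)
P_hat_i1 = rng.standard_normal((6, 6))   # hypothetical propagation matrices
P_hat_i2 = rng.standard_normal((6, 6))

V_dot_N1 = rng.standard_normal(6)                     # desired platform acceleration
V_dot_i2 = np.linalg.solve(P_hat_i2.T, V_dot_N1)      # step 1, from Eq. (43)
scale = (W_i2.T @ P_hat_i1.T @ H_i1).item()           # the scalar term of Eq. (47)
Q_ddot_i1 = (W_i2.T @ V_dot_i2) / scale               # step 2, from Eq. (47)
V_dot_i1 = (H_i1 * Q_ddot_i1).ravel()                 # step 3, from Eq. (45)
Q_ddot_i2 = H_i2.T @ V_dot_i2 - H_i2.T @ P_hat_i1.T @ V_dot_i1   # step 4, Eq. (48)
```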
Exploiting the sparse structure of the matrices H_{i1}, H_{i2} and W_{i2}, as well as an appropriate scheme for projection of the equations, the cost of computing \ddot{Q}_i = \begin{bmatrix} \ddot{Q}_{i2} \\ \ddot{Q}_{i1} \end{bmatrix} for each segment is (10m + 10a).
where:
• F_{i2} is the force exerted on the mobile platform by the segment i (Fig. 5).
• J_{B_i} is the Jacobian matrix of the segment i, computed at the point B_i. The expression of J_{B_i} is given by:

J_{B_i} = \hat{P}_{i2}^{-t} J_i \qquad (50)

• C_i + G_i represents the contribution of the Coriolis, centrifugal and gravitational terms.
• \Gamma_i represents the joint force vector of the segment i.
The contact forces exerted on the mobile platform by the segments, shown in Fig. 5, are computed from Eq. (49) as:
The dynamic behavior of the mobile platform is given by the following relation:

F_{N+1} = \Lambda_C \dot{V}_{N+1} + (G_C + C_C) \qquad (52)

where:
• F_{N+1} is the spatial force applied at the point C, representing the contribution of the contact forces F_{i2} propagated to the point C:

F_{N+1} = \begin{bmatrix} n_{N+1} \\ f_{N+1} \end{bmatrix} = \sum_{i=1}^{6} \hat{P}_{i2}^t F_{i2} \qquad (53)
I_C = R \, I_{C/R_p} \, R^t \qquad (55)
We have \sum_{i=1}^{6} \hat{P}_{i2}^t J_{B_i}^{-t} \Gamma_i = J^{-t} \Gamma, where J^{-1} is the inverse Jacobian matrix of the parallel robot. Eq. (58) can be rewritten as:

F_{N+1} = J^{-t} \Gamma - \sum_{i=1}^{6} \hat{P}_{i2}^t J_{B_i}^{-t} \left( M_i \ddot{Q}_i + C_i + G_i \right) \qquad (59)
The inverse dynamic model, given by Khalil et al in (Khalil & Ibrahim, 2007), is then expressed by:

\Gamma = J^t \left[ F_{N+1} + \sum_{i=1}^{6} \hat{P}_{i2}^t J_{B_i}^{-t} \left( M_i \ddot{Q}_i + C_i + G_i \right) \right] \qquad (60)
J^{-t} \Gamma = K \qquad (61)
The computational cost of deriving J^{-1} from Eq. (22) is (18m + 30a). The cost of solving the linear system of dimension 6 in Eq. (61) is (116m + 95a), wherein the costs of division and multiplication are taken to be the same. Therefore, the total cost of Step 6 is (134m + 125a).
Our contribution concerns the improvement of the efficiency of the inverse dynamics computation by using the factorized expression of the Jacobian matrix and the new formulation of the joint accelerations. Using these formulations, the computations can be performed as follows:
Step 1 is performed with a cost of (10m + 10a) for each segment by using the expression given
in Eqs. (47) and (48). This represents a much better efficiency than that of using Eq. (42).
Steps 2, 3, 4 and 5 are the same as in Khalil et al (Khalil & Guegan, 2004)(Khalil & Ibrahim, 2007).
For computation of Step 6, the factorized expression of J, given by Eq. (40), is used. To this
end, we have:
\Gamma = \Pi_1^t J_1^t K \qquad (62)
Note that the above equation not only eliminates the need for solving a linear system, but it also does not require the explicit computation of either J^{-1} or J. By exploiting the sparse structure of the matrices \Pi_1^t and J_1, the cost of computing the vector \Gamma from Eq. (62) is (65m + 67a), which represents a much better efficiency, almost half of the computations, compared to solving the linear system of Eq. (61).
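The following sketch contrasts Eq. (62) with the linear solve of Eq. (61); J_1 and \Pi_1 are dense random stand-ins here, whereas in the C5 robot they are highly sparse, which is where the reported operation-count savings come from.

```python
import numpy as np

# Apply the factored Jacobian transpose of Eq. (62) instead of solving the
# 6x6 linear system of Eq. (61); stand-in dense factors for illustration.
rng = np.random.default_rng(3)
J1 = rng.standard_normal((6, 6))        # stand-in for the factor J_1
Pi1 = rng.standard_normal((6, 6))       # stand-in for the factor Pi_1
K = rng.standard_normal(6)              # right-hand side of Eq. (61)

Gamma = Pi1.T @ (J1.T @ K)              # Eq. (62): two applications, no solve
J = J1 @ Pi1                            # the factorization J = J_1 Pi_1
assert np.allclose(Gamma, J.T @ K)      # consistent with Gamma = J^t K
```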
10. Conclusion
In this paper, we first presented a review of various proposed schemes for the kinematics and inverse dynamics computation of the C5 parallel robot. We presented a new and efficient scheme for the derivation of the Jacobian matrix in a factored form, as a product of two highly sparse matrices. We also presented a new scheme for fast and direct computation of the joint accelerations, given the desired acceleration of the mobile platform. We then reviewed the scheme proposed by Khalil et al (Khalil & Guegan, 2004)(Khalil & Ibrahim, 2007) for computation of the inverse dynamics of the C5 parallel robot. We showed that by using our new scheme for computation of the joint accelerations, as well as the Jacobian in its factored form, a much better computational efficiency can be achieved.
The methodology proposed by Khalil et al (Khalil & Guegan, 2004)(Khalil & Ibrahim, 2007) for computation of the inverse dynamics is very systematic and has indeed been applied to several parallel robots. However, we believe that still more efficient formulations can be
derived by a further exploitation of specific structures of the parallel robots. We are currently
investigating new approaches for the inverse dynamics computation of parallel robots. We
are also extending our derivation of the Jacobian matrix in the factored form to other parallel
robots.
12. References
Bi, Z. M. & Lang, S. Y. T. (2006). Kinematic and dynamic models of a tripod system with a
passive leg, IEEE/ASME Transactions on Mechatronics Vol. 11(No. 1): 108–112.
Corke, P. I. (1996). A robotics toolbox for MATLAB, IEEE Robotics and Automation Magazine Vol. 3: 24–32.
Dafaoui, M., Amirat, Y., François, C. & Pontnau, J. (1998). Analysis and design of a six dof
parallel robot. modeling, singular configurations and workspace, IEEE Trans. Robotics
and Automation Vol. 14(No. 1): 78–92.
Dasgupta, B. & Choudhury, P. (1999). A general strategy based on the Newton-Euler approach for the dynamic formulation of parallel manipulators, Mech. Mach. Theory Vol. 4: 61–69.
Fichter, E. F. (1986). A Stewart platform based manipulator: general theory and practical construction, International Journal of Robotics Research Vol. 5(No. 2): 157–182.
Fijany, A., Djouani, K., Fried, G. & Pontnau, J. (1997). New factorization techniques and fast serial and parallel algorithms for operational space control of robot manipulators, Proceedings of IFAC, 5th Symposium on Robot Control, Nantes, France, pp. 813–820.
Fijany, A., Sharf, I. & D'Eleuterio, G. M. T. (1995). Parallel O(log N) algorithms for computation of manipulator forward dynamics, IEEE Trans. Robotics and Automation Vol. 11(No. 3): 389–400.
Fried, G., Djouani, K., Borojeni, D. & Iqbal, S. (2006). Jacobian matrix factorization
and singularity analysis of a parallel robot, WSEAS Transaction on Systems Vol.
5/6: 1482–1489.
Gosselin, C. M. (1996). Parallel computational algorithms for the kinematics and dynamics of
planar and spatial parallel manipulator, Journal of Dynamic System, Measurement and
Control Vol. 118: 22–28.
Khalil, W. & Guegan, S. (2004). Inverse and direct dynamic modeling of Gough-Stewart robots, IEEE Transactions on Robotics Vol. 20(No. 4): 745–762.
Khalil, W. & Ibrahim, O. (2007). General solution for the dynamic modeling of parallel robots,
Journal of Intelligent and Robotic Systems Vol. 49(No. 1): 19–37.
Khan, W. A. (2005). Recursive kinematics and inverse dynamics of a 3R planar parallel manipulator, Journal of Dynamic System, Measurement and Control Vol. 4(No. 4): 529–536.
Leroy, N., Kokosy, A. M. & Perruquetti, W. (2003). Dynamic modeling of a parallel robot.
Application to a surgical simulator, Proceedings of IEEE International Conference on
Robotics and Automation, Taipei, Taiwan, pp. 4330–4335.
Luh, J. Y. S., Walker, M. W. & Paul, R. P. C. (1980). On-line computational scheme for mechanical manipulators, ASME Journal of Dynamic Systems, Measurement and Control Vol. 102: 69–76.
Merlet, J. P. (2000). Parallel Robots, Kluwer Academic Publishers.
Merlet, J. P. (2002). Optimal design for the micro parallel robot mips, Proceedings of IEEE
International Conference on Robotics and Automation, ICRA ’02, Washington DC, USA,
pp. 1149–1154.
Nakamura, Y. & Ghodoussi, M. (1989). Dynamic computation of closed link robot mechanism
with non redundant actuators, IEEE Journal of Robotics and Automation Vol. 5: 295–302.
Neugebauer, R., Schwaar, M. & Wieland, F. (1998). Accuracy of parallel structured machine tools, Proceedings of the International Seminar on Improving Machine Tool Performance, Spain, pp. 521–531.
Plitea, N., Pisla, D., Vaida, C., Gherman, B. & Pisla, A. (2008). Dynamic modeling of a parallel
robot used in minimally invasive surgery, Proceedings of EUCOMES 08, Cassino, Italy,
pp. 595–602.
Poignet, P., Gautier, M., Khalil, W. & Pham, M. T. (2002). Modeling, simulation and control
of high speed machine tools using robotics formalism, Journal of Mechatronics Vol.
12: 461–487.
Reboulet, C. (1988). Robots parallèles, Eds Hermès, Paris.
Stewart, D. (1965). A platform with 6 degrees of freedom, Proceedings of the Institution of Mechanical Engineers, pp. 371–386.
Tsai, L. W. (2000). Solving the inverse dynamics of a Gough-Stewart manipulator by the principle of virtual work, Journal of Mechanical Design Vol. 122: 3–9.
Zhu, Z., Li, J., Gan, Z. & Zhang, H. (2005). Kinematic and dynamic modelling for real-time
control of tau parallel robot, Journal of Mechanism and Machine Theory Vol. 40(No.
9): 1051–1067.
18
1. Introduction
1.1 Problem definition
The ‘boundary of space’ model representing all possible positions which may be occupied
by a mechanism during its normal range of motion (for all positions and orientations) is
called the work envelope. In the robotic domain, it is also known as the robot operating
envelope or workspace (Nof, 1999). Several researchers have investigated workspace
boundaries for different degrees of freedom (DOF), joint types and kinematic structures
utilizing many approaches (Ceccarelli, 1996), (Ceccarelli & Vinciguerra, 1995), (Ceccarelli &
Lanni, 2004), (Cebula & Zsombor-Murray, 2006), (Castelli, et al., 2008), (Yang, et al., 2008). A
recent example utilizes a reconfigurable modeling approach, where the 2D and 3D
boundary workspace is created by using a method identified as the Filtering Boundary
Points (FBP) algorithm. This is developed fully in Djuric and ElMaraghy (2008).
However, this work-envelope-based research is limited, as it does not contain relevant information regarding the relationships between the robots or mechanisms within a system. This includes the general kinematic structures within the system, the location of the working part(s), the tools, the process parameters and other operation-related parameters. Here an operation is defined as consisting of the travel path, the manipulator/end-effector or working tool, the tool and part locations and orientations, and any other related process parameters. The
work envelope provides essential boundary information, which is critical for safety and
layout concerns, but the work envelope information does not by itself determine the
feasibility of a desired configuration. The effect of orientation is not captured as well as the
coupling related to operational parameters. Included in this are spatial occupancy concerns
due to linking multiple kinematic chains, which is an issue with multi-tasking machine
tools, reconfigurable machines, and manufacturing cells.
Multi-tasking machine tools can be considered CNC mill-lathe hybrids (Figure 1). These
machines enable multiple tool engagement in multiple parts simultaneously. Each tool/
part/ spindle/ axis set combination follows its own unique programmed path. These
machines minimize the number of manual operations, as well as reduce setup costs and
potential quality issues. A single multi-tasking machine tool can completely machine
complex parts from start to finish (Hedrick & Urbanic, 2004). Traditional computer
numerical control (CNC) machines consist of multiple kinematic chains (two – one to
support the tool and the other to support the fixture), and can be defined by standard
families, i.e., 3 axis horizontal machine, 3 axis vertical, 4 axis horizontal and so forth. The
valid kinematic chain loops need to be defined, but are static. It must be noted that
depending on the machine type and configuration for multi-tasking machines, the kinematic
chain loops can dynamically change, which is beyond the scope of this work, but needs to be
considered when assessing the feasibility in time and space of these machine families.
Fig. 1. Okuma LT300-MY (Okuma, 2002) and Programmable Axes (adapted from Hedrick &
Urbanic, 2004)
In any manufacturing environment, responsiveness to unforeseen disturbances on the shop
floor is highly desirable in order to increase throughput, reduce waste and improve resource
utilization and productivity (ElMaraghy, 1993). Process improvement methods to increase
shop floor responsiveness while minimizing capital investments have consistently been a
topic for manufacturing research. This is presently being addressed through the concept of
reconfigurable manufacturing (ElMaraghy, 2008). The key building blocks of a
reconfigurable manufacturing system are scalability, adaptability, and transformability
(Koren et al., 1999). This should reduce capital investments over the system life cycle, enable rapid changeovers and steeper launch curves, and minimize fluctuations in quality, which has great future potential (Anderson & Bunce, 2000). The reconfiguration concept is divided into two
subsets for this work: extendibility (Figure 2) and reconfigurability (Figure 3).
Changing cutting tools, the tooling systems used to mount the tools into the machine, the
end-effector or replacing an axis with another with different constraints is considered
extendibility. For machining, this usually introduces shifts in the working boundaries
allowing the same programs / travel paths to be used with appropriate offsets.
For robotic applications, this may not be the case. (Note: for systems where the end-effector
requires specific orientations, it is assumed that the orientation remains constant for a given
length offset.) These modifications are typically made in the tooling domain.
A reconfiguration consists of altering the machine topology, i.e. changing the DOF by
adding or removing an axis. This may require re-programming, or depending on the
Fig. 4. Work envelope and work window for a 6R Fanuc robot (Djuric & Urbanic, 2009)
The actual work window conditions will vary based on the part, the travel path, the tool parameters, the geometry, the machine kinematic composition and the application. For machining operations, the work window will vary for each unique orientation/tooling/fixture combination.
Structural reconfigurations, such as adding or removing an axis (DOF and type of joints)
either virtually or physically, will also impact the work window. A methodology needs to
be developed to define the work window for a configuration, and any potential
reconfigurations. The assessment of system elements is necessary in order to examine the
feasibility and practicality of a set of kinematic structures in a manufacturing scenario. This
does not exist at this time. There is a direct relation between the kinematic structures, their
manipulability and constraints, to the work window. Proper assessment of system elements
will help in optimization of a system and help in deciding various parameters such as
number of degrees of freedom, placement of machines, robots and other supporting
mechanisms (in case of a work cell consisting of multiple kinematic chain elements) and so
forth. The methodology needs to be applicable for the reconfigurable type of machinery. The
goal of this work is to focus on foundational work window assessment approaches.
are used to link relationships to define the usable three-dimensional (3D) space. It is readily evident that the work envelope essentially consists of the bounding stroke positions of the linear or translational axes. The process flow for determining the work window for these
mechanisms is illustrated in Figure 5.
Fig. 5. Work window algorithm for mechanisms created from elemental kinematic chains,
adapted from Djuric & Urbanic, 2009
To assist with steps 1 and 2 in the process flow, the workspace and window are analysed for
a set of 2 and 3 DOF kinematic structures (a selected set presented in Table 1). The R
represents a rotary axis, and the T a translational axis. For this work, it is assumed that axes
are stacked orthogonally. The coded delineation represents the axis stack, i.e., RT indicates
that a translational axis is orthogonally stacked at the end of a rotary axis (Case 2 in Table 1).
When specific subscripts are included, for example Case 8 - TXRB, this is used to indicate an
axis placed on top of another axis, in the order listed. The A axis rotates around the X
translational axis, the B axis rotates around the Y translational axis, and the C axis rotates
around the Z translational axis.
The workspace of the 2 DOF mechanisms may be a surface, curve or line. Depending on the
axis combination and axis stack, the bounding conditions for an axis combination may be a
segmented line – i.e., a void exists where the axis/part / fixture is located. The 3 DOF
mechanisms usually have a solid workspace. Again, this depends on the joint type
(rotational or translational) and their order in the kinematic chain. The revolve profile
consists of the sweep volume of the rotary axis, part, and fixture profiles around the rotation
axis.
Two combinations of vertical 3 axis milling machines are shown in Figure 6, which are
comprised of Case 0, T and Case 4. In Figure 7, a 5 axis vertical milling machine which is
comprised of Cases 5 and 6 is shown. In Figure 8, a horizontal 5 axis milling machine is
illustrated, which is comprised of Cases 4 and 9. For this machine, the revolve profile is also
displayed.
Case | Axis combination | Workspace | Work window
6 | TXTYTZ | Cube solid | Cube solid: the workspace offset by the tool length
10 | TZRCRA | Hemispherical + cylindrical surfaces | Indeterminate: a void exists in the region of the axis, dependent on the sweep radii and the tool length
Table 1. Summary of selected elemental structures (adapted from Djuric & Urbanic, 2009)
Fig. 6. (a) TZ + TYTX combination, (b) (TZ1 + TZ2)+ TXTY combination
For step 3 in the work window process flow, the analytical representation for Cases 1-7 is
relatively simple, and related directly to the axis limits (all cases) and tool length (Cases 2, 3,
5 - 7). The complexity is introduced when coupling the kinematic chains for Cases 0, R and
0, T, and Cases 4, 5, and 6 with Cases 8-10. Reference planes need to be established to link
the axes’ grounds. Along with the axis stack order, the axes travel limits must be defined, as
well as the revolve profiles, where appropriate. Set theory and Boolean operations are then
performed to couple the fixture and tooling kinematic chains in relation to their bounding
geometry.
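One possible way to realize this set-theoretic coupling is a rasterized (occupancy grid) Boolean evaluation; in the sketch below, the travel limits, revolve profile and fixture geometry are purely illustrative placeholders rather than any particular machine's dimensions.

```python
import numpy as np

# Rasterized Boolean coupling of tooling and fixture regions over a common
# reference frame; all geometry below is illustrative.
x, y, z = np.mgrid[-1:1:50j, -1:1:50j, 0:1:25j]

travel_region = (np.abs(x) < 0.8) & (np.abs(y) < 0.8) & (z > 0.2)  # axis limits
revolve_profile = (x**2 + z**2) < 0.5**2         # void swept by the rotary axis
fixture_region = (np.abs(x) < 0.3) & (np.abs(y) < 0.3) & (z < 0.4)

# Valid operating region: inside the travel limits, outside the revolve
# profile, and clear of the fixture (set intersection and difference).
work_window = travel_region & ~revolve_profile & ~fixture_region
print(work_window.sum(), "valid grid cells")
```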
Fig. 7. (a) 5 axis milling machine and basic structure, (b) RR+ TXTYTZ combination
Fig. 8. (a) 5 axis milling machine and basic structure, TZRBRA + TXTY combination, (b) revolve profile around the B axis at the home position (centre) of the X axis
The addition of the rotary axes immediately reduces the maximum part size that can be
machined. This reduction of available work space (or volumetric void) depends on the
centre of rotation for the rotary axis, its rotational limits, and the axis stack; hence, the
importance of being able to generate the revolve profile. The inclusion of the fixture(s) and
part(s) modifies the rotary profile for an axis stack located on the table. For an axis stack
located on the end-effector, the tool length (cutting tool or CMM probe) needs to be taken
into consideration. This needs to be defined in order to assess the optimal fixture(s)
positioning, the maximum part size or number of parts to be located on a fixture (i.e., a
tombstone configuration consisting of multiple parts), the maximum tool length and width
parameters, and the retraction travel path. The table based revolve profile must be contained
or be a volumetric space subset within the workspace region defined by the X, Y, Z travel
limits. For mechanisms that contain Cases 8 and 9, initial volumetric assessments need to
be performed at the X and Y travel limits with the Z axis fully retracted while considering
the tool offset to ensure no basic crash conditions and the maximum part/fixture
conditions. Then assessments need to be performed at the rotary axes limits to determine
orientation feasibilities. A sample of Boolean operations for the region defined by the tool
offset (Figure 9 (a)) and then an axis stack on the Z axis (Figure 9 (b)) are illustrated in
Figure 9.
Fig. 9. Boolean operations for a Case 10 set of tools to establish valid operating region
The Denavit-Hartenberg (D-H) parameters are widely used in the robotics domain and provide a standard methodology to write the kinematic equations of a manipulator or end-effector. Each joint in a serial kinematic chain is assigned a coordinate frame. Using the D-H notation (Denavit & Hartenberg, 1955), four parameters are needed to describe how a frame i relates to a previous frame i − 1, as shown in Figure 10.
After assigning coordinate frames, the four D-H parameters can be defined as follows:
• a_i - Link length: the distance along the common normal between the joint axes
• α_i - Twist angle: the angle between the joint axes
• θ_i - Joint angle: the angle between the links
• d_i - Link offset: the displacement along the joint axes between the links
Fig. 10. Coordinate frames of consecutive joints n − 1 and n, showing the D-H parameters a_{n−1}, a_n, d_n, α_n and θ_n
The application of this notation to two 6R industrial robots is presented below. The ABB IRB 140 kinematic structure diagram is shown in Figure 11 and its D-H parameters are given in Table 2.
i | d_i | θ_i | a_i | α_i
1 | d_1 | θ_1 = 0 | a_1 | −90
2 | 0 | θ_2 = −90 | a_2 | 0
3 | 0 | θ_3 = 180 | 0 | 90
4 | d_4 | θ_4 = 0 | 0 | −90
5 | 0 | θ_5 = 0 | 0 | 90
6 | d_6 | θ_6 = −90 | 0 | 90
Table 2. D-H parameters of the ABB IRB 140 robot
Fig. 11. Kinematic structure diagram of the ABB IRB 140 robot, with joint angles θ_1–θ_6, link dimensions a_1, a_2, d_1, d_4, d_6 and the end-effector frame (n, s, a)
i | d_i | θ_i | a_i | α_i
1 | d_1 | θ_1 = 0 | a_1 | −90
2 | 0 | θ_2 = −90 | a_2 | 180
3 | 0 | θ_3 = 180 | a_3 | 90
4 | d_4 | θ_4 = 0 | 0 | −90
5 | 0 | θ_5 = 0 | 0 | 90
6 | d_6 | θ_6 = 180 | 0 | 180
Table 3. D-H parameters of the Fanuc M16 robot
Fig. 12. Kinematic structure diagram of the Fanuc M16 robot, with joint angles θ_1–θ_6, link dimensions a_1, a_2, a_3, d_1, d_4, d_6 and the end-effector frame (n, s, a)
the work window region. The 2D set of points for the ABB IRB 140 is presented in Figure 13. Similarly, the work window (the subspace with a 'normal to the base' orientation) is generated for the Fanuc M16 robot, and this set of 2D points is presented in Figure 14.
Fig. 13. Work window for the ABB IRB 140 Robot for the 90 degree (normal) orientation
Fig. 14. Work window for the Fanuc M16 Robot for 90 degree (normal) orientation
Upon combining the procedures for the generation of both work windows, an empirical formula for the joint five angle is developed, which is shown in Equation (1).
θ_5 = Kφ − (θ_2 + Kθ_3) \qquad (1)

The parameter K in Equation 1 is calculated using the formula below:

K = \cos α_2 \qquad (2)
The work window generation flowchart can be summarized as follows: given the joint limits, the increment Δ, the orientation angle φ and the twist angles α_1, α_2, set K = cos α_2 and start from θ_1 = θ_1max, θ_2 = θ_2max and θ_3 = θ_3max. Compute θ_5 = Kφ − (θ_2 + Kθ_3); if θ_5 violates its limits, decrement θ_3 by Δ and recompute, otherwise create a work window point. Sweeping θ_3 and θ_2 down to their minimum limits yields the 2-3 work window; additionally sweeping θ_1 yields the 1-2-3 work window. A code sketch of this loop follows.
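A minimal sketch of this loop is shown below; the joint limits (in degrees) and increment are placeholder values, not the published ABB IRB 140 or Fanuc M16 limits.

```python
# Work window generation per Equation (1), with assumed joint limits.
limits = {2: (-90.0, 110.0), 3: (-230.0, 50.0), 5: (-115.0, 115.0)}  # placeholders
delta, phi, K = 1.0, 90.0, 1.0        # increment, orientation angle, K = cos(alpha2)

points = []
th2 = limits[2][1]
while th2 >= limits[2][0]:
    th3 = limits[3][1]
    while th3 >= limits[3][0]:
        th5 = K * phi - (th2 + K * th3)          # Equation (1)
        if limits[5][0] <= th5 <= limits[5][1]:
            points.append((th2, th3, th5))       # a valid work window point
        th3 -= delta
    th2 -= delta
print(len(points), "reachable joint combinations for the", phi, "degree orientation")
```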
{}^0A_n = {}^0A_1 \, {}^1A_2 \, {}^2A_3 \cdots {}^{i-1}A_i \cdots {}^{n-1}A_n \qquad (4)

where {}^0A_n is the pose of the end-effector relative to the base, {}^{i-1}A_i is the link transform for the i-th joint, and n is the number of links.
For the ABB IRB 140 robot, six homogeneous transformation matrices have been developed
using Maple 12 symbolic manipulation software, Equation 2 and the D-H parameters from
Table 2.
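A sketch of building a link transform from the four D-H parameters and chaining the transforms as in Equation 4 is given below; the numeric link dimensions are placeholders rather than the actual ABB IRB 140 values.

```python
import numpy as np

# Standard D-H homogeneous transform and its chaining per Equation 4;
# link dimension values below are placeholders, not ABB IRB 140 data.
def dh_transform(d, theta, a, alpha):
    """D-H transform from frame i-1 to frame i: Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

deg = np.deg2rad
# (d, theta, a, alpha) rows following Table 2, with placeholder lengths [m].
params = [(0.352, deg(0),   0.070, deg(-90)),
          (0.0,   deg(-90), 0.360, deg(0)),
          (0.0,   deg(180), 0.0,   deg(90)),
          (0.380, deg(0),   0.0,   deg(-90)),
          (0.0,   deg(0),   0.0,   deg(90)),
          (0.065, deg(-90), 0.0,   deg(90))]

A06 = np.eye(4)
for d, theta, a, alpha in params:
    A06 = A06 @ dh_transform(d, theta, a, alpha)   # Eq. 11: 0A6 = 0A1 ... 5A6
R06, p = A06[:3, :3], A06[:3, 3]                   # Eqs. 12/13: orientation, position
```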
The ABB IRB 140 robot can now be kinematically modeled by using Equations 4-10:

{}^0A_6 = {}^0A_1 \, {}^1A_2 \, {}^2A_3 \, {}^3A_4 \, {}^4A_5 \, {}^5A_6 \qquad (11)

The pose matrix of the end-effector relative to the base is presented in Equation 12:

{}^0A_6 = \begin{bmatrix} n_x & s_x & a_x & p_x \\ n_y & s_y & a_y & p_y \\ n_z & s_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (12)
The end-effector orientation is defined by the rotation matrix. The upper 3×3 submatrix of the homogeneous transformation matrix of Equation 11 represents the rotation matrix of Equation 13. The graphical representation of the end-effector is shown in Figure 16.

{}^0R_6 = \begin{bmatrix} n_x & s_x & a_x \\ n_y & s_y & a_y \\ n_z & s_z & a_z \end{bmatrix} \qquad (13)
Fig. 16. The end-effector frame of the ABB IRB 140 robot
The normal vector n is along the x_6 axis, the sliding vector s is along the y_6 axis, and the approach vector a is along the z_6 axis. The orientation of the end-effector relative to the robot base frame is defined by the three vectors n, s and a; their projections onto the robot base frame are given by the rotation matrix {}^0R_6. For the ABB IRB 140 robot, the relation between the end-effector and base frames, when the end-effector is normal to the robot base, is shown graphically in Figure 17. The orientation matrix for this position is calculated in Equation 14.
$$^{0}R_6 = \begin{pmatrix} 0 & 0 & 1 \\ -1 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix}$$ (14)
Fig. 17. The end-effector frame of the ABB IRB 140 robot relative to the base frame (x0, y0, z0), with the end-effector normal to the base
The calculation of the 2D end-effector orientation depends on three joint angles: Joint 2, Joint 3 and Joint 5. The formula for Joint 5 is generated by assigning initial values to Joint 1, Joint 4 and Joint 6 in the forward kinematic solution. The rotation matrix in that case is calculated and is shown in Equation 15. By combining Equations 14 and 15, the formula for the Joint 5 angle is generated.
$$\theta_5 = \operatorname{atan2}\left(\sin(\theta_2 + \theta_3),\; -\cos(\theta_2 + \theta_3)\right)$$ (16)
To generate the complete work window, the Joint 2 and Joint 3 angles must be varied between their given limits in increments of Δ, using the forward kinematic solution to compute the Joint 5 solution at each step.
Equation 16 was evaluated using the commercial software Matlab. The results were compared to the forward kinematics calculated using the Maple 12 software and the Workspace 5 simulation and off-line programming software (Figure 18).
Fig. 18. Evaluation of the formula for the ABB IRB 140 robot (Matlab program vs. Workspace 5)
The forward kinematic calculations for the Fanuc M16 robot are carried out using Equations 2 and 3 and the D-H parameters of Table 3; the resulting transforms are shown in Equations 17-22.
Similarly to the previous calculations, the end-effector orientation matrix is found. The graphical representation of the end-effector is shown in Figure 19. For the Fanuc M16 robot, the relationship between the end-effector and the base frame, when the end-effector is normal to the robot base, is graphically shown in Figure 20. The orientation matrix for the given position is calculated using Equation 23.
Fig. 19. The end-effector frame of the Fanuc M16 robot (x6 = n, y6 = s, z6 = a)
Fig. 20. The end-effector frame of the Fanuc M16 robot relative to the base frame (x0, y0, z0)

$$^{0}R_6 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}$$ (23)
The rotation matrix in this case is calculated and is shown in Equation 24. By combining Equations 23 and 24, the formula for the Joint 5 angle is generated.
$$\theta_5 = \operatorname{atan2}\left(-\sin(\theta_2 - \theta_3),\; -\cos(\theta_2 - \theta_3)\right)$$ (25)
To generate the complete work window, the Joint 2 and Joint 3 angles must again be varied between their given limits in increments of Δ, using the forward kinematic solution to compute the Joint 5 solution at each step.
As with the ABB IRB 140 robot, the resulting Equation 25 is evaluated using Matlab and compared with the assessments from Maple 12 and Workspace 5 (Figure 21).
Fig. 21. Evaluation of the formula for the Fanuc M16 robot (Matlab program vs. Workspace 5)
Combining both robots' results, the unified formula for the Joint 5 angle becomes:

$$\theta_5 = \operatorname{atan2}\left(K\sin(\theta_2 + K\theta_3),\; -\cos(\theta_2 + K\theta_3)\right)$$ (26)
This methodology can be applied to any similar kinematic structure and the unification
procedure can be extended. The formula is valid for any desired orientation. The example
using a 45 degree orientation for the ABB IRB 140 robot is illustrated in Figures 22 and 23.
As with the machine tool example illustrated in Figure 9 (b), the Boolean intersection of the selected 45° orientation and the 90° normal to the base regions is the zone where both orientations are valid (Figure 23), and where the part should be located if both orientations are required for the process travel paths.
Fig. 22. Work window for the ABB IRB 140 robot for the 45 degree orientation
Fig. 23. Work window for the ABB IRB 140 robot showing the comparison between the 90 degree normal to the base orientation and the 45 degree orientation
To facilitate the 6 DOF work window calculation, empirical and analytical methods are used and validated. From the empirical study, it was found that Joint 2, Joint 3, and Joint 5 play the most important roles. By varying Joint 2 and Joint 3 between their limits, Joint 5 needs to be calculated to satisfy the desired end-effector orientation angle. The study of a repeatable procedure led to the generation of the analytical method.
Two kinematically different robots are selected and their work windows are generated: the
ABB IRB 140 and Fanuc M16 robots. The resulting work window space can then be assessed
using Boolean operations to determine the valid working zone for multiple travel paths,
orientations, or regions of overlap for a manufacturing cell. This is illustrated for the 90
degree normal to the base and selected 45 degree orientation example.
4. Conclusion
To conclude, a methodology to predetermine regions of feasible operation for multiple
kinematic chain mechanisms and manufacturing cells is presented. The approach differs
for mechanisms consisting of 1, 2, and 3 DOF subsets linked together via reference frames,
and 6 DOF industrial robots. However, after the basic region of valid operation is
determined, assessment of the part location and travel paths can be performed. With
complex system configurations, it is not intuitive to define the region of task feasibility for the initial design as well as for design alternatives, as there is much coupling related to the kinematic structures and their manipulability, tooling, fixtures, part geometry and task requirements. Changing an operation / task set or a system reconfiguration can be
executed virtually, and these methods can be employed to determine feasibility prior to physical set-up or modification in the manufacturing environment, which represents a time and cost savings.
5. References
Abdel-Malek, K. & Yeh, H. J. (2000). Local Dexterity Analysis for Open Kinematic Chains,
Mechanism and Machine Theory, Vol. 35, pp. 131-154.
Anonymous - Okuma, 2002, Okuma Twin Star LT Series Turning Centres with Twin
Spindles, LT Series - (1) –300, Okuma Co.
Castelli, G.; Ottaviano, E. & Ceccarelli, M. (2008). A Fairly General Algorithm to Evaluate
Workspace Characteristics of Serial and Parallel Manipulators, Mechanics Based
Design of Structures and Machines, Vol. 36, pp. 14-33
Cebula, A.J. & Zsombor-Murray, P.J. (2006). Formulation of the Workspace Equation for
Wrist-Partitioned Spatial Manipulators, Mechanisms and Machine Theory, Vol 41, pp.
778-789
Ceccarelli, M. & Lanni, C. (2003). A Multi-objective Optimum Design of General 3R
Manipulators for Prescribed Workspace Limits, Mechanisms and Machine Theory,
Vol. 39, pp. 119-132
Ceccarelli, M. & Vinciguerra, A. (1995). On the Workspace of the General 4R Manipulators,
The International Journal of Robotics Research, Vol.14, pp.152-160
Ceccarelli, M. (1996). A Formulation for the Workspace Boundary of General N-revolute
Manipulators, Mechanisms and Machine Theory, Vol. 31, pp. 637-646
Dai, J. & Shah, P. (2003). Orientation Capability Planar Manipulators Using Virtual Joint
Angle Analysis, Mechanism and Machine Theory, Vol. 38, pp. 241-252
Denavit J. & Hartenberg R. S. (1955). A Kinematic Notation for Lower-pair Mechanisms
Based on Matrices, Journal of Applied Mechanics, Vol. 77, pp. 215-221.
Djuric, A. & Urbanic, R. J. (2009). A Methodology for Defining the Functional Space (Work
Window) for a Machine Configuration, 3rd International Conference on Changeable,
Agile, Reconfigurable and Virtual Production, CD-ROM, Munich, October 5th-7th,
2009
Djuric, A. M. & ElMaraghy, W. H. (2008). Filtering Boundary Points of the Robot
Workspace, 5th International Conference on Digital Enterprise Technology. Nantes,
France, October 2008
ElMaraghy, H. A. & ElMaraghy, W. H. (1993). Bridging the Gap Between Process Planning
and Production Planning and Control, CIRP Journal of Manufacturing Systems, Vol.
22, No. 1, pp. 5-11
ElMaraghy, H. A. (2005). Flexible and Reconfigurable Manufacturing Systems Paradigms,
International Journal of Flexible Manufacturing Systems, Special Issue on Reconfigurable
Manufacturing Systems, Vol. 17, pp. 261-276
ElMaraghy, H. A. (2008). Changeable and Reconfigurable Manufacturing Systems, (editor)
Springer-Verlag (Publisher), ISBN: 978-1-84882-066-1
Hedrick R. & Urbanic, R. J. (2007). A Methodology for Managing Change when
Reconfiguring a Manufacturing System, 2nd International Conference on Changeable,
Agile, Virtual and Reconfigurable Production (CARV), pp. 992-1001
Koren, Y.; Heisel, U.; Jovane, F.; Moriwaki, T.; Pritchow, G.; Van Brussel, H. & Ulsoy, A.G.
(1999). Reconfigurable Manufacturing Systems, CIRP Annals, Vol. 48, No. 2.
Nof, S.Y. (1999). Handbook of Industrial Robotics, 2nd edn. New York: John Wiley & Sons.
Pond, G. & Carretro, J. (2011). Dexterity Measures and Their Use in Quantitative Dexterity
Comparisons, Meccanica, Vol. 46, pp. 51-64
Yang, J.; Yu, W.; Kim, J. & Abdel-Malek, K. (2009). On the Placement of Open-Loop
Robotic Manipulators for Reachability, Mechanism and Machine Theory, Vol. 44,
pp. 671-684
19
Requirement Oriented Reconfiguration of Parallel Robotic Systems
1. Introduction
The development of industrial production is subject to change, which is mainly caused by stronger customer orientation. The possibility for an end-customer to configure nearly individual mass products results in sophisticated challenges for production. The diversity of variants is growing. In addition, an increasing demand to offer a range of new products leads to shorter product life cycles. To be competitive, a company has to provide the product at reasonable costs. Considering serial production lines in highly developed countries, a high degree of automation is necessary to gain economies of scale. These changes of the economic boundaries demand specific design methodologies for the production equipment.
In particular reconfigurable robotic systems are promising alternatives to the purchase of new
production equipment. In case of changed production organization, reconfigurable robotic
systems reduce redesign efforts and do not require rescheduling of entire production lines.
Major impact factors to the design of reconfigurable robots are shown in Figure 1. They are
subdivided into product and production driven aspects.
2. Literature review
In robotics research, reconfiguration is often accompanied by modularization, standardization of interfaces, or morphology on different mechatronic levels. In order to establish a consistent comprehension, the term "reconfiguration" is defined before representatives of different concepts are described briefly.
"Reconfigurability is the ability to repeatedly change and rearrange the components of a system in a cost-effective way".
In the literature, a widespread understanding of reconfigurable robots exists (see Figure 2). In the following section, different approaches and realization examples are introduced in order to distinguish the scope of this contribution.
[Figure 2: Classification of reconfiguration approaches: online/dynamic reconfiguration, modular self-reconfiguration, auxiliary drives and adaptive machine elements, system inherent reconfiguration, and reconfiguration via grasping.]
Based on this brief literature review, particular properties as well as design aspects of parallel robots are introduced in the following section. Subsequently, requirement management for reconfigurable robots is introduced in section 4. Further considerations will also focus on offline and online reconfiguration. Here, the use of adaptive machine elements as well as system inherent reconfigurations is considered.
3. end-effector singularities, in which the end-effector frame loses a DoF of available motion.
Mathematically, the mentioned singularities occur if one of the Jacobian matrices $J_{A,B}$ with

$$J_A = \frac{\partial f}{\partial X} \quad \vee \quad J_B = \frac{\partial f}{\partial q}$$ (1)

where X represents the Tool Center Point (TCP) coordinates $X = (x, y, z)^T$ and q the actuator variables $q = (q_1, q_2, \ldots, q_n)^T$, becomes singular. A configuration space singularity appears if,
the derived requirements have to be analysed to determine whether they are relevant for
reconfiguration of the robot. First of all, concept-relevant requirements (e.g. DoF) are
analysed. Based on the definition of concepts, embodiment related requirements have to
be considered. For instance, different payloads and process loads lead to the relevant
requirement operation forces, while accuracy leads to elastic deformation. Both are related
by the structural stiffness of the kinematic structure.
The object of the third step is to estimate whether spreading requirements are subject to reconfiguration. It is obvious that a reconfiguration is not probable between every use case. A robot used in the chemical industry will not be sold to a company from the food industry, reconfigured and reused for pick-and-place tasks. However, a pick-and-place robot in the food industry might handle muffins first and be converted to packaging croissants in the same company later. Additionally, the influence of requirement changes on other requirements has to be considered. The DoF at the gripper, for instance, might be subject to a wide spread. However, extending a fully parallel structure with an additional DoF requires a completely new control program. Therefore, it becomes obvious that not every requirement spread is a meaningful basis for reconfiguration concepts. Costs and engineering complexity are major limits. A detailed analysis of the goal-system leads to a lean set of essential reconfiguration-relevant requirements for robotic systems [Schmitt (2009)]:
• enable object handling
• provide workspace
• enable operation forces
• provide degree of freedom
• assure high accuracy
• provide performance
• enable low lifecycle costs
These general requirements are specified by different use cases. For instance, the DoF is
refined by the number of possible movements and the orientation of rotation (e.g. ξ = ±25).
In the fourth step, requirements are related to system components. System components can
be hard- or software and are related to other components (e.g. information flow, geometric
surfaces). For instance, the requirement workspace dimension is satisfied by the kinematic
structure. More precisely the system component strut and its parameter strut length fulfill
this requirement.
As a last step reconfiguration scenarios are developed and analyzed. Therefore, traces from a
requirement to the involved parameters are followed, and detected relations are described in a qualitative or, if possible, in a quantitative way. For instance, the relation between strut length and workspace dimension can be described by the kinematic problem (see section 3). Furthermore, it is necessary to determine the influence of the identified reconfiguration parameters on other requirements and detect possible goal-conflicts. It is obvious that an extension of the strut length will lead to a higher moved mass, which might decrease the performance of the robotic system.
Fig. 6. SysML model for the modularization of the system components of a parallel robot.
$q = (q_1, q_2)^T$ includes the actuated strut variables, whereas each strut can be moved within the interval $[q_{min}, q_{max}]$. The parameter α describes the angular displacement of base B to A. For the kinematic analysis, the solution of the relation between the end-effector coordinates and the drive parameters ($f(X, q) = 0$) is essential. Here, the direct and inverse kinematic problems constitute the motion behavior constrained by the geometric dimensions. The direct kinematic problem calculates the position and orientation of the TCP out of the given drive displacement. If the TCP coordinates are defined, the inverse kinematic problem assesses the actuator displacement. The relation given in Figure 7 b) is important for the kinematic analysis.
In section 3, the existence and kinematic influences of singularities of parallel robots were discussed briefly. In the case of the shown planar 2-DoF parallel mechanism, the configuration space singularities can be associated with the boundary of the respective workspace. Actuation singularities exist for the condition
Fig. 8. Influence of the reconfiguration parameter on the position and dimension of the
workspace: a) varied strut length l1 , l2 ; b) varied frame diameter b and c) varied angle α at
the frame connection level.
to be varied to fulfil different tasks. Since the main drawback of parallel mechanisms is the small workspace-to-installation-space ratio, dynamic reconfiguration type I facilitates the passing of singularities. Hence, several workspaces are combined during operation and the usable overall workspace is enlarged. The different workspaces are mathematically defined by different solutions of the direct kinematic problem (DKP) (assembly modes) or the inverse kinematic problem (IKP) (working modes). Each mode constitutes its own workspace. The workspaces of different working modes are divided by configuration space singularities, whereas assembly mode workspaces are distinguished by actuator singularities. In order to determine each of the possible configurations, a vector k of binary configuration parameters (+1 or -1) has been established by Budde (2008) with
[Figure 9: Kinematic structure of the TRIGLIDE robot with the three linear drives (drive 1, drive 2, drive 3) and the x-y frame.]
additionally, a fourth rotary axis can be mounted at the end-effector platform. Thus, a hybrid kinematic structure occurs. In Figure 9 the kinematic structure of the TRIGLIDE robot without the rotary axis is shown.
Fig. 10. Prototype of the TRIGLIDE robot (left); usable working-configurations and resulting overall workspace (right).
In the case of the TRIGLIDE robot, the change between different configuration workspaces is accomplished by passing through singularities. Implemented strategies to go through the different singularity types are described in Budde (2010). In Figure 10 the prototype of the TRIGLIDE robot is shown as well as the desired main-workspaces and the resulting overall workspace.
As mentioned before, the so-called change-over configurations have to be captured in order to reach the main-workspaces, which are mostly capable of fulfilling requirement spreads regarding workspace. Several configurations are shown in Figure 11. The presented approach to reconfigure the TRIGLIDE robot leads to challenging tasks in control. The necessary steps and the adapted control strategy are explained in Maass (2006).
To complete the explanation of the named reconfiguration approaches for parallel robots, the
next paragraph deals with DR II with a focus on adaptive machine elements.
Fig. 12. Effect principle (a) and realized prototype (b) of the adaptive revolute joint (inner ring, outer ring, clearance) used for dynamic reconfiguration type II.
manipulator with two DoF. The cranks at the base points A1 and A2 are actuated. The
two passive rods L12 and L22 are coupled to each other by a passive revolute joint in the
end-effector point C. The kinematic chain originating at base A1 and the corresponding crank
is connected by the described adaptive revolute joint, which is marked by ⊗ (see Figure 13).
Fig. 13. Kinematic scheme and geometries of the RRRRR-mechanism (a) and solutions of the DKP (b).
The kinematic description of the mechanism with its two closed loop chains i = 1, 2 can be done geometrically with the Cartesian end-effector coordinates $X = [x_c, y_c]^T$, the base coordinates $A = [x_{Ai}, y_{Ai}]^T$ (which can be derived from $L_B$), the actuator angles $q_i$ and the given geometric parameters $L_{ii}$ with respect to the base frame {0}. In Equation 9 the kinematic description of the mechanism is given, considering the parameters and variables shown in Figure 13 (a).

$$F = \left\| \begin{pmatrix} x_c \\ y_c \end{pmatrix} - \begin{pmatrix} x_{Ai} + \cos(q_i)\,L_{i1} \\ y_{Ai} + \sin(q_i)\,L_{i1} \end{pmatrix} \right\|^2 - L_{i2}^2 = 0.$$ (9)
Expression 9 provides a system of equations which relates the end-effector coordinates X to the actuator coordinates $q_i$. In the case of the RRRRR-mechanism, the equation for the direct kinematic problem (DKP) has two solutions. They constitute different configurations of the mechanism (see Figure 13 (b)). When changing these configurations, an actuator singularity occurs when moving from position C to C′. In general, actuator singularities can occur in the workspace or separate the workspace into different areas.
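A minimal sketch (not from the chapter) of this two-solution DKP is given below: the end-effector C is found as the intersection of the circle of radius L12 around B1 with the circle of radius L22 around B2, the two intersection points being the configurations C and C′; the link lengths and crank angles in the example call are assumed values, and a vanishing intersection offset h marks the actuator singularity.

import numpy as np

def rrrrr_dkp(q1, q2, L11, L12, L21, L22, LB):
    """Return the (up to) two DKP solutions C and C' for crank angles q1, q2 (radians)."""
    A1 = np.array([0.0, 0.0])          # base point A1 at the origin of frame {0}
    A2 = np.array([LB, 0.0])           # base point A2 at distance LB along x
    B1 = A1 + L11 * np.array([np.cos(q1), np.sin(q1)])   # end of crank 1
    B2 = A2 + L21 * np.array([np.cos(q2), np.sin(q2)])   # end of crank 2
    d = np.linalg.norm(B2 - B1)        # distance between the passive joints
    if d == 0.0 or d > L12 + L22 or d < abs(L12 - L22):
        return []                      # no assembly possible for these inputs
    a = (L12**2 - L22**2 + d**2) / (2 * d)
    h = np.sqrt(max(L12**2 - a**2, 0.0))   # h == 0 marks the actuator singularity
    u = (B2 - B1) / d                  # unit vector from B1 to B2
    n = np.array([-u[1], u[0]])        # unit normal to it
    P = B1 + a * u
    return [P + h * n, P - h * n]      # the two configurations C and C'

# Example with the aspect ratio used below, Li1/(Li2, LB) = 3/4 (assumed angles):
for C in rrrrr_dkp(np.deg2rad(70), np.deg2rad(110), 3, 4, 3, 4, 4):
    print(C)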
In order to reconfigure the mechanism or to use these different workspaces, adequate strategies have to be developed; in the case of DR II this is done using the adaptive revolute joint. Assembling the adaptive revolute joint in point B1 (see Fig. 13 (a)) and blocking it entirely, it is possible to treat the mechanism as an RRRR 4-bar structure with one actuated revolute joint at base point A1. According to the theorem of Grashof, different kinds of 4-bar mechanisms exist, depending on the geometric aspect ratio of the links [Kerle (2007)]:

$$l_{min} + l_{max} \leq l' + l''$$

where $l_{min}$ and $l_{max}$ are the lengths of the shortest and the longest link in the structure and l′, l″ are the lengths of the remaining links. In the case of the shown 5-bar mechanism with the adaptive revolute joint, one link length can be adjusted according to the desired mechanism property.
The resulting length $L_R$ can be calculated by

$$L_R = \sqrt{L_{12}^2 + L_{11}^2 - 2 \cdot L_{12} \cdot L_{11} \cdot \cos(\Theta_1)}.$$ (10)
Besides the aforementioned possibility to reconfigure the robot, it is feasible to develop strategies to pass through an actuator singularity in an assured manner by blocking and releasing the mechanism at point B1 by means of the adaptive revolute joint. This reconfiguration strategy is shown in Fig. 14.
The aspect ratios in the example are chosen as $L_{i1}/(L_{i2}, L_B) = 3/4$. If the mechanism is close to a singular position (Fig. 14.1), the adaptive revolute joint is locked ($\theta_1 = 40°$). The resulting 4-bar mechanism moves on a path, defined by a segment of a circle of radius $L_R$ with center at A1, through the singular position (Figure 14.2). Subsequently, the mechanism is in another configuration, which is not a singular position (Fig. 14.3), and the adaptive joint can be released.
Fig. 14. Strategy of passing through the singularity by blocking the adaptive joint: (1) approaching the singular position, (2) moving through it as a locked 4-bar mechanism on the circle of radius LR, (3) releasing the joint in the new configuration.
case studies are explained and the benefits of reconfigurable robots are shown. The first
one deals with the reconfiguration of a pick-and-place robot to an assembly robot, using
the approach of static reconfiguration. Another case study is the classification of solar cells
under consideration of their efficiency. This case study will illustrate an example of dynamic
reconfiguration of type I.
Fig. 15. Relation network for the static reconfiguration of a two DoF parallel mechanism.
As can be seen in Figure 15, with the new use case the requirements process forces, accuracy and workspace dimension are refined. While the workspace is directly influenced by the rack radius, the accuracy is influenced via the sensitivity. For an increasing rack radius, the sensitivity in x-direction also increases and leads to higher accuracy in this direction. At the same time, accuracy decreases in y-direction. In addition, the accuracy is influenced by the elastic deformation of the structure. The dimension of the deformation correlates with the value of the process forces and the stiffness of the structure. For the same stiffness, higher process forces result in higher deformation and, therefore, decreased accuracy. This leads to a goal conflict which can be reduced with higher stiffness of the structure. At the same time, the stiffness can be increased by an adaption of the rack radius. Here, a higher rack radius results in a higher stiffness.
Fig. 16. Exemplary use cases for reconfiguration of a "pick-and-place" robot (a) into an assembly robot (b).
Static reconfiguration from "pick-and-place" to an assembly system is in this case carried out by increasing the rack radius. Figure 16 highlights the reconfiguration, showing the two use cases. In the first use case, push-buttons are delivered via a first conveyor belt and placed into a container on a second belt. For this reason the workspace is elongated. After reconfiguration, the same push-buttons are taken from a container and assembled into a casing. As the push-buttons provide a snap-in fastening, operational forces in y-direction are needed, and forces in x-direction might appear due to alignment deviations. Hence, the second configuration complies with the demanded changes.
$$\mu \pm n_d \cdot \sigma,$$ (11)

where $n_d$ constitutes the desired interval of σ.
Under this assumption, following the concept of DR I, the TRIGLIDE robot makes use of the two workspaces. The highly frequented boxes for classifying the solar cells are located in the first workspace, the less frequented ones in the second. Due to the time $T_R$ required to change the configuration, it is not efficient to change workspaces every cycle. Therefore, the proposed box distribution according to the expectation value μ is feasible. In Figure 17 (a) the workspaces with the corresponding intervals of the Gaussian distribution are shown, and in (b) the concept of the storage boxes. The indices of the boxes $c_{mno}$ describe the actual workspace (m), the column (n) and the row (o), so that the position is uniquely designated.
Fig. 17. Exemplary probability distribution of supplied solar cells and used workspaces (a) and top view of a feasible conceptual design of a classification facility with the TRIGLIDE robot without periphery (b).
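The box-distribution idea can be sketched as follows; this is a hypothetical illustration (the function, bin layout and numeric values are assumed, not taken from the chapter) of how a measured cell efficiency could be mapped to a box index c_mno, with the frequently used boxes inside μ ± n_d·σ assigned to the first workspace.

import numpy as np

def assign_box(eff, mu, sigma, nd=1.0, cols=3, rows=2):
    """Map a measured efficiency to a hypothetical box index (m, n, o)."""
    m = 1 if abs(eff - mu) <= nd * sigma else 2   # workspace 1 inside mu +/- nd*sigma
    # quantize the efficiency into cols*rows bins spanning mu +/- 3*sigma (assumed)
    lo, hi = mu - 3 * sigma, mu + 3 * sigma
    k = int(np.clip((eff - lo) / (hi - lo) * (cols * rows), 0, cols * rows - 1))
    n, o = divmod(k, rows)                        # column and row inside the workspace
    return m, n + 1, o + 1

print(assign_box(eff=0.182, mu=0.18, sigma=0.01))  # -> a box index c(1, n, o)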
While the introduced application examples highlight the advantages of reconfigurable parallel robotic systems to fulfil changing requirements, specific restrictions have to be considered in order to apply reasonable reconfiguration concepts. For this reason, in the following section major aspects for the assessment and their interrelations to the presented static and dynamic reconfiguration strategies are proposed.
[Figure: Assessment overview relating the reconfiguration-relevant requirements (degree of freedom, parameter range, operation force, object handling, duration time, performance, robustness, workspace, accuracy) and the aspects of complexity (computational and mechanical complexity) to the reconfiguration types and concepts.]
realization of passing through, respectively avoiding, singular positions (DR) are challenging research fields to deal with.
8. Acknowledgments
The authors gratefully thank the Deutsche Forschungsgemeinschaft (DFG) for supporting the CRC 562 "Robotic Systems for Handling and Assembly" and the project "Application-Oriented Analysis of High Dynamic Handling of Solar Cells with Parallel Robots".
9. References
Anggreeni, I., van der Voort, M.C. (2008). Classifying Scenarios in a Product Design Process: a
study towards semi-automated scenario generation. In: Proc. of the 18th CIRP Design
Conference - Design Synthesis, Enschede.
Boudreau, R. and C.M. Gosselin (2001). La synthèse d’une plate-forme de Gough-Stewart pour
un espace atteignable prescrit. Mechanism and Machine Theory 36. pp. 327-342.
Brouwer, M., van der Voort, M.C. (2008). Scenarios as a Communication Tool in the Design
Process: Examples from a Design Course. In: Proc. of the 18th CIRP Design
Conference - Design Synthesis, Enschede.
Budde, C., Helm, M., Last, P., Raatz, A., Hesselbach, J. (2010). Configuration Switching
for Workspace Enlargement In: Schütz, D., Wahl, F. (Ed.), Robotic Systems for
Handling and Assembly, Springer-Verlag, Star Series, Berlin, pp. 175-189, ISBN
978-3-642-16784-3.
Budde, C., Last, P., Hesselbach, J. (2007). Development of a Triglide-Robot with Enlarged
Workspace In: Proc. of the International Conference on Robotics and Automation,
Rom, Italy, ISBN 978-1-4244-0602-9.
Budde, C. (2008). Wechsel der Konfiguration zur Arbeitsraumvergrösserung bei
Parallelrobotern. Ph.D Thesis, TU Braunschweig, Vulkan, Essen, ISBN
978-3-8027-8747-8.
Clavel, R. (1991). Conception d’un robot parallel rapide a 4 degres de liberte, Ph.D. Thesis,
EPFL, Lausanne.
Fisher, R., Podhorodeski, R.P., Nokleby, S.B. (2004). Design of a Reconfigurable Planar Parallel
Manipulator. In: Journal of Robotic Systems Vol. 21, No. 12, (2004), pp. 665-675.
Franke, H.-J., Krusche, T. (1999). Design decisions derived from product requirements. In:
Proc. of the 9th CIRP Design Conference - Integration of Process Knowledge into
Design, Enschede, pp.371-382.
Frindt, M. (2001). Modulbasierte Synthese von Parallelstrukturen für Maschinen
in der Produktionstechnik. Ph.D, TU Braunschweig, Vulkan, Essen, ISBN
978-3-8027-8659-4.
Hesselbach, J., Helm, M., Soetebier, S. (2002). Connecting Assembly Modes for Workspace Enlargement. In: Advances in Robot Kinematics, Kluwer Academic Publishers, 2002, pp. 347-356.
Inkermann, D., Stechert, C., Vietor, T. (2011). Einsatz multifunktionaler Werkstoffe für die
Entwicklung einer adaptiven Gelenkachse. In: Proc. of 8. Paderborner Workshop
Entwurf mechatronischer Systeme. HNI-Verlagsschriftenreihe Paderborn, 2011, pp.
231-244.
Requirement Oriented Reconfiguration of Parallel Robotic Systems 409
Requirement Oriented Reconfiguration
of Parallel Robotic Systems 23
Jantapremjit, P., Austin, D. (2001). Design of a Modular Self-Reconfigurable Robot. In: Proc. of the 2001 Australian Conference on Robotics and Automation. Sydney (Australia), 2001, pp. 38-43.
Ji, Z., Song, P. (1998). Design of a Reconfigurable Platform Manipulator. In: Journal of Robotic
Systems Vol. 15, No. 6, 1998, pp. 341-346.
Jovane, F. et al. (2002). Design issues for reconfigurable PKMs. In: Proc. of the 4th Chemnitz
Parallel Kinematic Seminar, Chemnitz, 2001, pp. 69-82.
Kerle, H., Pittschellis, R., Corves, B. (2007). Einführung in die Getriebelehre. Teubner Verlag,
Wiesbaden, Germany, ISBN 978-3-8351-0070-1.
Koren, Y. et al. (1999). Reconfigurable Manufacturing Systems. Annals of the CIRP, Vol. 48, No. 2, 1999, pp. 527-540.
Krefft, M., Brüggemann, H., Herrmann, G. & Hesselbach, J. (2006). Reconfigurable Parallel
Robots: Combining High Flexibility and Short Cycle Times. In: Journal of Production
Engineering, Vol. XIII, No.1., pp. 109-112.
Lee, G.H. (1997). Reconfigurability Consideration Design of Components and Manufacturing
Systems. In: The International Journal of Advanced Manufacturing Technology, Vol.
13, 1997, pp. 376-386.
Maass, J. et al. (2006). Control strategies for enlarging a spatial parallel robot’s workspace by
change of configuration. In Proc. of the 5th Chemnitz Parallel Kinematics Seminar,
Chemnitz (Germany), 2006, pp. 515-530.
Merlet, J.P. (1997). DEMOCRAT: A DEsign MethOdology for the Conception of Robot with
parallel ArchiTecture. In: Proceedings of the International Conference on Intelligent
Robots and Systems, IROS, Grenoble France, pp. 1630-1636.
Merlet, J.-P. (2001). Parallel Robots. Solid Mechanics and Its Applications, Vol. 74, ISBN 1402003854, Kluwer Academic Publishers, Dordrecht.
http://www.molecubes.org/
O’Brien, J.-F.; Wen, J.-T. (2001). Kinematic Control of Parallel Robots in the Presence of
Unstable Singularities. In: Proceedings of IEEE International Conference on Robotics
and Automation, Seoul, Korea, pp.354-359.
Park, F.C., Jin Wook Kim (1998). Manipulability and singularity analysis of multiple robot
systems: a geometric approach, In: Proceedings of the IEEE International Conference
on Robotics and Automation, Vol.2, pp.1032-1037.
Pritschow, G. (2000). Parallel Kinematic Machines (PKM) - Limitations and New Solutions. CIRP Annals - Manufacturing Technology, Vol. 49, No. 1, 2000, pp. 275-295.
Riedel, M., Nefzi, M., and Corves, B. (2010). Grasp Planning for a Reconfigurable Parallel Robot with an Underactuated Arm Structure. In: Mechanical Sciences, Vol. 1, 2010, pp. 33-42.
Riedel, M., Nefzi, M., Hüsing, M., and Corves, B. (2008). An Adjustable Gripper as a
Reconfigurable Robot with a parallel Structure. In: Proc. of the 2nd Int. Workshop
on Fundamental Issues and Future Research Directions for Parallel Mechanisms and
Manipulators, 2008.
Roozenburg, N.F.M, Dorst, K., (1991). Some Guidelines for the Development of Performance
Specifications in Product Design. In: Proc. of the 8th International Conference on
Engineering Design ICED 91, pp.359-366.
Schmitt, J., Inkermann, D., Raatz, A., Hesselbach, J., Vietor, T. (2010). Dynamic
Reconfiguration of Parallel Mechanisms. In: EUCOMES - European Conference on
20
Real-Time Robotic Hand Control Using Hand Gestures
1. Introduction
Interpretation of human gestures by a computer is used for human-machine interaction in
the area of computer vision [3][28]. The main purpose of gesture recognition research is to
identify a particular human gesture and convey information to the user pertaining to an individual gesture. From the corpus of gestures, a specific gesture of interest can be identified [30][36], and on that basis, a specific command for execution of an action can be given to a robotic system [31]. The overall aim is to make the computer understand human body language [14], thereby bridging the gap between machine and human. Hand gesture recognition can be used to enhance human-computer interaction without depending on traditional input devices such as keyboard and mouse.
Hand gestures are extensively used for telerobotic control applications [20]. Robotic systems
can be controlled naturally and intuitively with such telerobotic communication [38] [41]. A
prominent benefit of such a system is that it presents a natural way to send geometrical
information to the robot, such as: left, right, etc. A robotic hand can be controlled remotely by hand gestures. Research has been carried out in this area for a long time, and several approaches have been developed for sensing hand movements and controlling robotic hands [9][13][22]. The glove-based technique is a well-known means of recognizing hand gestures. It utilizes mechanical glove devices with attached sensors that directly measure hand and/or arm joint angles and spatial position. Although glove-based gestural interfaces give more precision, they limit freedom as they require users to wear a cumbersome patch of devices. Jae-Ho Shin et al. [14] used entropy analysis to extract the hand region from a complex background for a hand gesture recognition system. Robot control has also been done by fusion of hand positioning and arm gestures [2][32] using data gloves; although this gives more precision, it limits freedom due to the necessity of wearing gloves. For capturing hand gestures correctly, proper light and camera angles are
required. A technique to manage light sources and view angles in an efficient way for capturing hand gestures is given in [26]. Reviews have been written on the subsets of hand postures and gesture recognition in [6] and [18]. The problem of visual hand recognition and tracking is quite challenging. Many early approaches used position markers or colored bands to make the problem of hand recognition easier, but due to their inconvenience, they cannot be considered a natural interface for robot control [24]. A 3D Matlab kinematic model of a PUMA robot [1][4][16] is used for executing actions by hand gesture [39]. It can be extended to any robotic system with a number of specific commands suitable to that system [17].
In this chapter, we have proposed a fast as well as automatic hand gesture detection and recognition system. Once a hand gesture is recognised, an appropriate command is sent to a robot. Once the robot receives a command, it performs a pre-defined task and keeps doing so until a new command arrives. A flowchart for the overall control of a robot is given in figure 1.
[Figure 1: Flowchart of the overall robot control: each captured frame is tested for an object of interest (hand gesture); if none is found the next frame is examined, otherwise gesture recognition proceeds.]
2. Methodology
The proposed methodology involves acquisition of live video from a camera for gesture identification. It acquires frames from the live video stream at a given time interval [15]. In this case, the frame capture rate for gesture search is 3 frames per second. The overall proposed
technique to acquire hand gestures and to control robotic system using hand gestures is
divided into four subparts:
• Image acquisition to get hand gestures.
• Extracting hand gesture area from captured frame.
• Determining gesture by pattern matching using PCA algorithm.
• Generation of instructions corresponding to matched gesture, for specified robotic
action.
$$\text{Motion parameter} = \frac{\text{sum}}{r \cdot c}$$ (5)

Here r and c are the numbers of rows and columns in an image frame, and sum is the total number of mismatched pixels between consecutive frames. The threshold value is set to 0.01, i.e. if the total number of mismatched pixels is less than 1% of the total pixels in a frame, it is considered a frame of interest. The required frame is forwarded for further processing and this module again starts searching for the next frame of interest.
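A minimal sketch of this frame-differencing test, assuming 8-bit grayscale frames as numpy arrays (the per-pixel difference threshold of 25 is an assumed value, not from the chapter):

import numpy as np

def motion_parameter(prev, curr, diff_thresh=25):
    """Fraction of pixels that changed between two grayscale frames (Equation 5)."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    r, c = curr.shape
    return changed.sum() / (r * c)            # sum of mismatches over r*c pixels

def is_frame_of_interest(prev, curr, threshold=0.01):
    # per the text: fewer than 1% mismatched pixels marks a frame of interest
    return motion_parameter(prev, curr) < threshold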
The resultant image was segmented to get a binary image of the hand [37]. Binary images are bi-level images where each pixel is stored as a single bit (0 or 1). Smoothing was needed, as the output image had some jagged edges, as clearly visible in figure 4(c). There can be some noise in the filtered images due to falsely detected skin pixels or skin-colored objects (like wood) in the background, which can generate a few unwanted spots in the output image, as shown in figure 4(e).
Fig. 4. Binary Image formation (a) Target image (b) HSV conversion (c) Filtered image
(d) Smoothened image (e) Binary image (f) Biggest BLOB
To remove these errors, the biggest BLOB (Binary Linked Object) method was applied to the noisy image [24][25]. The error-free image is shown in figure 5. The only limitation of this filter is that the BLOB for the hand should be the biggest one. In this setup, the masking background would be illuminated, so false detection from the background is not possible [7][10].
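A compact sketch of the biggest-BLOB step using scipy's connected-component labelling (the chapter does not specify its implementation; this is one plausible realization):

import numpy as np
from scipy import ndimage

def biggest_blob(binary):
    """Return a mask containing only the largest connected white region."""
    labels, n = ndimage.label(binary)          # label connected components
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)    # mask of the biggest component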
The second step is to extract the object of interest from the frame. In our case, the object of interest is the part of the human hand showing the gesture [29]. For this, the extra part other than the hand is cropped out so that pattern matching can give more accurate results [12]. For cropping the extra parts, the row and column numbers from where the object of interest appears are determined. This is done by searching from each side of the binary image and moving inward until the number of white pixels encountered is more than the offset value [25]. Experimental results show that an offset value set to 1% of the total width gives better results for noise compensation. If the size of the selected image is m×n, then:

Hor_offset = m/100 (7)
Ver_offset = n/100 (8)

Min_col = minimum column number where the total number of white pixels is more than Hor_offset.
Max_col = maximum column number where the total number of white pixels is more than Hor_offset.
Min_row = minimum row number where the total number of white pixels is more than Ver_offset.
Max_row = maximum row number where the total number of white pixels is more than Ver_offset.
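The offset-based cropping can be sketched as follows, assuming a boolean numpy mask that contains white pixels; whether m counts rows or columns is an assumption here, as the text leaves the image orientation implicit:

import numpy as np

def crop_hand(binary):
    """Crop the hand region using the offsets of Equations 7-8."""
    m, n = binary.shape[1], binary.shape[0]    # assumed: m = width, n = height
    hor_offset, ver_offset = m // 100, n // 100
    cols = np.where(binary.sum(axis=0) > hor_offset)[0]   # white pixels per column
    rows = np.where(binary.sum(axis=1) > ver_offset)[0]   # white pixels per row
    if cols.size == 0 or rows.size == 0:
        return binary                          # nothing above the offsets; leave as-is
    min_col, max_col = cols[0], cols[-1]       # Min_col / Max_col
    min_row, max_row = rows[0], rows[-1]       # Min_row / Max_row
    return binary[min_row:max_row + 1, min_col:max_col + 1]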
shows whether the object continues from the column of the global maximum to the first column or to the last column, i.e. whether the extra hand region is to the left or to the right of the palm.
Fig. 7. Histogram showing white pixels in each column, with cropping point for hand gesture
Fig. 9. (a) PCA basis (b) PCA reduction to 1D. (a) The concept of PCA. Solid lines: the original basis; dashed lines: the PCA basis. The dots are selected at regularly spaced locations on a straight line rotated at 30 degrees, and then perturbed by isotropic 2D Gaussian noise. (b) The projection (1D reconstruction) of the data using only the first principal component.
There are many different algorithms available for gesture recognition, such as Principal Component Analysis, Linear Discriminant Analysis, Independent Component Analysis, Neural Networks and Genetic Algorithms.
Principal Component Analysis (PCA)[11] is used in this application to compare the acquired
gesture with the gestures available in the database and recognize the intended gesture.
[Figure excerpt: the eigenhands are computed as Eigenhands = A * L_eig_vec, where A holds the centered images; for the ten-image database the eigenhands matrix has size 10000 × 9.]
The concept of PCA is illustrated in Figure 9. The graph shows data that originally have two components along the x1 and x2 directions. These are now resolved along the Φ1 and Φ2 directions. Φ1 corresponds to the direction of maximum variance and is chosen as the first principal component. In the 2D case currently being considered, the second principal component is then determined uniquely by orthogonality constraints; in a higher-dimensional space the selection process would continue, guided by the variances of the projections.
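A compact sketch (a Python stand-in, not the authors' Matlab code) of the eigenhands scheme for the ten-image, 100×100-pixel database: the training images are centered, the eigenhands are obtained from the small eigenproblem of A^T A, and a test gesture is classified by minimum distance in the projected space.

import numpy as np

def train_pca(images):
    """images: (k, 10000) array of k flattened 100x100 gesture images."""
    mean = images.mean(axis=0)
    A = (images - mean).T                      # 10000 x k matrix of centered images
    _, vecs = np.linalg.eigh(A.T @ A)          # eigenvectors of the small k x k matrix
    eigenhands = A @ vecs[:, ::-1]             # eigenhands, strongest component first
    eigenhands /= np.linalg.norm(eigenhands, axis=0) + 1e-12
    proj = (eigenhands.T @ A).T                # projections of the training set
    return mean, eigenhands, proj

def recognize(test, mean, eigenhands, proj):
    """Index of the stored gesture with minimum distance in eigenspace."""
    w = eigenhands.T @ (test - mean)
    return int(np.argmin(np.linalg.norm(proj - w, axis=1)))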
A comprehensive explanation of the above scheme for gesture recognition [19] within a database of ten images is given in figure 10. For simplicity of calculations and ease of understanding, each image is taken to be of size 100×100 pixels. This algorithm was tested on a PC platform.
Fig. 11. Movements of PUMA 762 corresponding to some specific hand gestures
However, a total of 10 actionable gestures, and one background image for resetting purposes, have been identified, as shown in fig. 12.
4. Conclusion
Controlling a robot arm in real time through hand gestures is a novel approach. The technique proposed here was tested under proper lighting conditions that we created in our laboratory.
5. Acknowledgment
This research is being carried out at the Central Electronics Engineering Research Institute (CEERI), Pilani, India as part of the project "Supra Institutional Projects on Technology Development for Smart Systems". The authors would like to thank the Director, CEERI, for his active encouragement and support.
6. References
[1] Becerra V.M., Cage, C.N.J., Harwin, W.S., Sharkey, P.M. Hardware retrofit and
computed torque control of Puma 560 robot updating an industrial manipulator.
IEEE Control Systems Magazine, 24(5): 78-82. 2004.
[2] L. Brthes, P. Menezes, F. Lerasle, and J. Hayet. Face tracking and hand gesture
recognition for human robot interaction. In International Conference on Robotics
and Automation, pp. 1901–1906, 2004. New Orleans, Louisiana.
[3] Chan Wah Ng, Surendra Ranganath, Real-time gesture recognition system and
application. Image and Vision computing (20): 993-1007, 2002.
[4] Corke, P.I. Operational Details of the Unimation Puma Servo System. Report, CSIRO
Division of Manufacturing Technology. Australia, 1993.
[5] Crow, F. Summed-area tables for texture mapping. SIGGRAPH '84: Proceedings of the
11th annual conference on Computer graphics and interactive techniques. pp. 207-
212.
[6] J. Davis, M. Shah. Recognizing hand gestures. In Proceedings of European Conference
on Computer Vision, ECCV: 331-340, 1994. Stockholm, Sweden.
[7] Daggu Venkateshwar Rao, Shruti Patil, Naveen Anne Babu and V Muthukumar,
Implementation and Evaluation of Image Processing Algorithms on Reconfigurable
Architecture using C-based Hardware Descriptive Languages, International Journal
of Theoretical and Applied Computer Sciences, Volume 1 Number 1 pp. 9–34, 2006.
[8] Dastur, J.; Khawaja, A.; Robotic Arm Actuation with 7 DOF Using Haar Classifier
Gesture Recognition, Second International Conference on Computer Engineering
and Applications (ICCEA), vol.1, no., pp.27-29, 19-21 March 2010.
[9] K G Derpains, A review of Vision-based Hand Gestures, Internal Report, Department of
Computer Science. York University, February 2004.
[10] Didier Coquin, Eric Benoit, Hideyuki Sawada, and Bogdan Ionescu, Gestures
Recognition Based on the Fusion of Hand Positioning and Arm Gestures, Journal of
Robotics and Mechatronics Vol.18 No.6, 2006. pp. 751-759.
[11] Fang Zhong, David W. Capson, Derek C. Schuurman, Parallel Architecture for PCA
Image Feature Detection using FPGA 978-1-4244-1643-1/08/2008 IEEE.
[12] Freeman W., Computer vision for television and games. In Recognition, Analysis, and
Tracking of Faces and Gestures in Real-Time Systems, page 118, 1999. Copyright©
MEITCA.
[13] Haessig D.,Hwang J., Gallagher S., Uhm M., Case-Study of a Xilinx System Generator
Design Flow For Rapid Development of SDR Waveforms, Proceeding of the SDR 05
Technical Conference and Product Exposition. Copyright © 2005 SDR Forum.
[14] Jae-Ho Shin, Jong-Shill Lee, Se-Kee Kil, Dong-Fan Shen, Je-Goon Ryu, Eung-Hyuk Lee,
Hong-Ki Min, Seung-Hong Hong, Hand Region Extraction and Gesture
Recognition using entropy analysis, IJCSNS International Journal of Computer
Science and Network Security, VOL.6 No.2A, 2006 pp. 216-222.
[15] Jagdish Raheja, Radhey Shyam and Umesh Kumar, Hand Gesture Capture and
Recognition Technique for Real-time Video Stream, In The 13th IASTED
International Conference on Artificial Intelligence and Soft Computing (ASC 2009),
September 7 - 9, 2009 Palma de Mallorca, Spain.
[16] Katupitiya, J., Radajewski, R., Sanderson, J., Tordon, M. Implementation of PC Based
Controller for a PUMA Robot. Proc. 4th IEEE Conf. on Mechatronics and Machine
Vision in Practice. Australia, p.14-19. [doi:10.1109/MMVIP.1997.625229], 1997.
[17] R. Kjeldsen, J. Kinder. Visual hand gesture recognition for window system control, in
International Workshop on Automatic Face and Gesture Recognition (IWAFGR),
Zurich: pp. 184-188, 1995.
[18] Mohammad, Y.; Nishida, T.; Okada, S.; Unsupervised simultaneous learning of
gestures, actions and their associations for Human-Robot Interaction, Intelligent
Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on, vol.,
no., pp.2537-2544, 10-15 Oct. 2009.
[19] Moezzi and R. Jain, "ROBOGEST: Telepresence Using Hand Gestures". Technical report
VCL-94-104, University of California, San Diego, 1994.
[20] M. Moore, A DSP-based real time image processing system. In the Proceedings of the
6th International conference on signal processing applications and technology,
Boston MA, August 1995.
[21] Nolker, C.; Ritter, H.; Parametrized SOMs for hand posture reconstruction, Neural
Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International
Joint Conference on , vol.4, no., pp.139-144 vol.4, 2000
[22] Nolker C., Ritter H., Visual Recognition of Continuous Hand Postures, IEEE
Transactions on neural networks, Vol 13, No. 4, July 2002, pp. 983-994.
[23] K. Oka, Y. Sato and H. Koike. Real-Time Fingertip Tracking and Gesture Recognition.
IEEE Computer Graphics and Applications: 64-71, 2002.
[24] Ozer, I. B., Lu, T., Wolf, W. Design of a Real Time Gesture Recognition System: High
Performance through algorithms and software. IEEE Signal Processing Magazine,
pp. 57-64, 2005.
[25] V.I. Pavlovic, R. Sharma, T.S. Huang. Visual interpretation of hand gestures for human-
computer interaction, A Review, IEEE Transactions on Pattern Analysis and
Machine Intelligence 19(7): 677-695, 1997.
[26] Quek, F. K. H., Mysliwiec, T., Zhao, M. (1995). Finger Mouse: A Free hand Pointing
Computer Interface. Proceedings of International Workshop on Automatic Face
and Gesture Recognition, pp.372-377, Zurich, Switzerland.
[27] N.K. Ratha, A.K. Jain, FPGA-based computing in computer vision, Computer Architecture for Machine Perception, CAMP '97. Proceedings Fourth IEEE International Workshop on, pp. 128-137, 20-22 Oct 1997.
[28] Ramamoorthy, N. Vaswani, S. Chaudhury, S. Banerjee, Recognition of Dynamic hand
gestures, Pattern Recognition 36: 2069-2081, 2003.
[29] Sawah A.E., and et al., a framework for 3D hand tracking and gesture recognition
using elements of genetic programming, 4th Canadian conference on Computer
and robot vision, Montreal, Canada, 28-30 May, 2007, pp. 495-502.
[30] Sergios Theodoridis, Konstantinos Koutroumbas, Pattern Recognition, Elsevier
Publication, Second Edition, 2003.
[31] Seyed Eghbal Ghobadi, Omar Edmond Loepprich, Farid Ahmadov Jens Bernshausen,
Real Time Hand Based Robot Control Using Multimodal Images, IAENG
International Journal of Computer Science, 35:4, IJCS_35_4_08, Nov 2008.
[32] Shin M. C., Tsap L. V., and Goldgof D. B., Gesture recognition using bezier curves for
visualization navigation from registered 3-d data, Pattern Recognition, Vol. 37,
Issue 5, May 2004, pp.1011–1024.
[33] N. Shirazi and J. Ballagh, "Put Hardware in the Loop with Xilinx System Generator for
DSP", Xcell Journal online, May 2003,
hitp://xvww.xilinix.coinjpoblication s/xcellonlince/xcell 47/xc svsen47.htm
[34] Stephen D.Brown, R.J. Francis, J.Rose, Z.G.Vranesic, Field Programmable Gate Arrays,
1992.
[35] Starner T., Pentland. A., Real-time American Sign Language recognition from video
using hidden markov models. In SCV95, pages 265–270, Ames Street, Cambridge,
1995.
[36] Triesch J., Malsburg. C.V.D., Robotic gesture recognition. In Gesture Workshop, pages
233–244, Nara, Japan, 1997.
[37] Tomita, A., Ishii, R. J. Hand shape extraction from a sequence of digitized grayscale images. Proceedings of the 20th International Conference on Industrial Electronics, Control and Instrumentation, IECON '94, vol.3, pp.1925-1930, Bologna, Italy, 1994.
[38] Verma R., Dev A., Vision based Hand Gesture Recognition Using finite State Machines
and Fuzzy Logic, International Conference on Ultra-Modern Telecommunications
& Workshops, 12-14 Oct, 2009, pp. 1-6, in St. Petersburg, Russia.
[39] Walla Walla University, robotic lab PUMA 762.
http://www.mathworks.com/matlabcentral/fileexchange/14932
[40] Xilinx Inc., System Generator for DSP, Users Guide
http://wsssw.xilinx.con1i/products/softwa-e/svstzen/app docs/userLguide.htm
[41] Ying Wu, Thomas S Huang, Vision based Gesture. Recognition: A Review, Lecture
Notes In Computer Science; Vol. 1739, Proceedings of the International Gesture
Workshop on Gesture-Based Communication in Human-Computer Interaction,
1999.
21
Robotics Arm Visual Servo: Estimation of Arm-Space Kinematics Relations with Epipolar Geometry
1. Introduction
1.1 Visual servoing for robotics applications
Numerous advances in robotics have been inspired by reliable concepts of biological systems. The necessity for improvements has been recognized due to the lack of sensory capabilities in robotic systems, which makes them unable to cope with challenges such as unknown and changing workspaces, undefined locations, calibration errors, and different alternating concepts. Visual servoing aims to control a robotic system through artificial vision, in such a way as to manipulate an environment in a manner similar to human actions. It has always been found that it is not a straightforward task to combine "Visual Information" with "Arm Dynamic" controllers. This is due to the different natures of the descriptions which define the "Physical Parameters" within an arm controller loop. Studies have also revealed the option of using a trainable system for learning some complicated kinematics relating object features to the robotic arm joint space. To achieve visual tracking, visual servoing and control for accurate manipulation objectives without losing the object from the robotic system's view, it is essential to relate a number of an object's geometrical features (object space) to the robotic system joint space (arm joint space). An object's visual data plays an important role in this sense. Most robotic visual servo systems rely on the object "features Jacobian", in addition to its inverse Jacobian.
The object visual features' inverse Jacobian is not easily assembled and computed for use in a visual loop. A neural system has been used to approximate such relations, hence avoiding computing the object's feature inverse Jacobian, even at singular Jacobian postures. Within this chapter, we shall be discussing and presenting an integration approach that combines "Visual Feedback" sensory data with a "6-DOF Robotic Arm Controller". Visual servoing is considered a methodology to control the movements of a robotic system using certain visual information to achieve a task. Visionary data is acquired from a camera that is mounted directly on a robot manipulator or on a mobile robot, in which case motion of the robot induces camera motion. Alternatively, the camera can be fixed so that it can observe the robot motion. In this sense, visual servo control relies on techniques from image processing, computer vision, control theory, kinematics, dynamics and real-time computing.
Robotics visual servoing has recently been introduced by the robotics, AI and control communities. This is due to its significant number of advantages over blind robotic systems. Researchers have also demonstrated that visual servoing is an effective and robust framework to control robotic systems while relying on visual information as feedback. An image-based scheme task is said to be completely performed if a desired image is acquired by a robotic system. Numerous advances in robotics have been inspired by biological systems. In reference to Fig. (1), visual servoing aims to control a robotic system through artificial vision in such a way as to manipulate an environment, comparable to human actions. Intelligence-based visual control has also been introduced by the research community as a way to supply robotic systems with even more cognitive capabilities. A number of research efforts in the field of intelligent visual robotic arm control have been introduced. Visual servoing has been classified as using visual data within a control loop, enabling visual-motor (hand-eye) coordination. There have been different structures of visual servo systems; however, the two main classes are Position-Based Visual Servo systems (PBVS) and Image-Based Visual Servo systems (IBVS). In this chapter, we shall concentrate on the second class, the image-based visual servo systems.
In (Chen et. al. 2008), an adaptive visual servo regulation control for camera-in-hand
configuration with a fixed camera extension was presented by Chen. An image-based
regulation control of a robot manipulator with an uncalibrated vision system is discussed.
To compensate for the unknown camera calibration parameters, a novel prediction error
formulation is presented. To achieve the control objectives, a Lyapunov-based adaptive
control strategy is employed. The control development for the camera-in-hand problem is
presented in detail, and a fixed-camera problem is included as an extension. The Epipolar Geometry Toolbox, as in (Gian et. al. 2004), was also created to provide MATLAB users with a broad framework for the creation and visualization of multi-camera scenarios, and for the manipulation of visual information and visual geometry. Functions are provided for both classes of vision sensing, pinhole and panoramic, including camera assignment and visualization, and computation and estimation of epipolar geometry entities.
Image-Based Visual Servoing Using a Takagi-Sugeno Fuzzy Neural Network Controller has been proposed by (Miao et. al. 2007). In their study, a T-S fuzzy neural controller based IBVS method was proposed. An eigenspace-based image compression method is first explored, which is chosen as the global feature transformation method. The inner structure, performance and training method of the T-S neural network controller are discussed respectively. Besides that, the whole architecture of the TS-FNNC is investigated. For robotic arm visual servoing, this issue has been formulated as a function of the object feature Jacobian. The feature Jacobian is a complicated matrix to compute for real-time applications, and for more feature points in space, computing the inverse of such a matrix is even harder to achieve.
utilizing and merging three fundamentals. The first is the robotic arm-object visual kinematics, the second is the epipolar geometry relating different object scenes during motion, and the third is a learning artificial neural system. To validate the concept, the visual control loop algorithm developed by Rives has been adopted to include a learning neural system. Results have indicated that the proposed visual servoing methodology was able to produce considerably accurate results.
$$\xi_a = \phi\,\frac{\xi}{\zeta}, \qquad \psi_a = \phi\,\frac{\psi}{\zeta} \tag{1}$$
The locations $\xi_a, \psi_a$ over the image plane are thus calculated by Equ. (1). For thin lenses (as in the pinhole camera model), the camera geometry is represented by (Gian et al., 2004):

$$\frac{1}{\phi} = \frac{1}{\zeta_a} - \frac{1}{\zeta} \tag{2}$$
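As a concrete illustration of Equ. (1), the following minimal NumPy sketch projects a camera-frame point onto the image plane; the point coordinates and focal length are arbitrary example values, not taken from the chapter:

```python
import numpy as np

def project_pinhole(point_cam, phi):
    """Perspective projection of a camera-frame point (Equ. 1).

    point_cam : (xi, psi, zeta) coordinates in the camera frame
    phi       : focal length
    Returns the image-plane coordinates (xi_a, psi_a).
    """
    xi, psi, zeta = point_cam
    return np.array([phi * xi / zeta, phi * psi / zeta])

# Example: a point 2 m in front of a camera with a 10 mm focal length
print(project_pinhole((0.3, -0.1, 2.0), 0.01))  # -> [ 0.0015 -0.0005]
```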
In reference to (Gian et al., 2004), using Craig's notation, $^BP$ denotes the coordinates of a point $P$ in frame $B$. For the translation case:

$$^BP = {}^AP + {}^BO_A \tag{3}$$

$^BO_A$ denotes the coordinates of the origin $O_A$ of frame $A$ in the new coordinate system $B$. Rotations are described by:
$${}^B_A R = \begin{pmatrix} {}^B i_A & {}^B j_A & {}^B k_A \end{pmatrix} = \begin{pmatrix} {}^A i_B^T \\ {}^A j_B^T \\ {}^A k_B^T \end{pmatrix} \tag{4}$$
In Equ. (4), $^B i_A$ is a frame axis coordinate of frame $A$ expressed in the coordinate system $B$. In this respect, for a rigid transformation we have:
$${}^B P = {}^B_A R\,{}^A P, \qquad {}^B P = {}^B_A R\,{}^A P + {}^B O_A \tag{5}$$

For more than a single consecutive rigid transformation (for example to a frame $C$), i.e. from frames $A \to B \to C$, the coordinates of point $P$ in frame $C$ can hence be expressed by:
$${}^C P = {}^C_B R\left({}^B_A R\,{}^A P + {}^B O_A\right) + {}^C O_B = \left({}^C_B R\,{}^B_A R\right){}^A P + \left({}^C_B R\,{}^B O_A + {}^C O_B\right) \tag{6}$$
In homogeneous coordinates, $^B P$ can be expressed very concisely as:

$$\begin{pmatrix} {}^B P \\ 1 \end{pmatrix} = \begin{pmatrix} {}^B_A R & {}^B O_A \\ O^T & 1 \end{pmatrix} \begin{pmatrix} {}^A P \\ 1 \end{pmatrix} \tag{7}$$

$$\begin{pmatrix} {}^C P \\ 1 \end{pmatrix} = \begin{pmatrix} {}^C_B R & {}^C O_B \\ O^T & 1 \end{pmatrix} \begin{pmatrix} {}^B P \\ 1 \end{pmatrix} \tag{8}$$

$$\begin{pmatrix} {}^C P \\ 1 \end{pmatrix} = \begin{pmatrix} {}^C_B R & {}^C O_B \\ O^T & 1 \end{pmatrix} \begin{pmatrix} {}^B_A R & {}^B O_A \\ O^T & 1 \end{pmatrix} \begin{pmatrix} {}^A P \\ 1 \end{pmatrix} \tag{9}$$
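The chain of rigid transformations in Equs. (3)-(9) maps directly onto products of 4×4 homogeneous matrices. A small sketch; the rotation and translation values are illustrative, not from the chapter:

```python
import numpy as np

def homogeneous(R, O):
    """Build the 4x4 homogeneous matrix [[R, O], [0 0 0 1]] of Equ. (7)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = O
    return T

# A -> B: a 90-degree rotation about z plus a translation (illustrative values)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_BA = homogeneous(Rz, np.array([1.0, 0.0, 0.0]))
T_CB = homogeneous(np.eye(3), np.array([0.0, 2.0, 0.0]))  # B -> C: pure translation

P_A = np.array([1.0, 1.0, 0.0, 1.0])   # point P in frame A (homogeneous)
P_C = T_CB @ T_BA @ P_A                # composition A -> B -> C as in Equ. (9)
print(P_C[:3])
```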
We could express elements of a transformation ( Γ ) by writing :
$$\Gamma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} & \sigma_{14} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} & \sigma_{24} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} & \sigma_{34} \\ \sigma_{41} & \sigma_{42} & \sigma_{43} & \sigma_{44} \end{pmatrix}, \qquad \Gamma = \begin{pmatrix} A & \Omega \\ O^T & 1 \end{pmatrix} \tag{10}$$

$$\Gamma = \begin{pmatrix} R & \Omega \\ O^T & 1 \end{pmatrix} \tag{11}$$
A Euclidean transformation preserves parallel lines and angles; an affine transformation, on the contrary, preserves parallel lines but not angles. We now introduce a normalized image plane located at the focal length $\phi = 1$. For this normalized image plane, the pinhole $C$ is mapped onto the origin of the image plane using:
$$\hat{P} = \begin{pmatrix} \hat{u} & \hat{v} \end{pmatrix}^T \tag{12}$$

$\hat{C}$ and $P$ are mapped to:

$$\hat{P} = \begin{pmatrix} \hat{u} \\ \hat{v} \\ 1 \end{pmatrix} = \frac{1}{\zeta^c} \begin{pmatrix} I & 0 \end{pmatrix} \begin{pmatrix} P^c \\ 1 \end{pmatrix} \tag{13}$$

$$\hat{P} = \frac{1}{\zeta^c} \begin{pmatrix} I & 0 \end{pmatrix} \begin{pmatrix} \xi^c \\ \psi^c \\ \zeta^c \\ 1 \end{pmatrix} \tag{14}$$
We also have:

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \frac{1}{\zeta^c} \begin{pmatrix} k\phi & 0 & u_0 \\ 0 & L\phi & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \xi^c \\ \psi^c \\ \zeta^c \end{pmatrix} = \frac{1}{\zeta^c} \begin{pmatrix} k\phi & 0 & u_0 \\ 0 & L\phi & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} I & 0 \end{pmatrix} \begin{pmatrix} \xi^c \\ \psi^c \\ \zeta^c \\ 1 \end{pmatrix} \tag{15}$$
Now, letting $\kappa = k\phi$ and $\beta = L\phi$, we identify the parameters $\kappa, \beta, u_0, v_0$ as the intrinsic camera parameters; they represent the camera's inner imaging parameters. In matrix notation, this can be expressed as:
$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \frac{1}{\zeta^c} \begin{pmatrix} \kappa & 0 & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \xi^c \\ \psi^c \\ \zeta^c \end{pmatrix} = \frac{1}{\zeta^c} \begin{pmatrix} \kappa & 0 & u_0 & 0 \\ 0 & \beta & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} R & \Omega \\ O^T & 1 \end{pmatrix} \begin{pmatrix} \xi^w \\ \psi^w \\ \zeta^w \\ 1 \end{pmatrix} \tag{16}$$
Both $R$ and $\Omega$, the extrinsic camera parameters, represent the coordinate transformation between the camera coordinate system and the world coordinate system. Hence, any point $(u, v)$ in the camera image plane is evaluated via the following relation:
$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \frac{1}{\zeta} M_1 M_2\, p_w = \frac{1}{\zeta} M\, p_w \tag{17}$$
Here, $M$ in Equ. (17) is referred to as the camera projection matrix. We are given (1) a calibration rig, i.e. a reference object, to provide the world coordinate system, and (2) an image of the reference object. The problem is to solve for (a) the projection matrix and (b) the intrinsic and extrinsic parameters.
$$\begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix} = \frac{1}{\zeta} M_1 M_2 \begin{pmatrix} \xi_i^w \\ \psi_i^w \\ \zeta_i^w \\ 1 \end{pmatrix} = \frac{1}{\zeta} M \begin{pmatrix} \xi_i^w \\ \psi_i^w \\ \zeta_i^w \\ 1 \end{pmatrix} \tag{18}$$

$$\zeta_i^c \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix} = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{pmatrix} \begin{pmatrix} \xi_i^w \\ \psi_i^w \\ \zeta_i^w \\ 1 \end{pmatrix} \tag{19}$$

$$\zeta_i^c \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix} = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} & \sigma_{14} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} & \sigma_{24} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} & \sigma_{34} \end{pmatrix} \begin{pmatrix} \xi_i^w \\ \psi_i^w \\ \zeta_i^w \\ 1 \end{pmatrix} \tag{21}$$
Obviously, we can let $\sigma_{34} = 1$; this results in the projection matrix being scaled by $\sigma_{34}$. The constraints can then be written as $KM = U$, where $K \in \Re^{2n \times 11}$ is a matrix, $M$ is an 11-dimensional vector of unknowns, and $U \in \Re^{2n}$ is a vector. The least-squares solution of the equation $KM = U$ is thus expressed by:

$$M = K^+ U = \left( K^T K \right)^{-1} K^T U \tag{23}$$

$K^+$ is the pseudo-inverse of the matrix $K$, and the entries $m$ together with $m_{34}$ constitute the projection matrix $M$.
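As a concrete reading of Equs. (19)-(23), the 2n calibration constraints can be stacked row by row and handed to an off-the-shelf least-squares solver. A sketch under the assumption $\sigma_{34} = 1$; the point arrays are placeholders to be filled with rig measurements:

```python
import numpy as np

def solve_projection_matrix(world_pts, image_pts):
    """Least-squares solution of K M = U (Equ. 23) with m34 = 1.

    world_pts : (n, 3) reference points on the calibration rig
    image_pts : (n, 2) corresponding image points (u_i, v_i)
    Returns the 3x4 projection matrix M.
    """
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # u = (m11 X + m12 Y + m13 Z + m14) / (m31 X + m32 Y + m33 Z + 1)
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.append(v)
    K = np.asarray(rows)                       # K in R^{2n x 11}
    U = np.asarray(rhs)                        # U in R^{2n}
    m, *_ = np.linalg.lstsq(K, U, rcond=None)  # m = K^+ U
    return np.append(m, 1.0).reshape(3, 4)     # append m34 = 1
```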
The epipoles are expressed as:

$$E_1 = \begin{pmatrix} E_{1x} & E_{1y} & 1 \end{pmatrix}^T \quad \text{and} \quad E_2 = \begin{pmatrix} E_{2x} & E_{2y} & 1 \end{pmatrix}^T \tag{24}$$
One of the main entities of epipolar geometry is the fundamental matrix $H$ (with $H \in \Re^{3 \times 3}$). The matrix $H$ conveys most of the information about the relative position and orientation $(t, R)$ between the two different views. Moreover, the fundamental matrix $H$ algebraically relates corresponding points in the two images through the epipolar constraint. For instance, consider two views of the same 3-D point $X_w$, characterized by their relative position and orientation $(t, R)$ and by the internal calibration matrices $K_1$ and $K_2$; then $H$ is evaluated as (Gian et al., 2004):

$$H = K_2^{-T}\,(t)_\times\, R\, K_1^{-1} \tag{25}$$
In such a case, the 3-D point $X_w$ is projected onto the two image planes, to points $m_2$ and $m_1$, which constitute a conjugate pair. Given a point $m_1$ in the left image plane, its conjugate point in the right image is constrained to lie on the epipolar line of $m_1$. The line is the projection through $C_2$ of the optical ray of $m_1$. All epipolar lines in one image plane pass through an epipole point.
This is the projection of the conjugate optical centre:

$$E_2 = P_2 \begin{pmatrix} C_1 \\ 1 \end{pmatrix}$$

The parametric equation of the epipolar line of $m_1$ gives $m_2 = E_2 + \lambda\, v$, with $v = P_2 P_1^{-1} m_1$. In image coordinates this can be expressed as:

$$u = (m_2)_1 = \frac{(e_2)_1 + \lambda\,(v)_1}{(e_2)_3 + \lambda\,(v)_3} \tag{26}$$

$$v = (m_2)_2 = \frac{(e_2)_2 + \lambda\,(v)_2}{(e_2)_3 + \lambda\,(v)_3} \tag{27}$$

$$\frac{du}{d\lambda} = \frac{[v]_1 [e_2]_3 - [v]_3 [e_2]_1}{\left([e_2]_3 + \lambda [v]_3\right)^2} \tag{28}$$

$$\frac{dv}{d\lambda} = \frac{[v]_2 [e_2]_3 - [v]_3 [e_2]_2}{\left([e_2]_3 + \lambda [v]_3\right)^2} \tag{29}$$
The epipole is projected to infinity once $(E_2)_3 = 0$. In such a case, the direction of the epipolar lines in the right image no longer depends on $\lambda$, and all epipolar lines become parallel to the vector $\left( (E_2)_1, (E_2)_2 \right)^T$. A very special occurrence is when both epipoles are at infinity. This happens when the line containing $C_1$ and $C_2$, the baseline, is contained in both focal planes, or when the retinal planes are parallel and horizontal in each image as in Fig. (4). The right pictures plot the epipolar lines corresponding to the points marked in the left pictures. This procedure is called rectification. If the cameras share the same focal plane, the common retinal plane is constrained to be parallel to the baseline and the epipolar lines are parallel.
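A brief numerical sketch of Equ. (25) and the epipolar constraint follows; the intrinsic matrices and the relative pose below are placeholder values, not calibration data from the chapter:

```python
import numpy as np

def skew(t):
    """Matrix form [t]_x of the cross product."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K1, K2, R, t):
    """H = K2^{-T} [t]_x R K1^{-1} as in Equ. (25)."""
    return np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)

# Placeholder intrinsics and relative pose (t, R) between the two views
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
H = fundamental_matrix(K, K, R, t)

m1 = np.array([350.0, 260.0, 1.0])   # point in the left image (homogeneous)
line2 = H @ m1                        # its epipolar line in the right image
# Epipolar constraint: any conjugate point m2 satisfies m2^T H m1 = 0
```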
The visual servo task is commonly defined as positioning the camera over the features of an object, at a desired distance. Elements of the task are thus specified in image space. For a robotic system with an end-effector-mounted camera, the viewpoint and the features are functions of the relative pose of the camera to the target, $^c\xi_t$. Such a function is usually nonlinear and cross-coupled: a motion of one end-effector DOF results in complex motion of many features. For instance, a camera rotation can cause features to translate horizontally and vertically on the same image plane, as related via the following relationship:
$$\phi = f(^c\xi_t) \tag{30}$$

$$\delta\phi = {}^f J_c(^c x_t)\, \delta\,{}^c x_t \tag{31}$$

$${}^f J_c(^c x_t) = \frac{\partial \phi}{\partial\,{}^c x_t} \tag{32}$$
In Equ. (32), $^fJ_c(^cx_t)$ is the Jacobian matrix relating the rate of change in robot arm pose to the rate of change in feature space. Variously, this Jacobian is referred to as the feature Jacobian, image Jacobian, feature sensitivity matrix, or interaction matrix. Assuming that the Jacobian is square and non-singular:
$${}^c\dot{x}_t = {}^f J_c(^c x_t)^{-1}\, \dot{\phi} \tag{33}$$

$${}^c\dot{x}_t = K_f\, {}^f J_c(^c x_t)^{-1} \left( f_d - f(t) \right) \tag{34}$$
This control law will tend to move the robotic arm towards the desired feature vector. In Equ. (34), $K_f$ is a diagonal gain matrix, and $(t)$ indicates a time-varying quantity. The object posture rates $^c\dot{x}_t$ are converted to robot end-effector rates; the Jacobian $^fJ_c(^cx_t)$, derived from the relative pose between the end-effector and the camera, $^cx_t$, is used for that purpose. In this respect, a technique to determine the transformation between a robot's end-effector and the camera frame is given by Lenz and Tsai (Lenz & Tsai, 1988). In a similar manner, end-effector rates may be converted to manipulator joint rates using the inverse of the manipulator's Jacobian (Corke, 1994).
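Putting Equ. (34) together with the joint-rate conversion just described gives a one-step update of the servo loop. The following sketch assumes the Jacobians are supplied from elsewhere (computed, estimated or learned); the function and argument names are illustrative, not a specific toolbox API:

```python
import numpy as np

def ibvs_step(f_desired, f_current, J_feature, J_manip, K_f):
    """One image-based visual servo update following Equ. (34).

    f_desired, f_current : feature vectors (e.g. stacked image points)
    J_feature            : feature (image) Jacobian, features vs. camera pose
    J_manip              : manipulator Jacobian, end-effector vs. joints
    K_f                  : diagonal gain matrix
    Returns the commanded joint rates.
    """
    error = f_desired - f_current
    x_dot = K_f @ np.linalg.pinv(J_feature) @ error   # camera/end-effector rates
    theta_dot = np.linalg.pinv(J_manip) @ x_dot       # joint rates via arm Jacobian
    return theta_dot
```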
The resulting $\dot{\theta}_t$ represents the robot joint-space rate, from which a complete closed-loop relation can then be formed. For achieving this task, an analytical expression of the error function is given by:
$$\phi = Z^+ \phi_1 + \gamma \left( I_6 - Z^+ Z \right) \frac{\partial \phi_2}{\partial X} \tag{37}$$

Here, $\gamma \in \Re^+$, $Z^+$ is the pseudo-inverse of the matrix $Z$, $Z \in \Re^{m \times n}$ with $\mathcal{R}(Z^T) = \mathcal{R}(J_1^T)$, and $J$ is the Jacobian matrix of the task function, $J = \frac{\partial \phi}{\partial X}$. Such a closed-loop system is relatively robust in the presence of modeling errors such as image distortions and kinematic parameter variations of the PUMA 560. A number of researchers have demonstrated good results using this image-based approach for visual servoing. It is widely reported that the significant problem is computing or estimating the feature Jacobian, for which a variety of approaches have been used (Corke, 1994). The IBVS structure proposed by Weiss (Weiss et al., 1987; Craig, 2004) controls the robot joint angles directly using measured image features; the nonlinearities include the manipulator kinematics and dynamics as well as the perspective imaging model. Adaptive control was also proposed, since $^fJ_\theta^{-1}(^c\theta)$ is pose dependent (Craig, 2004). In this study, the changing relationship between robot posture and image feature change is learned during motion via a learning neural system. The learning neural system accepts a weighted set of inputs (stimulus) and responds accordingly.
$$E = \frac{1}{NP} \sum_{p=1}^{P} \sum_{i=1}^{N} \left( t_{pi} - y_{pi} \right)^2 \tag{38}$$
where $p$ indexes the patterns in the training set, $i$ indexes the output nodes, and $t_{pi}$ and $y_{pi}$ are, respectively, the target hand joint-space position and the actual network output for the $i$th output unit on the $p$th pattern. An illustration of the layered network with an input layer, two hidden layers, and an output layer is shown in Fig. (5). In this network there are $l$ inputs, $m$ hidden units, and $n$ output units. The output of the $j$th hidden unit is obtained by first forming a weighted linear combination of the $l$ input values, then adding a bias:
$$a_j = \sum_{i=1}^{l} w^{(1)}_{ji} x_i + w^{(1)}_{j0} \tag{39}$$
where $w^{(1)}_{ji}$ is the weight from input $i$ to hidden unit $j$ in the first layer and $w^{(1)}_{j0}$ is the bias for hidden unit $j$. If we consider the bias term as being the weight from an extra input $x_0 = 1$, Equ. (39) can be rewritten in the form:
$$a_j = \sum_{i=0}^{l} w^{(1)}_{ji} x_i \tag{40}$$
The activation of hidden unit j then can be obtained by transforming the linear sum using a
nonlinear activation function g( x ) :
$$h_j = g(a_j) \tag{41}$$

Similarly, the $k$th unit of the next layer forms:

$$a_k = \sum_{j=1}^{m} w^{(2)}_{kj} h_j + w^{(2)}_{k0} \tag{42}$$
Absorbing the bias, the above equation becomes:

$$a_k = \sum_{j=0}^{m} w^{(2)}_{kj} h_j \tag{43}$$
Applying the activation function $g_2(x)$ to Equ. (43), we therefore get the $k$th output:

$$y_k = g_2(a_k) \tag{44}$$
Combining Equ. (40), Equ. (41), Equ. (43) and Equ. (44), we get a complete representation of the network as:

$$y_k = g_2\!\left( \sum_{j=0}^{m} w^{(2)}_{kj}\, g\!\left( \sum_{i=0}^{l} w^{(1)}_{ji}\, x_i \right) \right) \tag{45}$$
The network of Fig. (5) is a synthesized ANN with two hidden layers, which can easily be extended with extra hidden layers by applying the above transformation further. Input units merely pass the neural network signals on to the next processing nodes; they are hypothetical units that produce outputs equal to their inputs, hence no processing is done by them. Through this approach, the error of the network is propagated backward recursively through the entire network and all of the weights are adjusted so as to minimize the overall network error. The block diagram of the learning neural network used is illustrated in Fig. (6). The network learns the relationship between the previous changes in the joint angles $\Delta\Theta_{k-1}$, the changes in the object posture $\Delta u_{ca}$, and the changes in the joint angles $\Delta\Theta_k$. This is done by executing some random displacements from the desired object position and orientation. The hand fingers are set up in the desired position and orientation relative to the object. Different Cartesian-based trajectories are then defined, and the inverse Jacobian is used to compute the associated joint displacements $\Theta_h(k)$. Different object postures with joint positions and differential changes in joint positions are the input-output patterns used to train the employed neural network. During the learning epochs, the weights and biases of the neurons' connections are updated in such a way that the errors decrease to a value close to zero, resulting in the learning curve that minimizes the defined objective function, as will be further discussed later. It should be mentioned at this stage that the training process consumed nearly three hours, due to the large amount of training patterns presented to the neural network.
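As a compact restatement of Equs. (39)-(45) and the error measure (38), the forward pass of such a layered network can be sketched as follows; the layer sizes and the tanh activation are illustrative choices, not the exact configuration used in the study:

```python
import numpy as np

def forward(x, weights):
    """Forward pass of the layered network of Equ. (45).

    x       : input vector, extended internally with the bias input x0 = 1
    weights : list of weight matrices, one per layer (bias column included)
    """
    h = x
    for idx, W in enumerate(weights):
        a = W @ np.append(h, 1.0)  # weighted sum plus bias, as in Equ. (40)
        # hidden layers use g(.) = tanh; a linear g2 is used at the output here
        h = np.tanh(a) if idx < len(weights) - 1 else a
    return h

def error(targets, outputs):
    """Mean squared error over patterns and output nodes, Equ. (38)."""
    t, y = np.asarray(targets), np.asarray(outputs)
    return np.mean((t - y) ** 2)

# Illustrative 24-input, two-hidden-layer, 6-output network
rng = np.random.default_rng(0)
shapes = [(30, 25), (20, 31), (6, 21)]   # (units, inputs + bias) per layer
weights = [0.1 * rng.standard_normal(s) for s in shapes]
y = forward(np.zeros(24), weights)
```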
An artificial neural network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: it is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example.
An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; this is true of ANN systems as well (Aleksander & Morton, 1995). The four-layer feed-forward neural network with $n$ input units, $m$ output units and $N$ units per hidden layer was already shown in Fig. (5) and will be further discussed later. In reality, Fig. (5) exposes only one possible neural network architecture that will serve the purpose. In reference to Fig. (5), every node is designed in such a way as to mimic its biological counterpart, the neuron. The interconnection of the different neurons forms the entire grid of the used ANN, which has the ability to learn and approximate the nonlinear visual kinematic relations. The learning neural system used is composed of four layers: the input layer, the output layer, and two hidden layers.
If we denote $^w\nu_c$ and $^w\omega_c$ as the camera's linear and angular velocities with respect to the robot frame, respectively, the motion of the image feature point as a function of the camera velocity is obtained through the following matrix relation:

$$\begin{pmatrix} \dot{u} \\ \dot{v} \end{pmatrix} = \begin{pmatrix} \dfrac{\alpha\lambda}{{}^c p_z} & 0 & -\dfrac{\alpha\lambda\,{}^c p_x}{{}^c p_z^{\,2}} & -\dfrac{\alpha\lambda\,{}^c p_x\,{}^c p_y}{{}^c p_z^{\,2}} & \alpha\lambda\left(1+\dfrac{{}^c p_x^{\,2}}{{}^c p_z^{\,2}}\right) & -\dfrac{\alpha\lambda\,{}^c p_y}{{}^c p_z} \\ 0 & \dfrac{\alpha\lambda}{{}^c p_z} & -\dfrac{\alpha\lambda\,{}^c p_y}{{}^c p_z^{\,2}} & -\alpha\lambda\left(1+\dfrac{{}^c p_y^{\,2}}{{}^c p_z^{\,2}}\right) & \dfrac{\alpha\lambda\,{}^c p_x\,{}^c p_y}{{}^c p_z^{\,2}} & \dfrac{\alpha\lambda\,{}^c p_x}{{}^c p_z} \end{pmatrix} \begin{pmatrix} {}^c R_w & 0 \\ 0 & {}^c R_w \end{pmatrix} \begin{pmatrix} {}^w \nu_c \\ {}^w \omega_c \end{pmatrix} \tag{46}$$
Instead of using the coordinates $x_{P_c}$ and $y_{P_c}$ of the object feature described in the camera coordinate frame, which are a priori unknown, it is usual to replace them by the coordinates $(u)$ and $(v)$ of the projection of such a feature point onto the image frame, as shown in Fig. (7).
5. Simulation case study: Visual servoing with a pin-hole camera for a 6-DOF PUMA robotic arm
Visual servoing using a "pin-hole" camera for a 6-DOF robotic arm is simulated here. The system under study is a PUMA manipulator integrated with a camera and an ANN; the simulation block diagram is shown in Fig. (7). In the simulation, the task has been performed using a 6-DOF PUMA manipulator with six revolute joints and a camera that provides position information about the robot gripper tip and a target (object) in the robot workspace. The robot dynamics and direct kinematics are expressed by the set of equations of the PUMA-560 robotic system as documented by Craig (Craig, 2004); these kinematics and dynamics equations are already well known in the literature. For the purpose of comparison, the example used is based on the visual servoing system developed by RIVES, as in (Eleonora, 2004). The robotic arm system was servoed to follow an object moving in a 3-D workspace. The object was characterized by eight feature marks, resulting in 24 feature coordinates (an $\Re^{8\times3}$ array) entering the feature Jacobian, as visually shown in Fig. (7). The object's eight features are mapped to the movement of the object in the camera image plane through defined geometries. The changes in the feature points and the differential changes in the robot arm constitute the data used for training the ANN. The employed ANN architecture has already been discussed and presented in Fig. (5).
A large set of training patterns was gathered and classified. The patterns gathered at various loop locations gave an indication of a feasible size for the learning neural system, and a four-layer artificial neural system was found to be a feasible architecture for this purpose. The net maps 24 inputs (3×8 feature points) characterizing the object's Cartesian feature positions and the arm joint positions into the six differential changes in the arm joint positions. The network is presented with arm motions in various directions. Once the neural system has learned the required mapping from the presented patterns, it is ready to be employed in the visual servo controller. The trained neural net was able to map the nonlinear relations relating object movement to differential changes in the arm joint space. The object's path of motion was defined and simulated via the RIVES algorithm, as given in (Gian et al., 2004); after a large number of runs and patterns, it was apparent that the learning neural system was able to capture such nonlinear relations.
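The pattern-gathering step described above can be sketched as follows; observe_features and perturb_joints are hypothetical callbacks standing in for the simulated camera and arm, since the chapter does not publish its data-collection code:

```python
import numpy as np

def collect_training_pairs(observe_features, perturb_joints, n_samples):
    """Gather (feature change, joint change) pairs around a desired posture.

    observe_features : hypothetical callback returning the 24-element
                       (8 features x 3) image feature vector
    perturb_joints   : hypothetical callback applying a random joint
                       displacement and returning it (6-element vector)
    """
    inputs, targets = [], []
    f_prev = observe_features()
    for _ in range(n_samples):
        d_theta = perturb_joints()      # random displacement, as in the text
        f_new = observe_features()
        inputs.append(f_new - f_prev)   # change in object/feature posture
        targets.append(d_theta)         # associated joint-space change
        f_prev = f_new
    return np.asarray(inputs), np.asarray(targets)
```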
Fig. 8. Top view. Actual object position and desired position before the servoing
Figures 8 and 9 show the robot arm-camera system approaching a desired object posture. The ANN was fed with the defined patterns during arm movement, and the epipolar geometry was used to evaluate the visual features and their updates during the motion.
Fig. 9. Robot arm-camera system: Clear servoing towards a desired object posture
Fig. 10. Resulting errors between the ANN output and the RIVES output, plotted over simulation time in seconds, using the proposed ANN-based visual servo
Fig. 11. Migration of the eight visual features (as observed over the camera image plane)
Fig. 13. ANN visual servo controller error approaching zero for the different training visual servo target postures
Fig. 14. Error of arm movements: difference between the ANN-based controller output and the RIVES output in the ANN visual servo
6. Conclusions
Servoing a robotic arm towards a moving object using visual information is a research topic that has been presented and discussed by a number of researchers over the last twenty years. In this sense, this chapter has discussed a mechanism to learn the kinematic and feature-based Jacobian relations used in a robotic arm visual servo system. The concept introduced in this chapter is based on the employment of an artificial neural network system. The ANN was trained in such a way as to learn a mapping covering the complicated kinematics that relate changes in the visual loop to the arm joint space. Changes in the loop's visual Jacobian depend heavily on the robotic arm's 3-D posture; in addition, they depend on the features associated with the object under visual servo (to be tracked). Results have shown that the trained neural network can learn such complicated visual relations relating an object movement to arm joint-space movement, and the proposed methodology has also resulted in a great deal of accuracy. The proposed methodology was applied to the well-known image-based visual servoing scheme already discussed and presented by RIVES, as documented in (Gian et al., 2004). Results have indicated a close degree of accuracy between the already published RIVES algorithm results and the newly proposed ANN visual servo algorithm. This indicates that the ANN visual servo, being based on a learned mapping of the space, can reduce the computation time.
7. References
Aleksander, I. & Morton, H. (1995), An Introduction to Neural Computing, Textbook,
International Edition 2nd Edition.
Alessandro, D.; Giuseppe, L.; Paolo, O. & Giordano, R. (2007), On-Line Estimation of Feature
Depth for Image-Based Visual Servoing Schemes, IEEE International Conference on
Robotics and Automation, Italy, 1014
Chen, J.; Dawson, M.; Dixon, E. & Aman, B. (2008), Adaptive Visual Servo Regulation
Control for Camera-in-Hand Configuration With a Fixed-Camera Extension,
Proceedings of the 46th IEEE Conference on Decision and Control, CDC, 2008, pp. 2339-
2344, New Orleans, LA, United States
Cisneros P. (2004), Intelligent Model Structures in Visual Servoing, Ph.D. Thesis, University
of Manchester, Institute for Science and Technology
Corke, P. (2002), Robotics Toolbox for MATLAB, For Use with MATLAB, User Guide, Vol. 1.
Craig, J. (2004), Introduction to Robotics: Mechanics and Control, Textbook, International
Edition, Prentice Hall
Corke, P. (1994), High-Performance Visual Closed-Loop Robot Control, Thesis submitted in total fulfilment of the requirements for the degree of Doctor of Philosophy.
Eleonora, A.; Gian, M. & Domenico, P. (2004), Epipolar Geometry Toolbox, For Use with MATLAB, User Guide, Vol. 1.
Gian, M.; Eleonora, A. & Domenico, P. (2004), The Epipolar Geometry Toolbox (EGT) for
MATLAB, Technical Report, 07-21-3-DII University of Siena, Siena, Italy
Gracia, L. & Perez-Vidal, C. (2009), A New Control Scheme for Visual Servoing, International
Journal of Control, Automation,and Systems, Vol. 7, No. 5, pp. 764-776, DOI
10.1007/s12555-009-0509-9
Lenz, K. & Tsai, Y. (1988), Calibrating a Cartesian Robot with Eye-on-Hand Configuration
Independent of Eye-To-Hand Relationship, Proceedings Computer Vision and Pattern
Recognition (CVPR '88), Computer Society Conference, Vol. 5, No. 9, pp. 67 –
75
Martinet, P. (2004), Applications In Visual Servoing, IEEE-RSJ IROS’04 International
Conference, September 2004, Japan.
Miao, H.; Zengqi, S. & Fujii, M. (2007), Image Based Visual Servoing Using Takagi-Sugeno
Fuzzy Neural Network Controller, IEEE 22nd International Symposium on Intelligent
Control, ISIC 2007, Singapore, Vol. 1, No. 3, pp. 53 – 58
Panwar, V. & Sukavanam, N. (2007), Neural Network Based Controller for Visual Servoing
of Robotic Hand Eye System, Engineering Letters, Vol. 14, No.1, EL_14_1_26,
Advance Online Publication
Pellionisz, A. (1989), About the Geometry Intrinsic to Neural Nets, International Joint
Conference on Neural Nets, Washington, D.C., Vol. 1, pp. 711-715
Shields, M. & Casey, C. (2008), A Theoretical Framework for Multiple Neural Network
Systems, Journal of Neurocomputing, Vol. 71, No.7-9, pp. 1462-1476
Weiss, L.; Sanderson, A. & Neuman, C. (1987), Dynamic Visual Servo Control of Robots: An Adaptive Image-Based Approach, IEEE Journal on Robotics and Automation, Vol. 3, No. 5, pp. 404–417.
22
1. Introduction
Quality assurance in all industrial fields depends on having a suitable and robust inspection method which allows the integrity or characteristics of the inspected sample to be known. Testing can be destructive or non-destructive. The former is more expensive and does not guarantee knowledge of the condition of the inspected sample; nevertheless, it is used in sectors where the latter cannot be applied due to technical limitations. Non-destructive testing is an ongoing research field in which cheaper, faster and more reliable methods are sought to assess the quality of specific specimens.
Once suitable reliability and quality are achieved with a specific method, the next step is to reduce the duration and cost of the associated process. Considering that most of these processes require a human operator skilled in interpreting the data, those two factors (process duration and cost) can be reduced by improving the automation and autonomy of the associated processes. This can be achieved by using a robotic system. However, most of the algorithms associated with those processes are computationally expensive, so the robots would need a high computational capacity, which implies a platform of big size, reduced mobility, limited accessibility and/or considerable cost. Those constraints are decisive for some specific applications.
One important factor which should be considered in developing a low-cost, small and mobile robotic system is the design of the software and hardware of the sensor. If the sensor is designed with a deep analysis of the context of the specific application, a considerable reduction in the requirements and complexity of the robotic system can be obtained.
The appropriate design of the hardware and software of the sensor depends on the proper selection of the signal pattern which is going to be used to characterize the condition of the inspected sample. For ultrasonic waves the patterns are changes in amplitude, frequency or phase of the received echo caused by the properties of a specific target. Some of those properties are attenuation, acoustic impedance and speed of sound, among others.
Among the many applications of ultrasound, one which is important for the aerospace, automotive, food and oil industries, among others, is material identification. Despite the fact that there are many ultrasonic sensors which allow identifying the material of the sample being inspected, most of them are not appropriate for implementation in a robot. The high computational capacity demanded by the sensor's algorithm is the main constraint on implementing ultrasonic material identification in a robot. This book chapter addresses this problem and solves it with the Peniel method. Based on this method, a novel sensor is designed and constructed and implemented in two robots of the TEAC2H-RI kit.
This book chapter is organized as follows: Section Two briefly reviews some approaches to material identification with ultrasonic sensors. Section Three illustrates the novel Peniel method and its associated hardware and software. Section Four describes the results. Finally, Section Five presents the conclusions and future work.
Furthermore, this approach uses a database which contains the typical values of the parameters corresponding to specific materials. Its most relevant strengths are the identification of the materials of objects with non-planar cylindrical surface profiles and the accuracy of the results. On the other hand, its most significant limitations are the expensive computational cost of its processing algorithm, the absence of autonomy and the restrictions on the position of the transducer.
In [Ohtanil et al, 2006], an array of ultrasonic sensors and an MLP neural network are implemented for material identification. Experiments were performed on copper, aluminium, pasteboard and acrylic, at different angles with respect to the x and z axes. The maximum distance between the sensor array and the target was 30 cm. The main strength of the system is that it works in air over a wide range of distances between the target and the sensor system; additionally, the sensor model is automatically adjusted based on this distance, and the results are quite accurate. Its most significant limitations are the big size of the system (appropriate only for a huge robot), the computing requirements and the dependence on the position of the target for the identification of the material.
In [Zhao et al, 2003], ultrasonic identification of materials is used for quality assurance in the production of canned products. The configuration of the ultrasonic system is pulse-echo, with water as the coupling medium between the transducer and the target. By means of this setup, any shift from a specific value of the acoustic impedance, resulting from foreign bodies within the bottle, can be detected. Its main strengths are the accuracy of the system and its high sensitivity. Its most relevant weaknesses are the absence of mobility and autonomy.
In [Pallav et al, 2009], a system is illustrated which has the same purpose as the previously mentioned system, i.e. it also performs quality control in canned food by means of material identification. The system uses an ultrasonic sensor to identify the acoustic impedance shift within the canned food which is related to foreign bodies within the can. The main differences in comparison to the previous system [Zhao et al, 2003] are that the through-transmission configuration is used, and that the couplant is not water but air. Its most relevant strengths are the successful detection of foreign bodies and the mobility achieved in the X and Y axes. Its most important limitations are the high voltage requirements, the dependence on the object's position and the narrow range of mobility.
In [Stepanić et al, 2003], material identification is used to differentiate between common buried objects in the soil and antipersonnel landmines. In this case a tip with an ultrasonic transducer at its end is used, and the latter is acoustically coupled with the target. The procedure to operate the system is: 1. Select a zone where the landmine is suspected to be. 2. Introduce the tip in this zone in order to touch the suspect object. 3. Take some measurements. Based on these measurements the material can be identified and, as a consequence, the presence or absence of a landmine can be detected. Its most important limitations are the lack of autonomy, because it requires a human operator, and the danger involved in making contact with the landmine.
Among the previously mentioned approaches, the most significant limitation which forbids automation in small robotic agents is the expensive computational cost of the processing algorithms. This problem was addressed in a master's thesis and the results are presented in this chapter.
3. Peniel method
In the following lines the Peniel method is presented. It is a method of low computational cost, and therefore its associated algorithm and circuit can be implemented in a small robot. It is important to highlight that only one microcontroller is used to implement this method.
$$A(t) = A e^{-B(t-C)} + D \tag{1}$$
where A is related to the first echo amplitude, B to the rate of decay of reverberations,
C to the timing of the first echo from the start of the scan and D to the signal-to-noise
ratio.
In [Gunarathne et al, 2002] it is mentioned that, knowing the decay coefficient (B), the material can be identified. Moreover, in [Allin, 2002] it is shown that there are several methods based on the decay coefficient for identifying the material. This variable is important because it depends only on the attenuation and acoustic impedance of the material, if the thickness is kept constant. In most solids the attenuation is low and its effect is negligible in comparison with the acoustic impedance; based on this, the material can be identified if the acoustic impedance is known. In the following lines we present a novel method developed to estimate the decay coefficient B indirectly, so that it can be used to identify the material of solid plates.
Without losing generality, in (1) we assume that D = 0 and C = 0. If we seek the time interval during which A(t) is greater than or equal to a value w, the equation which expresses this is:

$$A(t) \geq w \tag{2}$$

Replacing (1) in (2), and taking into account the assumptions made with respect to C and D, we obtain:

$$\Leftrightarrow A e^{-B t} \geq w \tag{3}$$

After some algebra we obtain:

$$\Leftrightarrow t \leq \frac{\ln\left(\frac{A}{w}\right)}{B} \tag{4}$$
This means that in the time interval $\left[0, \frac{\ln(A/w)}{B}\right]$ the exponential function is greater than or equal to $w$. The duration of this time interval is defined as $t_{di}$; then, for the previous case, it can be said that:

$$t_{di} \leq \frac{\ln\left(\frac{A}{w}\right)}{B} \tag{5}$$
The left-side term in expression (3) is an artificial signal which fits the received ultrasonic echo (see page 3). If the received ultrasonic echo is amplified by a gain G, so is the exponential function in (3). Thus, from (3) we obtain:

$$G \cdot A e^{-B t} \geq w \tag{6}$$

This leads to expressions similar to those obtained in (4) and (5):

$$t \leq \frac{\ln\left(\frac{G \cdot A}{w}\right)}{B} \tag{7}$$

$$t_{di} = \frac{\ln\left(\frac{G \cdot A}{w}\right)}{B} \tag{8}$$
Therefore, for two signals with similar A values, the duration $t_{di}$ is inversely proportional to the decay coefficient B. This can be seen in figure 1 for the case when A = 1, w = 0.5, and B = 1000, 2000, 3000 and 4000. Additionally, the behavior with G of the difference between $t_{di3}$ and $t_{di4}$ is shown. This lets us conclude that $t_{di}$ can be used to characterize a material if G, A and w are known.
Fig. 1. The variation of the time interval duration depending on the gain. In this case, the time interval is the time during which an exponential signal is above 0.5 (w)
Now, if two different gain values ($G_1$ and $G_2$) are used for the same signal, equation (8) yields:

$$t_{di1} = \frac{\ln\left(\frac{G_1 \cdot A}{w}\right)}{B} \tag{9}$$

$$t_{di2} = \frac{\ln\left(\frac{G_2 \cdot A}{w}\right)}{B} \tag{10}$$

If $G_1 < G_2$ and (9) is subtracted from (10), we obtain:

$$t_{di2} - t_{di1} = \frac{\ln\left(\frac{G_2 \cdot A}{w}\right)}{B} - \frac{\ln\left(\frac{G_1 \cdot A}{w}\right)}{B} \tag{11}$$

$$t_{di2} - t_{di1} = \frac{\ln\left(\frac{G_2 \cdot A}{w}\right) - \ln\left(\frac{G_1 \cdot A}{w}\right)}{B} \tag{12}$$

$$t_{di2} - t_{di1} = \frac{\ln\left(\dfrac{G_2 \cdot A / w}{G_1 \cdot A / w}\right)}{B} \tag{13}$$

$$t_{di2} - t_{di1} = \Delta t_1 = \frac{\ln\left(\frac{G_2}{G_1}\right)}{B} \tag{14}$$

$$\Delta t_1 = \frac{\ln(G_r)}{B} \tag{15}$$

where $G_r = \frac{G_2}{G_1}$.
As can be seen, (14) does not depend on A, and therefore it is only necessary to know $G_2$ and $G_1$ to find B from $\Delta t_1$. Moreover, if there are two signals from different materials, the material can be identified from the difference between the corresponding $\Delta t$ values, even though $G_2$ and $G_1$ are not known. Figure 2 shows the behavior of $\Delta t$ as a function of B for different values of $G_r$. As can be seen in figure 2, the duration increment of the time interval ($\Delta t$) depends on B if $G_r$ is kept constant. Also, it can be recognized that if B is kept constant, high values of $G_r$ result in large increases in the duration of the time interval ($\Delta t$).
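Equation (15) suggests a direct numerical recipe for estimating B: measure the two threshold-crossing durations under the gains $G_1$ and $G_2$ and invert. A minimal sketch with synthetic values that also verifies the algebra:

```python
import numpy as np

def decay_coefficient(t_di1, t_di2, G1, G2):
    """Estimate B from the duration increment, inverting Equ. (15):
    delta_t = ln(G2/G1) / B  ->  B = ln(Gr) / delta_t.
    """
    delta_t = t_di2 - t_di1
    return np.log(G2 / G1) / delta_t

# Synthetic check: A = 1, w = 0.5, B = 2000, gains 10 and 20
A, w, B, G1, G2 = 1.0, 0.5, 2000.0, 10.0, 20.0
t1 = np.log(G1 * A / w) / B     # Equ. (9)
t2 = np.log(G2 * A / w) / B     # Equ. (10)
print(decay_coefficient(t1, t2, G1, G2))   # -> 2000.0 (recovers B)
```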
The method requires that the signals being compared have a similar value of A (see equation 1). To meet this requirement, the circuit in figure 3 has been used; this circuit is called the clustering circuit. Although by means of (14) the material can be identified without knowing A, the clustering circuit is used to make the method more robust.
Fig. 2. Behavior of the duration increment of the time interval ($\Delta t$) as a function of the decay coefficient, for values of G1 = 10, 20, 30 and G2 = 20, 30, 40
Fig. 3. Clustering circuit to ensure that the A value is similar in signals which are compared in terms of the duration of the time interval. The abbreviation Amp refers to the amplifier and Comp to the comparator
Fig. 4. Electronic circuit for calculating an estimated value of A. AMP1, AMP2, AMP3 and
AMP4 are the same as in Figure 3. ENV means envelope detector
3.2.3 The circuit and the algorithm for the identification of the changes in the durations
As mentioned in section 3.1, the material can be identified by finding the difference between the two durations which are produced for the same signal when it is amplified by two different gain values, i.e. by finding $\Delta t$. In order to do this, the circuit of figure 5 was implemented. In this case AMP1-4 correspond to the same amplifiers mentioned in figure 3, while Amp1.1, Amp2.1, Amp3.1 and Amp4.1 are additional amplifiers. COMP5, COMP6, COMP7 and COMP8 are comparators which, even with an amplitude-modulated sinusoidal signal such as the ultrasonic echo, remain activated as long as the envelope of the signal is above the threshold of 2.5 V. While the output of such a comparator is quite stable, it still has some noisy fluctuations which need to be removed, and this is the reason for using the comparators COMP5.1, COMP6.1, COMP7.1 and COMP8.1. The output of these comparators is connected to a specific input of the CD4052, and the output on pin 13 of this multiplexer goes to the microcontroller.
Fig. 5. The electronic circuit for calculating the durations associated with different gain values
The microcontroller on the main board processes the data of the sensors. The six outputs from the receiver circuit go to the main board, and two outputs of the main board go to the receiver circuit; these two outputs control the CD4052. The microcontroller sends the duration and the peak voltage of the signals to the PC, where the signals are processed by MATLAB and the results are plotted. The whole setup of the experiments can be seen in figure 6.
In order to assure the repeatability of the results, the following procedure was followed:
Procedure 1
1. Clean the inspected sample and the transducer faces
2. Spread oil over the surface of both transducer faces
3. Push the transducers against the inspected sample
4. Take ten different measurements
5. Repeat this procedure five times.
The ten different measurements refer to sending, at different moments, a toneburst to the inspected sample and capturing the received echo; this is done ten times. This procedure was followed for the different materials used in this experiment, and the results are shown in figure 7.
Fig. 6. Diagram of the experimental setup used in procedure 1. The couplant is oil. Tx and Rx refer to the transmitter and receiver, respectively
In figure 7, Delta t1 refers to $t_{di2} - t_{di1}$, Delta t2 refers to $t_{di3} - t_{di2}$, and so on. $t_{di1}$ refers to the duration of the signal at the output of the comparator which corresponds to the signal's group, and the other $t_{di}$ belong to the comparators of the groups with amplifiers of higher gain than the amplifier of the signal's group; e.g. if the signal belongs to group 2, $t_{di1}$ refers to the duration of Comp6.1, and $t_{di2}$ and $t_{di3}$ belong to Comp7.1 and Comp8.1, respectively. As can be seen, for the higher-order groups the number of $t_{di}$ values will be fewer than for the groups of lower order.
As can be seen from figures 7a-7b, $\Delta t$ is not constant for any material; however, this value always lies within a defined interval. In [Gunarathne et al, 2002] it is mentioned that the decay coefficient B does not have a fixed value from measurement to measurement but takes different values within an interval. Because of this, a Gaussian probability density function (PDF) is defined for the B value of each material, and the value can then be any of those within the PDF. This confirms that the results obtained here with $\Delta t$ are correct, because they are congruent with the theory. However, for the current case it was not necessary to create a PDF but only intervals to which the corresponding $\Delta t$ belongs, which simplifies the material identification task. In the following, the intervals are chosen.
Fig. 7. Duration increment of the time interval for the results obtained using the procedure 1.
The materials inspected were a) Acrylic b) Glass and c) Aluminum
Fig. 8. Peak voltage for the results obtained using the procedure 1. The materials inspected
were a) Acrylic b) Glass and c) Aluminum
Fig. 9. Duration increment of the time interval for the results obtained using the procedure 2.
The materials inspected were a) Acrylic b) Glass and c) Aluminum
From figure 7a it can be seen that $\Delta t_1$ always lies within the range (0.025 ms, 0.15 ms), while $\Delta t_2$ and $\Delta t_3$ lie in the ranges (0.16 ms, 0.4 ms) and (0, 0.1 ms), respectively; these intervals are therefore chosen to identify the acrylic sample.
From figure 7b it can be seen that $\Delta t_1$ and $\Delta t_2$ always lie within (0.6 ms, 1 ms) and (0.2 ms, 0.65 ms), respectively. Those are then the intervals for the glass sample.
From figure 7c it can be seen that $\Delta t_1$ and $\Delta t_2$ always lie within (1 ms, 2 ms) and (0.8 ms, 1.9 ms), respectively. Those are then the intervals for the aluminium sample.
Figure 8 shows the peak voltages for the corresponding measurements of figure 7. It can be seen that the peak voltage also varies over a wide range from measurement to measurement. Therefore, if this parameter is going to be used to identify the material, a voltage interval also has to be defined for each material. This parameter can be used to make the measurements more robust, but for the moment only the $\Delta t$ variable will be used for the identification of the materials.
It is important to mention that both the peak voltage and $\Delta t$ vary within an interval from measurement to measurement because those variables are highly dependent on the characteristics of the couplant applied between the transducer face and the inspected sample.
Based on the previously mentioned results, the algorithm was modified so that the intervals are used to identify the material. With this new algorithm, a procedure (procedure 2) similar to procedure 1 was repeated for all the materials; the only difference in this new procedure is that the average of the ten measurements is taken and this result is compared with the different intervals. Figure 9 shows one of the five repetitions of the new procedure.
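The interval test of procedure 2 then reduces to a handful of comparisons, which is why a single microcontroller suffices. A sketch in Python using the interval bounds read off figures 7a-7c (time values in ms); the dictionary layout is an illustrative choice:

```python
# Delta-t intervals (in ms) read off figures 7a-7c for each material
INTERVALS = {
    "acrylic":  [(0.025, 0.15), (0.16, 0.4), (0.0, 0.1)],
    "glass":    [(0.6, 1.0), (0.2, 0.65)],
    "aluminum": [(1.0, 2.0), (0.8, 1.9)],
}

def identify_material(delta_ts):
    """Compare averaged delta-t values against the per-material intervals.

    delta_ts : list of averaged duration increments (delta_t1, delta_t2, ...)
    Returns the first material whose intervals all contain the measurements.
    """
    for material, bounds in INTERVALS.items():
        n = min(len(bounds), len(delta_ts))
        if n and all(lo <= dt <= hi
                     for dt, (lo, hi) in zip(delta_ts[:n], bounds[:n])):
            return material
    return "unknown"

print(identify_material([0.8, 0.4]))   # -> "glass"
```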
In table 1 can be seen the performance of the developed sensor for the material identification
task.
Correct Wrong
Material Trials % Accuracy
Identification Identification
Acrylic 5 5 0 100%
Glass 5 5 0 100%
Aluminum 5 5 0 100%
Table 1. Accuracy of the developed sensor to identify the material
As can be seen, in these experiments the accuracy of the identification is 100% for all the materials. This is very important because, with quite a simple method and a low-cost computational algorithm, a task was performed which normally requires more complex methods and algorithms of more expensive computational cost.
Fig. 10. Son-robot and mom-robot exploring the environment (González J.J. et al, 2010a)
6. Acknowledgment
We would like to thank the Science, Technology and Innovation Management Department, COLCIENCIAS, for the economic support given to the master's student Juan José González España within the Youth Researchers and Innovators Program - 2009.
7. References
Allin M. J. (2002) Disbond Detection in Adhesive Joints Using Low Frequency Ultrasound. PhD
Thesis. Imperial College of Science Technology and Medicine. University of
London. 2002
Airmar Technology. (2011). http://www.airmartechnology.com/ Accessed 4th August 2011
González J.J.; Jiménez J.A.; Ovalle D.A. (2010) TEAC2H-RI: Educational Robotic Platform for
Improving Teaching-Learning Processes of Technology in Developing Countries.
Technological Developments in Networking, Education and Automation 2010, pp
71-76
González J., Jiménez J. Algoritmo para la Identificación de Materiales en Agentes Robóticos
(Spanish). Quinto Congreso Colombiano de Computación (5CCC) Cartagena-
Colombia (2010)
Gunarathne, G. P. P. (1997) Measurement and monitoring techniques for scale deposits in
petroleum pipelines. In: Proc. IEEE Instrum. Meas. Technol. Conf., Ottawa, ON,
Canada, May 19–21, 1997, pp. 841–847. DOI: 10.1109/IMTC.1997.610200
Gunarathne G.P.P ; Zhou Q. ; Christidis K. (1998) Ultrasonic feature extraction techniques for
characterization and quantification of scales in petroleum pipelines. In: Proceedings IEEE
Ultrasonic Symposium, vol. 1; 1998, p. 859–64 DOI: 10.1109/ULTSYM.1998.762279
Gunarathne G.P.P ; Christidis K. (2002) Material Characterization in situ Using ultrasound
measurements. In: Proceedings of the IEEE Transactions on Instrumentation and
Measurement. Vol. 51, pp. 368-373. 2002 DOI: 10.1109/19.997839
NASA (2007) Ultrasonic Testing of Aerospace Materials. URL:
klabs.org/DEI/References/design_guidelines/test_series/1422msfc.pdf Accessed
on: August 2011.
Ohtanil K., Baba M. (2006) An Identification Approach for Object Shapes and Materials Using an
Ultrasonic Sensor Array. Proceedings of the SICE-ICASE International Joint
Conference, pp. 1676-1681. 2006
Pallav P., Hutchins D., Gan T. (2009) Air-coupled ultrasonic evaluation of food materials.
Ultrasonics 49 (2009) 244-253
Stepanić, H. Wüstenberg, V. Krstelj, H. Mrasek (2003) Contribution to classification of buried
objects based on acoustic impedance matching, Ultrasonics, 41(2), pp. 115-123, 2003
Thomas, S.M, Bull D.R, (1991) Neural Processing of Airborne Sonar for Mobile Robot
Applications. Proceedings of the Second International Conference on Artificial
Neural Networks. pp. 267-270. 1991.
Zhao B., Bashir O.A., Mittal G.S., (2003) Detection of metal, glass, plastic pieces in bottled
beverages using ultrasound, Food Research International 36 (2003) 513–521.
Part 4
1. Introduction
Advances in robotics and cognitive sciences have stimulated expectations for emergence of
new generations of robotic devices that interact and cooperate with people in ordinary human
environments (robot companion, elder care, home health care), that seamlessly integrate
themselves into complex environments (domestic, outdoor, public spaces), that fit into
different levels of system hierarchies (human-robot co-working, hyper-flexible production
cells, cognitive factory), that can fulfill different tasks (multi-purpose systems) and that
are able to adapt themselves to different situations and changing conditions (dynamic
environments, varying availability and accessibility of internal and external resources,
coordination and collaboration with other agents).
Unfortunately, so far, steady improvements in specific robot abilities and robot hardware have
not been matched by corresponding robot performance in real-world environments. On the
one hand, simple robotic devices for tasks such as cleaning floors and cutting the grass have
met with growing commercial success. Robustness and single purpose design is the key
quality factor of these simple systems. At the same time, more sophisticated robotic devices
such as Care-O-Bot 3 (Reiser et al., 2009) and PR2 (Willow Garage, 2011) have not yet met
commercial success. Hardware and software complexity is their distinguishing factor.
Advanced robotic systems are systems of systems and their complexity is tremendous.
Complex means they are built by integrating an increasingly larger body of heterogeneous
(robotics, cognitive, computational, algorithmic) resources. The need for these resources arises
from the overwhelming number of different situations an advanced robot is faced with during
the execution of a multitude of tasks. Despite the expended effort, even sophisticated systems are
still not able to perform at an expected and appropriate level of overall quality of service in
complex scenarios in real-world environments. By quality of service we mean the set of system
level non-functional properties that a robotic system should exhibit to appropriately operate
in an open-ended environment, such as robustness to exceptional situations, performance
despite of limited resources and aliveness for long periods of time.
Since vital functions of advanced robotic systems are provided by software and software
dominance is still growing, the above challenges of system complexity are closely related to
the need of mastering software complexity. Mastering software complexity becomes pivotal
towards exploiting the capabilities of advanced robotic components and algorithms. Tailoring
modern approaches of software engineering to the needs of robotics is seen as decisive
towards significant progress in system integration for advanced robotic systems.
Based on these observations, we assume that the next big step in advanced robotic systems
towards mastering their complexity and their overall integration into any kind of environment
and systems depends on separation of concerns. Since software plays a pivotal role in
advanced robotic systems, we illustrate how to tailor a service-oriented component-based
software approach to robotics, how to support it by a model-driven approach and according
tools and how this allows separation of concerns which so far is not yet addressed
appropriately in robotics software systems.
Experienced software engineers should get insights into the specifics of robotics and should better understand what the robotics community needs and expects from the software engineering community. Experienced roboticists should get detailed insights into
how model-driven software development (MDSD) and its design abstraction is an approach
towards system-level complexity handling and towards decoupling of robotics knowledge
from implementational technologies. Practitioners should get insights into how separation of
concerns in robotics is supported by a service-oriented component-based software approach
and that according tools are already matured enough to make life easier for developers
of robotics software and system integrators. Experts in application domains and business
consultants should gain insights into maturity levels of robotic software systems and according
approaches under a short-term, medium-term and long-term perspective. Students should
understand how design abstraction as recurrent principle of computer science applied to
software systems results in MDSD, how MDSD can be applied to robotics, how it provides
a perspective to overcome the vicious circle of robotics software starting from scratch again
and again and how software engineering and robotics can cross-fertilize each other.
Limited resources require decisions: when to assign which resources to what activity taking
into account perceived situation, current context and tasks to be fulfilled. Finding adequate
solutions for this major challenge of engineering robotic systems is difficult for two reasons:
• the problem space is huge: as uncertainty of the environment and the number and type of
resources available to a robot increase, the definition of the best matching between current
situation and correct robot resource exploitation becomes an overwhelming endeavour
even for the most skilled robot engineer,
• the solution space is huge: in order to enhance overall quality of service like robustness
of complex robotic systems in real-world environments, robotic system engineers should
master highly heterogeneous technologies, need to integrate them in a consistent and
effective way and need to adequately exploit the huge variety of robotic-specific resources.
In consequence, it is impossible to statically assign resources in advance in such a way that all
potential situations arising at runtime are properly covered. Due to open-ended real-world
environments, there will always be a deviation between design-time optimality and runtime
optimality with respect to resource assignments. Therefore, there is a need for dynamic
resource assignments at runtime which arises from the enormous sizes of the problem space
and the solution space.
For example, a robot designer cannot foresee how crowded an elevator will be. Thus, a robot
will need to decide by its own and at runtime whether it is possible and convenient to exploit
the elevator resource. The robot has to trade the risk of hitting an elevator’s user with the risk
of arriving late at the next destination. To match the level of safety committed at design-time,
the runtime trade-off has to come up with parameters for speed and safety margins whose
risk is within the design-time committed boundaries while still implementing the intent to
enter the elevator.
• The solution space can be mastered by providing means to model and relate different views relevant to robotic system design. A major issue is the support of separation of concerns taking into account the specific needs of robotics.
• The problem space can be mastered by giving the robot the ability to reconfigure its internal
structure and to adapt the way its resources are exploited according to its understanding
of the current situation.
Fig. 2. Separation of concerns and design abstraction: models created at design-time are used
and manipulated at runtime by the robot (Steck & Schlegel, 2011).
We coin the term model-centric robotic systems (Steck & Schlegel, 2011) for the new
approach of using models to cover and support the whole life-cycle of robotic systems.
Such a model-centric view puts models into focus and bridges design-time and runtime
model-usage.
During the whole lifecycle, models are refined and enriched step-by-step until finally they
become executable. Models comprise variation points which support alternative solutions.
Some variation points are purposefully left open at design time and even can be bound
earliest at runtime after a specific context and situation dependent information is available.
In consequence, models need to be interpretable not only by a human designer but also by
a computer program. At design-time, software tools should understand the models and
support designers in their transformations. At runtime, adaptation algorithms should exploit
the models to automatically reconfigure the control system according to the operational
context (see figure 2).
The need to explicitly support the design for runtime adaptability adds robotic-specific
requirements on software structures and software engineering processes, gives guidance on
how to separate concerns in robotics and allows to understand where the robotics domain
needs extended solutions compared to other and at first glance similar domains.
The robotics community provides domain-specific concepts, best practices and design patterns of robotics. These are independent of any software or implementational technology. They form the body of knowledge of robotics and provide the domain-specific ground for the above roles.
The essence of the work of the component builder is to design reusable components which
can seamlessly be integrated into multiple systems and different hardware platforms. A
component is considered as a black box. The developer can achieve this abstraction only
if he is strictly limited in his knowledge and assumptions about what happens outside his
component and what happens inside other components.
On the other hand, the methodology and the purpose of the system integrator is opposite: he
knows exactly the application of the software system, the platform where it will be deployed
and its constraints. For this reason, he is able to take the right decision about the kind of
components to be used, how to connect them together and how to configure their parameters
and the quality of service of each of them to orchestrate their behavior. The work of the system
integrator is rarely reusable by others, because it is intrinsically related to a specific hardware
platform and a well-defined and sometimes unique use-case. We don’t want the system
integrator to modify a component or to understand the internal structure and implementation
of the components he assembles.
Fig. 3. Building robotic systems out of readily-available and reusable software components:
separation of the roles of component development and system integration.
and business-to-client products such as hardware parts, software applications and complete
robots with a specific application and deliver a valuable product to the customer.
To understand better what a robotics industry means, we draw an analogy to the personal
computer industry. Apart from very few exceptions, we can identify several companies
involved in the manufacturing of a single and very specific part of the final product: single
hardware components (memories, hard drives, CPU, mother boards, screens, power supplies,
graphic cards, etc.), operating systems (Windows, commercial Linux distributions), software
applications (CAD, word processing, video games, etc.) and system integrators which provide
ready-to-use platforms to the end user.
lifecycle of components, that is, during the design phase (design and implementation), the
deployment phase (system integration) and even the runtime phase (dynamic wiring of data
flow according to situation and context). CBSE is based on the explication of all relevant
information of a component to make it usable by other software elements whose authors
are not known. The key properties of encapsulation and composability result in the following
seven criteria that make a good component: “(i) may be used by other software elements
(clients), (ii) may be used by clients without the intervention of the component’s developers,
(iii) includes a specification of all dependencies (hardware and software platform, versions,
other components), (iv) includes a precise specification of the functionalities it offers, (v) is
usable on the sole basis of that specification, (vi) is composable with other components,
(vii) can be integrated into a system quickly and smoothly” (Meyer, 2000).
4. Abstractions are provided by models (Beydeda et al., 2005). Abstraction is a core principle
of software engineering.
“A model is a simplified representation of a system intended to enhance our ability to
understand, predict and possibly control the behavior of the system” (Neelamkavil, 1987).
At the component level (C), the component builder wants to rely on a stable interface to the
component framework y. In an ideal situation, the component framework can be considered
a black box hiding all operating system and middleware aspects from the user code. The
component framework adds the execution container to the user code such that the resulting
component is conformant to a black box component view.
At the framework level (F), two stable interfaces exist: (i) between the framework and the
user code of the component builder y and (ii) between the framework and the underlying
middleware & operating system z. The stable interface y ensures that no middleware
and operating system specifics are unnecessarily passed on to the component builder. The
stable interface z ensures that the framework can be mapped onto different implementation
technologies (middleware, operating systems) without reimplementing the framework in its
entirety. The framework builder maintains the framework, which links the stable interfaces y
and z, and maps the framework onto different implementation technologies via the interface
z.
Fig. 7. The structure of a SmartSoft component and its stable interfaces.
[Figure 8 content: a QueryClient&lt;GridMapRequest, GridMap&gt; offers query(R,&A) (sync) as well as queryRequest(R,&Id), queryReceive(Id,&A), queryReceiveWait(Id,&A) and queryDiscard(Id) (async); the QueryServer&lt;GridMapRequest, GridMap&gt; offers the async upcall handleQuery(Id,R) and the answer(Id,A) method. The GridMapRequest carries position, size and resolution; the GridMap carries mapCells and the isValid flag.]
Fig. 8. The views of a component builder and a system integrator on services by the example
of a grid map service based on a query communication pattern.
Figure 8 illustrates this concept by means of the query communication pattern which consists
of a query client and a query server. The query pattern expects two communication objects to
define a service: a request object and an answer object. Communication objects are transmitted
by-value to ensure decoupling of the lifecycles of the client side and the server side of a service.
They are arbitrary objects enriched by a unique identifier and get/set-methods. Hidden
from the user and inside the communication patterns, the content of a communication object
provided via E gets extracted and forwarded to the middleware interface H. Incoming content
at H is put into a new instance of the according communication object before providing access
to it via E.
In the example, the system integrator sees a provided port based on a query server with the
communication objects GridMapRequest and GridMap. The map service might be provided by a
map building component. Each component with a port consisting of a query client with the
same communication objects can use that service. For example, a path planning component
might need a grid map and expose a required port for that service. The GridMapRequest object
provides the parameters of the individual request (for example, the size of the requested
map patch, its origin and resolution) and the GridMap returns the answer. The answer is
self-contained, comprising all the parameters that describe the provided map. This allows
the map to be interpreted independently of the current settings of the providing component,
and it gives the service provider the chance to return a map that is as close as possible to, but
different from, the requested parameters in case it cannot fulfil them exactly.
A component builder uses the stable interface E. In case of the client side of a service based on
the query pattern, it always consists of the same synchronous and asynchronous access
modes, independent of the communication objects used and the underlying middleware.
They can be used from any number of threads in any order. The server side in this example
always consists of an asynchronous handler upcall for incoming requests and a separate
answer method. This separation is important because the upcall need not wait until the
answer is available before returning. Any kind of processing model can thus be implemented
inside a component, even a processing pipeline in which the last thread calls the answer
method, without blocking the upcall, wasting its system resources or being tied to the
threading model behind it.
In the example, the upcall at the service provider either directly processes the incoming
GridMapRequest object or forwards it to a separate processing thread. The requested map
patch is put into a GridMap object, which is then provided as the answer via the answer method.
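To make these access modes concrete, the following C++ sketch shows how a client-side port of the query pattern might be used. It is a minimal sketch: only the access-mode names (query, queryRequest, queryReceive, queryReceiveWait, queryDiscard) follow figure 8, while the class layout, the stub bodies and the reduced communication objects are illustrative assumptions.

    #include <vector>

    // Hypothetical communication objects: by-value data enriched with
    // get/set methods in the real system; here reduced to a few fields.
    // The isValid flag is the nil-answer marker discussed below.
    struct GridMapRequest {
        double xOffset = 0.0, yOffset = 0.0;  // origin of the requested patch
        unsigned int xSize = 0, ySize = 0;    // patch size in cells
        double resolution = 0.0;              // cell size in meters
    };

    struct GridMap {
        bool isValid = false;                 // false marks a nil answer
        std::vector<signed char> mapCells;    // occupancy values
        double resolution = 0.0;              // parameters actually used
    };

    // Hypothetical client port as generated into the component hull; only
    // the access-mode names follow figure 8 (stubs stand in for the framework).
    template <typename RequestT, typename AnswerT>
    class QueryClient {
    public:
        int query(const RequestT&, AnswerT&) { return 0; }        // synchronous
        int queryRequest(const RequestT&, int& id) { id = 0; return 0; }
        int queryReceive(int, AnswerT&) { return 0; }             // non-blocking
        int queryReceiveWait(int, AnswerT&) { return 0; }         // blocking
        int queryDiscard(int) { return 0; }
    };

    void planPath(QueryClient<GridMapRequest, GridMap>& mapPort) {
        GridMapRequest req;
        req.xSize = 200; req.ySize = 200; req.resolution = 0.05;

        GridMap map;
        mapPort.query(req, map);            // synchronous access mode

        int id = 0;
        mapPort.queryRequest(req, id);      // asynchronous access mode: send now,
        /* ... other processing ... */      // collect later, from any thread
        mapPort.queryReceiveWait(id, map);
    }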
It can be seen that the client side is not just a proxy for the server side. Both sides of a
communication pattern are completely standalone entities providing stable interfaces A and
E by completely hiding all the specifics of H and I (see figure 7). One can neither expose
arbitrary member functions at the outside component hull nor can one dilute the semantics
and behavior of ports. The different communication patterns and their internals are explained
in detail in (Schlegel, 2007).
Besides the services defined by the component builder (A), several predefined services exist
to support system level concerns (Lotz et al., 2011). Each component needs to provide a
state service to support system level orchestration (outside view B: activation, deactivation,
reconfiguration; inside view F: manage transitions between service activations, support
housekeeping activities by entry/exit actions). An optional diagnostic service (C, G) supports
runtime monitoring of the component. The optional param service manages parameters as
name/value-pairs and allows them to be changed at runtime. The optional wiring service allows
the required services of a component to be wired at runtime from outside the component. This is needed
for task and context dependent composition of behaviors.
A basic principle is that clients of services are not allowed to make any assumptions about offered
services beyond the announced characteristics and that service providers are not allowed to make any
assumptions about service requestors (e.g. their maximum rate of requests).
This principle results in simple and precise guidelines of how to apply the communication
patterns in order to come up with well-formed services. As long as a service is being offered,
the service provider has to accept all incoming requests and has to respond to them according
to its announced quality-of-service parameters.
We illustrate this principle by means of the query pattern. As long as there are no
further quality-of-service attributes, the service provider accepts all incoming requests and
guarantees to answer all accepted requests. However, only the service provider knows about
its resources available to process incoming requests and clients are not allowed to impose
constraints on the service provider (a request might provide further non-committal hints to
the service provider, like a request priority). Thus, the service provider is allowed to provide a
nil answer (the isValid flag in the answer is set to false) in case it is running out of resources to
answer a particular request. In consequence, all service requestors must always be prepared
to get a nil answer. A service requestor is also not allowed to make any assumptions about
the response time as long as corresponding quality-of-service attributes are not set by the service
provider. However, if a service provider announces that it answers requests within a certain time
limit, one can rely on getting at least a nil answer before the deadline. If a service requestor
depends on a maximum response time although this quality-of-service attribute is not offered
by the service provider, it needs to use client-side timeouts with its requests. This overall
principle (i) ensures loose coupling of services, (ii) prevents clients from imposing constraints
on service providers and (iii) gives service providers the means to arbitrate requests in case of
limited resources.
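A hedged server-side sketch of this arbitration rule, reusing the types from the client sketch above: the provider accepts every request but answers with the nil flag when it runs out of resources. Only the names handleQuery and answer follow figure 8; the resource check and the helper functions are assumptions.

    #include <atomic>

    std::atomic<int> pendingRequests{0};
    const int kMaxPending = 16;             // assumed resource limit

    void fillMapPatch(const GridMapRequest&, GridMap& a) { a.isValid = true; }
    void answer(int /*id*/, const GridMap&) { /* handed to the framework */ }

    // upcall invoked by the framework for each incoming request
    void handleQuery(int id, const GridMapRequest& r) {
        GridMap a;
        if (pendingRequests.load() >= kMaxPending) {
            a.isValid = false;              // nil answer: accepted, not served
        } else {
            ++pendingRequests;
            fillMapPatch(r, a);             // could equally run in another thread
            --pendingRequests;
        }
        answer(id, a);                      // every accepted request is answered
    }

    // client side: always be prepared for a nil answer
    bool requestMap(QueryClient<GridMapRequest, GridMap>& port,
                    const GridMapRequest& req, GridMap& map) {
        port.query(req, map);
        return map.isValid;                 // false: degrade gracefully or retry
    }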
It now also becomes evident why SmartSoft offers more than just a request/response and
a publish/subscribe pattern, which would be sufficient to cover all communication needs.
The send pattern explicates one-way communication, although one could emulate it via a
query pattern with a void answer object. However, practical experience has shown that much
better clarity for services with this characteristic is achieved by offering a separate pattern.
The same holds true for the push newest and the push timed pattern. In principle, the push
timed pattern is a push newest pattern with a regular update. However, in case of a push
newest pattern, service requestors rely on having the latest data available at any time. This is
different from a push timed pattern where the focus is on the service provider guaranteeing a
regular time interval (in some cases even providing the same data). Although one could cover
some of these aspects by quality-of-service attributes, they also shape how a component
developer perceives and uses a service. Again, achieving clarity and making
the characteristics easily recognizable is of particular importance for the strict separation of
the roles of component developers and system integrators. This also becomes obvious with
the event pattern. In contrast to the push patterns, service requestors get informed only
when a server-side event predicate (which service requestors individually parametrize with
each event activation) becomes true. This saves considerable bandwidth compared to publishing
the latest changes to all clients, since the latter would require publishing a snapshot of the overall
context needed to evaluate the predicate at the client side, instead of just the information that
an event fired.
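The bandwidth argument can be illustrated with a small sketch of a server-side predicate. All names here are illustrative assumptions, not part of the documented API: each activation carries its own parameter, the predicate is evaluated where the context lives, and only the firing information crosses the component boundary.

    struct DistanceEventParam { double threshold; };  // set per activation
    struct DistanceEventResult { double distance; };  // sent only when fired

    // called by the framework whenever the server-side state changes
    bool testEvent(const DistanceEventParam& p,
                   double currentObstacleDistance,
                   DistanceEventResult& out) {
        if (currentObstacleDistance < p.threshold) {  // client-specific predicate
            out.distance = currentObstacleDistance;   // just the event payload,
            return true;                              // not the whole context
        }
        return false;
    }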
[Figure 10 content: components such as SmartAmcl, SmartCdlServer and SmartPioneerBaseServer, each with states (neutral, active), tasks (e.g. CdlTask, BaseTask) and parameters (e.g. serialport: /dev/ttyS1), wired via send and timed services carrying communication objects such as &lt;pose&gt;, &lt;v, w&gt;, &lt;goal&gt; and &lt;scan&gt;.]
Fig. 10. Structure of a navigation task based on the SmartSoft component model.
The SmartLaserLMS200Server component provides the latest laser scan via a push newest
port. Thus, all subscribed clients get an update as soon as a new laser scan is available.
It is subscribed to the pose service of the robot base to label laser scans with pose stamps.
The component comprises a SmartTask to handle the internal communication with the laser
hardware. This way, the aliveness of the overall component and its services is not affected
by failures of the laser hardware interface. Parameters like position-offset and serialport are
used to customize the component to the target robotic system. These parameters have to
be set by the application builder during the deployment step. The SmartMapperGridMap
component requires a laser scan to build the longterm and the current map. The current map
is provided by a push newest server port (as soon as a new map is available, it is provided
to subscribed clients which makes sense since path planning depends on latest maps) and
the longterm map by a query server port (since it is not needed regularly, it makes sense
to provide it only on a per-request basis). The state port is used to set the component into
different states depending on which services are needed in the current situation: build no
map at all (neutral), build the current map only (buildCurr), build the longterm map only
(buildLtm) or build both maps (buildBoth). The push newest server publishes the current
map only in the states buildCurr and buildBoth. Requests for a longterm map are answered
as long as the component and its services are alive, but with an invalid map (isValid flag of
the answer object set to false) in case the component is in state neutral or buildCurr. Accordingly,
the SmartPlannerBreadthFirstSearch component provides its intermediate waypoints by a
push newest server (update the motion execution component as soon as new information is
available). The motion execution component regularly commands new velocities to the robot
base via a send service. The motion execution component is also subscribed to the laser scan
service to be able to react immediately to obstacles in dynamic environments. This way, the
different services interact to form various control loops that combine goal-directed and reactive
navigation while at the same time allowing components to be replaced.
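As a minimal sketch of how such orchestration might look from a coordinating client (the class and method names are assumptions, not the documented API), switching the map builder between the states named above:

    #include <string>

    class StateClient {
    public:
        // blocks until the server component has completed the state transition
        int setWaitState(const std::string&) { return 0; /* stub */ }
    };

    void enterNavigationMode(StateClient& mapperState) {
        // path planning only needs the current map; longterm map requests
        // are then answered with isValid == false until buildLtm/buildBoth
        mapperState.setWaitState("buildCurr");
    }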
4.1 Decisions and tools behind the reference implementation - framework builder view
The reference implementation of our SmartMDSD Toolchain is based on the Eclipse
Modeling Project (EMP) (Eclipse Modeling Project, 2010) and Papyrus UML (PAPYRUS UML,
2011).
Papyrus UML is used as the graphical modeling tool in our toolchain. It is customized
by the framework builder for the development of SmartSoft components (component
builder) and for deployments of components (application builder). This includes, for example,
customized wizards to create communication objects, components and deployments. The
modeling view of Papyrus UML is enriched with a customized set of meta-elements to create
the models. The model transformation and code generation steps are developed with Xpand
and Xtend (Efftinge et al., 2008) which are part of the EMP. These internals are not visible to
the component builder and the application builder. They just see the graphical modeling tool
to create their models and use the CDT Eclipse Plugin (Eclipse CDT, 2011) to extend the source
code and to compile binaries. The SmartMARS meta-model is implemented as a UML Profile
(Fuentes-Fernández & Vallecillo-Moreno, 2004) using Papyrus UML.
The decision to use UML Profiles and Papyrus UML to implement our toolchain is motivated
by the reduced effort to come up with a graphical modeling environment customized to the
robotics domain and its requirements by reusing available tools from other communities.
Although some shortcomings have to be accepted and taken into account, we avoided
the huge effort of implementing a full-fledged GMF-based development
environment. This allowed us to come up with our toolchain early and to gain deeper insights
and more experience on the different levels of abstraction. However, the major drawbacks of
UML Profiles are:
• UML is a general purpose modeling language covering aspects of several domains and is
thus complex. Using profiles, it is only possible to enrich UML, but not to remove elements.
• Deployment and instantiations of components are not adequately supported.
• UML Profiles provide just a lightweight extension of UML. That means, the structure of
UML itself cannot be modified. The elements can be customized only by stereotypes and
tagged values.
To counter the drawbacks of UML Profiles, we only support the usage of the stereotyped
elements provided by SmartMARS to create the models of the components and
deployments. Directly using pure UML elements in the diagrams is not supported. Thus,
the models are created using just the meta-elements provided by SmartMARS. By restricting
usage to SmartMARS meta-elements, a mapping to another meta-model implementation
technology such as eCore (Gronback, 2009) is straightforward: the stereotyped elements can
be mapped onto eCore without taking UML and its structure into account. In the current
implementation of our toolchain, the restriction to SmartMARS meta-elements
is enforced with check (Efftinge et al., 2008), the EMP implementation of OCL (Object
Management Group, 2010); in the model transformation and code generation steps of our
toolchain, pure UML elements are ignored. Another approach would be to customize the
diagrams by removing the UML elements from the palette (see fig. 12), thus restricting
their usage. The latter approach is on the agenda of the Papyrus UML project and will be
supported by future releases.
Fig. 11. The component builder models a component, gets the source code of its overall
structure (component hull, tasks, etc.) generated by the toolchain and can then integrate
user-code into these structures.
The view of the component builder on the toolchain is depicted in figure 12. It is illustrated by
a face recognition component which is a building block in many service robotics scenarios as
part of the human-robot interface (detection, identification and memorization of persons). In
its active state, the component shall receive camera images, apply face recognition algorithms
and report detected and recognized persons. Thus, besides the standard ports for setting
492
20 Robotic Systems – Applications, Control and Programming
Robotic Systems
states (active, neutral) and parameters, we need to specify a port to receive the latest camera
images (based on a push newest client) and another one to report on the results (based on
an event server). The component shall run the face recognition based on a commercially
available library within one thread and optional visualization mechanisms within a second
and separated thread. Thus, we need to specify two tasks within the component.
Fig. 12. Screenshot of our toolchain showing the view of the Component Builder.
To create the model, the component builder uses the SmartMARS meta-elements offered in
the palette. The elements of the created model can be accessed either in the outline view or
directly in the graphical representation. Several of the meta-element attributes (tagged values)
can be customized and modified in the properties tab (e.g. customizing services to ports,
specifying properties of tasks, etc.). The model is stored in files specific to Papyrus UML.
At the push of a button, the workflow is started and the PSI (source) files are generated. The
user code files are directly accessible in the src folder. The component builder integrates his
business logic into these files (in our example, the interaction with the face recognition library).
The generated files that the component builder must not modify are stored in the gen folder. These
files are generated and overwritten each time the workflow is executed. For the further
processing of the source files, the Eclipse CDT plugin is used (Makefile Project). The makefile
is also generated by the workflow according to the model properties. User modifications to the
makefile can be made inside protected regions (Gronback, 2009).
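As an illustration of the protected-region mechanism: user additions placed between marker comments survive re-generation. The exact marker syntax depends on the generator configuration, so the following is a sketch rather than the literal toolchain output; in a generated C++ source file such a region looks roughly like this (the same mechanism guards user sections of the generated makefile):

    // sketch of a protected region in a generated C++ source file
    /* PROTECTED REGION ID(MyComponent.userIncludes) ENABLED START */
    /* user additions between the markers are kept verbatim on every */
    /* generator run                                                 */
    /* PROTECTED REGION END */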
The first step is the model-to-model (M2M) transformation (encoded with Xtend) from the PIM
into a Platform Specific Model (PSM). In this step, the elements of the PIM are transformed into
corresponding elements
of the PSM according to the selected target platform. The second step is the model-to-text
(M2T) transformation (encoded with Xpand and Xtend) from the PSM into a Platform Specific
Implementation (PSI). This transformation is based on customizable code templates.
Fig. 14. Screenshots of excerpts of the UML Profiles created with Papyrus UML showing the
meta-elements dedicated to the SmartTask. Left: PIM; Right: PSM with the two variants (1)
standard task and (2) RTAI task.
An excerpt of the UML Profiles is illustrated in figure 14. In the UML Profile for the PIM,
the SmartTask extends the UML class and enriches it with attributes (tagged values) like
isPeriodic, isRealtime, period and timeUnit. For the timeUnit, an enumeration (TimeUnitKind)
is used to specify the unit in which time values are annotated. In the UML Profile for the
CORBA-based PSM, an abstract task is specified (it cannot be instantiated) and the two variants
(1) standard task and (2) realtime task are derived from it. Neither is abstract, so both
can be instantiated by the component builder to create the model. The standard task
adds an optional attribute referencing a SmartTimer meta-element. This is used to
emulate periodic non-realtime tasks, which are not natively supported by standard tasks of
the CORBA-based SmartSoft implementation.
[Figure 15 content: the PIM meta-element SmartTask carries the attributes schedPolicy : SchedPolicyKind (FIFO, round-robin, sporadic), isRealtime : Boolean, isPeriodic : Boolean, priority : Integer, period : Integer, timeUnit : TimeUnitType and wcet : Integer. It is mapped either onto the PSM meta-element SmartCorbaTask (isRealtime == false; with optional SmartCorbaTimer (timer [0..1], period : Integer), SmartCorbaMutex (mutex [1]) and SmartCorbaCondMutex (condMutex [0..1]) elements to emulate periodic behavior) or onto the meta-element RTAITask (isRealtime == true; RTAI task of the SmartSoft library). Code generation then produces the generated class MyTaskCore and, once only, the user class MyTask, which is modifiable by the component builder (generation gap pattern).]
Fig. 15. Model transformation and code generation steps illustrated by the example of the
SmartTask. Left: Transformation of the PIM into a PSM. Right: Code generation and
Generation Gap Pattern.
Fig. 16. PIM to PSM model transformation of the SmartTask depending on the attribute
isRealtime.
Fig. 17. PSM to PSI transformation of the SmartTask. Left: Excerpt of the transformation
template (Xpand) generating the PSI of a standard task. Right: The generated code where the
user adds the business logic of the task.
Depending on the attribute isRealtime, the SmartTask is either mapped onto an RTAITask
or onto a non-realtime SmartCorbaTask¹. The Xtend transformation rule that transforms the PIM
SmartTask into the appropriate PSM element is depicted in figure 16.
In case the attributes specify a non-realtime, periodic SmartTask, the toolchain extends the
PSM by the elements needed to emulate periodic tasks (as this feature is not covered by
standard tasks). In each case, the user integrates his algorithms and libraries into the stable
interface provided by the SmartTask (component builder view), independent of the hidden
internal mapping of the SmartTask (generated code). Figure 17 depicts the Xpand template
that generates the user code file for the task in the PSI; the figure shows the template on the left
and the generated code on the right.
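The emulation itself can be pictured with a minimal sketch (assumed mechanics, not the generated code): a timer periodically signals a condition variable on which the task loop waits, which is roughly what the SmartCorbaTimer/SmartCorbaCondMutex elements in the PSM stand for.

    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    class PeriodicTaskEmulation {
        std::mutex m;
        std::condition_variable cv;
        bool tick = false;

    public:
        // timer part: signals the task once per period
        void timerLoop(std::chrono::milliseconds period) {
            for (;;) {
                std::this_thread::sleep_for(period);
                { std::lock_guard<std::mutex> lk(m); tick = true; }
                cv.notify_one();
            }
        }

        // task part: one activation of the user's code per timer tick
        void taskLoop() {
            for (;;) {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return tick; });
                tick = false;
                lk.unlock();
                // ... user code: one periodic, non-realtime activation ...
            }
        }
    };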
The PSI consists of the SmartSoft library, the generated code and the user code (fig. 15
right). To be able to re-generate parts of the component source code according to modified
parameters in the model without affecting the source code parts added by the component
builder, the generation gap pattern (Vlissides, 2009) is used. It is based on inheritance:
the user code inherits from the generated code². The source files called generated code are
generated each time the transformation workflow in the toolchain is executed. These files
contain the logic which is generated behind the scenes according to the model parameters
and must not be modified by the component builder. The source files called user code are only
generated if they do not already exist. They are intended for the component builder to add
his algorithms and libraries; their generation is mainly a convenience, giving the component
builder a code template as a starting point. These files are the full responsibility of the
component builder and are never modified or overwritten by the transformation workflow of
the toolchain. In this context, generate once means that a file is only generated if it does not
already exist, which is typically the case when the workflow is executed for the first time. The
clear separation of generated code and user code by the generation gap pattern allows, on the
one hand, modifications of the model to be reflected in the generated source
¹ Corba in element names indicates that the element belongs to the CORBA-specific PSM.
² The pattern could also be used in the opposite inheritance ordering, so that the generated code inherits
from the user code.
code without overwriting the user parts. On the other hand, it gives the user the freedom to
structure his source code according to his needs and does not restrict that structure, as would be the
the case with, for example, protected regions. Consequently, the component builder can modify
the period, priority or even the isRealtime attribute of the task in the model, re-generate and
compile the code without requiring any modification in the user code files. The modification
in the model just affects the generated code part of the PSI.
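In code, the pattern boils down to the following sketch (the class names follow figure 15; the hook name svc() and the exact file layout are assumptions):

    // gen/MyTaskCore.hh -- regenerated on every workflow run, do not edit
    class MyTaskCore {
    public:
        virtual ~MyTaskCore() {}
        // the activation logic derived from the model parameters
        // (period, priority, isRealtime, ...) is generated in here
        virtual int svc() = 0;   // hook for the user's business logic
    };

    // src/MyTask.hh -- generated once, then owned by the component builder
    class MyTask : public MyTaskCore {
    public:
        int svc() override {
            // business logic; survives re-generation even when period,
            // priority or isRealtime are changed in the model
            return 0;
        }
    };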
Fig. 18. Screenshot of our toolchain showing the deployment of components to build a
robotic system.
The application builder composes a system out of already existing components. In this example,
the application builder (system integrator) imports components specific to a particular robot
platform (SmartPioneerBaseServer) and specific to a particular
sensor (SmartLaserLMS200Server). The navigation components (SmartMapperGridMap,
SmartPlannerBreadthFirstSearch, SmartCDLServer) can be used across different mobile
robots. The SmartRobotConsole provides a user interface to command the robot.
The components are presented to the application builder as black boxes with dedicated
variation points. These have to be bound during the deployment step and can be specified
according to system level requirements. For example, a laser ranger component might need
the coordinates of its mounting point relative to the robot coordinate system. One might also
reduce the maximum scanning frequency to save computing resources. Parameters also need
to be bound for the target system. For example, in case RTAI is used inside of a component,
the RTAI scheduler parameters (timer model underlying RTAI: periodic, oneshot) of the target
RTAI system have to be specified. If the application builder forgets to bind required settings,
this absence is reported to him by the toolchain.
The application builder can identify the provided and required services of a component via
its ports. He can inspect their characteristics by clicking on the port icon, which opens a property
view. This view comprises the communication pattern type, the communication objects used
and further characteristics such as the service name, as well as port-specific information such as
update frequencies. The initial wiring is done within the graphical representation of the model. In
case the application builder wants to connect incompatible ports, the toolchain refuses the
connection and gives further hints on the reasons.
If the CORBA-based implementation of SmartSoft is used, the CORBA naming service
properties IP-address and port-number have to be set. Furthermore, the deployment type
(local, remote) has to be selected. For a remote deployment, the IP-address, username and target
folder of the target computer have to be specified. The deployed system is copied to the target
computer and can be executed there. In case of a local deployment, the system is customized
to run on the local machine of the application builder. This is, for example, the case if no real
robot is used and the deployed system uses simulation components (e.g. Gazebo). Depending
on the initial wiring, parameter files are generated and also copied into the deployment folder.
These parameter files contain application specific adjustments of the components. In addition,
a shell script to start the system is generated out of the deployment model.
Fig. 20. The clean-up scenario. (1) Kate approaches the table; (2/3) Kate stacks objects into
each other; (4) Kate throws cups into the kitchen sink.
5. Example / scenario
The work presented has been used to build and run several real-world scenarios, including
participation in the RoboCup@Home challenge. Among other tasks, our robot "Kate" can
follow persons, deliver drinks, recognize persons and objects, and interact with humans by
gestures and speech.
In the clean-up scenario³ (fig. 20) the robot approaches a table, recognizes the objects which
are placed on the table and cleans the table either by throwing the objects into the trash bin
or into the kitchen sink. There are different objects, like cups, beverage cans and different
types of crisp cans. The cups can be stacked into each other and have to be thrown into the
kitchen sink. Beverage cans can be stacked into crisp cans and have to be thrown into the
trash bin. Depending on the type of crisp can, one or two beverage cans can be stacked into
one crisp can. After throwing some of the objects into the correct disposal container, the robot has to
decide whether to drive back to the table to clean up the remaining objects (if any) or to
drive to the operator and announce the result of the cleaning task. The robot reports whether
all objects on the table could be cleaned up or, in case any problems occurred, reports how
many objects are still left.
Such complex and different scenarios can neither be developed from scratch nor can their
overall system complexity be handled without using appropriate software engineering
methods. Due to their overall complexity and richness, they are considered a convincing
stress test for the proposed approach. In the following, the development of the cleanup
example scenario is illustrated according to the different roles.
The framework builder provides the tools to develop SmartSoft components as well
as to perform deployments of components to build a robotic system. In the described
example this includes the CORBA-based implementation of the SmartSoft framework
³ http://www.youtube.com/roboticsathsulm#p/u/0/xtLK-655v7k
and the SmartMDSD toolchain, which are both available on Sourceforge
(http://smart-robotics.sourceforge.net).
The component builder view of the SmartMDSD toolchain supports component builders in
developing their components independently of each other, but based on agreed interfaces. These
components are independent of the concrete implementation technology of SmartSoft.
Component builders provide their components in a component shelf. The models of the
components include all the information needed for a black-box view of the components (e.g.
services, properties, resources). The explication of such information about the components
is required by the application builder to compose robotic systems in a systematic way. To
orchestrate the components at run-time, the task coordination language SmartTCL (Steck &
Schlegel, 2010) is used. Therefore, SmartTCL is wrapped by a SmartSoft component and
is also provided in the component shelf. The SmartTCL component provides reusable action
plots which can be composed and extended to form the desired behavior of the robot.
The application builder uses the application builder view of the SmartMDSD toolchain.
He composes already existing components into the complete robotic system. In the
cleanup scenario described above, 17 components (e.g. mapping, path planning, collision
avoidance, laser ranger, robot base) are reused from the component shelf. It is worth noting
that the components were not developed specifically for the cleanup scenario, but can be
used in it due to the generic services they provide. The SmartTCL
sequencer component is customized according to the desired behavior of the cleanup scenario.
Several of the already existing action plots can be reused for this; application-specific
extensions are added by the application builder.
At run-time the SmartTCL sequencer component coordinates the software components of
the robot by modifying their configuration and parametrization as well as the wiring between
the components. As SmartTCL can access the information (e.g. parameters, resources)
explicated in the models of the components at run-time, this information can be taken into
account by the decision-making process. This allows the robot to take into account not only
the current situation and context, but also the configuration and resource usage of the
components. In the described scenario, the sequencer manages the resources of the overall
system, for example, by switching off components which are not required in the current
situation. While the robot is manipulating objects on the table and requires all available
computational resources for the trajectory planning of the manipulator, the components for
navigation are switched off.
6. Conclusion
The service-oriented component-based software approach allows separation of roles and is
an important step towards the overall vision of a robotics software component shelf. The
feasibility of the overall approach has been demonstrated by an Eclipse-based toolchain and
its application within complex RoboCup@Home scenarios. Next steps towards model-centric
robotic systems that comprehensively bridge design-time and runtime model usage now
become viable.
7. References
Andrade, L., Fiadeiro, J. L., Gouveia, J. & Koutsoukos, G. (2002). Separating computation,
coordination and configuration, Journal of Software Maintenance 14(5): 353–369.
Beydeda, S., Book, M. & Gruhn, V. (eds) (2005). Model-Driven Software Development, Springer.
Björkelund, A., Edström, L., Haage, M., Malec, J., Nilsson, K., Nugues, P., Robertz, S. G.,
Störkle, D., Blomdell, A., Johansson, R., Linderoth, M., Nilsson, A., Robertsson,
A., Stolt, A. & Bruyninckx, H. (2011). On the integration of skilled robot motions
for productivity in manufacturing, Proc. IEEE Int. Symposium on Assembly and
Manufacturing, Tampere, Finland.
Blogspot (2008). Discussion of aspect-oriented programming (AOP).
URL: http://programmingaspects.blogspot.com/
Bruyninckx, H. (2011). Separation of Concerns: The 5Cs - Levels of Complexity, Lecture Notes,
Embedded Control Systems.
URL: http://people.mech.kuleuven.be/~bruyninc/ecs/LevelsOfComplexity-5C-20110223.pdf
CBDI Forum (2011). CBDI Service Oriented Architecture Practice Portal - Independent
Guidance for Service Architecture and Engineering.
URL: http://everware-cbdi.com/cbdi-forum
Cheddar (2010). A free real time scheduling analyzer.
URL: http://beru.univ-brest.fr/~singhoff/cheddar/
Reade, C. (1989). Elements of Functional Programming, Addison-Wesley Longman Publishing Co.,
Boston, MA.
Delamer, I. & Lastra, J. (2007). Loosely-coupled automation systems using device-level SOA,
5th IEEE International Conference on Industrial Informatics, Vol. 2, pp. 743–748.
Dijkstra, E. (1976). A Discipline of Programming, Prentice Hall, Englewood Cliffs, NJ.
Eclipse CDT (2011). C/C++ Development Tooling for Eclipse.
URL: http://www.eclipse.org/cdt/
Eclipse Modeling Project (2010). Modeling framework and code generation facility.
URL: http://www.eclipse.org/modeling/
Efftinge, S., Friese, P., Haase, A., Hübner, D., Kadura, C., Kolb, B., Köhnlein, J., Moroff,
D., Thoms, K., Völter, M., Schönbach, P., Eysholdt, M. & Reinisch, S. (2008).
openArchitectureWare User Guide 4.3.1.
Fuentes-Fernández, L. & Vallecillo-Moreno, A. (2004). An Introduction to UML Profiles,
UPGRADE Volume V(2): 6–13.
Gelernter, D. & Carriero, N. (1992). Coordination languages and their significance, Commun.
ACM 35(2): 97–107.
Gronback, R. C. (2009). Eclipse Modeling Project: A Domain-Specific Language (DSL) Toolkit,
Addison-Wesley, Upper Saddle River, NJ.
Heineman, G. T. & Councill, W. T. (eds) (2001). Component-Based Software Engineering: Putting
the Pieces Together, Addison-Wesley Professional.
IBM (2006). Model-Driven Software Development, IBM Systems Journal 45(3).
Lastra, J. L. M. & Delamer, I. M. (2006). Semantic web services in factory automation:
Fundamental insights and research roadmap, IEEE Trans. Ind. Informatics 2: 1–11.
Lau, K.-K. & Wang, Z. (2007). Software component models, IEEE Transactions on Software
Engineering 33: 709–724.
Lee, E. A. & Seshia, S. A. (2011). Introduction to Embedded Systems - A Cyber-Physical Systems
Approach, ISBN 978-0-557-70857-4.
URL: http://LeeSeshia.org
Lotz, A., Steck, A. & Schlegel, C. (2011). Runtime monitoring of robotics software components:
Increasing robustness of service robotic systems, Proc. 15th Int. Conference on Advanced
Robotics (ICAR), Tallinn, Estonia.
Meyer, B. (2000). What to compose, Software Development 8(3): 59, 71, 74–75.
Mili, H., Elkharraz, A. & Mcheick, H. (2004). Understanding separation of concerns, Proc.
Workshop Early Aspects: Aspect-Oriented Requirements Engineering and Architecture
Design, Vancouver, Canada, pp. 75–84.
Neelamkavil, F. (1987). Computer simulation and modeling, John Wiley & Sons Inc.
Object Management Group (2010). Object Constraint Language (OCL).
URL: http://www.omg.org/spec/OCL/
Object Management Group & Soley, R. (2000). Model-Driven Architecture (MDA).
URL: http://www.omg.org/mda
OMG (2008). Robotic Technology Component (RTC).
URL: http://www.omg.org/spec/RTC/
PAPYRUS UML (2011). Graphical editing tool for uml.
URL: http://www.eclipse.org/modeling/mdt/papyrus/
Parnas, D. (1972). On the criteria to be used in decomposing systems into modules,
Communications of the ACM 15(12).
Quigley, M., Gerkey, B., Conley, K., Faust, J., Foote, T., Leibs, J., Berger, E., Wheeler, R. & Ng,
A. (2009). ROS: An open-source Robot Operating System, ICRA Workshop on Open
Source Software.
Radestock, M. & Eisenbach, S. (1996). Coordination in evolving systems, Trends in Distributed
Systems – CORBA and Beyond, Springer-Verlag, pp. 162–176.
Reiser, U., Connette, C., Fischer, J., Kubacki, J., Bubeck, A., Weisshardt, F., Jacobs, T., Parlitz,
C., Hägele, M. & Verl, A. (2009). Care-O-bot 3 – Creating a product vision for service
robot applications by integrating design and technology, Proc. IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS), St. Louis, USA, pp. 1992–1997.
Schlegel, C. (2007). Communication patterns as key towards component interoperability, in
D. Brugali (ed.), Software Engineering for Experimental Robotics, STAR 30, Springer,
pp. 183–210.
Schlegel, C. (2011). SmartSoft – Components and Toolchain for Robotics.
URL: http://smart-robotics.sf.net/
Schlegel, C., Steck, A., Brugali, D. & Knoll, A. (2010). Design abstraction and processes in
robotics: From code-driven to model-driven engineering, 2nd Int. Conf. on Simulation,
Modeling, and Programming for Autonomous Robots (SIMPAR), Springer LNAI 6472,
pp. 324–335.
Schlegel, C., Steck, A. & Lotz, A. (2011). Model-driven software development in robotics:
Communication patterns as key for a robotics component model, Introduction to
Modern Robotics, iConcept Press.
Schmidt, D. (2011). The ADAPTIVE Communication Environment.
URL: http://www.cs.wustl.edu/~schmidt/ACE.html
Sprott, D. & Wilkes, L. (2004). CBDI Forum.
URL: http://msdn.microsoft.com/en-us/library/aa480021.aspx
Stahl, T. & Völter, M. (2006). Model-Driven Software Development: Technology, Engineering,
Management, Wiley.
Steck, A. & Schlegel, C. (2010). SmartTCL: An Execution Language for Conditional Reactive
Task Execution in a Three Layer Architecture for Service Robots, Int. Workshop
on DYnamic languages for RObotic and Sensors systems (DYROS/SIMPAR), Germany,
pp. 274–277.
Steck, A. & Schlegel, C. (2011). Managing execution variants in task coordination by exploiting
design-time models at run-time, Proc. IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), San Francisco, CA.
Szyperski, C. (2002). Component-Software: Beyond Object-Oriented Programming,
Addison-Wesley Professional, ISBN 0-201-74572-0, Boston.
Tarr, P., Harrison, W., Finkelstein, A., Nuseibeh, B. & Perry, D. (eds) (2000). Proc. of the
Workshop on Multi-Dimensional Separation of Concerns in Software Engineering (ICSE
2000), Limerick, Ireland.
Vlissides, J. (2009). Pattern Hatching – Generation Gap Pattern.
URL: http://researchweb.watson.ibm.com/designpatterns/pubs/gg.html
Völter, M. (2006). MDSD Benefits - Technical and Economical.
URL: http://www.voelter.de/data/presentations/mdsd-tutorial/02_Benefits.pdf
Webber, D. L. & Gomaa, H. (2004). Modeling variability in software product lines with
the variation point model, Science of Computer Programming - Software Variability
Management 53(3): 305–331.
Willow Garage (2011). PR2: Robot platform for experimentation and innovation.
URL: http://www.willowgarage.com/pages/pr2/overview
Using Ontologies for Configuring Architectures of Industrial Robotics in Logistic Processes
1. Introduction
The provision of goods and services is undergoing a transition towards more value-added-
oriented logistics processes. The philosophy of logistics is changing into a cross-disciplinary
function and is therefore becoming a critical success factor for competitive companies (Göpfert,
2009). Logistics thus assumes the role of a modern management concept: it provides for the
development, design, control and implementation of more effective and efficient flows of
goods. Furthermore, aspects of information, money and financing flows are crucial for the
development of enterprise-wide and cross-company success.
According to (Scheid, 2010a), this can be ensured by the automation of logistics processes.
Following (Granlund, 2008), the dominant factors of uncertainty and rapid change in the
business environment raise the focus on logistics and create the necessity for automated
logistics processes. Therefore, the adoption of flexible automation systems is essential. Here
robotics appears very promising due to its universal character as a handling machine. This is
how (Suppa & Hofschulte, 2010) characterize the development of industrial robotics: '[...]
increasingly in the direction of flexible systems, which take over new fields with sensors and
innovative control approaches.' Here, logistics represents a major application
field. (Westkämper & Verl, 2009) describe the broad applications for logistics and
demonstrate the capability for flexibility with examples from industry and research. Besides
the technological feasibility, there is also existing demand from logistics firms for such
applications.
These representations demonstrate that, in robotics-logistics, the design of technical systems
for the operator is strongly driven by the manufacturer (technology push) and by the
technological standardization of the system. Robotics-logistics concentrates on the
development and integration of products. Accordingly, standardization activities for the
technical systems focus on components and sub-systems, reflecting the manufacturer-
oriented perspective.
The main goal concerns the planning, implementation and monitoring of enterprise-
wide process chains of technological systems under consideration of economic criteria.
In this context, the interaction of the two domains 'process' and 'technology' is essential.
Thus, the configuration design of technological layouts or machines is crucial. The
harmonization of the two domains requires a systematic description framework for
their exchange of information and knowledge. A high-level abstraction of knowledge
representation in the description of the relationships and connections is essential. It also
process dynamics and process volatility are a handicap for standardized processes. The
continuous automation of specific and individual processes appears difficult for these
reasons. Machine application requires great flexibility to adapt to changing parameters.
A fundamental role belongs to robotics. By definition, industrial robot systems are a central
success factor in process automation due to their universality. The application of robotics
systems in logistics factories should be designed flexibly. The automation of the processes
under the customers' existing requirements can allow individually designed systems
(Günthner & Hompel, 2009). In their recent study, the European Initiative (EUROP, 2009)
identified the application of robotics in logistics as a central issue for the future. Thus, it
highlights the broad range of applications and diverse functions in this area. Operating the
systems under limited process standardization, owing to the complexity of processes with
heterogeneous and manifold variables, leads to individual and special solutions in today's
logistics factories. The adaptation of the systems to changing process environments is
hindered by the process-specific character of the systems.
(Fritsch & Wöltje, 2006) identify the necessity for a paradigm shift away from such individual
system configurations and underline the relevance of standardized robot systems. (Elger &
Hausser, 2009) reinforce this necessity by describing the demand for more standardized
solutions which can also serve individual needs. The initiative (EUROP, 2009)
characterizes the standardization of components and technical systems as an essential
challenge for the so-called 'Robotics 2020'. This concerns both hardware and software,
and the interfaces among these components. In the authors' view, this requirement
essentially influences the system architecture. In the view of (EUROP, 2009), system
architecture accords robotics a central role. In the future, architectures for robotic
systems will be designed with regard to both comprehensive configuration conditions and
technical subsystems and components, which can be drawn from comparable as well as very
different applications. Therefore, robotics systems will become more modularized in their
architecture configurations in the medium term (until 2015). The interconnection between the
modules is weakly configured from an overall perspective. On the one hand, this allows rapid
reconfiguration when changes in the process environment appear. On the other hand, the
standardization of components and systems is, besides repeated partial reuse, the second
driver of the so-called 'adaptable configuration status'. The long-term perspective for the year
2020 looks towards the development of architectures capable of autonomous self-configuration.
The second crucial development is the compositionality of robotic systems. A robotic system
is compositional when the complexity of its system architecture results from the composition
of subsystems or components and their specific functions. The more subsystems or
components are used, the higher the probability of a complex system architecture. This
configuration status is thus dependent on the process environment: robotic systems for
self-changing or complicated processes must be explicitly designed to fit these requirements,
and the robotic system has to be configured in a process-oriented way. The configuration
conditions of robotics-logistics appear to diverge from those of production robotics, which
can be attributed to the characteristics of logistics processes. The process environment has
an essential influence on the technological configuration status of robotic systems. From the
perspective of systems theory, the degree of complexity is influenced by the technological
configuration status of the robotic systems and the characteristics of the process environment.
Thus, complicated processes often require robotic system architectures which are composed
of numerous components and individually configured. This relationship can result in
of ontology. (Neches et al., 1991) specifies this idea and describes ontologies as ‘basic terms
and relations comprising the vocabulary of a topic area, as well as the rules for combining
terms and relations, to define extensions to the vocabulary.’ According to this
understanding, concepts are defined through basic distinctions of objects and their rule-
based relationships to each other. (Bunge, 1977) describes ontology as the only area of
science, besides the natural and social sciences, which focuses on concrete objects
and concrete reality. Ontologies are rooted in philosophy since they stress the
basic principles of a virtual science, which cannot be proven or refuted by experiments.
Ontologies represent knowledge that is structured and made available with information
technologies. They can be a crucial part of knowledge management. According to (Staab,
2002), knowledge management has the goal of optimizing the conditions for employees'
performance. The factors 'persons', 'culture', 'organization' and 'basic
organization processes' are the major success criteria for knowledge management.
According to (Gruber, 1993), ontologies can facilitate the sharing and exchange of
knowledge.
There are many kinds and types of ontologies. Depending on their internal structure,
ontologies vary in their complexity, as represented in Fig. 2:
and connections, which (Steinmann & Nejdl, 1999) designate as the ‘encyclopedia of reality’.
In the first sense, they function as meta-models. Abstract modeling concepts are described
and provide the framework for the ontology. Specific meta-model-oriented ontologies will
be designated as representation ontologies. For example, the frame-ontology in Ontolingua
can be mentioned here, according to (Gruber, 1993). The ontology provides a grammar
composed of concepts, relations and attributes. In the second sense, ontologies describe
conceptual models that are based on the structures and correlations of a specific
application area. Examples of existing conceptual models are legal texts, the integration of
application systems, or open systems.
Comparing classic and ontological methods, some differences can be identified. Ontologies
describe the composition of reality. Traditional modeling approaches assume this
information to be known. In this context, (Herb, 2006) ascertains that ontologies are applied
for the concept-based structuring of information. In his view, ontologies provide essentially more
detailed structured information than conventional sources. (Stuckenschmidt, 2009) describes
the existence of objects and items and their form of representation. (Steinmann & Nejdl, 1999)
detail this approach and describe it as a central factor for the understanding of items. They
conclude that ontologies are always based on a higher level of abstraction than
model-based approaches.
The authors also indicate the limits of ontological modeling. The crucial difficulties are
inconsistencies in the classification of meta-data in ontologies, the application of meta-data,
and the distinct classification and structuring of information. These aspects are
attributable to the highly abstract level of ontologies: abstract notations lead to such
assignment, classification and structuring issues.
Ontologies can be differentiated according to two aspects: their conceptual and their formal logical nature.
According to (Swartout & Tate, 1999), the first aspect has the task of depicting and
composing structures of reality. The second addresses the creation of the semantic
framework with the definition of objects, classes and contexts. There are many basic
approaches to different ontologies in the literature, oriented to the areas of application.
(Bateman, 1993) describes the existence of basic types of interconnected entities and
describes the so-called ‘design patterns.’ These are entities that can be differentiated
according to the types ‘endurant’, ‘perdurant/occurrence,’ ‘quality’ and ‘abstract.’ While
entities of the ‘endurant’ type have a continuous and predictable nature, entities of the
type ‘perdurant/occurrence’ describe events that occur unexpectedly and unpredictably.
Entities of the types ‘quality’ and ‘abstract’ unite properties, attributes, relations, and
comparatives.
The ontology DOLCE is an example of the application of these basic types. DOLCE was
developed by the Institute of Cognitive Science and Technology in Trento, Italy and stands
for ‘Descriptive Ontology for Linguistic and Cognitive Engineering.’ DOLCE attempts to
impart meanings to things and events. Here, entities deal with meanings through the use of
agents in order to obtain consensus among all entities regarding the meaning. (Gangemi
et al., 2002) treat this principle in a plausible way. Further examples of conceptual
ontologies are WordNet, the 'Unified Medical Language' ontology, the 'Suggested Upper
Merged Ontology', the ontology of 'ε-Connection' (Kutz et al., 2004), the ontology of
'Process Specification Language' and the ontology 'OntoClean'.
Another key component of conceptual ontologies is ontology engineering. Ontology
engineering is concerned with the process design of ontology development, in order to
create and to apply ontologies. There are multiple methods here. (Wiedemann, 2008) lists
these as follows:
• Ontology Development
• Ontology Re-Engineering
• Ontology Learning
• Ontology Alignment/Merging
• Collaborative Ontology Construction
Ontology Development deals with the question of methodological development and the
composition of ontologies. Ontology Re-Engineering focuses on existing approaches and
adapts them to the current task. Ontology Learning focuses on approaches for semi- or fully-
automatic knowledge acquisition. Collaborative Ontology Construction provides
guidelines for the generation of consensual knowledge. Ontology Alignment/Merging combines two or
more ontologies in order to depict various domains; this method makes it possible to handle
knowledge brought together from different worlds.
(Gruninger, 2002) describes the purposes of formal logical ontologies as communication, automatic
reasoning, and the representation and re-utilization of knowledge. Formal logical ontologies
aim to depict a semantic domain through syntax. The concept of semantics belongs to
semiotics, the theory of signs. Semantics can also be defined as the 'theory of
the relationships among the signs and the things in the world, which they denote'
(Erdmann, 2001). Semantics are relevant for formal logical ontologies for modeling and
generating calculations on a mathematical foundation. This basic approach gains its key
relevance through its syntax, which provides the mathematical grammar and the concretely
denotable model. Exemplary syntaxes are algebraic terms, logical formulas or informational
programs. Formal logic provides a language for formalizing the description of the real
world and the tool for representing ontologies. It is differentiated into propositional
logic and predicate logic. In propositional logic, there exist exactly two possible truth-
values: true or false. Predicate logic consists of terms and describes real-world objects in an
abstract manner by means of variables and functions. (Stuckenschmidt, 2009) presents
methods and techniques of the notation.
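To make this distinction concrete, the following formulas are an illustration (ours, not taken from the cited sources) contrasting an atomic propositional statement with a predicate-logic formula that quantifies over domain objects:

\[
% Propositional logic: an atomic proposition is simply true or false
P \equiv \text{"the gripper is closed"}, \qquad P \in \{\mathrm{true}, \mathrm{false}\}
\]
\[
% Predicate logic: variables, predicates and quantifiers over domain objects
\forall x \,\bigl(\mathit{Robot}(x) \rightarrow \exists y\,(\mathit{Gripper}(y) \wedge \mathit{partOf}(y,x))\bigr)
\]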
Formal logical ontologies do not allow fully automatic proofs; only computer-based proofs of
sub-problems are possible. Examples of formal logical ontologies are OntoSpace, DiaSpace,
OASIS-IP, CASL, OIL, and OWL.
In summary, it can be stated that both ontology types can be classified into different
categories according to (Guarino, 1998): 'top-level ontologies', 'domain ontologies' and
'application ontologies', the last of which already represent known data and class models. 'Top-level
ontologies' describe fundamental and generally applicable basic approaches which are
independent of a specific real world. Their high level of abstraction allows a wide
range of users.
'Domain ontologies' focus on a specific application area and describe its fundamental
events and activities by specifying the syntax of 'top-level' ontologies. 'Application
ontologies' make use of known data or class models which apply to a specific application
area.
The following table finally summarizes the described ontologies and compares them
according to the presented properties and characteristics. Furthermore, the relevance of the
ontologies for robotic logistics is specified, and the ontologies 'Process
Specification Language' and 'OntoClean' are highlighted:
Ontology | Developer | Description | Characteristic | Relevance for robotic logistics
DOLCE | Institute of Cognitive Science and Technology, Italy | semantic web knowledge, uses cognitive aspects | cognitive basis | partially relevant; wide scope
OntoClean | Laboratory for Applied Ontology, Trento | hierarchical structure of knowledge | checks inconsistencies automatically | relevant for structuring processes and robotics technologies
PSL | National Institute of Standards and Technology, Gaithersburg | neutral representation of process knowledge | modular design | relevant for describing relations between processes and robotics
WordNet | Princeton University | representation of natural languages in IT applications | lexical database | not relevant
UMLS | National Library of Medicine | database for communicating medical terminology | terminology for medical applications | not relevant; especially for biomedicine
SUMO | Teknowledge Corporation | providing information in databases and in the internet | combined out of multiple ontologies | not relevant; designed for automated verification
ε-Connection | University of Liverpool | complex correlation of different domains | connecting different domains in a formal and logical way | relevant; complex
CASL | Common Framework Initiative | first-order logics for subsuming specific languages | modular concept | not relevant; formal approach
F-Logic | Stony Brook University | deductive database | conceptional and object oriented | not relevant; formal approach
OIL | Vrije Universiteit Amsterdam | web-based language for the semantic web | formal infrastructure | not relevant; formal approach
OWL (Web Ontology Language) | World Wide Web Consortium | representation of correlations in the semantic web | basis for the integration of software | not relevant; formal approach
Fig. 3. Class structure of the 'process' and 'technology' domains [source: author's illustration]
The entities in these two class structures are the main processes of the meta-model, which will
be presented in this chapter. On this basis, process modules and process elements of the
reference process are derived with application of the regulatory framework.
The reference process is situated in its systemic environment, which influences the reference
process with input and output variables. The input describes the general framework
and restrictions that are valid for the robotic system. The output results directly from the
target dimension of the automation task. The parameters cover technical, organizational,
and economic aspects. (Kahraman et al., 2007) define a multi-criteria system for the
evaluation of robotic systems, which provides multiple key factors for this evaluation.
With the development of the entity structures of the two domains of the reference process and
of the inputs and outputs of the environment, the fundamentals of ontology development are
set. Based on these structures, the hierarchical ontological taxonomies are created with the
aid of the ontology OntoClean. This is necessary in order to be able to describe the relations
between the two domains through an ontology, the Process Specification Language (PSL). PSL provides a
descriptive notation of functions and processes through its manifold concepts and relations
at different levels of detail. Each participant in a pair relationship is standardized, and the
relationship is jointly depicted. Due to the functional and procedural point of view of the
ontology, the relationship can be well illustrated. The representation is done by focusing on
process elements of one domain that cause an impact on the process elements of the second
domain.
For preparation, the conceptual framework is defined as the delimitation of the considered
environment to be covered. It is defined according to the procedure model developed by
(Figgener & Hompel, 2007), who describe a regulatory framework for reference processes:
Fig. 4. Procedure model for the creation of reference process models [source: author's
illustration following (Figgener & Hompel, 2007)]
First, the aspects of an application area are defined. As displayed in Fig. 4, six phases for the
generation of a reference model are described. For the problem at hand, phases 1.2, 2.1
and 2.2 are especially relevant. Phase 1.2 describes the regulatory framework. The
process modules and process elements are defined in phases 2.1 and 2.2. This distinction
allows the reduction and control of the complexity and expenditure of model creation
through the ontology.
With these results, both phases of the ontology model can be completed.
Fig. 5 shows the interdependence of both ontologies:
essential aspect. Thereby both the direction of the relationship and the qualitative
description will be identified. With the ontology model, the defined requirements will be
satisfied as follows:
description of the concepts. Due to this circumstance, the analysis and validation of the
developed taxonomies is an essential part of ontology development.
A special case is presented by the class structure 'environment'. Here, both the
environmental framework conditions and the target dimensions are structured. This
taxonomy is independent of the reference process; an example of its basic structure is shown
as follows:
The meta-property 'identity' describes criteria which distinctly identify classes and
differentiate instances from each other. Both classes and upper classes can provide these
identity criteria; upper classes pass the criteria on to their subclasses. In the first case, the
classes are marked with +I: the identity criterion has been inherited from an upper class. In the
second case, the criterion of identity is defined for the first time in the class itself, which is marked with
+O. Classes that require a further identity criterion as a restriction for distinct definition are
denoted with -I.
The third meta-property, 'unity', is related to the property 'identity' and describes the
affiliation of certain entities to a class. A unity criterion defines a unifying relation of all
entities which are interconnected. The corresponding classes are distinguished with +U. ~U
denotes those entities of a class that cannot be distinctly described. If no unity
criterion is provided, the class is described with -U.
The fourth meta-property, 'dependence', describes the dependence of a class on other classes. This
fact is relevant if an instance of a class may not be an instance of a second class. Dependent
classes are listed with +D, while independent classes are notated with -D.
In summary, the meta-properties are defined as follows, according to (Herb, 2006):
Notation | Definition
+R | property that is essential for all valid instances of its class
-R | property that is not essential for every entity that is an instance of its class
~R | property that is not essential for any instance of the regarded class
+I | classes that are differentiated by the identity criteria of their admissible instances
-I | class that does not have an identity criterion
+O | identity criterion that is defined for the first time in the class itself and is not inherited
+U | unity criterion that denotes connected entities
-U | no unity criterion exists
~U | connection of entities that cannot be described definitely
+D | dependent classes
-D | independent classes
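As a minimal sketch of how such markers support the automatic checking of inconsistencies attributed to OntoClean above, the following Python fragment encodes a toy taxonomy and a deliberately simplified inheritance rule (the class names, marker values and the rule itself are illustrative assumptions, not the full OntoClean rule set):

```python
# Minimal sketch: OntoClean-style meta-property markers on a toy taxonomy.
# Rule checked here (simplified): an identity criterion introduced in an
# upper class (+O) or inherited (+I) may not be dropped (-I) in a subclass.

TAXONOMY = {          # class -> upper class (None = top of the taxonomy)
    "piece_goods": None,
    "geometry": "piece_goods",
    "mass": "piece_goods",
}

META = {              # hypothetical marker assignment per class
    "piece_goods": {"R": "+R", "I": "+O"},  # defines its own identity criterion
    "geometry":    {"R": "+R", "I": "+I"},  # inherits the criterion
    "mass":        {"R": "+R", "I": "-I"},  # inconsistent on purpose
}

def identity_violations(taxonomy, meta):
    """Return subclasses that drop an identity criterion present above them."""
    violations = []
    for cls, upper in taxonomy.items():
        if upper and meta[upper]["I"] in ("+I", "+O") and meta[cls]["I"] == "-I":
            violations.append(f"{cls}: drops identity criterion of '{upper}'")
    return violations

print(identity_violations(TAXONOMY, META))
# -> ["mass: drops identity criterion of 'piece_goods'"]
```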
and 'schedules'. The module 'PSL outer core' deals with generic and broadly applicable
concepts. The module 'generic activities' defines a terminology to
describe generic activities and their relations. The module 'schedules' describes the
application and allocation of resources to activities under the premise of satisfying
temporal restrictions:
Table: PSL core terms and their definitions.
The third concept, 'object', covers all entities which do not correspond to any of the
above concepts. The concept has two functions. The first relation, of the type 'participates-in',
relates the concept 'object' to a concept 'activity' without further specifying its role. The
second relation, 'exists-at', describes existence at a particular point in time.
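A minimal sketch (ours; the class layout and attribute values are simplifying assumptions) of how the three PSL concepts and the two relations can be represented, using the general-cargo example discussed in the next paragraph:

```python
# Minimal sketch of the PSL concepts 'activity', 'activity occurrence' and
# 'object' with the relations 'participates-in' and 'exists-at'.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Activity:              # predictable, definable procedural entity
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class ActivityOccurrence:    # unpredictable, ad-hoc event
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Object:                # everything that fits neither concept above
    name: str
    participates_in: List[Activity] = field(default_factory=list)  # 'participates-in'
    exists_at: Optional[float] = None                              # 'exists-at'

general_cargo = Activity("general cargo", {"shape": "cubic"})                   # code 1.1
stock_situation = ActivityOccurrence("stock situation", {"order": "chaotic"})   # code 1.2
recognition = Object("cargo recognition process", exists_at=0.0)                # code 1.3
recognition.participates_in.append(general_cargo)
print(recognition.name, "->", [a.name for a in recognition.participates_in])
```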
With these concepts, the entities of the process and technology taxonomies, including their
properties, are classified. Procedural entities and properties can
exist which are either calculable or definable; these relate to the concept 'activity'.
Unpredictable, indefinable or changing conditions can be described as ad-hoc activities and
assigned to the concept 'activity occurrence'. Other logistical or technical objects are called
objects and assigned to the concept 'object'. As an example, the entity 'general cargo'
is an activity (code 1.1) with its property 'cubic', and the entity 'stock situation' is an
ad-hoc activity (code 1.2) with the property 'chaotic'. An example of an object (code
1.3) is a technical process such as the process of recognizing the cargo. The following table
summarizes the results of the relevant vocabulary:
The columns depict the technology domains, while the lines depict the process domains. The
individual hierarchy steps of the taxonomies are presented; as described, they are indicated by the
hierarchic structure. The coding of the lines and columns indicates the respective levels of the
hierarchic structure. Additionally, the identified concepts of the respective sub-entities and
properties are noted on the lowest structural level. In the cells, the interactions from Tab. 6
are noted and distinguished by means of the coding. For example, the procedural sub-
entity 1.1.1 affects the technical component 1.1.1 through the relationship 'object-
participates-in' (code 1.3.1).
Code | Process taxonomy | PSL concept | system technique 1.1 / component 1.1.1 (PSL core concept 1.z) | ... | system technique n.m / component n.m.o (PSL core concept 1.z)
P.1 | class 1 | | | |
P.1.1 | entity 1.1 | | | |
P.1.1.1 | sub-entity 1.1.1 | concept 1.x | 1.3.1 | ... | 1.x.y
... | ... | ... | ... | ... | ...
P.n | class n | | | |
P.n.m | entity n.m | | | |
P.n.m.o | sub-entity n.m.o | concept 1.x | 1.x.y | ... | 1.x.y
Fig. 9. Industrial application of a robotic system for depalletizing plastic boxes (plastic boxes
on pallet, robot with gripper, upstream and downstream conveyors): result of the
ontological configuration [source: author's illustration]
The illustration points out that the configuration of the technical system depends on the
parameters of the process. The upstream conveyor supplies box pallets and includes a buffer
function. The downstream conveyor conveys the single boxes into the distribution cycle.
The task of the robot-based automation system focuses on the handling of single or multiple
boxes and laying them down onto the roller conveyor. Using the presented framework for
configuring the robotics architecture, the first step of generating the procedural and
technical taxonomies has to be executed. The class structure of the 'process domain', with its
entities, can be assigned attributes as follows:
Table 8. Entities and attributes of the class structure ‘piece goods’ of the ‘process domain’
[source: author's illustration]
The table displays the various entities with their attributes regarding the class structure 'piece
goods'. The meta-properties define the taxonomy of this class, and the
hierarchical structure is defined so that incorrect assignments of sub-entities are
avoided. Additionally, the table assigns relevant attributes of the reference process to
entities. The meta-properties show that the sub-entities 'geometry' and 'mass' are particularly
important because they are essential (+R). Furthermore, they give identity to their class (+I).
Finally, the corresponding taxonomy is presented in figure 10:
Fig. 10. Backbone taxonomy of the class structure “piece goods” of the “Process domain”
[source: author's illustration]
It displays the corrected taxonomy of the exemplary class structure of the industrial
application. The class 'piece goods' consists of four entities (2nd level) and seven sub-
entities. These are related to attributes. The relations 'subsumption' (is) and 'attribute'
(att) describe the connection to the entities. Subsumption is given if the attribute is part of
the sub-entity.
The following step focuses on the interactions between the 'process domain' and the 'technology
domain'. Therefore, the relations are noted. Table 9 exhibits the relationship to the system
technology 'robotics', which unites the entities 'kinematics', 'geometry', 'load', 'accuracy'
and 'installation', and displays the relation codes between the two domains. For
instance, there are relations of the types 'ad-hoc activity' and 'activity':
the strapping has an influence on the accuracy of the robot, and the mass defines the type of
the robot. The table summarizes these relations:
Table 9. Relations between the class structure 'piece goods' of the 'process domain' and the
system technology 'robotics' [source: author's illustration]
This example clarifies the potential of the ontological framework. The framework offers
general and systematic knowledge for configuring the most suitable technical components and modules
for the specific application across the system technologies 'robotics', 'gripping
technology', 'pattern recognition' and 'robot control and communication'.
5. Conclusion
The paper presents an ontological approach to standardize robotic systems in logistic
processes. Ontologies allow the systematic depiction of technical systems in their
procedural environment through their high level of abstraction. This chapter describes the
conceptualization and elaboration of an ontological vocabulary for configuring process-
customized robotic architectures. The vocabulary allows the description of the relational
structure for a dedicated reference process: it describes the relations among the procedural
entities and the technical components. This ontology framework is the basis for the
formation of modules and the configuration of these modules into robotics architectures.
The main goal is to provide a descriptive approach to the relationship between process and
technology. Here, representations of conceptual ontologies were consulted. Due to the
conceptual approach, the notation is on an abstract level, so that automatic reasoning
through formal ontologies becomes realistic. The representation of a dedicated solution space of
possible technical configuration states of robotic system architectures is feasible, too.
In further research, the development of formal ontologies within this
scope will reduce the level of abstraction and enable the mechanical and automatic generation
of ontologies.
In this way, interpretation and manipulation opportunities will be reduced and the
interconnections of relationships between process and technology will be detailed. In this
connection, formal ontologies allow the development of so-called architecture configurators.
These are based on the provided procedural and technical information and the possible
ontological interrelationships. With this information, configurations of robotic system
architectures for dedicated reference processes can be generated and prioritized automatically,
including economic criteria. This approach can also serve as the nucleus of a
'Rapid Configuration Robotics' approach, which can digitally review prototyping activities
such as technical feasibility and economic usefulness. The requirement for this type of IT-
based configuration planning is shown by the RoboScan10 survey:
Fig. 11. Study RoboScan10: Answers about necessity for an IT-based system that plans the
configuration of robotic systems [source: (Burwinkel, 2011)]
Fig. 11 shows the range of opinions gathered in RoboScan10 about the necessity of IT-
based configuration planning of robotic systems. The question asked was: 'From a planning
perspective: could you envisage using an IT-based planning tool which enables the
configuration of both single robot systems and multi robot systems?' 75% of all respondents
could envisage the application of such tools.
6. References
Arnold, D. (2006). Intralogistik : Potentiale, Perspektiven, Prognosen, Springer, ISBN 978-3-540-
29657-7, Berlin, Germany
Bateman, J. (1993). On the relationship between ontology construction and natural language:
a socio-semiotic view. International Journal of Human-Computer Studies, Vol.43,
No.5/6, pp.929-944, ISSN 1071-5819
Baumgarten, H. (2008). Das Beste der Logistik : Innovationen, Strategien, Umsetzungen,
Springer, ISBN 978-3-540-78405-0, Berlin, Germany
Bunge, M. (1977). Treatise on basic philosophy, Reidel, ISBN 9027728399, Dordrecht,
Netherlands
Burwinkel, M. & Pfeffermann, N. (2010). Die Zukunft der Robotik-Logistik liegt in der
Modularisierung : Studienergebnisse "RoboScan'10". Logistik für Unternehmen,
Vol.24, No.10, pp.21-23, ISSN 0930-7834
Chen, P. (1976). The entity-relationship model - toward a unified view of data. ACM Trans.
Database Syst, Vol. 1, No. 4, pp. 9–36
Elger, J., & Haußener, C. (2009). Entwicklungen in der Automatisierungstechnik. In: Internet
der Dinge in der Intralogistik, W. Günthner (Ed.), 23–27, Springer, ISBN 978-3-642-
04895-1, Berlin, Germany
Erdmann, M. (2001). Ontologien zur konzeptuellen Modellierung der Semantik von XML,
University Karlsruhe, ISBN 3-8311-2635-6, Karlsruhe, Germany
EUROP. (2009). Robotics Visions to 2020 And Beyond : The Strategic Research Agenda For
Robotics in Europe, In: European Robotics Technology Platform, 20.06.2011, available
from: www.robotics-platform.eu
Figgener, O. & Hompel, M. (2007). Beitrag zur Prozessstandardisierung in der Intralogistik.
Logistics Journal, pp. 1–12, ISSN 1860-5923
Fritsch, D. & Wöltje, K. (2006). Roboter in der Intralogistik : Von der Speziallösung zum
wirtschaftlichen Standardprodukt. wt Werkstattsstechnik Online, Vol.96, No.9, pp.
623–630, ISSN 1436-4980
Gangemi, A., Guarino, N., Masolo, C., Oltramari, A. & Schneider, L. (2002). Sweetening
Ontologies with DOLCE, In: Knowledge Engineering and Knowledge Management:
Ontologies and the Semantic Web, Gómez-Pérez, A. & Benjamins, V. (Ed.),. pp.223–
233, Springer, ISBN 978-3-540-44268-4, Berlin, Germany
Gómez-Pérez, A., Fernández-López, M., Corcho, O. (2004). Ontological engineering: With
examples from the areas of knowledge management, e-commerce and the semantic web,
Springer, ISBN 978-1-852-33551-9, London, England
Göpfert, I. (2009). Logistik der Zukunft - Logistics for the future, Gabler, ISBN 978-3-834-
91082-0, Wiesbaden, Germany
Gruber, T. (1993). A translation approach to portable ontology specifications. Knowledge
Acquisition, Vol. 5, No.2, pp.199‐220, ISSN 1042-8143
Gruninger, M. (2002). Ontology - Applications and Design. Communication of the ACM,
Vol.45, No.2, pp.39–41, ISSN 00010782
Guarino, N. (1998). Formal ontology in information systems: Proceedings of the first international
conference (FOIS '98), June 6 - 8, Trento, Italy, FOIS, ISBN 905-1-993-994, Amsterdam,
Netherlands
Günthner, W. & Hompel, M. (2009). Internet der Dinge in der Intralogistik, Springer, ISBN 978-
3-642-04895-1, Berlin, Germany
Herb, M. (2006). Ontology Engineering mit OntoClean. In: IPD University Karlsruhe,
10.06.2011, available from:
http://www.ipd.uni-karlsruhe.de/~oosem/S2D2/material/1-Herb.pdf
Kahraman, C., Cevik, S., Ates, N.Y. & Gulbay, M. (2007). Fuzzy multi-criteria evaluation of
industrial robotic systems. Computers & Industrial Engineering, Vol.52, No.4, pp. 414-
433, ISSN 0360-8352
Kastens, U. & Kleine Büning, H. (2008). Modellierung : Grundlagen und formale Methoden,
Hanser, ISBN 978-3-446-41537-9, München, Germany
Kiencke, U. (1997). Ereignisdiskrete Systeme: Modellierung und Steuerung verteilter Systeme,
Oldenbourg, ISBN 348-6-241-508, München, Germany
Kutz, O., Lutz, C., Wolter, F., Zakharyaschev, M. (2004). ε-connections of abstract
description systems. Artif. Intelligence, Vol.156, No.1, pp.1-73, ISSN 0004-3702
Neches, R., Fikes, R., Finin, T., Gruber, T., Patil, R., Senator, T. & Swartout, W.
(1991). Enabling technology for knowledge sharing. AI Mag, Vol.12, No.3, pp.36–56
Scheid, W. (2010). Perspektiven zur Automatisierung in der Logistik : Teil 1 - Ansätze und
Umfeld. Hebezeuge Fördermittel - Fachzeitschrift für technische Logistik, Vol.50, No.9,
pp. 406–409, ISSN 0017-9442
Programming of Intelligent Service Robots with the Process Model "FRIEND::Process" and Configurable Task-Knowledge
1. Introduction
In Alex Proyas's science fiction movie "I, Robot" (2004), a detective suspects a robot of
murder. This robot is a representative of a new generation of personal assistants that help
and entertain people during daily life activities. In opposition to public opinion, the
detective proclaims that the robot is able to follow its own will and is not bound by Isaac
Asimov's three main rules of robotics (Asimov, 1991). In the end this assumption turns out
to be true.
Even though the technological part of this story is still far beyond realization, the idea of a
personal robotic assistant is still in demand. Experts predicted that robotic solutions would be
ready to break through in domestic and other non-industrial domains (Engelberger, 1989) within
a few years. But up to now, only rather simple robotic assistants like lawn mowers and
vacuum cleaners are available on the market. As stated in (Gräfe & Bischoff, 2003), all these
systems have in common that they show only traces of intelligence and are specialists,
designed mostly for one particular task. Robots able to solve more complex tasks have
not yet left prototypical status. This is due to the large number of scientific and technical
challenges that have to be coped with in the domain of robots acting and interacting in
human environments (Kemp et al., 2007).
The focus of this paper is to describe a tool-based process model, called the
"FRIEND::Process"1, which supports the development of intelligent robots in the domain of
personal assistants. The paper concentrates on the interaction and close relation between the
FRIEND::Process and configurable task-knowledge, the so called process-structures.
Process-structures are embedded in different layers of abstraction within the layered control
architecture MASSiVE2 (Martens et al., 2007). Even though the usage of layered control
architectures for service robots is not a novel idea and has been proposed earlier (Schlegel &
Woerz, 1999; Schreckenghost et al., 1998; Simmons & Apfelbaum, 1998), MASSiVE is
tailored for process-structures and thus is the vehicle for the realization of verified
intelligent task execution for service robots, as shown in the following. The advantages
of using process-structures are summarized here:
• Determinism: Process-structures represent the complete finite sequence of actions that
have to be carried out during the execution of a task. Due to the possibility of a bijective
transformation from process-structures to Petri-Nets, a-priori verification with respect
to deadlocks, reachability and liveness becomes possible (see the sketch after this list).
Thus, the task planner and executor, as part of the layered architecture, operate
deterministically when using verified task-knowledge.
• Real-time capability: Additionally, the complexity of the task planning process satisfies
real-time execution requirements, because this process is reduced to a graph search
problem within the state-graph of the associated Petri-Net.
• Fault-Tolerance: Erroneous execution results are explicitly modeled within process-
structures. Additionally, redundant behavior is programmatically foreseen. If an
alternative robotic operation, which shall cope with the unexpected result, is not
available, the user is included as part of a semi-autonomous task execution process.
1 The name FRIEND::Process is related to the FRIEND projects (Martens et al., 2007). It has been
developed within the scope of these projects, but is also applicable to other service robots.
2 MASSiVE – Multilayer Control Architecture for Semi-Autonomous Service Robots with Verified Task
Execution
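As an illustration of the a-priori verification mentioned in the 'Determinism' item above, the following minimal sketch (ours, not the MASSiVE implementation; the net, its places and its transitions are hypothetical) enumerates the reachable markings of a small Petri-Net and flags deadlocks:

```python
# Minimal sketch: reachability and deadlock analysis of a tiny safe Petri-Net.
from collections import deque

# transition -> (consumed places, produced places)
NET = {
    "grasp":    ({"free", "meal_in_fridge"}, {"gripped"}),
    "retrieve": ({"gripped"}, {"meal_out", "free_fridge"}),
}
INITIAL = frozenset({"free", "meal_in_fridge"})
FINAL = frozenset({"meal_out", "free_fridge"})

def reachable_markings(net, initial):
    """Breadth-first enumeration of all reachable markings."""
    seen, queue = {initial}, deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in net.values():
            if pre <= m:                         # transition enabled
                nxt = frozenset((m - pre) | post)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

def deadlocks(net, markings, final):
    """Non-final markings in which no transition is enabled."""
    return [m for m in markings
            if m != final and not any(pre <= m for pre, _ in net.values())]

marks = reachable_markings(NET, INITIAL)
print("final reachable:", FINAL in marks, "| deadlocks:", deadlocks(NET, marks, FINAL))
```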
To be able to provide a user-friendly configuration of process-structures and to guarantee
consistency throughout all abstraction levels of task-knowledge, a tool-based process model
– the FRIEND::Process – has been developed. The process model, on the one hand, guides
the development and programming of intelligent behavior for service robots with process-
structures. On the other hand, process-structures can be seen as a process model for the
service robot itself, which guides the task execution of the robot during runtime. The unique
feature of the FRIEND::Process in comparison to other frameworks (Gostai, 2011; Microsoft,
2011; Quigley et al., 2009) and the above mentioned control architectures is that it relies
completely on configurable process-structures and thus on determinism, real-time capability and
fault tolerance.
The FRIEND::Process consists of the following development steps:
• Analysis of Scenario and Task Sequence: A scenario is split up into a sequence of
tasks.
• Configuration of Object Templates and Abstract Process-Structures: The task
participating objects are specified as Object Templates and pictographic process-
structures on the symbolic (abstract) level are configured and verified.
• Configuration of Elementary Process-Structures: Process-structures on the level of
system resources and sub-symbolic (geometric) information are configured and verified
with the help of function block networks.
• Configuration and Testing of Reactive Process-Structures: Process-structures on the
level of algorithms and closed loop control, operating sensors and actuators, are
configured and tested, also with configurable function blocks.
• Task Testing: Task planning and execution is applied on all levels of process-structures
and a complete and complex task execution is tested.
In the following Section 2, the motivation for the introduction of process-structures is
explained in more detail by discussing the complexity of task planning for service robots
with the help of an exemplary scenario. The description of the FRIEND::Process
development steps is subject of Section 3. Throughout this description, exemplary process-
structures of the sample scenario of Section 2 are introduced for each development step.
Finally, Section 4 summarizes and concludes the description of the FRIEND::Process for
programming intelligent service robots.
• Open the microwave-oven door again, grasp and retrieve the meal, close the
microwave-oven door
• Place the meal in front of the user, take away the lid
• In a cycle, take food with the spoon and serve it near the user’s mouth, finally put the
spoon back to the meal-tray
• Clear the wheelchair tray
Fig. 3. Decomposition of a scenario on four abstraction levels, illustrated with the sample
scenario “Meal preparation and eating assistance”
As shown in Fig. 3, the overall scenario is decomposed into tasks, abstract operators,
elementary operators and reactive operators according to the layered control architecture
MASSiVE. Abstract process-structures (PSA3) model behavior on task planning level and
elementary process-structures (PSE) model behavior on system planning level. The reactive
process-structures (PSR) define reactive operations on the executable algorithmic level. From the
viewpoint of task planning, the "meal preparation and eating assistance" scenario is split up
into 6 tasks, 19 abstract operators and 43 elementary task planning operators. Additionally,
a large set of reactive operators is required within the execution layer.
In typical human environments, it is impossible to predefine a static sequence of operators
beforehand. Many dynamic aspects resulting from dynamic environmental changes have to be
considered, e. g. caused by changing lighting conditions, arbitrarily placed and filled objects,
changing locations of objects and the robotic platform, various obstacles, and many more.
Consequently, a strategy to plan a sequence of actions that fulfills a certain task is mandatory.
Many task planners are based upon deliberative approaches according to classical artificial
intelligence. Typically, the robotic system models the world with the help of symbolic facts
(e. g. first order predicate logic, (Russel & Norvig, 2003)), where each node of a graph
represents a state (snapshot) of the world. The planner has to find a sequence of operations
which transforms a given initial state into a desired target state. In the worst case this leads
to NP-complete problems, since classical search algorithms have exponential complexity
(Russel & Norvig, 2003). If we consider breadth-first search as a simple example,
a calculation time of hours results at search depth 8, and with a depth of 14, hundreds of
years are required for exhaustive search (a branching factor of 10 and a calculation time of 10,000
nodes/s are assumed). The search depth is related to the number of required operators for a
certain task and the branching factor results from the number of applicable operators in one
node. Compared to the number of required and available operations shown in Fig. 3 it
becomes obvious that only trivial problems can be solved on this basis. Certainly, the mean
search time can be improved in comparison to breadth-first search, with e. g. heuristic
approaches like A*, with hierarchical planning, search in the space of plans or successive
reduction of abstraction (Russel & Norvig, 2003; Weld, 1999), but in the worst case a planning
complexity as mentioned has to be faced. Even though the improvements of deliberative
task planners are notable, it is still questionable whether they are efficient (real-time
capable) and robust (deterministic and fault-tolerant) enough for the application in real
world domains (Cao & Sanderson, 1998; Dario et al., 2004; Russel & Norvig, 2003).
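The quoted orders of magnitude can be reproduced with a few lines under the stated assumptions (branching factor 10, 10,000 expanded nodes per second); a minimal sketch:

```python
# Quick check of the breadth-first search estimate quoted above.
def exhaustive_search_time(depth, branching=10, nodes_per_s=1e4):
    nodes = sum(branching ** d for d in range(depth + 1))  # nodes up to 'depth'
    return nodes / nodes_per_s                              # seconds

for depth in (8, 14):
    s = exhaustive_search_time(depth)
    print(f"depth {depth}: {s:.3g} s = {s / 3600:.3g} h = "
          f"{s / (3600 * 24 * 365):.3g} years")
# depth 8 -> roughly 3 hours; depth 14 -> roughly 350 years
```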
Process-structures are still flexible enough to adapt to diverse objects, so that their re-usability in different
scenarios is achieved. Technical details of process-structures beyond this summarized
concept description can be found in (Martens et al., 2007).
3. The FRIEND::Process
Process models structure complex processes in manifold application areas. With respect to
system- and software-engineering, a process model shall organize the steps of development,
the tools to be used and finally the artifacts to be produced throughout the different
development stages. The overall scheme of the FRIEND::Process is depicted in Fig. 5.
Central elements of the process, and consequently its distinguishing feature in comparison to other
process models, are the process-structures. Within the development steps, the building
blocks of process-structures are decomposed as shown in Table 2. In the following sections
the five development steps of the FRIEND::Process are discussed in detail. Thus, the
contents of Table 2, i. e. the composition of process-structures and the decomposition on the
next level as well as the abbreviations will be explained. Also, the application of the
FRIEND::Process for the development of the sample task of “meal preparation and eating
assistance” is shown in each step.
Fig. 5. Scheme of the FRIEND::Process with five development steps and the respective
process-structure levels as well as the involved tools for configuration, planning and
execution
Process-structure | Decomposition | Building blocks
Scenario | Task sequence | Tasks
Task | PSA | System, Object Templates (OTs), Object Constellations (OCs), Facts, Composed Operators (COPs)
COP | PSE | System, Object Templates (OTs), Facts, Skills
Skill | PSR | System, Object Templates (OTs), Reactive Blocks
Table 2. Decomposition of process-structures and their building blocks
Fig. 7. Use case diagram with tasks (use cases) of the sample scenario. For each task, a
detailed description as well as the set of task participating objects (TPO) is specified
The objects involved in task execution are the elements that are relevant in all subsequent
development steps. To follow the principle of re-usable task-knowledge, the TPOs are
specified as abstract object classes. For example, a task that describes the fetching of an
object from a container-like place (see Fig. 4) can be re-used to fetch either a bottle or a meal
from the refrigerator. In the FRIEND::Process the re-usable classes of objects are specified as a
hierarchical UML ontology. An exemplary ontology for the scenario "meal preparation and
assistance to eat" is depicted in Fig. 8. The TPOs are constructed from basic
geometric bodies (cuboid and cylinder), and more complex objects are created with
inheritance and aggregation.
Fig. 8. Ontology of task participating objects (TPO) for the scenario “Meal preparation and
assistance to eat”
To embed the TPOs in the tool-chain that covers all further development steps, the concept
of “Object Templates” (OT) has been introduced (Kampe & Gräser, 2010). The configuration
of Object Templates and their integration into the different levels of process-structure
configuration will be discussed in more detail within the following process steps.
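A minimal sketch (ours; the attribute values are hypothetical and real Object Templates carry far more information) of how TPO classes can be composed from basic geometric bodies via inheritance and aggregation:

```python
# Minimal sketch: TPO classes built from basic bodies, mirroring Fig. 8.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeometricBody:
    name: str

@dataclass
class Cuboid(GeometricBody):      # inheritance from a basic body
    width: float = 0.0
    depth: float = 0.0
    height: float = 0.0

@dataclass
class Cylinder(GeometricBody):
    radius: float = 0.0
    height: float = 0.0

@dataclass
class MealTray:                   # complex TPO aggregated from basic bodies
    parts: List[GeometricBody] = field(default_factory=list)

tray = MealTray(parts=[Cuboid("base", 0.3, 0.2, 0.05),
                       Cylinder("handle", 0.01, 0.15)])
print([p.name for p in tray.parts])
```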
Abstract process-structures are configured in a graphical development environment, the so-called
PSA-Configurator. Fig. 10 shows the PSA-Configurator with the PSA "Fetch meal from fridge".
Fig. 10. PSA-Configurator with the pictographic abstract process-structure modeling the
Task “Fetch meal from fridge”4
Fig. 11. The meal tray of the eating scenario as photo (left) and modeled by means of an
Object Template (right)
Fig. 12. The different separation stages (detached lid, detached spoon, both lid and spoon
detached) of the meal tray
The initial situation consists of two object constellations. The first one models the manipulator in a free
position in the workspace (instance number “0” is assigned to this object constellation
“MP.1_0”). The second object constellation models the already opened fridge containing the
meal tray (“Fr.1-Mt.1_0”). The two OCs are connected via the assembly COP
“GraspObjectInContainer(MP.1, Mt.1, Fr.1)”. If physically possible, a complementary
disassembly operator is assigned to model the reverse operation for re-usage of the PSA in
another scenario context. In this case this is the COP “DepartFromContainer(MP.1, Mt.1,
Fr.1)”. The assembled object constellation is depicted on the bottom left side and the
associated abstract planning symbol is “Fr.1-MP.1-Mt.1_0”. Due to the associated symbolic
facts, which are imposed within the object constellation by the post-condition facts of the
COP, the pictographic representation is rendered so that the manipulator grasps the meal
tray in the fridge.
Besides assembly and disassembly operators, the And/Or-Net syntax provides operators
modeling the internal state transition (IST) of object constellations (IST COPs). IST COPs are
applied when the physical contact state of the involved objects is not changed. From the
viewpoint of planning on abstract level, objects being in close relative locations to each other
are considered to be in a physical contact situation. Therefore, the IST COP
“GetObjectOutside(MP.1, Mt.1, Fr.1, InsertLoc)” is applied to transform the OC “Fr.1-MP.1-
Mt.1_0” on the left side into the OC “Fr.1-MP.1-Mt.1_1” on the right side. Finally, the COP
“MoveObjectFromRelLoc(MP.1, Mt.1, Fr.1, InsertLoc)” models the disassembly operation
and results in two object constellations which model the target situation of this abstract
process-structure: “Fr.1_0” is the empty fridge and “MP.1-Mt.1_0” is the manipulator with
the gripped meal tray in a free position in the work space.
To be able to develop and verify the three levels of process-structures independently, i. e. in
a modular manner, the consistency of task-knowledge on all levels has to be assured. This is
achieved with common building blocks of the different process-structures as shown in the
decomposition chain in Table 2. The common elements are the interfaces to the next level of
process-structures. The important interfacing elements between PSA and PSE are the pre-
and post-condition facts of the COP to be decomposed as PSE in the next process step. For
the COP “GraspObjectInContainer” the facts are shown in Table 4.
Pre-condition facts | Post-condition facts
HoldsNothing(Manipulator) = True | HoldsNothing(Manipulator) = False
IsInFreePos(Manipulator) = True | IsInFreePos(Manipulator) = False
- | IsGripped(Manipulator, Object) = True
ContainerAccessible(Container) = True | -
IsInsideContainer(Object, Container) = True | -
Table 4. Pre- and Post facts of COP “GraspObjectInContainer(Manipulator, Object,
Container)”
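A minimal sketch (ours; the fact encoding is an assumption, not the actual interface format) of how such pre- and post-condition facts can be used to check the applicability of a COP and to update a symbolic world state:

```python
# Minimal sketch: applying the COP of Table 4 to a symbolic world state.
PRE = {("HoldsNothing", "Manipulator"): True,
       ("IsInFreePos", "Manipulator"): True,
       ("ContainerAccessible", "Container"): True,
       ("IsInsideContainer", "Object", "Container"): True}

POST = {("HoldsNothing", "Manipulator"): False,
        ("IsInFreePos", "Manipulator"): False,
        ("IsGripped", "Manipulator", "Object"): True}

def applicable(state, pre):
    """A COP is applicable iff all its pre-condition facts hold in the state."""
    return all(state.get(fact) == value for fact, value in pre.items())

def apply_cop(state, pre, post):
    if not applicable(state, pre):
        raise ValueError("pre-condition facts not satisfied")
    new_state = dict(state)
    new_state.update(post)      # post-condition facts overwrite the old ones
    return new_state

state = dict(PRE)                    # a world state satisfying the pre-facts
state = apply_cop(state, PRE, POST)  # GraspObjectInContainer
print(state[("IsGripped", "Manipulator", "Object")])  # True
```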
A-priori verification of task-knowledge on this level takes place with the help of Petri-Nets,
which result from the automatic conversion of FBNs.
explicitly connected to an abort block to increase the readability of the network structure.
The typical construction rule for a semi-autonomous system (like FRIEND) is to provide
user interactions as redundant actions for autonomous system operations. As shown in Fig.
15, the failure of an autonomous operation (e. g. “AcquireObjectBySCam”) is linked to the
user interaction “DetermineObjectBySCam”, replacing the failed system action.
6 http://imagenets.sourceforge.net/
The PSR Configuration Framework consists of the following five parts (see Fig. 17):
• PSR-Configurator,
• Embedding of PSR into any C++ code via PSR-Executor,
• Reactive process-structures (PSR), which are executable function block networks,
• Extensible set of Plug-Ins and
• Configurable function blocks
algorithm. The PSR-Executor is in fact also a function block, which can load and process a
PSR. The connection between the PSR inside an Executor and the outer net is established by
special input and output blocks. For example the PSR “Color2Color3D” shown in Fig. 18
calculates a colored point cloud out of a stereo image pair. On the left side there are two
input blocks, which hand over the images from the block in orange. This block only exists in
this PSR for testing the net and will be ignored on execution if this PSR is loaded by a PSR-
Executor (see Fig. 19, right side).
Fig. 18. The functionality of calculating a colored point cloud out of a stereo image pair is
depicted in this PSR, called “Color2Color3D”
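A minimal sketch (ours, far simpler than the actual PSR framework) of the function-block-network idea: blocks run as soon as their inputs are available, and a whole subnet can be hidden behind named input blocks, similar to a PSR loaded by a PSR-Executor:

```python
# Minimal sketch: a function block network executed in dependency order.
class Block:
    def __init__(self, name, fn, inputs):
        self.name, self.fn, self.inputs = name, fn, inputs  # inputs: block names

    def run(self, values):
        return self.fn(*(values[i] for i in self.inputs))

def run_network(blocks, sources):
    """Execute blocks whose inputs are ready until all outputs are computed."""
    values = dict(sources)
    pending = list(blocks)
    while pending:
        ready = [b for b in pending if all(i in values for i in b.inputs)]
        if not ready:
            raise RuntimeError("cyclic or underspecified network")
        for b in ready:
            values[b.name] = b.run(values)
            pending.remove(b)
    return values

# Hypothetical 'Color2Color3D'-style subnet: two image inputs, one output
net = [
    Block("rectify_l", lambda img: img + "_rect", ["left"]),
    Block("rectify_r", lambda img: img + "_rect", ["right"]),
    Block("point_cloud", lambda l, r: f"cloud({l},{r})",
          ["rectify_l", "rectify_r"]),
]
print(run_network(net, {"left": "imgL", "right": "imgR"})["point_cloud"])
```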
Fig. 19. The previously shown PSR can be loaded as one PSR-Executor block
Fig. 20. Left: original image, right: resulting point cloud of the stereo camera images
visualized in 3D by the PSR-Configurator
The PSR in Fig. 19 shows the use of a subnet for image acquisition together with the
calculation of the extrinsic matrices of a stereo camera, which describe the relation of the
cameras to the robot. These matrices depend on an invariant transformation frame inside
the pan-tilt-head (see Fig. 1) and its rotation angles. By combining the two subnets, a live
view of the stereo camera’s point cloud can be calculated (depicted in Fig. 20, in the center of
the images the meal tray can be seen). As a visually guided robot is a real world object,
which moves in the three dimensional Cartesian space, it is useful to display the vision
results in the same space. While configuring a PSR with the PSR-Configurator, intermediate
results can be visualized in two and three dimensions; depending on the data type, for
example scalar values can only be visualized in 2D, camera matrices can be visualized in 2D
and 3D (using OpenGL (Wright et al., 2010), see Fig. 20, right).
To be able to execute a PSR as a skill block within the context of the PSE layer and to guarantee
that the PSE interfaces are respected, a special type of "Verified PSR-Executor block" is
created. During configuration of this kind of block, the PSR-Configurator checks that the
used resources as well as input and output parameters match the specification of the certain PSE
skill to be modeled as PSR. For example, in the case of the PSR "AcquireObjectBySCam(Object)",
the allowed resource is the stereo camera system. The input parameter is the Object Template
of the given object and the output parameters are the return values "Success" and "Failure".
Fig. 21. PSR “MajorAxisPoints” which detects red areas of a certain size and calculates the
major axes of these areas. Orange blocks are omitted when this PSR is used in a PSR-Executor
To grasp the meal tray handle with the manipulator, the determination of its location in 3D
is required. Thus, a 2D detection of the meal tray is not sufficient. However, the previously
created and tested PSR "MajorAxisPoints" can be used twice, once for each image of the
stereo camera. Fig. 22 depicts the usage of the previous net to calculate the 3D line,
describing the handle of the meal tray. The block Optimal Stereo Triangulation computes a 3D
contour based on key feature points, extracted from a stereo image. With the known camera
matrices and the 2D feature correspondences, the 3D points are found by the intersection of
two projection lines in the 3D space using optimal stereo triangulation, as described in
(Natarajan et al., 2011).
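For orientation, basic linear (DLT) triangulation, which the optimal method of (Natarajan et al., 2011) refines, can be sketched as follows (ours; the camera matrices below are hypothetical rectified cameras, not the FRIEND calibration):

```python
# Minimal sketch: linear stereo triangulation via a least-squares null space.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: 2D points; returns 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]           # de-homogenize

# Hypothetical rectified cameras: focal length 500 px, baseline 0.1 m
P1 = np.hstack([np.diag([500.0, 500.0, 1.0]), np.zeros((3, 1))])
P2 = P1.copy(); P2[0, 3] = -500.0 * 0.1
print(triangulate(P1, P2, (50.0, 0.0), (0.0, 0.0)))  # depth f*b/disparity = 1 m
```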
Fig. 23. Meal tray detection and Object Template placement based on 3D handle detection,
frame calculation and Object Template movement (UD = user data)
For simulation and PSR unit testing in the PSR-Configurator, the fridge, the static
environment (wheelchair, monitor and user) and the robot with its current configuration can
be placed in the same 3D scene with the meal tray. Fig. 24 and Fig. 25 show the real scene
and the 3D simulation result in comparison.
4. Conclusion
As shown in Section 2.1 it is a challenging task to establish intelligent behavior of service
robots operating in human environments. Typical operation sequences of support tasks in
daily life activities seem to be simple from human understanding. However, to realize them
with a robotic system, a huge complexity arises due to the variability and unpredictability of
human environments.
In this paper the FRIEND::Process – an engineering approach for programming robust
intelligent robotic behavior – has been presented. This approach is an alternative solution in
contrast to other existing approaches, since it builds on configurable process-structures as
central development elements. Process-structures comprise a finite-sized and context-related
set of task knowledge. This allows a priori verification of the programmed system behavior
and leads to deterministic, fault-tolerant and real-time capable robotic systems.
The FRIEND::Process organizes the different stages of development and leads to consistent
development artifacts. This is achieved with the help of a tool chain for user-friendly
configuration of process-structures.
The applicability of the methods proposed here has been proven throughout the realization
of the AMaRob project (IAT, 2009), where task execution in three complex scenarios for the
support of disabled persons in daily life activities has been solved. One of these scenarios is
the "Meal preparation and eating assistance" scenario, used for exemplification throughout
this paper. The most error-prone and thus challenging action in this scenario is the correct
recognition of smaller objects (e. g. the handle of the meal tray) under extreme lighting
conditions. However, with the inclusion of redundant skills in the elementary process-
structures, the system's robustness has been raised in an evolutionary manner. In cases
where even redundant autonomous skills did not execute successfully, the accomplishment
of the desired task was achieved via inclusion of the user within a user interaction skill.
Currently, the methods and tools discussed in this paper are being developed
further and are applied in the project ReIntegraRob (IAT, 2011). The mid-term objective is to
integrate the different configuration tools for process-structures into one integrated
development environment.
5. Glossary
COP Composed Operator
FBN Function Block Network
FRIEND Functional Robotarm with user-frIENdly interface for Disabled people
MASSiVE Multilayer Control Architecture for Semi-Autonomous Service Robots with
Verified Task Execution
OC Object Constellation
OT Object Template
PS Process-Structure
PSA Abstract Process-Structure
PSE Elementary Process-Structure
PSR Reactive Process-Structure
TPO Task Participating Object
6. References
Asimov, I. (1991). Robot Visions, Roc (Reissue 5th March 1991), ISBN-10: 0451450647
Cao, T. & Sanderson, A. C. (1998). AND/OR net representation for robotic task sequence
planning, In: IEEE Transactions on Systems, Man, and Cybernetics - part C: Applications
and Reviews, 28(2)
Dario, P., Dillman, R., and Christensen, H. I. (2004). EURON research roadmaps. Key area 1
on ‘Research coordination’, Available from http://www.euron.org
Engelberger, J. F. (1989), Robotics in Service, MIT Press, Cambridge, MA, USA, 1st ed, 1989
Gostai. (2010). Urbi 2.0. Available from http://www.gostai.com
Gräfe, V. & Bischoff, R. (2003). Past, present and future of intelligent robots, Proceedings of the
2003 IEEE International Symposium on Computational Intelligence, In: Robotics and
Automation (CIRA 2003), volume 2, ISBN 0-7803-7866-0, Kobe, Japan
IAT (2009). AMaRob Project, Institute of Automation, University of Bremen, Germany.
Available from http://www.amarob.de
IAT (2011). ReIntegraRob Project, Institute of Automation, University of Bremen, Germany.
Available from http://www.iat.uni-bremen.de/sixcms/detail.php?id=1268
Kampe, H. & Gräser, A. (2010). Integral modelling of objects for service robotic systems,
Proceedings for the joint conference of ISR 2010 (41st International Symposium on
Robotics) und ROBOTIK 2010 (6th German Conference on Robotics), 978-3-8007-
3273-9, Munich, Germany
Kemp, C. C., Edsinger, A. & Torres-Jara, E. (2007). Challenges for robot manipulation in
human environments, In: IEEE Robotics and Automation Magazine, vol. 14, pp. 20-29
Martens, C. (2003). Teilautonome Aufgabenbearbeitung bei Rehabilitations-robotern mit
Manipulator - Konzeption und Realisierung eines software-technischen und
algorithmischen Rahmenwerks, PhD dissertation, University of Bremen, Faculty of
Physics / Electrical Engineering, (in German)
Martens, C., Prenzel, O. & Gräser, A. (2007). The rehabilitation robots FRIEND-I & II: Daily
life independency through semi-autonomous task-execution, In: Rehabilitation
Robotics (Sashi S Kommu, Ed.), pp. 137-162., I-Tech Education and Publishing,
Vienna, Austria, Available from
http://www.intechopen.com/books/show/title/rehabilitation_robotics
Microsoft. (2011). Microsoft Robotic Studio, Available from
http://www.microsoft.com/robotics
Natarajan, S.K., Ristic-Durrant, D., Leu, A., Gräser, A. (2011). Robust stereo-vision based 3D-
modeling of real-world objects for assistive robotic applications, in Proc. of IEEE/RSJ
International Conference on Robots and Systems (IROS), San Francisco, USA
Ojdanic, D. (2009). Using cartesian space for manipulator motion planning - application in
service robotics, PhD dissertation, University of Bremen, Faculty of Physics and
Electrical Engineering
Prenzel, O. (2005). Semi-autonomous object anchoring for service-robots, in B. Lohmann (Ed.),
A. Gräser, Methods and Applications in Automation, pp. 57 - 68, Shaker-Verlag,
Aachen, 2005, ISBN 3-8322-4502-2
Prenzel, O., Boit, A. and Kampe H. (2008) Ergonomic programming of service robot
behavior with function block networks, in Methods and Applications in Automation,
Shaker-Verlag, pp. 31-42
Prenzel, O. (2009). Process model for the development of semi-autonomous service robots, PhD
dissertation, University of Bremen, Faculty of Physics and Electrical Engineering
Quigley, M., Conley, K., Gerkey, B. P., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A. Y.
(2009) ROS: an open-source Robot Operating System, In: Proc. Of ICRA Workshop on
Open Source Software
Russel, S., and Norvig, P. (2003). Artificial Intelligence - A Modern Approach, Prentice Hall,
Upper Saddle River, New Jersey, 2nd ed.
Schlegel, C., and Woerz, R. (1999) The software framework SmartSoft for implementing
sensorimotor systems, In: Proc. of the IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), pp. 1610-1616
Schmidt, D. C. (2006). Model-driven-engineering, In: guest editor's introduction, pp. 25-31,
IEEE Computer
Schreckenghost, D., Bonasso, R., Kortenkamp, D., Ryan D. (1998) Three tier architecture for
controlling space life support systems, In: Proc. of IEEE SIS'98, Washington DC,
USA
Simmons, R., Apfelbaum, D. (1998) A task description language for robot control, In: Proc. of
Conference on Intelligent Robotics and Systems
Weld, D. S. (1999). Recent advances in AI planning, in AI Magazine, vol 20, pp. 93-123
Wright, R. S., Lipchak, B., Haemel, N. & Sellers, G. (2010). OpenGL SuperBible: Comprehensive
Tutorial and Reference (5th Edition), Addison-Wesley, ISBN 978-0321712615
26
Performance Evaluation of Fault-Tolerant Controllers in Robotic Manipulators
1. Introduction
Thanks to the incorporation of robotic systems, the development of industrial processes has
generated a great increase in productivity, yield and product quality. Nevertheless, as far as
technological advancement permits a greater automation level, system complexity also
increases, with greater number of components, therefore rising the probability of failures or
anomalous operation. This can result in operator's hazard, difficulties for users, economic
losses, etc. Robotic automatic systems, even if helped in minimizing human operation in
control and manual intervention tasks, haven't freed them from failure occurrences.
Although such failures can´t be eliminated, they can be properly managed through an
adequate control system, allowing to reduce degraded performance in industrial processes.
near to it. These systems are called fault tolerant systems and have become increasingly
important for robot manipulators, especially those performing tasks in remote or hazardous
environments, like outer space, underwater or nuclear environments.
In this chapter we will address the concept of fault tolerance applied to a robotic
manipulator. We will consider the first three degrees of freedom of a redundant SCARA-
type robot, which is intended to follow a Cartesian test trajectory composed of a
combination of linear segments. We developed three fault-tolerant controllers based on
classic control laws: hyperbolic sine-cosine, computed torque and adaptive inertia. The tests
for these controllers are run in a simulation environment developed with
MatLab/Simulink software. As a performance requirement for the controllers, we
considered the application of a failure consisting in the blocking of one of the manipulator's
actuators during trajectory execution. Finally, we present a performance evaluation for each
of the above mentioned fault-tolerant controllers, through joint and Cartesian errors, by
means of graphs and RMS values.
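For reference, the RMS value used below is the square root of the mean of the squared error samples; a minimal sketch (ours):

```python
# Minimal sketch: RMS value of a sampled error signal.
import numpy as np

def rms(error_samples):
    e = np.asarray(error_samples, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

print(rms([0.1, -0.2, 0.05]))  # RMS of a hypothetical joint-error trace
```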
Fig. 2b. Stages included in the design of a fault tolerant control system
In the evaluation of fault tolerant controllers it is assumed that a robotic manipulator in which
a failure has arisen in one or more actuators can be considered as an underactuated system,
that is, a system with fewer actuators than joints (El-Salam, El-Haweet, & Pertew, 2005).
Such underactuated systems present a greater degree of complexity
compared with the simplicity of conventional robot control and have not yet been studied in
great depth (Rubí, 2002). The advantages of underactuated systems have been recognized mainly
because they are lighter and cheaper, with less energy consumption. Therefore, a great deal
of attention is being focused on underactuated robots (Xiujuan & Zhen, 2007). Figure
3 shows a diagram displaying the first three degrees of freedom of a SCARA-type
redundant manipulator, upon which the tests will be conducted. The considered failure is the
blocking of the second actuator, which makes this robot an underactuated system.
With this manipulator in mind, it is necessary to obtain its model; therefore, we
consider that the dynamic model of a manipulator with n joints can be expressed
through equation (1):
\tau = M(q)\,\ddot{q} + C(q,\dot{q}) + G(q) + F(\dot{q}) \qquad (1)

where:
\tau: vector of generalized forces (n×1 dimension).
M(q): inertia matrix (n×n dimension).
C(q,\dot{q}): centrifugal and Coriolis forces vector (n×1 dimension).
q: joint position vector.
\dot{q}: joint velocity vector.
G(q): gravity force vector (n×1 dimension).
\ddot{q}: joint acceleration vector (n×1 dimension).
F(\dot{q}): friction forces vector (n×1 dimension).
Under failure conditions in actuator number 2, that is, its blocking, component 2 of
equation (1) becomes a constant.
Fig. 4. Scheme of the first three DOF of a redundant SCARA-type robotic manipulator
4. Considered controllers
Considering the hybrid nature of fault tolerant control, we propose an active fault tolerant
control scheme having a different control law according to the status of the robotic manipulator, i.e.
normal or failing, with on-line sensing of possible failures and, correspondingly,
reconfiguration of the controller by selecting the most adequate control law (changing inputs
and outputs).
Next, we will present a summary of the controllers considered for performance evaluation
when a failure occurs in the second actuator of the previously described manipulator.
q_e = q_d - q \qquad (3)

\lim_{t \to \infty} q_e = 0 \qquad (4)
This behavior can be proved by analyzing equation (5) and noting that the only
equilibrium point of the system is the origin (0, 0):

d/dt [ qe ; q̇ ] = [ −q̇ ; M(q)⁻¹( Kp sinh(qe)cosh(qe) − Kv sinh(q̇) − C(q, q̇)q̇ ) ]   (5)
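The hyperbolic sine-cosine control law itself is not reproduced above, but it can be inferred from the closed loop (5); assuming the form τ = Kp sinh(qe)cosh(qe) − Kv sinh(q̇) + G(q), in line with the saturated position controllers studied in (Barahona et al., 2002), a minimal C++ sketch is:

#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Hyperbolic sine-cosine law assumed from the closed loop (5):
//   tau_i = Kp_i*sinh(qe_i)*cosh(qe_i) - Kv_i*sinh(qdot_i) + G_i(q)
// Kp and Kv hold the diagonals of the positive definite gain matrices.
Vec3 sinhCoshControl(const Vec3& qe, const Vec3& qdot,
                     const Vec3& Kp, const Vec3& Kv, const Vec3& G) {
  Vec3 tau{};
  for (int i = 0; i < 3; ++i)
    tau[i] = Kp[i] * std::sinh(qe[i]) * std::cosh(qe[i])
           - Kv[i] * std::sinh(qdot[i]) + G[i];
  return tau;
}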
τ = M̂(q)( q̈d + Kv q̇e + Kp qe ) + Ĉ(q, q̇) + Ĝ(q) + F̂(q̇)   (6)
where:
M̂ : Estimation of inertia matrix (n×n dimension).
Kv = diag( Kv1, Kv2, …, Kvn )   (7)

Kv : Positive definite diagonal matrix (n×n dimension).

Kp = diag( Kp1, Kp2, …, Kpn )   (8)

Kp : Positive definite diagonal matrix (n×n dimension).

q̈d : Desired joint acceleration vector (n×1 dimension).
q̇e = q̇d − q̇   (9)

q̈e + Kv q̇e + Kp qe ≈ 0   (10)
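A minimal C++ sketch of the computed torque law (6), with the estimated model terms passed in as placeholders for values produced elsewhere in the simulator:

#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// Computed torque law, equation (6):
//   tau = M_hat(q)*(qdd_d + Kv*qe_dot + Kp*qe) + C_hat + G_hat + F_hat
// M_hat, C_hat, G_hat, F_hat are the estimated model terms; Kp and Kv hold
// the diagonal gains. All arguments are placeholders computed elsewhere.
Vec3 computedTorque(const Mat3& M_hat, const Vec3& C_hat, const Vec3& G_hat,
                    const Vec3& F_hat, const Vec3& qdd_d,
                    const Vec3& qe, const Vec3& qe_dot,
                    const Vec3& Kp, const Vec3& Kv) {
  Vec3 v{};  // auxiliary acceleration: qdd_d + Kv*qe_dot + Kp*qe
  for (int i = 0; i < 3; ++i)
    v[i] = qdd_d[i] + Kv[i] * qe_dot[i] + Kp[i] * qe[i];
  Vec3 tau{};
  for (int i = 0; i < 3; ++i) {
    tau[i] = C_hat[i] + G_hat[i] + F_hat[i];
    for (int j = 0; j < 3; ++j) tau[i] += M_hat[i][j] * v[j];
  }
  return tau;
}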
τ = M(q)q̈ + Vm(q, q̇)q̇ + G(q) + F(q̇)   (11)
In this case, we define an auxiliary error signal r and its derivative ṙ, as shown in equations
(12) and (13), respectively:

r = Λ qe + q̇e   (12)

ṙ = Λ q̇e + q̈e   (13)
where:
Λ : Diagonal definite positive matrix (n×n dimension).
Λ = diag( λ1, λ2, …, λn )   (14)
Substituting equations (3), (9), (12) and (13) into expression (11), we obtain:

τ = M(q)( q̈d + Λ q̇e ) + Vm(q, q̇)( q̇d + Λ qe ) + G(q) + F(q̇) − M(q)ṙ − Vm(q, q̇)r   (15)
where:

Y(q, q̇, qd, q̇d, q̈d) = [ Y11 Y12 … Y1n ; Y21 Y22 … Y2n ; … ; Yn1 Yn2 … Ynn ]   (17)
τ = Y(·)φ − M(q)ṙ − Vm(q, q̇)r   (18)

τ = Y(·)φ̂ + Kv r   (19)

where:
φ̂ : Parameter estimation vector (n×1 dimension).
Kv : Positive definite diagonal matrix (n×n dimension).
The adaptive control updating rule can be expressed by:

dφ̂/dt = Γ Yᵀ(·) r   (20)

where:
Γ : Positive definite diagonal matrix (n×n dimension).
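The adaptive inertia controller combines (12), (19) and (20) at every control step; a minimal C++ sketch, using a forward-Euler integration of (20) (the regressor Y and the state vectors are placeholders for values computed elsewhere):

#include <array>

constexpr int N = 3;  // degrees of freedom
using VecN = std::array<double, N>;
using MatN = std::array<VecN, N>;

// One update step: r = Lambda*qe + qe_dot (12); tau = Y*phi_hat + Kv*r (19);
// phi_hat integrated with a forward-Euler step of d(phi_hat)/dt = Gamma*Y^T*r (20).
// Per (17), Y is taken here as n x n and phi_hat as n x 1.
VecN adaptiveInertiaStep(const MatN& Y, VecN& phi_hat, const VecN& qe,
                         const VecN& qe_dot, const VecN& Lambda,
                         const VecN& Kv, const VecN& Gamma, double dt) {
  VecN r{};
  for (int i = 0; i < N; ++i) r[i] = Lambda[i] * qe[i] + qe_dot[i];   // (12)
  VecN tau{};
  for (int i = 0; i < N; ++i) {
    tau[i] = Kv[i] * r[i];                                            // (19)
    for (int j = 0; j < N; ++j) tau[i] += Y[i][j] * phi_hat[j];
  }
  for (int j = 0; j < N; ++j) {                                       // (20)
    double s = 0.0;
    for (int i = 0; i < N; ++i) s += Y[i][j] * r[i];
    phi_hat[j] += dt * Gamma[j] * s;
  }
  return tau;
}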
In Addendum B we show the set of parameter values employed in the manipulator dynamic
model, and the gains considered for each kind of fault tolerant controller.
Fig. 5. Block diagram of the structure of the fault tolerant controller used to test the above
mentioned control laws
9. Results
After establishing the control laws to be used, we define the trajectory to be fed into
the control system in order to carry out the corresponding performance tests of the
fault-tolerant control algorithms. This trajectory is displayed in figure 6.
Fig. 7. Joint trajectory error with fault control using hyperbolic sine-cosine controller
Fig. 8a. Cartesian trajectory error with fault control using hyperbolic sine-cosine controller
The performance of the computed torque fault-tolerant controller is shown in figures 8b and
9, which display the curves of joint and Cartesian errors under the same failure conditions
as in the previous case.
Fig. 8b. Joint trajectory error with fault control using computed torque controller
Fig. 9. Cartesian trajectory error with fault control using computed torque controller
Figures 10 and 11 show, respectively, the joint and Cartesian errors corresponding to the
performance of the adaptive inertia fault-tolerant controller, under the same failure
conditions imposed on the previous controllers.
Fig. 10. Joint trajectory error with fault control using adaptive inertia controller
Fig. 11. Cartesian trajectory error with fault control using adaptive inertia controller
Fig. 12. Performance index corresponding to joint trajectory (RMS joint error, in degrees, of
eq1, eq2 and eq3 for each controller type: sinh-cosh, computed torque and adaptive inertia)
Fig. 13. Performance index corresponding to Cartesian trajectory
Finally, figures 12 and 13 present a performance summary of the analyzed fault-tolerant
controllers in terms of joint and Cartesian root mean square (RMS) errors, according to
equation (21):
rms = sqrt( (1/n) Σi=1..n ei² )   (21)
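Equation (21) translates directly into code; a minimal sketch:

#include <cmath>
#include <vector>

// Equation (21): RMS value of a sampled error signal e_1..e_n.
double rms(const std::vector<double>& e) {
  if (e.empty()) return 0.0;
  double sum = 0.0;
  for (double ei : e) sum += ei * ei;
  return std::sqrt(sum / static_cast<double>(e.size()));
}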
10. Conclusions
In this work we presented a performance evaluation of three fault-tolerant controllers based
on classic control techniques: hyperbolic sine-cosine, computed torque and adaptive inertia.
These fault-tolerant controllers were applied to the first three degrees of freedom of a
redundant SCARA-type robotic manipulator. The different system stages were
implemented in a simulator developed using MATLAB/Simulink, which represents the
behavior of the robotic manipulator following a desired trajectory when the blocking of one
of its actuators occurs; in this way we obtained the corresponding simulation curves.
From the obtained results we observed that the adaptive inertia fault-tolerant controller
produces errors with less severe maxima than the other controllers, resulting in more
homogeneous manipulator movements. The largest errors were produced by the computed
torque fault-tolerant controller, both in maximum and RMS values. Consequently, the best
performance is obtained with the adaptive inertia controller, as shown in figures 14 and 15.
It is also remarkable that the hyperbolic sine-cosine fault-tolerant controller has the lowest
implementation complexity, since it does not require the second derivative of the joint
position. This can be a decisive factor when high-performance processors are not available.
M11 = I1zz + I2zz + I3zz + m1 lc1² + m2( l1² + lc2² + 2 l1 lc2 cosθ2 ) +
m3( l1² + l2² + lc3² + 2 l1 l2 cosθ2 + 2 l2 lc3 cosθ3 + 2 l1 lc3 cos(θ2 + θ3) )   (a2)

M21 = M12 = I2zz + I3zz + m2( lc2² + l1 lc2 cosθ2 ) +
m3( l2² + lc3² + l1 l2 cosθ2 + 2 l2 lc3 cosθ3 + l1 lc3 cos(θ2 + θ3) )   (a3)

M31 = M13 = I3zz + m3( lc3² + l2 lc3 cosθ3 + l1 lc3 cos(θ2 + θ3) )   (a4)

M22 = I2zz + I3zz + m2 lc2² + m3( l2² + lc3² + 2 l2 lc3 cosθ3 )   (a5)

M32 = M23 = I3zz + m3( lc3² + l2 lc3 cosθ3 )   (a6)
C = [ C11 C21 C31 ]ᵀ   (a8)

C31 = m3( l2 lc3 sinθ3 + l1 lc3 sin(θ2 + θ3) )θ̇1² + 2 m3 l2 lc3 sinθ3 · θ̇1θ̇2 +
m3 l2 lc3 sinθ3 · θ̇2²   (a12)

G = [ 0 0 0 ]ᵀ   (a13)
where:
m1 : First link mass.
m2 : Second link mass.
14. References
Barahona, J. & Espinosa L., C.F., 2002. Evaluación Experimental de Controladores de
Posición tipo Saturados para Robot Manipuladores. In Congreso Nacional de
Electrónica, Centro de Convenciones William O. Jenkins. Puebla, México, 2002.
Blanke, M., Kinnaert, M., Lunze, J. & Staroswiecki, M., 2000. What is Fault-Tolerant Control.
In IFAC Symposium on Fault Detection, Supervision and Safety for Technical Process -
SAFEPROCESS 2000. Budapest, 2000. Springer-Verlag Berlin Heidelberg.
El-Salam, A., El-Haweet, W. & Pertew, A., 2005. Fault Tolerant Kinematic Controller
Design for Underactuated Robot Manipulators. In The Automatic Control and
Systems Engineering Conference. Cairo, 2005.
Lewis, F., Dawson, D. & Abdallah, C., 2004. Robot Manipulator Control Theory and Practice.
New York: Marcel Dekker, Inc.
Patton, R.J., 1997. Fault-Tolerant Control: The 1997 Situation. In Proc. IFAC Symposium
Safeprocess. Kingston Upon Hull, 1997.
Puig, V. et al., 2004. Control Tolerante a Fallos (Parte I): Fundamentos y Diagnóstico de
Fallos. Revista Iberoamericana de Automática e Informática Industrial, 1(1), pp.15-31.
Rubí, J., 2002. Cinemática, Dinámica y Control de Robots Redundantes y Robots Subactuados. Tesis
Doctoral. San Sebastián, España: Universidad de Navarra.
Siciliano, B. & Khatib, O., 2008. Handbook of Robotics. Berlin, Heidelberg: Springer-Verlag.
Xiujuan, D. & Zhen, L., 2007. Underactuated Robot Dynamic Modeling and Control Based on
Embedding Model. In 12th IFToMM World Congress. Besançon. France, 2007.
Zhang, Y. & Jiang, J., 2003. Bibliographical Review on Reconfigurable Fault-Tolerant Control
Systems. In Proceedings IFAC SAFEPROCESS. Washington, D.C., USA, 2003.
27

An Approach to Distributed Component-Based Software for Robotics*
1. Introduction
Programming robotic systems is not an easy task; even developing software for simple
systems may be difficult, or at least cumbersome and error prone. Such systems are usually
multi-threaded and multi-process, so the synchronization problems associated with processes
and threads must be faced. In addition, distributed systems in network environments are
also very common, and coordination between processes and threads on different machines
increases programming complexity, especially if the network environment is not completely
reliable, as with a wireless network. Hardware abstraction is another question to take into
account: robotic systems contain hardware that is uncommon for an ordinary computer user,
namely sensors and actuators whose APIs (Application Programming Interfaces) are on
occasion not very stable from version to version, and many times not well supported on the
most common operating systems. Besides, it is not rare that even sensors or actuators with
the same functionality (i.e. range sensors, cameras, etc.) are endowed with APIs with very
different semantics. Moreover, many robotic systems must operate under hard real-time
conditions in order to guarantee system and operation integrity, so the software must strictly
obey specific response times, deadlines and high frequencies of operation. Software
integration and portability are also important problems in these systems, since many times
in a single system we may find a variety of machines, operating systems, drivers and
libraries with which we have to cope. Last but not least, we want robotic systems to behave
“autonomously” and “intelligently”, and to carry out complex tasks like mapping a building,
navigating safely in a cluttered, dynamic and crowded environment, or driving a car safely
on a motorway, to name a few.
Although there is no established standard methodology or solution to the situation described
in the previous paragraph, in the last ten years many approaches have blossomed in the
robotics community to tackle it. In fact, many software engineering techniques and
experiences coming from other areas of computer science are being applied to the specific
area of robot control software. A review of the state of the art of software engineering applied
* This work has been partially supported by the Research Project TIN2008-06068 funded by the Ministerio
de Ciencia y Educación, Gobierno de España, Spain, and by the Research Project ProID20100062 funded
by the Agencia Canaria de Investigación, Innovación y Sociedad de la Información (ACIISI), Gobierno de
Canarias, Spain, and by the European Union’s FEDER funds.
specifically to robotics can be found in [Brugali (2007)]. Many of the approaches that have
come up in these last years, albeit different, are either based completely on, or follow or
share to a certain extent, some of the fundamental ideas of the CBSE (Component Based
Software Engineering) [George T. Heineman & William T. Councill (2001)] paradigm as a
principle of design for robotic software.
Some of the significant approaches freely available within the robotics community based on
the CBSE paradigm are GenoM/BIP [Mallet et al. (2010)][Basu et al. (2006)][Bensalem et al.
(2009)], Smartsoft [Schlegel et al. (2009)], OROCOS [The Orocos Project (2011)], project ORCA
[Brooks et al. (2005)], OpenRTM-aist [Ando et al. (2008)] and Willow Garage's ROS project
[ROS: Robot Operating System (2011)]. All these approaches, in general mutually
incompatible, use many similar abstractions in order to build robotic software out of a set of
software components. Each of these approaches usually solves or deals with many of the
mentioned problems faced when programming robotic systems (hard real-time operation,
distributed, multithread and multiprocess programming, hardware abstraction, portability,
etc.), and using any of them implies getting used to its own methodology, abstractions and
software tools for developing robotic software. Our group has also developed an approach
to tackle the problem of programming robotic systems. For some years now we have
been using a CBSE C++ distributed framework designed and developed in our laboratory,
termed CoolBOT [Antonio C. Domínguez-Brito et al. (2007)], which is also aimed at easing
software development for robotic systems. Over several years of use, acquiring experience
programming mobile robotic systems, we have ended up integrating into CoolBOT some new
developments in order to improve its use and operation. These new improvements have
focused mainly on two questions, namely: transparent distributed computation, and
“deeper” interface decoupling; they will be presented in more depth in the following
sections. The paper is organized as follows: in section 2 we briefly introduce an overview of
CoolBOT; next, we focus on each of the mentioned topics in sections 4 and 5, respectively;
the last section is devoted to the conclusions of this work.
2. CoolBOT. Overview
CoolBOT [Antonio C. Domínguez-Brito et al. (2007)] is a C++ component-oriented
programming framework aimed at robotics, developed at our laboratory some years ago
[Domínguez-Brito et al. (2004)], and normally used to develop the control software for
the mobile robots available at our institution. It is a programming framework that
follows the CBSE paradigm for software development. The key concept in the CBSE
paradigm is that of the software component, a unit of integration, composition and software
reuse [Brugali & Scandurra (2009)][Brugali & Shakhimardanov (2010)]. Complex systems
may be composed of several ready-to-use components; ideally, by interconnecting available
components out of a repository of previously developed components, we can program a
complete system. Thus, only a graphical interface or similar tool should be necessary to set
up a system. Hence, being CBSE oriented, CoolBOT also makes use of this central concept to
build software systems.
Fig. 1 gives a global view of a typical system developed using CoolBOT. As we can observe,
there are five CoolBOT components and two CoolBOT views, which together form four
CoolBOT integrations involving three different machines sharing a computer network. In
addition, hosted on one of the machines, there is a non-CoolBOT application which uses a
CoolBOT probe to interact with one of the components of the system. Thus, in CoolBOT we
can find three types of software components: components, views and probes. All these three types
An Approach
An Approach to Distributed
to Distributed Component-Based Component-Based
Software for Robotics Software for Robotics 5733
Fig. 1. Diagram of the elements and interconnections of a system designed with CoolBOT.
are software components in the full sense, since we can compose them indistinctly and
arbitrarily, without changing their implementation, to build up a given system. The main
difference among them is that views and probes are “light-weight” software components in
relation to CoolBOT components. Views are software components which implement graphical
control and monitoring interfaces for CoolBOT systems, completely decoupled
from them. On the other hand, probes mainly allow implementing decoupled interfaces for
the interoperation of CoolBOT systems with non-CoolBOT applications, as depicted in the
figure. Both will be explained in more detail in section 5.
In CoolBOT, systems are made of CoolBOT components (components for short). A component
is an active entity, in the sense that it has its own unit of execution. It also presents a clear
separation between its external interface and its internals. Components intercommunicate
only through their external interfaces, which are formed by input and output ports.
When connected, these form port connections, as depicted in Fig. 1. Through them, components
interchange discrete units of information termed port packets. Views and probes have a similar
external interface of input and output ports; hence, they can also be interconnected among
themselves and with components using port connections. The functionality of a whole system
emerges from the interaction through port connections among all the components integrating
the system, including views and probes.
Fig. 2. Port connections: one publisher, many subscribers.
Fig. 3. Port connections: many publishers, one subscriber.
To establish a port connection, the ports involved should be compatible, i.e., they must have
compatible types and transport the same types of port packets. In particular, when
defining an output or input port we have to specify three aspects, namely:
1. An identifier: This is an identifier or name for the port. It has to be unique at component
(or view or probe) scope. The port identifier is what we use to refer to a specific port when
establishing/de-establishing port connections.
2. A port type: This is the type of the port. There are several typologies available for input
and output ports, and depending on how we combine them, we can establish a different
model of communication for each connection. The typologies of the input and output
ports involved in a connection determine the pattern and semantics of the communication
through it, following the same philosophy that the communications patterns of Smartsoft
[Schlegel et al. (2009)], and the interfaces for component communication available in
OROCOS [The Orocos Project (2011)]. Tables 1 and 2 show all the types of
connections currently available in CoolBOT; we will elaborate on this later.
3. Port packet types: These are the types of port packets accepted by the output or input
port. Most input and output port types accept only one type of port packet,
although some of them accept a set of different port packet types.
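As an illustration of these three aspects, ports are declared in CoolBOT's description language. The following fragment is modeled on the PlayerRobot description file shown in section 6 (identifiers and packet types are illustrative):

// illustrative declarations: identifier, port type and port packet type
input port Commands type fifo port packet CommandPacket length FIFO_LENGTH;
output port RobotConfig type poster port packet ConfigPacket;
output port LaserScan type generic port packet LaserPacket;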
Bear in mind that in CoolBOT port connections are established dynamically, but the definition
of each input and output port for each software component, whether a component, a view
or a probe, is static. Thus, for a given component we define statically its external interface
of input and output ports, each one with its identifier, port type and accepted port packet
types. In contrast, port connections are established at runtime. Only a compatible pair of
output and input ports can form a port connection, and we say that they are compatible when
two conditions are fulfilled: first, they have compatible port types (the compatible
combinations are shown in Tables 1 and 2); and second, the port packets the pair of ports
accept also match. As commented, Tables 1 and 2 show all the possible types and
combinations of output and input ports available in CoolBOT. As we can observe, we have
two groups of types of port connections depending on the types of the output and input
ports we connect, namely:
• Active Publisher/Passive Subscriber (AP/PS) connections. In this kind of connection
we say the publisher is the active part of the communication through the connection,
since it is the publisher (the sender) who invests more computational resources in the
communication. More specifically, in these connections there is a buffer (a cache, a
memory) in the input port where incoming port packets get stored when they reach the
subscriber (the receiver) end. We say the publisher is active because the copying of port
packets into the input port buffers is done by the publisher's threads of execution. Those
memories get signaled for the subscribers to access them at their own pace. Evidently, if
the output port has several subscribers, the publisher has to make a copy for each of them,
so the computational cost of copying increases, and this cost is afforded by the publisher.
Table 1 enumerates all the available types of port connections following this model of
communication.
Table 2. Available port connection types: passive publisher/active subscriber (PP/AS)
connections.
packets and keeps doing something different; the subscriber gets packets at its own pace and
not necessarily at the right moment they arrive at its input ports.
As to port packets, when defining an output or input port, we have to specify which port
packet type or types (depending on the port type being defined) the port will accept. In
general, port packet types are defined by the user; as we will see in section 3, we may also use
port packet types provided by CoolBOT itself (the available ones are shown in Fig. 3), port
packet types previously developed, or third-party port packet types.
Fig. 4. A CoolBOT component: user-defined input and output ports, control and monitoring
ports, and the default automaton (states: starting, ready, running, suspended, error recovery,
end).
A key design principle for CoolBOT components is to take advantage of the multithreaded
and multicore capacities of mainstream operating systems, and of the infrastructure they
provide for multithreaded programming. Another key design principle for components was
to separate functional logic from thread synchronization logic: the user should only be
concerned with the functional logic, while synchronization issues should be completely
transparent to the developer; CoolBOT is responsible for them behind the scenes. As active
objects, CoolBOT components can organize their execution using multiple threads of
execution, as depicted in Fig. 4. Those threads are mapped onto the underlying operating
system (see Fig. 5). Thus, when developing a component, the user assigns disjointly threads
to automaton states, and to input ports and internal events provoking transitions. Those
transitions, i.e. their associated callbacks, will be executed by the specific threads assigned.
The synchronization among them is guaranteed by the underlying framework infrastructure.
All components are endowed with at least one thread of execution; the rest, if any, are user
defined.
As depicted in Fig. 1, CoolBOT provides means for distributed computation. A given
system can be mapped onto a set of different machines sharing a computer network.
Port connections among components, views and probes are transparently multiplexed using
TCP/IP connections (see section 4). Furthermore, each machine can host one or several
CoolBOT integrations. A CoolBOT integration is an application (a process) which integrates
instances of CoolBOT components, views and probes. Integrations can be instantiated on any
machine and are user defined using a description language, as we will see in the next section.
Fig. 5. Abstraction layers in CoolBOT.
latter tool, the CoolBOT compiler coolbot-c, generates C++ skeletons for components, port
packets, views and integrations, and for each component it also generates its corresponding
probe. Both the probes and the skeletons are C++ classes, and CoolBOT uses a description
language as source code to generate those C++ classes. Except for probes, which are complete
functional C++ classes, coolbot-c generates incomplete C++ classes constituting the
mentioned skeletons. They are incomplete in the sense that they lack functionality; the
user is responsible for completing them. Once completed, and using the CMake templates
provided by coolbot-ske, they can be compiled. Components, probes, port packets and
views compile into dynamic libraries; integrations compile into executable programs.
Moreover, the coolbot-c compiler preserves information when recompiling description files
which have been modified, in such a way that all C++ code introduced by the user into the
skeletons is preserved.
[Figure: coolbot-ske generates the directory structure for development, CMake templates
and description-language templates; coolbot-c generates C++ skeletons for components,
port packets, views and integrations, together with a complete probe C++ class for each
component.]
generated automatically. In addition, we have also endowed CoolBOT with a rich set of C++
templates and classes to support marshalling and demarshalling of port packets (or any other
arbitrary C++ class).
be interconnected with other components, views or probes. They equally implement the same
network-decoupled support of threads for transparent network communications. In CoolBOT,
probes are devised as interfaces for interoperability with non-CoolBOT software, as illustrated
graphically in Fig. 1. A complete functional C++ class implementing a probe is generated
when a component is compiled by coolbot-c. The probe implements the complementary
external interface of its corresponding component. Probes generated automatically in this
way can be seen as automatic refactorings of external component interfaces to support
interoperability of CoolBOT components with non-CoolBOT software. As mentioned in
[Makarenko et al. (2007)], this is an important factor in facilitating the integration of different
robotic software development approaches.
6. A real integration
In its current operating version, CoolBOT has been mainly used to control mobile robotic
systems with the platforms we have available at our laboratory: Pioneer mobile robots models
3-DX, and P3-AT from Adept Mobile Robots [Adept Mobile Robots (2011)] (former Activ Media
Robotics).
In this section we will show next a real robotic system using one of our Pioneer 3-DX
mobile robots, in order to give a glimpse of a real system controlled using CoolBOT. The
system is illustrated in Fig. 8. The example shows a secure navigation system for an indoor
mobile robot. This is a real application we have usually in operation on the robots we have
at the laboratory. The system implements a secure navigation system based on the ND+
algorithm for obstacle avoidance [Minguez et al. (2004)]. It has been implemented attending
to [Montesano et al. (2006)]. In the figure, input ports, output ports, and port connections,
have been reduced for the sake of clarification. Some of them represent several connections
and ports in the real system.
The system is organized using two CoolBOT integrations, one only formed by CoolBOT
component instances, and the other one containing CoolBOT view instances. The former
one is really the integration which implements the navigation system. As we can observe,
it consists of five component instances, namely: PlayerRobot (this is a wrapper component
for hardware abstraction using the Player/Stage project framework [Vaughan et al. (2003)]),
MbICP (this is a component which implements the MbICP scan matching algorithm based on
laser range sensor data [Minguez et al. (2006)]), GridMap (this component maintains a grid
map of the surroundings of the robot built using robot range laser scans, it also generates
periodically a 360° virtual scan for the ND+ algorithm), NDNavigation (which implements
the ND+ algorithm) and ShortTermPlanner (a planner which uses the grid map for planning
paths in the robot surroundings using a modification of the numerical navigation function
NF2 found in [Jean-Claude Latombe (1991)]). On another machine, another integration is
shown hosting four view instances through which we can control and monitor the system
remotely. In addition, on another machine, there is a web browser hosting a Java applet that
uses a CoolBOT probe to connect to some of the components of the system.

Fig. 8. CoolBOT. Secure Navigation System.
In order to clarify how the integration of Fig. 8 has been built, and also to clarify the process
of development of each of its components, in the next paragraphs we will have a look at the
description files used to generate some of them, including the whole integration shown in the
figure. Thus, in Fig. 9 we can see the description file accepted by coolbot-c for one of the
/*
* File: player-robot.coolbot
* Description: description file for PlayerRobot component
* Date: 02 June 2010
* Generated by coolbot-ske
*/
component PlayerRobot
{
header
{
author "Antonio Carlos Domí nguez Brito <[email protected]>";
description "PlayerRobot component";
institution "IUSIANI - Universidad de Las Palmas de Gran Canaria";
version "0.1"
};
constants
{
LASER_MAX_RANGE="LaserPacket::LASER_MAX_RANGE";
SONAR_MAX_RANGE=5000; // millimeters
private FIFO_LENGTH=5;
private ROBOT_DATA_INCOMING_FREQUENCY= 10; // Hz
private LASER_MIN_ANGLE= -90; // degrees
...
};
// input ports
input port Commands type fifo port packet CommandPacket length FIFO_LENGTH;
input port NavigationCommands type fifo port packet NDNavigation::CommandPacket length FIFO_LENGTH;
//output ports
output port RobotConfig type poster port packet ConfigPacket;
output port Odometry type generic port packet OdometryPacket network buffer FIFO_LENGTH;
output port OdometryReset type generic port packet PacketTime;
output port BumpersGeometry type poster port packet "BumperGeometryPacket";
output port Bumpers type generic port packet BumperPacket;
output port SonarGeometry type poster port packet "SonarGeometryPacket";
output port SonarScan type generic port packet "SonarPacket";
output port LaserGeometry type poster port packet PacketFrame3D;
output port LaserScan type generic port packet LaserPacket;
output port Power type generic port packet PacketDouble;
output port CameraImage type poster port packet CameraImagePacket;
output port PTZJoints type generic port packet "PacketPTZJoints";
exception RobotConnection
{
description "Robot connection failed.";
};
exception NoPositionProxy
{
description "Position proxy not available in this robot.";
};
exception InternalPlayerException
{
description "A Player library exception has been thrown.";
};
/*
* File: grid-gtk.coolbot-view
* Description: description file for GridGtk view
* Date: 29 April 2011
* Generated by coolbot-ske
*/
view GridGtk
{
header
{
author "Antonio Carlos Domínguez-Brito <[email protected]>";
description "GridGtk View";
institution "IUSIANI - ULPGC (Spain)";
version "0.1"
};
constants
{
private DEFAULT_REFRESHING_PERIOD=500; // milliseconds
...
};
// input ports
input port ROBOT_CONFIG type poster port packet PlayerRobot::ConfigPacket;
input port GRID_MAP type poster port packet GridMap::GridMapPacket;
input port PLANNER_PATH type last port packet ShortTermPlanner::PlannerPathPacket;
// output ports
output port PLANNER_COMMANDS type generic port packet ShortTermPlanner::CommandPacket;
output port ND_COMMANDS type generic port packet NDNavigation::CommandPacket;
};
/*
* File: mbicp-integration.coolbot-integration
* Description: description file for mbicp-integration integration.
* Date: 29 April 2011
* Generated by coolbot-ske
*/
integration mbicp_integration
{
header
{
author "Antonio Carlos Domínguez-Brito <[email protected]>";
description "MbICP’s views integration";
institution "IUSIANI - ULPGC (Spain)";
version "0.1"
};
machine addresses
{
local my_machine: "127.0.0.1";
the_other_machine: "...";
};
ROBOT_VIEW_PORT: 1955;
MBICP_VIEW_PORT: 1985;
NAVIGATION_MAP_VIEW_PORT: 1975;
NAVIGATION_PLANNER_VIEW_PORT: 1995;
};
local instances
{
view robotView:PlayerRobotGtk listening on ROBOT_VIEW_PORT with description "Robot";
view mbicpView:MbICPGtk listening on MBICP_VIEW_PORT with description "MbICP";
view mapView:GridGtk listening on NAVIGATION_MAP_VIEW_PORT with description "Map";
view navigationPlannerView:PlannerGtk listening on NAVIGATION_PLANNER_VIEW_PORT with description "Planner";
};
...
};
};
Fig. 12. The integration containing the Robot, MbICP, Map and Planner views of Fig. 8
Fig. 13. Java view implemented using a probe to access range sensor information for a mobile
robot. A snapshot.
7. Conclusions
In this document we have presented the last developments which have been integrated in
the last operating version of CoolBOT. The developments have been aimed mainly to two
questions: transparent distributed computation, and “deeper” interface decoupling. It is
our opinion that the use and operation of CoolBOT has improved considerably. CoolBOT
is an open source initiative supported by our laboratory which is freely available via
www.coolbotproject.org, including the secure navigation system depicted in Fig. 8.
8. References
Adept Mobile Robots (2011). http://www.mobilerobots.com/.
Ando, N., Suehiro, T. & Kotoku, T. (2008). A Software Platform for Component Based
RT-System Development: OpenRTM-Aist, in S. Carpin, I. Noda, E. Pagello,
M. Reggiani & O. von Stryk (eds), Simulation, Modeling, and Programming for
Autonomous Robots, Vol. 5325 of Lecture Notes in Computer Science, Springer Berlin /
Heidelberg, pp. 87–98.
Antonio C. Domínguez-Brito, Daniel Hernández-Sosa, José Isern-González & Jorge
Cabrera-Gámez (2007). Software Engineering for Experimental Robotics, Vol. 30 of
Springer Tracts in Advanced Robotics Series, Springer, chapter CoolBOT: a Component
Model and Software Infrastructure for Robotics, pp. 143–168.
Basu, A., Bozga, M. & Sifakis, J. (2006). Modeling heterogeneous real-time components in
BIP, In Fourth IEEE International Conference on Software Engineering and Formal
Methods, pages 3-12, Pune (India).
Bensalem, S., Gallien, M., Ingrand, F., Kahloul, I. & Thanh-Hung, N. (2009). Designing
autonomous robots, IEEE Robotics and Automation Magazine 16(1): 67–77.
Brooks, A., Kaupp, T., Makarenko, A., S.Williams & Oreback, A. (2005). Towards
component-based robotics, In IEEE International Conference on Intelligent Robots and
Systems, Tsukuba, Japan, pp. 163–168.
Brugali, D. (ed.) (2007). Software Engineering for Experimental Robotics, Springer Tracts in
Advanced Robotics, Springer.
Brugali, D. & Scandurra, P. (2009). Component-based robotic engineering (part i) [tutorial],
Robotics Automation Magazine, IEEE 16(4): 84 –96.
Brugali, D. & Shakhimardanov, A. (2010). Component-based robotic engineering (part ii),
Robotics Automation Magazine, IEEE 17(1): 100 –112.
Domínguez-Brito, A. C., Hernández-Sosa, D., Isern-González, J. & Cabrera-Gámez, J. (2004).
Integrating Robotics Software, IEEE International Conference on Robotics and
Automation, New Orleans, USA.
Douglas C. Schmidt (2010). The Adaptive Communication Environment (ACE),
www.cs.wustl.edu/~schmidt/ACE.html.
Ellis, C. & Gibbs, S. (1989). Object-Oriented Concepts, Databases, and Applications, ACM Press,
Addison-Wesley, chapter Active Objects: Realities and Possibilities.
Gamma, E., Helm, R., Johnson, R. & Vlissides, J. (1995). Design Patterns: Elements of
Reusable Object-Oriented Software, Addison-Wesley Professional Computing Series,
Addison-Wesley.
George T. Heineman & William T. Councill (2001). Component-Based Software Engineering,
Addison-Wesley.
Glade - A User Interface Designer (2010). glade.gnome.org.
J. Paul Morrison (2010). Flow-Based Programming, 2nd Edition: A New Approach to Application
Development, CreateSpace.
1. Introduction
Trajectory planning for robots is a very important issue in those industrial activities which
have been automated. The introduction of robots into industry seeks to upgrade not only
the standards of quality but also productivity as the working time is increased and the
useless or wasted time is reduced. Therefore, trajectory planning has an important role to
play in achieving these objectives (the motion of robot arms will have an influence on the
work done).
Formally, the trajectory planning problem aims to find the force inputs (control τ(t)) to
move the actuators so that the robot follows a trajectory q(t) that enables it to go from the
initial configuration to the final one while avoiding obstacles. This is also known as the
complete motion planning problem, in contrast with the path planning problem, in which
the temporal evolution of motion is neglected.
An important part of obtaining an efficient trajectory plan lies with both the interpolation
function used to help obtain the trajectory and the robot actuators. Ultimately actuators will
generate the robot motion, and it is very important for robot behavior to be smooth.
Therefore, the trajectory planning algorithms should take into account the characteristics of
the actuators without forgetting the interpolation functions which also have an impact on
the resulting motion. As well as smooth robot motion, it is also necessary to monitor some
working parameters to verify the efficiency of the process, because most of the time the user
seeks to optimize certain objective functions. Among the most important working
parameters and variables are the time required to get the trajectory done, the input torques,
the energy consumed and the power transmitted. The kinematic properties of the robot´s
links, such as the velocities, accelerations and jerks are also important.
The trajectory algorithm should also not overlook the presence of possible obstacles in the
workspace. Therefore it is very important to model both the workspace and the obstacles
efficiently. The quality of the collision avoidance procedure will depend on this
modelling.
In this chapter we describe two algorithms that we have developed for solving the
collision-free trajectory planning problem for industrial robots, which we have called the
“sequential” and “simultaneous” algorithms. The first is an indirect method, while the
second is a direct one. The “sequential” algorithm considers the main properties of the
actuators (torque, power, jerk and energy consumed). The “simultaneous” algorithm
analyzes which interpolation function is best for generating the trajectory, considering a
simple actuator model (only the torque required). The chapter content is based on previous
work by the authors (see Valero et al., 2006, and Rubio et al., 2007). Specifically, the two
approaches to solving the trajectory planning problem are explained.
3. Robot modelling
The robot model used henceforth is the wire model corresponding to the PUMA 560 robot
shown in Fig. 1. The robot consists of rigid links joined by the corresponding kinematic
(revolute) joints. The robot has F degrees of freedom, and each robot configuration Cj can be
unequivocally set using the Cartesian coordinates of N points, called significant points.
These points, defined as αji = ( xj,3(i−1)+1, xj,3(i−1)+2, xj,3(i−1)+3 ), with i = 1..F and j the number
of the configuration, are chosen systematically. Therefore, ultimately, every configuration
will be expressed in Cartesian coordinates by means of the significant points, i.e.
Cj = Cj(αji), which represent the specifics of the robot under study. It is important to point
out that they do not constitute an independent set of coordinates. Besides the significant
points, some other points pjk, called interesting points, will be used to improve the
efficiency of the algorithms; their coordinates are obtained from the significant points and
the geometric characteristics of the robot.
Fig. 1. Model of robot PUMA 560. Significant and Interesting Points (mobile base)
The PUMA 560 robot can be modelled with a movable base or a fixed base. The mobile-base
robot is shown in Fig. 1 with the seven significant points used (α1, α2, α3, α4, α5, α6 and α7),
together with another five interesting points pjk. As a result, the configuration Cj is
determined by twenty-one variables corresponding to the coordinates of the significant
points. These variables are connected through fourteen constraint equations relative to the
geometric characteristics of the robot (length of links, geometric constraints and range of
motion); see Rubio et al., 2009 for more details. It must be noted that any other industrial
robot can be modelled in this way by appropriately selecting the significant points that best
describe it.
This property is very important as far as the effectiveness of the algorithm is concerned.
4. Workspace modelling
The workspace is modelled as a rectangular prism with its edges parallel to the axes of the
Cartesian reference system. The work environment is defined by the obstacles bound to
produce collisions when the robot moves within the workspace. The obstacles are
considered static, i.e. their positions do not vary over time, and they are represented by
means of unions of patterned obstacles.
The fact of working with patterned obstacles introduces two fundamental advantages:
1. It allows the modelling of any generic obstacle so that collisions with the robot's links
can be avoided.
2. It permits working with a reduced number of patterned obstacles in order to model a
complex geometric environment so that its use is efficient. It means that a small number
of constraints are introduced into the optimization problem when obtaining collision-
free adjacent configurations.
The patterned obstacles have a geometry based on simple three-dimensional figures,
particularly spheres, rectangular prisms and cylinders. Any obstacle present in the
workspace could be represented as a combination of these geometric figures.
The definition of a patterned obstacle is made in the following way:
• Spherical obstacle SOi, defined when the position of its centre and its radius are
known. It is characterized by means of:
- Centre of sphere i: ci^SO = ( cxi^SO, cyi^SO, czi^SO )
- Radius of sphere i: ri^SO
Therefore SOi = ( ci^SO, ri^SO ).
• Cylindrical obstacle COk, defined when the coordinates of the centres of its bases and
its radius are known. It is characterized by means of:
- Centre of base 1 of cylinder k: c1k^CO = ( c1xk^CO, c1yk^CO, c1zk^CO )
- Centre of base 2 of cylinder k: c2k^CO = ( c2xk^CO, c2yk^CO, c2zk^CO )
- Radius of cylinder k: rk^CO
Therefore COk = ( c1k^CO, c2k^CO, rk^CO ). See Fig. 3.
• Prismatic obstacle POl, defined by a vertex point al^PO and the three edge vectors
q1l^PO, q2l^PO and q3l^PO that span it (see Fig. 4).
Fig. 4. Generic Prismatic obstacle POl
Cartesian reference system is created and whose opposite vertices correspond to the
positions of the end-effector of the robot in the initial and final configurations between
which the connecting path is calculated. The set of positions that the end-effector of the
robot can adopt within the prism is restricted to a finite number of points resulting from the
discretization of the prism according to the following increments:
Δx = ( αnx^f − αnx^i ) / Nx    Δy = ( αny^f − αny^i ) / Ny    Δz = ( αnz^f − αnz^i ) / Nz   (1)
where the values of Δx, Δy and Δz are calculated from the number of intervals Nx, Ny and
Nz into which the prism is discretized; these increments should be smaller than the smallest
dimension of the obstacles modelled in the workspace. The points ( αnx^f, αny^f, αnz^f ) and
( αnx^i, αny^i, αnz^i ) correspond to the coordinates of the end-effector of the robot in the
initial and final configurations. Fig. 6 shows how the prism giving rise to the set of nodes
that the end-effector of the PUMA 560 robot with a mobile base can adopt is discretized.
Fig. 5. Rectangular prism with edges parallel to the axes of the Cartesian reference system
6. Obstacle avoidance
By controlling the distance from the different patterned obstacles to the cylinders that cover
the robot links, collision avoidance between the robot and the obstacles is possible.
Distances are constraints in the optimization problem. They serve to calculate collision-free
adjacent configurations (for adjacent configuration see Section 7).
A) Cylinder-Sphere
See Fig. 6. Here the problem reduces to computing the distance between a point C (the
centre of the sphere) and a line segment AB (the axis of the cylinder covering the link).
Projecting C onto the extended line through AB provides the solution: if the projection point
P lies within the segment, then P itself is the closest point; if P lies outside the segment, then
the segment endpoint closest to C (A or B) is the closest point.
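A minimal C++ sketch of this point-to-segment test (the Vec3 helpers are illustrative, not from the chapter):

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Closest point to C on segment AB: project C onto the line through A and B,
// then clamp the projection parameter t to [0, 1] to stay on the segment.
// Assumes A != B.
Vec3 closestPointOnSegment(const Vec3& A, const Vec3& B, const Vec3& C) {
  const Vec3 AB = sub(B, A);
  double t = dot(sub(C, A), AB) / dot(AB, AB);
  t = std::clamp(t, 0.0, 1.0);
  return {A.x + t * AB.x, A.y + t * AB.y, A.z + t * AB.z};
}

// Distance used in the collision-avoidance constraints, e.g. for a spherical
// obstacle: distance(centre, closest point on axis) >= r_sphere + w_link.
double distance(const Vec3& p, const Vec3& q) {
  const Vec3 d = sub(p, q);
  return std::sqrt(dot(d, d));
}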
B) Cylinder-Cylinder
See Fig. 6. Here we compute the distance between two line segments. The problem of
determining the closest points between two line segments S1 (P1Q1) and S2 (P2Q2) (and
therefore the distance) is more complicated than computing the closest points of the lines L1
and L2 of which the segments are a part. Only when the closest points of L1 and L2 happen to
lie on the segments does the method for closest points between lines apply. For the case in
which the closest points between L1 and L2 lie outside one or both segments, a common
misconception is that it is sufficient to clamp the outside points to the nearest segment
endpoint. It can be shown that if just one of the closest points between the lines is outside its
corresponding segment, that point can be clamped to the appropriate endpoint of the
segment and the point on the other segment closest to the endpoint is computed.
If both points are outside their respective segments, the same clamping procedure must be
repeated twice.
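The clamping scheme just described can be sketched as follows, reusing the Vec3 helpers from the previous sketch. This follows the standard closest-points derivation and assumes both segments have nonzero length:

// Closest points between segments P1Q1 and P2Q2 (case B); returns the squared
// distance and writes the closest points into c1 and c2.
double closestSegmentSegment(const Vec3& P1, const Vec3& Q1,
                             const Vec3& P2, const Vec3& Q2,
                             Vec3& c1, Vec3& c2) {
  const Vec3 d1 = sub(Q1, P1), d2 = sub(Q2, P2), r = sub(P1, P2);
  const double a = dot(d1, d1), e = dot(d2, d2), f = dot(d2, r);
  const double b = dot(d1, d2), c = dot(d1, r);
  const double den = a * e - b * b;  // always >= 0; ~0 when lines are parallel
  // s parametrizes P1Q1 and t parametrizes P2Q2, both clamped to [0, 1].
  double s = (den > 1e-12) ? std::clamp((b * f - c * e) / den, 0.0, 1.0) : 0.0;
  double t = (b * s + f) / e;
  // If t had to be clamped, recompute s for the clamped t and clamp it again.
  if (t < 0.0)      { t = 0.0; s = std::clamp(-c / a, 0.0, 1.0); }
  else if (t > 1.0) { t = 1.0; s = std::clamp((b - c) / a, 0.0, 1.0); }
  c1 = {P1.x + s * d1.x, P1.y + s * d1.y, P1.z + s * d1.z};
  c2 = {P2.x + t * d2.x, P2.y + t * d2.y, P2.z + t * d2.z};
  const Vec3 d = sub(c1, c2);
  return dot(d, d);
}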
C) Cylinder-Prism
See Fig. 6. The prismatic surfaces are divided into triangles. In this case we compute the
distance between a line segment and a triangle. The closest pair of points between a line
segment PQ and a triangle is not necessarily unique. When the line segment is parallel to the
plane of the triangle, there may be an infinite number of equally close pairs. However,
regardless of whether the segment is parallel to the plane or not, it is always possible to
locate a point such that the minimum distance falls either between the end point of the
segment and the interior of the triangle or between the segment and an edge of the triangle.
Thus, the closest pair of points (and therefore the distance) can be found by computing the
closest pairs of points between the following entities:
- Segment PQ and triangle edge AB.
- Segment PQ and triangle edge BC.
- Segment PQ and triangle edge CA.
- Segment endpoint P and plane of triangle (when P projects inside ABC)
- Segment endpoint Q and plane of triangle (when Q projects inside ABC).
The number of tests required to calculate the distance can be reduced in some cases.
Fig. 6. Distance computation cases: A) Cylinder-Sphere, B) Cylinder-Cylinder,
C) Cylinder-Prism.
αnx^k − αnx^p = Δx    αny^k − αny^p = Δy    αnz^k − αnz^p = Δz   (2)

n being the subscript corresponding to the significant point associated with the end-effector
of the robot; for the PUMA 560 with mobility in the base, n = 7, as can be seen in Fig. 1.
What we obtain is a sequence of configurations contained in the path, so that by
using interpolation we can obtain a collision-free and continuous path.
2. Verification of the absence of obstacles between adjacent configurations Ck and Cp.
Since the algorithm works in a discrete space, it is necessary to verify that there are no
obstacles between adjacent configurations, for which the following condition is set out:

‖αi^k − αi^p‖ ≤ 2 · min( rj )   (3)
‖Cp − Cf‖ = Σi=1..n [ ( αxi^p − αxi^f )² + ( αyi^p − αyi^f )² + ( αzi^p − αzi^f )² ]   (4)
n being the number of significant points of the robot. This third property facilitates reaching
the final configuration even for redundant robots, i.e. at the final node the robot's
end-effector should not belong to a configuration different from the desired one. On the
other hand, this property influences the configurations generated, favouring configurations
in the neighbourhood of the end that are compatible with it.
An optimization procedure is set up using a sequential quadratic programming (SQP)
method. This iterative method minimizes a quadratic objective function subject to a set of
constraints, which may range from simple bounds on the variables to linear restrictions and
nonlinear continuous constraints. Applying this procedure to the path planning problem,
the objective function used is given by Eq. (4); the constraints are associated with the
geometry of the robot, the limits of the actuators and collision avoidance; and the
configuration Ck is used as the initial estimate. The solution of this optimization problem
gives the sought adjacent configuration Cp. By repeatedly obtaining adjacent configurations,
the discrete configuration space of the robot is built up. These configurations are recorded
in a graph.
The weight of the arc joining nodes k and p is given by

a(k,p) = Σi=1..n ( xi^p − xi^k )²   (5)

when Ck and Cp are adjacent. In addition, Ck and Cp must satisfy the type (3) constraints
that avoid obstacles between configurations, and the angle increment from Ck to Cp must
be smaller than the magnitude of the forbidden zone for that joint, so that large
displacements between adjacent configurations are avoided.
If the aforementioned conditions are not satisfied, we take a(k,p) = ∞.
Finally, the weighted graph is searched for the path that joins the node corresponding to the
initial configuration with the node corresponding to the final configuration. Since the arcs
satisfy a(k,p) ≥ 0, Dijkstra's algorithm is used to obtain the path that minimizes the distance
between the initial and final configurations. If this path exists, it is easy to obtain a sequence
S of m robot configurations.
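A generic C++ sketch of this search (not the chapter's implementation), with configurations as graph nodes and the finite arc costs a(k,p) as edge weights:

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra over the weighted configuration graph (all arc costs >= 0).
// adj[u] lists pairs (v, a(u,v)); infinite arcs are simply not inserted.
// Returns the node sequence from start to goal, or empty if no path exists.
std::vector<int> dijkstraPath(
    const std::vector<std::vector<std::pair<int, double>>>& adj,
    int start, int goal) {
  const double INF = std::numeric_limits<double>::infinity();
  std::vector<double> dist(adj.size(), INF);
  std::vector<int> prev(adj.size(), -1);
  using QItem = std::pair<double, int>;  // (tentative distance, node)
  std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> pq;
  dist[start] = 0.0;
  pq.push({0.0, start});
  while (!pq.empty()) {
    auto [d, u] = pq.top(); pq.pop();
    if (d > dist[u]) continue;  // stale queue entry
    if (u == goal) break;
    for (auto [v, w] : adj[u])
      if (dist[u] + w < dist[v]) {
        dist[v] = dist[u] + w;
        prev[v] = u;
        pq.push({dist[v], v});
      }
  }
  std::vector<int> path;
  if (dist[goal] == INF) return path;  // no path
  for (int v = goal; v != -1; v = prev[v]) path.insert(path.begin(), v);
  return path;
}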
For every t ∈ [tj−1, tj], qij(t) = aij + bij·t + cij·t² + dij·t³, for i = 1,…,dof (dof being the number
of degrees of freedom of the robot) and j = 1,…,m−1 (m being the number of robot
configurations).
To ensure motion continuity between configurations, the following conditions associated
with the given configurations are considered.
• Position: this gives a total of (2·dof·(m−1)) equations:

qij(tj−1) = aij + bij·tj−1 + cij·t²j−1 + dij·t³j−1   (6)

qij(tj) = aij + bij·tj + cij·t²j + dij·t³j   (7)

• Velocity: the velocity at the beginning and at the end of the trajectory is zero, which
gives rise to (2·dof) equations:

q̇i1(t0) = 0   (8)

q̇im(tm) = 0   (9)

When passing through each configuration, the final velocity of the previous interval should
be equal to the initial velocity of the next one, leading to (dof·(m−2)) equations:

q̇ij(tj) = q̇i,j+1(tj)   (10)

• Acceleration: for each intermediate configuration, the final acceleration of the previous
interval should be equal to the initial acceleration of the next one, giving rise to
(dof·(m−2)) equations:

q̈ij(tj) = q̈i,j+1(tj)   (11)
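For reference, one cubic segment and the continuity checks of equations (10) and (11) can be sketched in C++ as:

#include <cmath>

// One cubic segment q_ij(t) = a + b*t + c*t^2 + d*t^3 (coefficient names as
// in the chapter) with its first two derivatives.
struct CubicSegment {
  double a, b, c, d;
  double pos(double t) const { return a + t * (b + t * (c + t * d)); }
  double vel(double t) const { return b + t * (2.0 * c + 3.0 * d * t); }
  double acc(double t) const { return 2.0 * c + 6.0 * d * t; }
};

// Continuity at the junction time tj between consecutive segments s1 and s2:
// positions, velocities and accelerations must coincide, as required by the
// position conditions and by equations (10) and (11).
bool continuousAt(const CubicSegment& s1, const CubicSegment& s2,
                  double tj, double tol = 1e-9) {
  return std::abs(s1.pos(tj) - s2.pos(tj)) < tol &&
         std::abs(s1.vel(tj) - s2.vel(tj)) < tol &&
         std::abs(s1.acc(tj) - s2.acc(tj)) < tol;
}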
In addition, the minimum time trajectory must meet the following constraints:
• Maximum torque on the actuators,

qi^min ≤ qi(t) ≤ qi^max   ∀t ∈ [0, tmin], i = 1…dof   (14)

• Consumed energy,

Σj=1..m−1 Σi=1..dof εij ≤ E   (15)
εij being the energy consumed by actuator i between configurations j and j+1.
Given the large number of iterations required by the process, the technique used for
obtaining the coefficients is crucial. The first task is to normalize the polynomials that define
the stages (see Suñer et al., 2007). In short, the optimization problem is set up using
incremental time variables in each interval, so that in the interval between Sj and Sj+1 the
time variable is Δtj = tj − tj−1, and the objective function is

Σj=1..m−1 Δtj = tmin   (16)

The solution is obtained by means of SQP procedures, so that at each iterative step it is
necessary to obtain the above-mentioned polynomial coefficients from the estimation of the
problem variables.
these constraints on the generation of minimum-time collision-free trajectories for industrial
robots. The results obtained show that constraints on the energy consumed must enable the
manipulator to exceed the requirements associated with potential energy, as the algorithm
works on the assumption that energy can be dissipated but not recovered. Also, an increase
in the severity of the energy constraints results in longer trajectory times with softer power
requirements. When constraints are not very severe, efficient trajectories can be obtained
without high penalties on the working cycle time. An increase in the severity of the jerk
constraints involves longer trajectory times with softer power requirements and lower
energy consumption; when these constraints are very severe, times are also severely
penalized. To obtain competitive results in the balance between cycle time and energy
consumed, the actuators should work at the maximum admissible value of the jerk with
which the robot can work at the desired accuracy.
Find Ck, minimizing

Min ‖Ck − Cf‖ = Min Σi=1..n ( xi^k − xi^f )²   (17)

and subject to:
a. Geometrical constraints of the robot structure;
b. Constraints on the mobility of robot joints;
c. Collision avoidance within the robot workspace;
(where xi^k and xi^f are the Cartesian coordinates of the intermediate and final
configurations Ck and Cf, respectively).
The process for calculating the whole trajectory between the initial and final configurations
(C1 and Cf) is based on a second, different optimization problem, which can be stated as:
M(q(t)) q̈(t) + C(q(t), q̇(t)) q̇(t) + g(q(t)) = τ(t)   (20)

Boundary conditions for the initial and final configurations (used to solve the first and final
steps):

q(0) = q0 ;  q(tf) = qf
q̇(0) = 0 ;  q̇(tf) = 0   (22)
dij ≥ rj + wj   (24)

where dij is the distance from any obstacle pattern j (sphere, cylinder or prism) to link i; rj is
the characteristic radius of the obstacle pattern, and wj is the radius of the smallest cylinder
that contains link i.
As well, q(t) ∈ Rⁿ is the vector of joint positions (n being the number of degrees of
freedom), and the search is restricted to the region of the state space in which the vector of
actuator torques is feasible. Each time a new adjacent configuration Ck is generated, an
uncertainty to be overcome lies in the fact that at this stage we do not know its kinematic
characteristics (particularly velocity and acceleration), although we know they should be
compatible with the dynamic characteristics of the robot. It should also be noted, when
calculating the minimum time between two adjacent configurations, that each step starts
from a configuration with known kinematic properties and obtains the time and the
kinematic properties at the end configuration; if the dynamic capabilities of the actuators
were exhausted at the end configuration ( τi(tint) ≅ τmin or τi(tint) ≅ τmax ) due to the
kinematic properties generated there ( q(tint), q̇(tint) and q̈(tint) ), it would be impossible to
satisfy the constraints τmin ≤ τ(tint+1) ≤ τmax on the next generation step of the trajectory.
The process explained is applied repeatedly to generate adjacent configurations until the
final configuration is reached. Finally, by connecting adjacent configurations, the whole
trajectory is generated.
f(t) = C0 + C1 cos(t + θ1)   (25)

whose coefficient C1 is the amplitude of the fundamental component and θ1 is the phase
angle. It can be shown that the values of the harmonic term are limited to the interval
[−C1, C1] (given by the coefficients of the cosine terms). Analyzing the harmonic function on
the basis of the type of trajectory span, we distinguish three types. We use different
interpolation functions to determine their impact on the characteristics of the solution
generated. The cases analyzed and the interpolation functions for each case are as follows.
a. Initial span: Cases A, B and C
In all three cases the same interpolation function has been used for the first span, so the procedure to calculate the constants is identical,
with i = 1..Ndof, where j = 1 denotes the first span and Ndof is the number of the robot's degrees of freedom.
For this type of interpolation function, the velocity and acceleration values are limited by the coefficients aij and bij. The known boundary conditions are three: the initial and final configurations of the interval and the initial velocity. They allow the set of coefficients aij, bij and cij to be obtained, which are dependent on time.
b. Intermediate span.
Three different interpolation functions corresponding to cases A, B and C have been used.
To calculate the constants in each case we have proceeded as follows:
b1) Case A
The interpolation function is

$q_{ij} = a_{ij} \sin(t) - b_{ij} \cos(2t) + c_{ij} \sin(3t) - d_{ij} \cos(4t)$  (27)
with i = 1..Ndof and j = 1..Nspan, where Nspan is the number of the span being analyzed.
From experience in solving a great number of cases, a polynomial term has been added to ensure the boundary conditions of velocity and acceleration along the trajectory in this span. The velocity and acceleration are obtained by differentiating Eq. (27); their values are limited by the coefficients aij, bij, cij and dij. The known boundary conditions are four: the initial and final configurations of the span and the velocity and acceleration at its beginning. They allow the expressions for the constants aij, bij, cij and dij to be determined which, as in the previous case, are dependent on time.
b2) Case B
The interpolation function is $q_{ij} = \cos(t)\left(\sin(t)(a_{ij}\sin(t)+b_{ij}) + c_{ij}\right)$, whose acceleration is

$\ddot{q}_{ij} = \cos(t)\left(-a_{ij}\sin^{2}(t) - \sin(t)\left(a_{ij}\sin(t)+b_{ij}\right) + 2a_{ij}\cos^{2}(t)\right) - \cos(t)\left(\sin(t)\left(a_{ij}\sin(t)+b_{ij}\right) + c_{ij}\right) - 2\sin(t)\left(\cos(t)\left(a_{ij}\sin(t)+b_{ij}\right) + a_{ij}\cos(t)\sin(t)\right)$  (32)
Their values are limited by the new coefficients aij, bij, cij and dij. The known boundary
conditions are also four: the initial and final configurations of the span and the velocities
and accelerations at the beginning. Therefore the constants aij, bij, cij and dij can be
determined which, as in the previous case, are dependent on time.
b3) Case C
The interpolation function is $q_{ij} = \sin(t)\left(\cos(t)(a_{ij}\sin(t)+b_{ij}) + c_{ij}\right)$, with velocity and acceleration

$\dot{q}_{ij} = \sin(t)\left(a_{ij}\cos^{2}(t) - \sin(t)\left(a_{ij}\sin(t)+b_{ij}\right)\right) + \cos(t)\left(\cos(t)\left(a_{ij}\sin(t)+b_{ij}\right) + c_{ij}\right)$  (34)

$\ddot{q}_{ij} = 2\cos(t)\left(a_{ij}\cos^{2}(t) - \sin(t)\left(a_{ij}\sin(t)+b_{ij}\right)\right) - \sin(t)\left(\cos(t)\left(a_{ij}\sin(t)+b_{ij}\right) + c_{ij}\right) + \sin(t)\left(-\cos(t)\left(a_{ij}\sin(t)+b_{ij}\right) - 3a_{ij}\cos(t)\sin(t)\right)$  (35)
Their values are limited by the new coefficients aij, bij, cij and dij. The known boundary
conditions are also four: the initial and final configurations of the span and the velocities
and accelerations at the beginning. Therefore the constants aij, bij, cij and dij can be
determined which, as in the previous case, are dependent on time.
c. Final span: Cases A, B and C
In all three cases the same interpolation function has been used for the last span, so the procedure to calculate the constants is identical:

$q_{iF} = a_{iF}\cos(t) - b_{iF}\sin(t) + 2c_{iF}\sin(t)\cos(t) + d_{iF} + 2e_{iF}\,t$  (37)

Their values are limited by the coefficients aiF, biF, ciF, diF and eiF. The known boundary conditions are five: the initial and final configurations of the last span, the velocity and acceleration at the beginning of the interval and the velocity at its end. These boundary conditions enable the coefficients aiF, biF, ciF, diF and eiF to be obtained.
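Since the coefficients enter Eq. (37) linearly, the five boundary conditions reduce to a 5x5 linear system per joint. The sketch below illustrates this, under the assumption that Eq. (37) is the position profile of a span running from t0 to tf; the mapping of conditions to rows is illustrative only.

import numpy as np

def basis(t):
    # basis functions of Eq. (37) evaluated at time t
    return np.array([np.cos(t), -np.sin(t), 2.0 * np.sin(t) * np.cos(t),
                     1.0, 2.0 * t])

def dbasis(t):
    # first derivatives of the basis functions above
    return np.array([-np.sin(t), -np.cos(t), 2.0 * np.cos(2.0 * t),
                     0.0, 2.0])

def ddbasis(t):
    # second derivatives of the basis functions above
    return np.array([-np.cos(t), np.sin(t), -4.0 * np.sin(2.0 * t),
                     0.0, 0.0])

def final_span_coeffs(t0, tf, q0, qf, v0, a0, vf):
    """Five boundary conditions -> five linear equations in
    (a_iF, b_iF, c_iF, d_iF, e_iF): configuration at both ends,
    velocity and acceleration at the start, velocity at the end."""
    A = np.vstack([basis(t0), basis(tf), dbasis(t0), ddbasis(t0), dbasis(tf)])
    b = np.array([q0, qf, v0, a0, vf])
    return np.linalg.solve(A, b)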
Whenever a new adjacent configuration is generated by solving Eq. (4), a new trajectory span is also created (by solving the second optimization problem, Eq. (17)), and the time tj necessary to perform the span is then obtained. The joint positions are again adjusted using the corresponding harmonic interpolation function. The equations are solved iteratively using sequential quadratic programming (SQP) techniques through the commercial mathematical software NAG (Numerical Algorithms Group). At each step of the iterative process the coefficients of the harmonic interpolation functions must be recalculated, since they are functions of time. To facilitate calculations, each span has been discretized into ten subintervals, so that the kinematic and dynamic characteristics are calculated at this discrete set of points. The solution of the optimization process provides the minimum time tj to go from one configuration to its adjacent one and, consequently, the joint positions q(t) that must be followed between these two configurations, as well as the necessary actuator torques τ(t) and the corresponding kinematic characteristics q̇(t) and q̈(t).
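The chapter solves this problem with the NAG library; purely as an illustration of the same SQP idea, the sketch below poses the minimum-time problem for one span with SciPy's SLSQP method. The torque-evaluation callable and the torque limits are assumed inputs, not part of the original implementation.

import numpy as np
from scipy.optimize import minimize

def min_time_span(torque_at, tau_min, tau_max, t_guess):
    """Find the smallest span duration t_j for which the actuator torques
    stay within their limits at the ten subintervals used in the chapter.
    torque_at(t, t_j) is an assumed callable returning the torque vector
    at time t for a span of total duration t_j."""
    def margins(x):
        t_j = x[0]
        taus = np.array([torque_at(k * t_j / 10.0, t_j) for k in range(11)])
        # feasibility margins, all required to be non-negative
        return np.concatenate([(taus - tau_min).ravel(),
                               (tau_max - taus).ravel()])
    res = minimize(lambda x: x[0], x0=[t_guess],
                   bounds=[(1e-3, None)],
                   constraints=[{"type": "ineq", "fun": margins}],
                   method="SLSQP")
    return res.x[0]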
Each actuator has been characterized by the maximum and minimum torque it can provide, see Eq. (23). Nonetheless, both the computational and execution times are very high compared with the results obtained using the "sequential algorithm".
$c(j) = time(1, j)$  (39)
When a set of adjacent configurations has been created, Eq. (40) is used to select the one from which the process is expected to branch out again. Given two adjacent configurations, the minimum time between them is calculated as explained in Section 4. This time is used to select the new configuration, as explained in Section 5; the branching process is then repeated until the final configuration is reached.
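A hedged sketch of this branching scheme: configurations are kept in a priority queue keyed by accumulated time, the cheapest one is expanded, and the process repeats until the final configuration is reached. The expansion and goal-test callables are placeholders, and a real implementation would also prune revisited configurations.

import heapq
import itertools

def plan(initial, is_final, expand):
    """Greedy best-first branching on accumulated time (illustrative).
    expand(cfg) yields (span_time, adjacent_cfg) pairs."""
    tie = itertools.count()                 # avoids comparing configurations
    queue = [(0.0, next(tie), initial, [initial])]
    while queue:
        elapsed, _, cfg, path = heapq.heappop(queue)  # cheapest so far, cf. Eq. (39)
        if is_final(cfg):
            return path, elapsed
        for span_time, nxt in expand(cfg):
            heapq.heappush(queue, (elapsed + span_time, next(tie), nxt, path + [nxt]))
    return None, float("inf")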
10. Conclusion
In this paper, two algorithms that solve the trajectory planning problem for industrial robots in an environment with obstacles have been introduced and summarized. They have been called the "sequential" and the "simultaneous" algorithm, respectively. Both are off-line algorithms. The first one is based on an indirect methodology because it solves the trajectory planning problem in two sequential steps (first a path is generated and, once the path is known, a trajectory is adjusted to it). Polynomial interpolation functions have been used in this algorithm because they yield the best results. Besides, the calculated trajectories meet constraints on torque, power, jerk and energy consumed. The second algorithm is a direct method, which solves the equations in the state space of the robot. Unlike other direct methods, it does not use previously defined paths, which enables working with mobile obstacles, although the obstacles used in this chapter are static. Three types of interpolation functions (harmonic functions) have been used for the computation of intermediate configurations. Polynomial interpolation functions have been excluded from this algorithm because convergence problems in the optimization problem came up during the resolution of the examples.
The main conclusions are summarized as follows:
a. The algorithms solve the trajectory planning problem for industrial robots in environments with obstacles, thereby avoiding collisions.
b. They can be applied to any industrial robot.
c. "Sequential" algorithm:
c.1. Constraints on the energy consumed must be compatible with the robot's demanded potential energy, since the algorithm works on the assumption that energy can be dissipated but not recovered.
c.2. To obtain competitive results in the balance between cycle time and energy consumed, the actuators should work at the maximum admissible value of the jerk that still lets the robot work with the desired accuracy.
c.3. The cubic interpolation function gives the best computational and execution times.
d. "Simultaneous" algorithm: as for the behaviour of the interpolation functions with respect to the four monitored operating parameters (computational time, execution time, distance travelled and number of configurations generated), the main point is that the best results are obtained with the interpolation function of case C (taking into account that each actuator has been characterized by the maximum and minimum torque it can provide). With this algorithm the cubic interpolation function does not work: during the resolution of the examples it exceeded the dynamic capabilities of the actuators, and therefore the algorithm failed to reach any solution.
11. Acknowledgment
This work has been made possible by funding from the Spanish Ministry of Science and Innovation through the Research and Technological Development Project DPI2010-20814-C02-01 (IDEMOV).
Methodology for System Adaptation Based on Characteristic Patterns

1. Introduction
This paper describes a methodology for system description and application so that the system can be managed using real-time system adaptation. The term system here can represent any structure regardless of its size or complexity (industrial robots, mobile robot navigation, stock markets, production systems, control systems, etc.). The methodology covers the whole development process, from system requirements to a software tool able to execute a specific system adaptation.
In this work, we propose approaches relying on machine learning methods (Bishop, 2006) that make it possible to characterize key patterns and detect them in real time, even in altered form. Then, based on the recognized pattern, a suitable intervention can be applied to the system inputs so that the system responds in the desired way. Our aim is to develop and apply a hybrid approach based on machine learning, particularly soft-computing methods, to identify patterns successfully and subsequently adapt the system. The main goal of the paper is to recognize important patterns and to adapt the system's behaviour in the desired way based on the recognized pattern.
The paper is arranged as follows: Section 1 introduces the topic of the article. Section 2 details the feature extraction process used to optimize the patterns that serve as inputs to the experiments. The pattern recognition algorithms using machine learning methods are discussed in Section 3. Section 4 describes the data sets used and covers the experimental results, and a conclusion is given in Section 5. We focus on the reliability of the recognition achieved by the described algorithms with patterns optimized to reduce calculation costs. All results are compared mutually.
observation and prepare a suitable response to these patterns, which emerge from time to time, and adapt to any deviation in the system's behaviour.
As we are using Gershenson's methodology, we will not describe it in detail; detailed information can be found in his book (Gershenson, 2007). Let us mention the crucial parts of his methodology that are important to our work. The methodology is useful for designing and controlling complex systems. Basically, a complex system consists of two or more interconnected components; these components react together and are very complicated to separate, so the system's behaviour is impossible to deduce from the behaviour of its individual components. This deduction becomes more complicated the more components #E and the more interactions #I the system has (Csys corresponds to system complexity, Ce to element complexity and Ci to interaction complexity):

$C_{sys} \sim \sum_{j=0}^{\#E} C_{e_j} + \sum_{k=0}^{\#I} C_{i_k}$  (1)
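Under the additive reading of Eq. (1) given above, the complexity estimate is a one-liner; this tiny sketch is only meant to fix the notation:

def system_complexity(element_complexities, interaction_complexities):
    """C_sys ~ sum over j of C_e_j plus sum over k of C_i_k
    (Eq. 1, additive reading assumed above)."""
    return sum(element_complexities) + sum(interaction_complexities)

# e.g. three elements and two interactions:
# system_complexity([1.0, 2.0, 1.5], [0.5, 0.7]) -> 5.7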
each other to achieve a desired state or behaviour. If we determine the state as a self-organizing state, we can call that system self-organizing and so define our complex self-organizing system. In our example with the manufacturing line, the self-organizing state will be a state where production runs smoothly, without any production delays. But how can we achieve that? Still using Gershenson's General Methodology, we can label the fulfilment of an agent's goal as its satisfaction σ ∈ [0, 1]. The system's satisfaction σsys can then be represented as a function f : R^n → [0, 1] of the satisfaction of its individual components:

$\sigma_{sys} = f(\sigma_1, \ldots, \sigma_n) = w_0 + \sum_{i=1}^{n} w_i \sigma_i$  (2)
Here w0 represents a bias and the other weights wi represent the importance given to each σi. Components which decrease σsys while increasing their own σi should not be considered part of the system. Of course, it is hard to say whether increasing the satisfaction of each individual component is sufficient for higher system satisfaction, because some components may exploit others while fulfilling their goals. To maximize σsys we should minimize the friction among components and increase their synergy. A mediator arbitrates among the elements of a system to minimize conflict, interference and friction and to maximize cooperation and synergy. So we have two types of agents in the MAS: regular agents fulfil their goals, and mediator agents streamline their behaviour. Using this simple division of agents, we can build quite an adaptive system.
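A minimal sketch of the satisfaction function of Eq. (2), assuming the bias-plus-weighted-sum form implied by the text; clipping the result to [0, 1] is our addition, so that it remains a valid satisfaction value:

def system_satisfaction(w0, weights, sigmas):
    """sigma_sys = w0 + sum_i(w_i * sigma_i), clipped to [0, 1]
    (the clipping is an assumption, not stated in the text)."""
    s = w0 + sum(w * sig for w, sig in zip(weights, sigmas))
    return max(0.0, min(1.0, s))

# e.g. two components, the first considered more important:
# system_satisfaction(0.0, [0.7, 0.3], [0.9, 0.4]) -> 0.75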
patterns in the system's behaviour and try to adapt to any changes. However, in order to react
quickly and appropriately, it is good to have at least an expectation of what may happen
and which reaction would be appropriate, i.e. what to anticipate. Expectations are subjective
probabilities that we learn from experience: the more often pattern B appears after pattern
A, or the more successful action B is in solving problem A, the stronger the association
A → B becomes. The next time we encounter A (or a pattern similar to A), we will be
prepared, and more likely to react adequately. The simple ordering of options according to
the probability that they would be relevant immensely decreases the complexity of decision-
making (Heylighen, 1994).
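The expectation mechanism can be pictured as simple frequency counting. The sketch below, with hypothetical names, strengthens the association A -> B each time B follows A and ranks the options for A by their learned subjective probability:

from collections import defaultdict

class Expectations:
    """Learn subjective probabilities of pattern successions."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, a, b):
        self.counts[a][b] += 1          # B appeared after A: strengthen A -> B

    def ranked(self, a):
        """Options seen after A, ordered by decreasing learned probability."""
        total = sum(self.counts[a].values())
        if not total:
            return []
        return sorted(((b, n / total) for b, n in self.counts[a].items()),
                      key=lambda pair: -pair[1])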
Agents are appropriate for defining, creating, maintaining, and operating the software of
distributed systems in a flexible manner, independent of service location and technology.
Systems of agents are complex in part because both the structural form and the behaviour
patterns of the system change over time, with changing circumstances. By structural form,
we mean the set of active agents and inter-agent relationships at a particular time. This form
changes over time as a result of inter-agent negotiations that determine how to deal with
new circumstances or events. We call such changing structural form morphing, by analogy
with morphing in computer animation. By behaviour patterns, we mean the collaborative
behaviour of a set of active agents in achieving some overall purpose. In this sense,
behaviour patterns are properties of the whole system, above the level of the internal agent
detail or of pairwise inter-agent interactions. Descriptions of whole-system behaviour
patterns need to be above this level of detail to avoid becoming lost in the detail, because
agents are, in general, large grained system components with lots of internal detail, and
because agents may engage in detailed sequences of interactions that easily obscure the big
picture. In agent systems, behaviour patterns and morphing are inseparable, because they
both occur on the same time scale, as part of normal operation. Use case maps (UCMs)
(Buhr & Hubbard, 1997) are descriptions of large grained behaviour patterns in systems of
collaborating large grained components.
domains that involve such data. The recognition of structural shapes plays a central role in distinguishing particular system behaviour. Sometimes just one structural form (a bump, an abrupt peak or a sinusoidal component) is enough to identify a specific phenomenon. There is no general rule to describe the structure, or structure combinations, of various phenomena, so specific knowledge about their characteristics has to be taken into account. In other words, the structural shape of a signal may not be enough for a complete description of system properties. Therefore, domain knowledge has to be added to the structural information.
However, the goal of our approach is not knowledge extraction but to provide users with an
easy tool to perform a first data screening. In this sense, the interest is focused on searching
for specific patterns within waveforms (Dormido-Canto et al., 2006). The algorithms used in
pattern recognition systems are commonly divided into two tasks, as shown in Fig. 1. The
description task transforms data collected from the environment into features (primitives).
Fig. 1. The two tasks of a pattern recognition system: description (data → features) and classification (features → identification)
Δwij = η ai aj (3)
where Δwij is the change in the strength of the connection from unit j to unit i, ai and aj are the
activations of units i and j respectively, and η is a learning rate. When training a network to
classify patterns with this rule, it is necessary to have some method of forcing a unit to respond
strongly to a particular pattern. Consider a set of data divided into classes C1, C2,...,Cm.
Each data point x is represented by the vector of inputs (x1, x2, …, xn). A possible network for
learning is given in Figure 4. All units are linear. During training the class inputs c1, c2, …,cm
for a point x are set as follows (4):
$c_i = 1 \ \text{if}\ x \in C_i; \qquad c_i = 0 \ \text{if}\ x \notin C_i$  (4)
Each of the class inputs is connected to just one corresponding output unit, i.e. ci connects to
oi only for i = 1, 2, …,m. There is full interconnection from the data inputs x1, x2, …, xn to each
of these outputs.
Fig. 4. Hebb network. Weights of connections w11-wij are modified in accordance with the
Hebbian learning rule
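A compact sketch of how the network of Fig. 4 could be trained with the Hebbian rule of Eq. (3), using the class inputs of Eq. (4) to force the corresponding output unit; the array shapes and the bipolar input encoding are assumptions based on Tables 1 and 2:

import numpy as np

def train_hebb(X, labels, n_classes, eta=1.0):
    """X: (n_samples, n_inputs) bipolar (+1/-1) patterns; labels: class
    index of each pattern. Eq. (3): delta w_ij = eta * a_i * a_j, with the
    output activation a_i forced by the class input c_i of Eq. (4)."""
    W = np.zeros((n_classes, X.shape[1]))
    for x, label in zip(X, labels):
        c = np.zeros(n_classes)
        c[label] = 1.0                 # Eq. (4): c_i = 1 iff x belongs to C_i
        W += eta * np.outer(c, x)      # Eq. (3) applied to every connection
    return W

def classify(W, x):
    """The output unit with the strongest response wins."""
    return int(np.argmax(W @ x))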
2. If node j is a hidden node, then δj is the product of φ'(vj) and the weighted sum of the δ's computed for the nodes in the next hidden or output layer that are connected to node j.
[The actual formula is δj = φ'(vj) Σk δk wkj, where k ranges over those nodes for which wkj is non-zero (i.e. nodes k that actually have connections from node j). The δk values have already been computed, as they belong to the output layer (or a layer closer to the output than node j).]
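As a small illustration of rule 2, the sketch below computes the hidden-layer deltas δj = φ'(vj) Σk δk wkj for one layer, assuming a tanh activation (the chapter does not state which activation was used):

import numpy as np

def hidden_deltas(v_hidden, deltas_next, W_next):
    """delta_j = phi'(v_j) * sum_k(delta_k * w_kj), rule 2 above.
    W_next[k, j] is the weight from hidden node j to next-layer node k;
    tanh is an assumed choice of activation phi."""
    dphi = 1.0 - np.tanh(v_hidden) ** 2      # derivative of tanh at v_j
    return dphi * (W_next.T @ deltas_next)   # weighted sum over connected nodes k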
4. Experimental results
4.1 Used datasets
This approach allows a search for structural shapes (patterns) inside time series. Patterns are composed of simpler sub-patterns; the most elementary ones are known as primitives. Feature extraction is carried out by dividing the initial waveform into segments, which are encoded. The search for patterns is performed manually by the
Fig. 9. The test set with one marked peak, which is searched for
selected parts out of peaks. These randomly selected parts were used so that the network could learn to recognize what is, and what is not, a searched pattern (peak). All patterns were normalized to a square bitmap with edge size a = 10. The effort is always to keep the training set as small as possible, because backpropagation networks in particular increase their computational complexity with the size of the training set.
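One plausible reading of this normalization step, matching the +/- patterns of Tables 1 and 2, is to rescale each segment into a 10x10 grid and set the cells under the curve to +1 and the rest to -1; the exact encoding used by the authors may differ.

import numpy as np

def segment_to_bitmap(segment, a=10):
    """Encode a 1-D waveform segment as an a-by-a bipolar bitmap
    (an assumed encoding, illustrative only)."""
    seg = np.asarray(segment, dtype=float)
    # resample the segment to `a` columns
    cols = np.interp(np.linspace(0, len(seg) - 1, a), np.arange(len(seg)), seg)
    # rescale the values to integer column heights 0..a
    span = cols.max() - cols.min() or 1.0
    heights = np.round((cols - cols.min()) / span * a).astype(int)
    bitmap = -np.ones((a, a))
    for col, h in enumerate(heights):
        if h > 0:
            bitmap[a - h:, col] = 1.0     # fill the cells under the curve with +1
    return bitmap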
Fig. 10. Graphic representation of the learning patterns (S vectors) selected from the training data set. The first three patterns represent peaks; the next four are representatives of non-peak, "not interesting" segments of values
No. 0 (T = -+):
  --------+-|-------++-|-------+++|------++++|------++++|
  -----+++++|-----+++++|--++++++++|++++++++++|++++++++++
No. 1 (T = -+):
  ----------|----------|--------+-|-------++-|-------+++|
  ------++++|------++++|-----+++++|-----+++++|--++++++++
No. 2 (T = -+):
  ----------|----------|-------++-|-----++++-|----++++++|
  ----++++++|---+++++++|++++++++++|++++++++++|++++++++++
No. 3 (T = +-):
  ----------|----------|----------|----------|----------|
  ----------|----------|----------|-------+++|++++++++++
No. 4 (T = +-):
  ----------|----------|----------|----------|----------|
  ----------|----------|----------|----------|+++++++++-
No. 5 (T = +-):
  ----------|----------|----------|----------|----------|
  ----------|--------++|-------+++|++----++++|++++++++++
No. 6 (T = +-):
  ----------|----------|----------|----------|----------|
  ----------|--------+-|------++++|--++++++++|++++++++++

Table 1. Vectors S and T from the learning pattern set. Values of '-1' are written using the character '-' and values of '+1' using the character '+' for better clarity
Fig. 11. Graphic representation of the test patterns (S vectors) selected from the test data set. The first pattern represents a peak; the next four are representatives of non-peak, "not interesting" segments of values
No. 0 (T = -+):
  ---+------|--+++-----|--+++---+-|-++++++++-|-+++++++++|
  ++++++++++|++++++++++|++++++++++|++++++++++|++++++++++
No. 1 (T = +-):
  ----------|----------|----------|----------|----------|
  ----------|----------|-----+----|---+++++--|-++++++++-
No. 2 (T = +-):
  ----------|----------|----------|----------|----------|
  ----------|----------|----------|-----++---|--++++++++
No. 3 (T = +-):
  ----------|----------|----------|----------|----------|
  ----------|----------|----------|---++++---|-+++++++++
No. 4 (T = +-):
  ----------|----------|----------|----------|----------|
  ----------|----------|----------|+++++++++-|++++++++++

Table 2. Vectors S and T from the test pattern set. Values of '-1' are written using the character '-' and values of '+1' using the character '+' for better clarity
Two types of classifiers were used in our experiments: a backpropagation network and a classifier based on Hebbian learning. Both networks classified the input patterns into two classes. The backpropagation network was adapted to the training set (Fig. 10, Tab. 1) in 7 cycles. After its adaptation, the network was also able to correctly classify all five patterns from the test set (Fig. 11, Tab. 2), i.e. it correctly identified the peak as well as the "uninteresting" data segments. Other experiments gave similar results.
Backpropagation network configuration:
Fig. 12. Learning patterns from Fig. 10 with uncovered redundant components (gray colour). The redundant components prevented the Hebbian-learning-based classifier in its default variant from learning the patterns properly, so the modified variant had to be used
effective, with global optimization ability. It does not require the objective function to be differentiable, and it works well even with noisy and time-dependent objective functions. The technique for solving this problem by means of analytic programming was inspired by neural networks. In this case study, the method used input values and future output values, similarly to a training set for a neural network, and the whole structure that transfers input to output was synthesized by analytic programming. The final solution of analytic programming is based on an evolutionary process which selects only the required components from the basic sets of operators (Fig. 6 and Fig. 7). Fig. 13 shows the analytic programming experimental result for exact modelling during the training phase.
Fig. 13. Analytic programming experimental result for exact modelling during the training phase. Red represents original data from the training set (Fig. 8), while green represents the model output using formula (6)
The resulting formula, which calculates the output value xn was developed using AP (6):
0.010009⋅xn−2
xn = 85.999 ⋅ e(
17.1502 − xn−3 )
(6)
Analytic programming experimental results are shown in Fig. 14. Equation (6) reproduces the behaviour of the training set, and the given pattern was also successfully identified in the test set (Fig. 9). Other experiments gave similar results.
The operators used in the GFS were (see Fig. 7): +, -, /, *, Sin, Cos, K, xn-1 to xn-4, exp, power. Differential evolution was used as the main algorithm for AP and also for constant estimation in the meta-evolutionary process. Since the evolutionary process selects only the required components from the basic sets of operators, not all components have to be selected, as can be seen in the solution presented in (6).
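To make the use of the synthesized model concrete, the sketch below rolls Eq. (6) forward over a series, predicting each new value from the lagged samples x(n-2) and x(n-3); the fraction-form exponent follows the reading of Eq. (6) given above, which is our reconstruction of the garbled original.

import math

def predict_next(history):
    """Evaluate Eq. (6): x_n = 85.999 * exp(0.010009*x_{n-2} / (17.1502 - x_{n-3})).
    Assumes at least three past values and x_{n-3} != 17.1502."""
    x_nm2, x_nm3 = history[-2], history[-3]
    return 85.999 * math.exp(0.010009 * x_nm2 / (17.1502 - x_nm3))

def roll(series, steps):
    """Extend a series by repeatedly applying the synthesized model."""
    out = list(series)
    for _ in range(steps):
        out.append(predict_next(out))
    return out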
Fig. 14. Analytic programming experimental result. Red represents original data from the test set (Fig. 9), while green represents the model output using formula (6)
5. Conclusion
In this chapter, a short introduction to pattern recognition for system adaptation, where the system is represented by time series, has been given. Two approaches from the framework of soft-computing methods were used: the first based on analytic programming and the second on artificial neural networks. Both types of neural networks used (Hebb and backpropagation networks), as well as analytic programming, demonstrated the ability to learn and recognize given patterns in time series representing our system's behaviour. Our experimental results suggest that simple classifiers can be acceptable for the given class of tasks (we tested the simplest type of Hebbian learning). The advantage of simple neural networks is their very easy implementation and quick adaptation. Easy implementation allows them to be realized on low-performance computers (PLCs), and their fast adaptation facilitates the process of testing and finding the appropriate type of network for the given application.
The method of analytic programming described here is universal (from the point of view of the evolutionary algorithm used), relatively simple, easy to implement and easy to use. Analytic programming can be regarded as an equivalent of genetic programming in program synthesis and as a new universal method that can be driven by an arbitrary evolutionary algorithm. AP is also independent of computer platform (PC, Apple, ...) and operating system (Windows, Linux, Mac OS, ...) because it can be realized, for example, in the Mathematica® environment or in other computer languages. It allows the manipulation of symbolic terms, and final programs are synthesized by the AP mapping; the main benefit of analytic programming is therefore that symbolic regression can be done by an arbitrary evolutionary algorithm, as was proved by a comparative study.
According to the results of the experimental studies, it can be stated that pattern recognition in our system's behaviour was successful with all the presented methods. It is not possible to say with certainty which of them reaches better results, whether neural networks or analytic programming; both approaches have an important role in pattern recognition tasks.
In the future, we would like to apply the pattern recognition tasks, with the subsequent system adaptation methods, in the SIMATIC environment. SIMATIC (SIMATIC, 2010) is an appropriate application environment for industrial control and automation. The SIMATIC platform can be applied at the operational level, the management level and the lowest, physical level. At the operational level, it mainly controls the running processes and monitors production. At the management level it can be used to receive production instructions from the MES system (Manufacturing Execution System, the corporate ERP layer set between customers' orders and the manufacturing systems, lines and robots). At the physical level it mainly provides the links among the various sensors and actuators that are physically involved in the production process (Janošek, 2010). The core consists of SIMATIC programmable logic controllers with sensors and actuators. This system collects information about its surroundings through the sensors. Data from the sensors can be provided (e.g. via Ethernet) to the proposed software tools for pattern recognition in real time, which run on a powerful computer.
6. Acknowledgment
The research described here has been financially supported by the University of Ostrava grant SGS23/PRF/2011. It was also supported by grant No. MSM 7088352101 of the Ministry of Education of the Czech Republic, by grant GACR 102/09/1680 of the Grant Agency of the Czech Republic and by the European Regional Development Fund under project CEBIA-Tech No. CZ.1.05/2.1.00/03.0089. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily
reflect the views of the sponsors.
7. References
Armstrong, M. and Porter, R. ed. (2006): Handbook of Industrial Organization, vol. III. New
York and Amsterdam: North-Holland.
Bishop, C. (2006) Pattern Recognition and Machine Learning. Springer, 2006.
Buhr, R.J.A. and Hubbard, A. (1997) Use Case Maps for Engineering Real Time and
Distributed Computer Systems: A Case Study of an ACE-Framework Application.
In Hawaii International Conference on System Sciences, Jan 7-10, 1997, Wailea, Hawaii,
Available from http://www.sce.carleton.ca/ftp/pub/UseCaseMaps/hicss-final-public.ps
Ciskowski, P. and Zaton, M. (2010) Neural Pattern Recognition with Self-organizing Maps
for Efficient Processing of Forex Market Data Streams. In Artificial Intelligence and
Soft Computing, Volume 6113/2010, pp. 307-314, DOI: 10.1007/978-3-642-13208-7_39
Dormido-Canto, S., Farias, G., Vega, J., Dormido, R., Sánchez, J. and N. Duro et al. (2006)
Rev. Sci. Instrum. 77 (10), p. F514.
Fausett, L.V. (1994) Fundamentals of neural networks: architectures, algorithms and applications, first edition. Prentice Hall.
Gershenson, C. (2007): Design and Control of Self-organizing Systems. Mexico: CopIt ArXives,
ISBN: 978-0-9831172-3-0.
Gershenson, C. (2002) Complex philosophy. In: Proceedings of the 1st Biennial Seminar on
Philosophical, Methodological & Epistemological Implications of Complexity Theory. La
Habana, Cuba. 14.02.2011, Available from
http://uk.arXiv.org/abs/nlin.AO/0108001
Google finance [online], http://www.google.com/finance?q=NASDAQ:GOOG, accessed 10.8.2010
Heylighen, F. (1994) Fitness as default: the evolutionary basis for cognitive complexity
reduction. In Trappl (Ed.) Proceedings of Cybernetics and Systems ’94, R. Singapore:
World Science, pp. 1595–1602, 1994.
Janošek, M. (2010) Systémy Simatic a jejich využití ve výzkumu [Simatic systems and their use in research]. In: Studentská vědecká konference 2010. Ostrava: Ostravská univerzita, pp. 177-180. ISBN 978-80-7368-719-9
Kocian, V., Volná, E., Janošek, M. and Kotyrba, M. (2011) Optimization of training sets for Hebbian-learning-based classifiers. In R. Matoušek (ed.): Proceedings of the 17th
International Conference on Soft Computing, Mendel 2011, Brno, Czech Republic, pp.
185-190. ISBN 978-80-214-4302-0, ISSN 1803-3814.
Lai, K.K., Yu, L. and Wang, S. (2005) A Neural Network and Web-Based Decision Support
System for Forex Forecasting and Trading. In Data Mining and Knowledge
Management, Volume 3327/2005, pp. 243-253, DOI: 10.1007/978-3-540-30537-8_27.
Oplatkova, Z. (2009) Metaevolution - Synthesis of Optimization Algorithms by means of Symbolic Regression and Evolutionary Algorithms, Lambert Publishing, ISBN 978-8383-1808-0.
Price, K. (1999) An Introduction to Differential Evolution, In: (D. Corne, M. Dorigo and F.
Glover, eds.) New Ideas in Optimization, pp. 79–108, London: McGraw-Hill.
Seung, S. (2002). Multilayer perceptrons and backpropagation learning. 9.641 Lecture4. 1-6.
Available from:
http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf
SIMATIC (2010) [online]. SIMATIC Controller, Available from
http://www.automation.siemens.com/salesmaterial-
as/brochure/en/brochure_simatic-controller_en.pdf
Zelinka, I. (2002) Analytic programming by Means of Soma Algorithm. In: Proc. Mendel '02, Brno, Czech Republic, 2002, pp. 93-101. ISBN 80-214-2135-5
Zelinka, I. (2004) SOMA - Self Organizing Migrating Algorithm. In: B.V. Babu, G. Onwubolu (eds), New Optimization Techniques in Engineering, Springer-Verlag, 2004, ISBN 3-540-20167-X