Švaco et al. - 2011 - Autonomous Planning Framework for Distributed Multiagent Robotic Systems
1 Introduction
Substantial research and development is being conducted in multiagent robotics, particularly
in fields such as service, humanoid, and mobile robotics, while industrial robotics is
still based on traditional postulates. Real flexibility and adaptivity to change are still
lacking in today's industrial assembly and handling applications and need to be addressed.
Distributed multiagent robotics is an approach modeled on human behavior patterns. When
complex tasks arise, humans are much more efficient working in groups: together they have
more degrees of freedom, can handle more data, and can delegate particular tasks to
individual agents.
Research in humanoid robotics ([1]-[3]) is developing rapidly. Dual-arm configurations,
highly sophisticated perceptive mechanisms, and human-like motions enable such robots to
recreate human motion and work patterns. A major drawback of these kinematic structures is
the low repeatability and precision that industrial systems primarily require. For assembly
and handling tasks, which usually have high precision and repeatability demands, industrial
robots are necessary. Nowadays the most flexible industrial robots have 6 or 7 [4] degrees
of freedom (DOF) without the end effector (gripper), which usually adds 1 DOF. One human
arm (with the hand) has 27 DOF [5], so the flexibility of a robotic arm is quite limited in
comparison to a human operator. By implementing two or more robots, each with its own
controller and able to communicate with the others, a multiagent concept can be achieved.
The whole system is then orchestrated and able to perform more demanding operations. Each
controller running its own actuator unit should act as an agent with a defined level of
autonomy. In such systems, multiagent control emerges as the main issue.
In this paper a creative action planning algorithm (CAPA) for application in multiagent
robotic systems is presented. One of the main goals is to construct a universal planning
framework that can be implemented on various types of industrial robots and tasks.
Related works [6], [7] incorporating multiagent planning on similar tasks are virtual
applications and cannot easily be implemented on real industrial systems. Those approaches
are primarily intended for autonomous planning by multiple agents that cannot collide, are
of infinitesimally small dimensions, and share the same computational time domain. The
developed CAPA and the distributed multiagent system (MAS) operate in a real-world
environment bounded by rules and limitations. The approach discussed in this paper is
intended to show that some assembly and handling tasks can be done in close collaboration
among agents to gain flexibility and increase overall system productivity.
2 Contribution to Sustainability
Robotics, and industrial robotics in particular, has always been part of a centrally
planned system. Agents (robots, machines) controlled by their own computers are
subordinated to a central system controller [8]. They therefore exhibit a very low level of
autonomy and in most cases execute pre-programmed actions, unable to cope with
uncertainties in the system and the environment. Uncertainties range from varying
production quantities to failures of equipment or other agents.
It is suggested that some handling and assembly tasks in industry can be accomplished
through interaction between the agents (primarily industrial robots) in the system.
Accordingly, some level of autonomy must be introduced.
Production in recent years has shifted from high quantities of standardized products to
lower quantities of customized products, so the demands placed on assembly systems have
grown. The traditional approach, with a centralized architecture and strict delegation of
tasks, needs to be replaced. Introducing a multiagent configuration and an autonomous
planning approach could prove to be a valuable addition. For an assembly system this
implies that, before assembling, the agents (robots and machines) need to generate a plan
that best suits the current state and requirements of the system. After reaching consensus,
the agents begin assembling the structure (product), constantly communicating and
exchanging relevant information and data. In industrial assembly systems this is a novel
approach with numerous benefits: it leads to increased flexibility and adaptivity to
unexpected changes and uncertainties in the system, i.e. responsiveness [9]. The system
becomes insensitive to the number of agents (robots), and new assembly tasks can be handled
with less effort. Clearly, this approach is not suited to all products, but it can be
applied to a variety of industrial examples. Development of such an industrial system
scheme is beyond the scope of this paper and will be considered in further research. In
this work an initial version of the planning framework is presented. The framework is
implemented on an actual system consisting of three 6-DOF robots.
The agents' task is to recreate structures, defined as a final form put together from
various objects with defined relationships. A structure is determined by the interrelations
and arrangement of objects b_{j,k} into a complex entity. A structure S = {R_i b_{j,k}} is
a set of relations R_i (i = 1…m-1) between objects b_{j,k} (j = 1…n, k = 1…u).
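To make the notation concrete, the following is a minimal illustrative sketch (in Python, with names chosen for this example rather than taken from the paper) of a structure S represented as a set of relations R_i between blocks b_{j,k}:

```python
# Illustrative only: a structure S as a set of relations between blocks.
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    """A building block b_{j,k}: j identifies the object type, k the instance."""
    j: int
    k: int

@dataclass(frozen=True)
class Relation:
    """A relation R_i between two blocks, e.g. 'on_top_of' or 'left_of'."""
    kind: str
    subject: Block
    reference: Block

# A structure S is the set of relations that define the arrangement of its blocks.
S = frozenset({
    Relation("on_top_of", Block(j=1, k=1), Block(j=2, k=1)),
    Relation("left_of",   Block(j=2, k=1), Block(j=2, k=2)),
})
```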
The MAS has the properties of a market-type organization [11], [12] in which agents bid
[13] for given resources (blocks) in their workspace (Fig. 1). Time schedules need to be
negotiated for periods when areas of interest in the global workspace are not occupied.
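A single bidding round of such a market-style organization could look as follows. This is an illustrative sketch, not the authors' implementation; the reach check and the distance-based bid are simplifying assumptions standing in for the resource function C.

```python
# Illustrative sketch of one market-style bidding round: agents bid for a
# block in their workspace and the lowest bid wins the task.
import math

class Agent:
    """Simplified agent: a robot base position and a reach radius (illustrative)."""
    def __init__(self, name, base_xyz, reach_mm):
        self.name = name
        self.base = base_xyz
        self.reach = reach_mm

    def can_reach(self, point):
        return math.dist(self.base, point) <= self.reach

    def bid(self, block_pos, target_pos):
        # Stand-in for the resource function C: approach plus transfer distance.
        return math.dist(self.base, block_pos) + math.dist(block_pos, target_pos)

def auction(agents, block_pos, target_pos):
    """One bidding round: the agent with the lowest bid is awarded the block."""
    bids = {a: a.bid(block_pos, target_pos)
            for a in agents
            if a.can_reach(block_pos) and a.can_reach(target_pos)}
    return min(bids, key=bids.get) if bids else None

# Example: two agents compete for one block.
robots = [Agent("robot_1", (0, 0, 0), 900), Agent("robot_2", (1200, 0, 0), 900)]
winner = auction(robots, block_pos=(400, 200, 0), target_pos=(500, 300, 0))
print(winner.name if winner else "no feasible agent")  # robot_1
```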
The global goal G is the required structure that must be assembled from the available
elements following the given set S. An example of a structure is illustrated in Fig. 2 (a).
After observing a structure and identifying its relations, the agents are given an
arbitrary set of workpieces (blocks), as depicted in Fig. 2 (b). Using those elements, a
plan of actions for assembling the initial structure is generated. Possible solutions are
presented in Fig. 2 (c1)-(c3). A set of rules and propositions for agent behavior is given
in a cognition base (CB); a minimal illustrative representation is sketched after the
following list:
• Mathematical rules for structure sets
• Agent capabilities
• Grasping rules and limitations
• Object properties
• Agent workspace
• Vision system patterns database
• Force and torque sensor threshold values
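The sketch below groups these CB entries into one hypothetical container with a simplified feasibility check; the field names and the payload/workspace test are illustrative assumptions, not the paper's data model.

```python
# Hypothetical grouping of the cognition base (CB) entries listed above.
from dataclasses import dataclass, field

@dataclass
class CognitionBase:
    structure_rules: list = field(default_factory=list)      # rules for structure sets
    agent_capabilities: dict = field(default_factory=dict)   # e.g. payload per agent
    grasping_rules: list = field(default_factory=list)       # grasping limitations
    object_properties: dict = field(default_factory=dict)    # size, mass of blocks
    agent_workspaces: dict = field(default_factory=dict)     # reachability predicates
    vision_patterns: dict = field(default_factory=dict)      # vision pattern database
    force_torque_limits: dict = field(default_factory=dict)  # sensor thresholds

    def is_action_allowed(self, agent, obj, target) -> bool:
        """Check a proposed pick-and-place against payload and workspace limits."""
        payload = self.agent_capabilities.get(agent, {}).get("payload_kg", float("inf"))
        mass = self.object_properties.get(obj, {}).get("mass_kg", 0.0)
        in_workspace = self.agent_workspaces.get(agent, lambda p: True)(target)
        return mass <= payload and in_workspace

# Example query against a toy cognition base.
cb = CognitionBase(
    agent_capabilities={"robot_1": {"payload_kg": 5.0}},
    object_properties={"block_a": {"mass_kg": 0.3}},
    agent_workspaces={"robot_1": lambda p: abs(p[0]) < 800 and abs(p[1]) < 800},
)
print(cb.is_action_allowed("robot_1", "block_a", (350, 120, 40)))  # True
```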
If a simple structure with a limited number of building blocks is presented to the agents
(Fig. 2 a), there might be only one or a few feasible solutions (sequences of steps). If
more complex structures are presented (as shown in Fig. 4 a), a variety of feasible
solutions might be possible.
A sequence of steps for the MAS can be defined either by disassembling the structure
top-down or by assembling it bottom-up. The CAPA uses the bottom-up principle: from a
provided set of objects {b_1…b_{p,r}}, which can differ from the initial set {b_1…b_{n,q}},
the agents need to reach a solution in the given search space. Depending on the CB
information, the agents can decide whether the desired objectives can be carried out in
accordance with the proposed restrictions and limitations. Implementing an iterative
algorithm, a solution can be found as shown in Fig. 3. Branches represent solution sets,
and each branch leads to one solution. If finding a solution in one solution set is not
possible, the system takes one step back and explores other options until it finds a valid
one.
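The branch-and-step-back exploration described above can be sketched as a depth-first search with backtracking. This is a simplified illustration: the feasibility callable stands in for the CB checks, and the dependency example at the end is hypothetical.

```python
# Sketch of the backtracking search: each branch is a partial assembly
# sequence; a dead end makes the planner step back and try another option.
def plan_assembly(placed, remaining, feasible):
    """Depth-first search with backtracking over placement steps.

    placed    -- frozenset of steps already executed on this branch
    remaining -- frozenset of steps still needed to complete the structure
    feasible  -- callable(step, placed) -> bool encoding the CB restrictions
    """
    if not remaining:
        return []                          # this branch is a complete solution
    for step in remaining:
        if feasible(step, placed):
            tail = plan_assembly(placed | {step}, remaining - {step}, feasible)
            if tail is not None:
                return [step] + tail       # branch leads to a valid solution
    return None                            # dead end: step back, try another branch

# Example: 'top' may only be placed once both of its supports are in place.
deps = {"base_left": set(), "base_right": set(), "top": {"base_left", "base_right"}}
feasible = lambda step, placed: deps[step] <= placed
print(plan_assembly(frozenset(), frozenset(deps), feasible))
# e.g. ['base_left', 'base_right', 'top']
```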
Each agent attains a unified set of sub-goals g_t which fulfill the global goal G.
Execution of the sub-goals (tasks) can be synchronous or asynchronous, given the space
functions F of the agents. A resource function C is defined as a measure of resource and
time consumption. C(a_l, b_{j,k}, e) is a function of the agent's a_l position, the
specifications of a building block b_{j,k} (its size and position in the global workspace),
and the position e to which that block is planned to be moved.
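A simple stand-in for the resource function C(a_l, b_{j,k}, e) is sketched below as an estimated execution time; the speed and grasp-time parameters are assumptions for illustration, not values from the paper.

```python
# Hypothetical resource function C: time to approach the block, grasp it,
# and carry it to the target position e.
import math

def resource_cost(agent_pos, block_pos, target_pos,
                  speed_mm_s=250.0, grasp_time_s=1.5):
    """Estimated execution time for one pick-and-place step (seconds)."""
    approach = math.dist(agent_pos, block_pos) / speed_mm_s
    transfer = math.dist(block_pos, target_pos) / speed_mm_s
    return approach + grasp_time_s + transfer

# Example: C(a_1, b_{1,1}, e) for an agent at the origin.
print(round(resource_cost((0, 0, 0), (400, 200, 0), (500, 300, 0)), 2))  # seconds
```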
3.3 Operators
4 Implementation
The CAPA has been tested on structures such as the one shown in Fig. 4. When multiple
solutions are possible, the MAS executes the one for which the sum ∑C over the entire
solution set is minimal. Currently only two-dimensional structures (in R²) are being
solved, but because real-world objects are used, their solutions have to be
three-dimensional (in R³). The planning algorithm was first tested on a virtual model of
the multiagent robotic system (Fig. 5 a). This was done for safety reasons (primarily
collision) and for the ability to test and debug the algorithm in parallel on multiple
computers. After satisfactory computational results, the algorithm and the entire framework
were implemented on the actual robotic system (Fig. 5 b).
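The selection rule stated above, executing the solution set with minimal ∑C, can be sketched as follows; the helper names are illustrative.

```python
# Illustrative selection of the minimum-cost solution set.
def select_solution(solutions, step_cost):
    """solutions: list of step sequences; step_cost: callable(step) -> C value."""
    return min(solutions, key=lambda steps: sum(step_cost(s) for s in steps))

plans = [["a", "b", "c"], ["c", "a", "b", "d"]]
costs = {"a": 2.0, "b": 1.5, "c": 3.0, "d": 0.5}
print(select_solution(plans, costs.get))  # ['a', 'b', 'c'] (total 6.5 < 7.0)
```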
Fig. 4. (a) Initial space structure; (b) randomly scattered building blocks; (c), (d) space
structures assembled by the agents
The first problem that emerged was the sharing of agent workspaces. In order to work on the
same task, assembling the same structure, spatial relations need to be taken into
consideration. The agents were calibrated using calibration tools and vision applications.
This establishes relations between the agents' positions (three translations) and
orientations (three angular displacements), introducing a common global workspace (K). A
problem that resulted from the decentralized multiagent architecture was sharing and
synchronizing the agents' time domains. This was not an issue while tests were conducted on
a single computer, where all agents used the same CPU clock. Adjustments were made using
handshaking with digital signals and through TCP/IP communication, which allowed
coordinated task execution.
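The calibration described above, three translations and three angular displacements per agent, amounts to a homogeneous transform from each agent's base frame into the common frame K. The sketch below assumes a ZYX Euler angle convention, which is an assumption for illustration rather than a detail given in the paper.

```python
# Sketch of mapping an agent frame into the common global workspace K.
import numpy as np

def agent_to_global(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous transform from an agent base frame to the global frame K
    (ZYX Euler convention assumed; translations in mm, angles in radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # combined rotation
    T[:3, 3] = [tx, ty, tz]           # translation of the agent base in K
    return T

# A point measured in an agent's own frame is expressed in K as:
T = agent_to_global(1200.0, 0.0, 0.0, 0.0, 0.0, np.pi)
p_agent = np.array([400.0, 200.0, 0.0, 1.0])   # homogeneous coordinates
p_global = T @ p_agent
```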
Collision detection was another issue that needed to be addressed. Algorithms for real-time
agent collision avoidance currently either do not exist or exist only with limitations;
collision between two agents with kinematic chains of 3 DOF can be solved in a finite
period of time [15]. Because of limited computational power, and because collision
detection is not the focus of this research, the space function (F) described in Section 3
was used.
Fig. 5. (a) Virtual representation of the multiagent robotic system; (b) the real agents
References
1. Albers A., Brudniok S., Ottnad J., Sauter C., and Sedchaicham K.: Upper Body of a new
Humanoid Robot – the design of ARMAR III, In 6th IEEE/RAS International Conference
on Humanoid Robots, pp. 308--313, (2006)
2. Akachi K., Kaneko K., Kanehira N., Ota S., Miyamori G., Hirata M., Kajita S., and Kanehiro F.:
Development of humanoid robot HRP-3, In 5th IEEE/RAS International Conference on
Humanoid Robots, pp. 50--55, (2005)
3. Park I.W., Kim J.Y., Lee J. and Oh J.H.: Mechanical design of humanoid robot platform
KHR-3 (KAIST humanoid robot-3: HUBO), In IEEE/RAS International Conference on Humanoid
Robots, pp. 321--326, (2005)
4. Bischoff R.: From research to products: The development of the KUKA Light-Weight Robot,
40th International Symposium on Robotics, Barcelona, Spain (2009)
5. Agur A. M. R., Lee M. J.: Grant's Atlas of Anatomy, Lippincott Williams and Wilkins
(1999)
6. Sycara K., Roth S., Sadeh N., Fox M.: Distributed Constrained Heuristic Search, In IEEE
Trans. on Systems, Man and Cybernetics, pp. 1446--1461, (1991)
7. Ephrati E., Rosenschein J.: Divide and conquer in multiagent planning, Proc. of the 12th
National Conference on AI, pp. 375--380, Seattle, AAAI (1994)
8. Tang H. P., Wong T. N.: Reactive multi-agent system for assembly cell control, Robotics
and Computer-Integrated Manufacturing 21, pp. 87--98 (2005)
9. Seilonen I., Pirttioja T., Koskinen K.: Extending process automation systems with multi-
agent techniques, Engineering Applications of Artificial Intelligence 22, (2009)
10. Schumacher M.: Objective Coordination in Multi-Agent System Engineering, Springer,
New York (2001)
11. Sandholm T.: An Implementation of the Contract Net Protocol Based on Marginal Cost
Calculations, Proc. of the 11th Conference on AI, pp. 256--262, (1993)
12. Shoham Y., Leyton-Brown K.: Multiagent Systems: Algorithmic, Game-Theoretic and Logical
Foundations, Cambridge Uni. Press, New York (2009)
13. Hsieh F.-S.: Analysis of contract net in multi-agent system, Automatica 42, pp. 733--740
(2006)
14. Stipancic T., Jerbic B.: Self-adaptive Vision System, In: Camarinha-Matos L. M., Pereira P.,
Ribeiro L. (eds.) Emerging Trends in Technological Innovation, pp. 195--202, Springer,
Boston (2010)
15. Curkovic P., Jerbic B.: Dual-Arm Robot Motion Planning Based on Cooperative Coevolution,
In: Camarinha-Matos L. M., Pereira P., Ribeiro L. (eds.) Emerging Trends in Technological
Innovation, pp. 169--178, Springer, Boston (2010)