

Towards a Generic Simulator for Continuum Robot Control*
Andrey V. Kudryavtsev1, Kanty Rabenorosoa1, and Brahim Tamadazte1

*This work has been supported by the Labex ACTION project (contract "ANR-11-LABX-0001-01") and ANR NEMRO (contract "ANR-14-CE17-0013").
1 The authors are with the FEMTO-ST Institute, Univ. Bourgogne Franche-Comté, CNRS, Besançon, France. [email protected] (corresponding author)

I. INTRODUCTION

Nowadays, it has become extremely difficult to deny the importance of continuum robotics, especially in medical applications [1]. In contrast to classical industrial robots, which are typically characterized by a series of discrete rigid links, continuum robots (CR) cover a range of structures, from robotic arms inspired by the elephant trunk to concentric tube robots (CTR). They nevertheless share a common defining property: a CR is an actuated mechanism whose backbone forms curves with continuous tangent vectors. For developing advanced control and achieving complex tasks with CR, a generic simulator would be helpful. Indeed, building a realistic experimental platform containing continuum robots can be challenging in terms of both time and cost. One of the most popular CR for medical applications is the CTR, obtained by assembling elementary precurved tubes of different diameters in a telescopic manner. Generally, these tubes are made of a nickel-titanium alloy, although 3D-printed tubes have been introduced more recently [2]. The tube assembly constitutes the effective part of the CTR; to obtain a functional robot, an actuation unit including angular and linear stages is also required. Various challenges remain to be tackled to obtain a lightweight, compact, high-degree-of-freedom actuation unit. Therefore, the core of this paper is the development of a generic simulation platform for CR, and especially for CTR. Compared to widely used simulation platforms such as V-REP or Gazebo, our simulator has an important advantage: it is possible, and easy, to add new deformable robots. The simulator can be used, for instance, to optimize the CTR mechanism design, to develop and validate advanced controllers using various feedbacks, and to achieve complex tasks by integrating anatomical models. In the remainder of this paper, the software architecture is presented first. Secondly, we discuss the capabilities of the simulator from the user's point of view, i.e., what a person with no background in programming can achieve with it.

Fig. 1. Main window screenshot of the simulator displaying a 3D scene with a CTR, a target object, and the image coming from the virtual camera installed on the end-effector.

II. SOFTWARE ARCHITECTURE

The simulator presented here is written mostly in JavaScript (JS). JS is a web-oriented programming language supporting the object-oriented paradigm. Although JS is generally used on web pages, since 2009 it has become possible to develop stand-alone desktop applications with it, thanks to two elements:
• the Node.js runtime environment, which allows executing JS code without a browser;
• the Electron framework, which makes it possible to build desktop applications using web-development technologies (HTML, CSS, and JS).
The use of an object-oriented structure adds very interesting features to our simulator. In particular, in order to add a new robot, one only has to create a new class containing the robot's forward kinematics model.
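As a sketch of what this pattern means in practice, a minimal robot class is shown below. The class and method names, units, and conventions are illustrative assumptions, not the simulator's actual API; the point is simply that the robot-specific part reduces to storing joint values and mapping them to a tip pose.

    // Illustrative sketch only: a new robot is a class exposing its joint values
    // and a forward kinematics model. Names and conventions are hypothetical.
    class MyTubeRobot {
      constructor() {
        // Joint variables of a single tube: rotation alpha [rad] and translation d [m],
        // mirroring the angular and linear stages of a CTR actuation unit.
        this.q = { alpha: 0.0, d: 0.0 };
      }

      // Forward kinematics: 4x4 homogeneous transform of the tube tip in the base
      // frame (a rotation about z followed by a translation along z).
      forwardKinematics() {
        const c = Math.cos(this.q.alpha), s = Math.sin(this.q.alpha);
        return [
          [c, -s, 0, 0],
          [s,  c, 0, 0],
          [0,  0, 1, this.q.d],
          [0,  0, 0, 1],
        ];
      }
    }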
III. USER-SIMULATOR INTERACTION

After the application starts, the user sees the main window displayed in Fig. 1. It contains the following elements:
• a 3D scene containing the CTR and a target object (a white box with four black dots) placed in front of it;
• a menu allowing the user to change some parameters of the scene; in the given example, one can modify the joint variables (mouse-based control) associated with the CTR;
• controls for the server parameters, defining the IP address and port number used for communication between the simulator and the external world (external controller, software, etc.) via UDP;
• buttons for scene management (loading a new scene or reloading the current one).
Also, the simulator gives the possibility to place a virtual camera either on the robot end-effector or freely in 3D space. This makes it possible to generate a virtual image with a known and chosen camera model (e.g., intrinsic parameters). This last point is of particular interest for simulating vision-based control schemes such as visual servoing [3] or automatic intracorporeal navigation [4].
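To make the idea of a "known and chosen camera model" concrete, the sketch below projects a point expressed in the camera frame onto the virtual image plane with a standard pinhole model; the intrinsic values are illustrative placeholders, not the simulator's defaults.

    // Pinhole projection with chosen intrinsic parameters (placeholder values).
    const intrinsics = { fx: 800, fy: 800, cx: 320, cy: 240 };

    // Maps a 3D point in the camera frame to pixel coordinates (u, v).
    function projectPoint([X, Y, Z], K = intrinsics) {
      if (Z <= 0) return null;           // point behind the camera
      const u = K.fx * (X / Z) + K.cx;
      const v = K.fy * (Y / Z) + K.cy;
      return [u, v];
    }

    console.log(projectPoint([0.01, 0.0, 0.1])); // [400, 240]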
A. Constructing the Scene

The 3D scene of the simulator must be defined in advance by the user and can contain three types of objects. Firstly, it may contain various static objects, loaded from standard 3D-model files such as *.stl, *.obj, or *.dae files. Secondly, it contains a robot, whose joint values are directly accessible from the menu in the upper-right corner; a scene may contain several robots. Finally, the third type of component concerns imaging systems, such as a standard white-light camera, which generates images that can then be acquired using the UDP protocol. The whole scene is defined using a text file in JSON format1.

1 For details, see https://github.com/avkudr/visa/wiki/Scene-modeling
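The exact schema of this file is described in the project wiki referenced above; the snippet below is only a hypothetical illustration of how the three object types (static meshes, robots, and cameras) could be combined in one scene description, with all field names invented for the example.

    {
      "objects": [
        { "type": "static", "file": "face.stl" }
      ],
      "robots": [
        { "type": "ctr", "model": "constant-curvature", "joints": [0, 0, 0, 0, 0, 0] }
      ],
      "cameras": [
        { "attachedTo": "robot-tip", "fx": 800, "fy": 800, "cx": 320, "cy": 240 }
      ]
    }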
B. Robot Simulation

During simulation, the user has access to two tools for interacting with the scene. The first one is the menu, which allows some basic manipulations such as changing joint values, displaying or hiding the frames associated with the robot joints, etc.

The second tool is the UDP protocol. The simulator acts as a UDP server that can send and receive messages from the external world. Among the current possibilities, a user can change the robot configuration (relative or absolute joint positions), apply joint velocities, get the transformation matrix corresponding to the tool pose, acquire the image from a virtual camera, etc. Some examples of commands are given in Table I.

TABLE I
EXAMPLES OF COMMANDS THAT CAN BE SENT TO THE SIMULATOR

Command                           Target
SETJOINTVEL,0,0,0,0,0,0.001       robot
SETJOINTPOSREL,0,0,0,0,0,0.001    robot
GETJOINTPOS                       robot
STOP                              robot
GETIMAGE                          camera
GETCALIBMAT                       camera

There is a variety of ways to send these UDP messages to the simulator. The easiest one is to use a small piece of software, telnet, which allows sending UDP messages directly from the terminal. Another way is to use a telnet equivalent with a graphical user interface. Finally, it is always possible to establish UDP communication from different programming languages: Python, MATLAB, etc. It is worth noticing that an adapter for C++ communication is provided along with the simulator. This adapter, while acting as a UDP client, additionally converts the image coming from the simulator directly into the OpenCV or ViSP library formats.
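As an illustration of this UDP interface, the sketch below uses Node's built-in dgram module to send two of the commands from Table I to a running simulator instance. The host address, port number, and the assumption that replies arrive as plain-text datagrams are placeholders for this example.

    // Minimal UDP client for the simulator, using Node's built-in dgram module.
    const dgram = require('dgram');

    const HOST = '127.0.0.1';   // IP address configured in the simulator window (placeholder)
    const PORT = 8080;          // port number configured in the simulator window (placeholder)

    const socket = dgram.createSocket('udp4');

    // Print any reply coming back from the simulator (e.g., the joint positions).
    socket.on('message', (msg, rinfo) => {
      console.log(`reply from ${rinfo.address}:${rinfo.port}: ${msg.toString()}`);
    });

    function sendCommand(command) {
      socket.send(Buffer.from(command), PORT, HOST, (err) => {
        if (err) console.error('send failed:', err);
      });
    }

    // Commands taken from Table I: a small relative joint motion, then a query.
    sendCommand('SETJOINTPOSREL,0,0,0,0,0,0.001');
    sendCommand('GETJOINTPOS');

    // Close the socket after a short delay to leave time for replies.
    setTimeout(() => socket.close(), 1000);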
C. Example of Use

In this subsection, we present an example of a complete simulated scene (Fig. 2). The scene contains part of a human face as well as a 3D model of the nasal cavity; both objects were loaded from *.stl files. The CTR is placed so that it can enter the cavity and perform internal navigation. In the given example, the CTR model is based on the piecewise constant curvature model. In addition, a camera located on the robot tip enables online generation of images. This setup can be used for the development and testing of various visual servoing and/or SLAM algorithms.

Fig. 2. Simulating a scene containing a 3D face with a nasal cavity, and a CTR. The images on the right are generated by the camera located on the robot tip.
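For reference, a compact sketch of the constant curvature assumption is given below: each section is described by a bending-plane angle, a curvature, and an arc length, and contributes one homogeneous transform; a piecewise model chains several such transforms. The function name and the chosen parameterization are illustrative, since several equivalent conventions exist and the simulator's own implementation is not reproduced here.

    // Homogeneous transform of one constant-curvature section, parameterized by the
    // bending-plane angle phi [rad], curvature kappa [1/m], and arc length ell [m].
    function constantCurvatureTransform(phi, kappa, ell) {
      const theta = kappa * ell;                 // total bending angle
      const c = Math.cos, s = Math.sin;
      // Tip position: an arc of radius 1/kappa lying in the plane at angle phi
      // (straight-line limit when the curvature is negligible).
      const p = Math.abs(kappa) > 1e-9
        ? [(1 - c(theta)) / kappa * c(phi),
           (1 - c(theta)) / kappa * s(phi),
           s(theta) / kappa]
        : [0, 0, ell];
      // Orientation: Rz(phi) * Ry(theta) * Rz(-phi), written out entry by entry.
      const R = [
        [c(phi) * c(phi) * c(theta) + s(phi) * s(phi), c(phi) * s(phi) * (c(theta) - 1), c(phi) * s(theta)],
        [c(phi) * s(phi) * (c(theta) - 1), s(phi) * s(phi) * c(theta) + c(phi) * c(phi), s(phi) * s(theta)],
        [-c(phi) * s(theta), -s(phi) * s(theta), c(theta)],
      ];
      return [
        [R[0][0], R[0][1], R[0][2], p[0]],
        [R[1][0], R[1][1], R[1][2], p[1]],
        [R[2][0], R[2][1], R[2][2], p[2]],
        [0, 0, 0, 1],
      ];
    }

    // A multi-section (piecewise) backbone is obtained by multiplying such transforms.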
IV. CONCLUSION

The goal of this work was to create a simulation environment for continuum robots. It already contains a concentric tube robot built using the constant-curvature model. Moreover, thanks to the simulator's flexibility in adding new robots, one can use the existing pattern to build other robots and add them to the 3D scene. As the user can also acquire the image given by the virtual camera in real time, this opens new possibilities for testing algorithms such as visual servoing, SLAM, or any other technique based on visual feedback.

REFERENCES

[1] J. Burgner-Kahrs, D. C. Rucker, and H. Choset, "Continuum robots for medical applications: A survey," IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1261-1280, 2015.
[2] T. K. Morimoto and A. M. Okamura, "Design of 3-D printed concentric tube robots," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1419-1430, 2016.
[3] Y. Baran, K. Rabenorosoa, G. J. Laurent, P. Rougeot, N. Andreff, and B. Tamadazte, "Preliminary results on OCT-based position control of a concentric tube robot," in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017.
[4] A. V. Kudryavtsev, M. T. Chikhaoui, A. Liadov, P. Rougeot, F. Spindler, K. Rabenorosoa, J. Burgner-Kahrs, B. Tamadazte, and N. Andreff, "Eye-in-hand visual servoing of concentric tube robots," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2315-2321, 2018.
