
PyRep: Bringing V-REP to Deep Robot Learning

Stephen James¹, Marc Freese², and Andrew J. Davison¹

¹ Department of Computing, Imperial College London, UK
² Coppelia Robotics, Switzerland

arXiv:1906.11176v1 [cs.RO] 26 Jun 2019

Abstract
PyRep¹ is a toolkit for robot learning research, built on top of the Virtual Robot Experimentation Platform (V-REP). Through a series of modifications and additions, we have created a tailored version of V-REP built with robot learning in mind. The new PyRep toolkit offers three improvements: (1) a simple and flexible API for robot control and scene manipulation, (2) a new rendering engine, and (3) speed boosts upwards of 10,000× in comparison to the previous Python Remote API. With these improvements, we believe PyRep is the ideal toolkit to facilitate rapid prototyping of learning algorithms in the areas of reinforcement learning, imitation learning, state estimation, mapping, and computer vision.

1 Introduction
In recent years, deep learning has significantly impacted numerous areas of machine learning, improving state-of-the-art results in tasks such as image recognition, speech recognition, and language translation [1]. Robotics has benefited greatly from this progress, with many robotics systems opting to use deep learning in many or all of the stages of a typical robotics pipeline [2, 3]. As we aim to endow robots with the ability to operate in complex and dynamic worlds, it becomes important to collect a rich variety of data of robots acting in these worlds. Deep learning, however, comes at the cost of requiring large amounts of training data, which can be particularly time-consuming to collect in these dynamic environments. Simulation, then, can help in one of two primary ways:

• Rapid prototyping of learning algorithms, in the hope of finding data-efficient solutions that can be trained on small real-world datasets that are feasible to collect.

• Training on a large amount of simulation data, potentially combined with a small amount of real-world data, and finding ways of transferring this knowledge from simulation to the real world [4, 5, 6, 7, 8].

Two common simulation environments in the literature today are Bullet [9] and MuJoCo [10]. However, given that these are physics engines rather than robotics frameworks, it can often be cumbersome to build rich environments and integrate standard robotics tooling such as inverse & forward kinematics, user interfaces, motion libraries, and path planners.
Fortunately, the Virtual Robot Experimentation Platform (V-REP) [11] is a robotics framework that makes it easy to design robotics applications. However, although the platform is highly customisable and ships with several APIs, including a Python remote API, it was not developed with the intention of being used for large-scale data collection. As a result, V-REP, when accessed via Python, is currently too slow for the rapid environment interaction that is needed in many robot learning methods, such as reinforcement learning (RL). To that end, PyRep is an attempt to bring the power of V-REP to the robot learning community. In addition to a new, intuitive Python API and rendering engine, we modify the open-source version of V-REP to tailor it towards communicating with Python; as a result, we achieve speed boosts upwards of 10,000× in comparison to the original V-REP Python API.

¹ https://github.com/stepjam/PyRep

2 Background
V-REP [11] is a general-purpose robot simulation framework maintained by Coppelia Robotics. Some of its
many features include:

• Cross-platform support (Linux, Mac, and Windows).

• Several means of communicating with the framework (including embedded Lua scripts, C++ plugins, remote APIs in 6 languages, ROS, etc.).

• Support for 4 physics engines (Bullet, ODE, Newton, and Vortex), with the ability to quickly switch from one engine to another.

• Inverse & forward kinematics.

• Motion planning.

• Distributed control architecture based on embedded Lua scripts.

Python and C++ are the primary languages for research in deep learning and robotics, and so it is imperative that communication times between a learning framework and V-REP are kept to a minimum. Given that V-REP was introduced in 2013, when deep learning was in its infancy, prioritisation was not given to rapid external API calls, which currently rely on inter-thread communication. As a result, V-REP is slow to use for external, data-hungry applications.
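The cost of that inter-thread round trip can be made concrete with a small timing sketch against the existing remote API. The sketch below uses the legacy Python remote API in synchronous mode; the module name vrep, the default port 19997, and the scene set-up are assumptions made for illustration, not a prescribed benchmark.

import time
import vrep  # legacy V-REP remote API bindings (module name assumed)

# Connect to a simulator already listening on the default remote API port
client_id = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
vrep.simxSynchronous(client_id, True)  # only step when explicitly triggered
vrep.simxStartSimulation(client_id, vrep.simx_opmode_blocking)

num_steps = 1000
t0 = time.time()
for _ in range(num_steps):
    vrep.simxSynchronousTrigger(client_id)  # request one simulation step...
    vrep.simxGetPingTime(client_id)         # ...and block until it has completed
elapsed = time.time() - t0
print('%.1f steps/sec via the remote API' % (num_steps / elapsed))

vrep.simxStopSimulation(client_id, vrep.simx_opmode_blocking)
vrep.simxFinish(client_id)

Each loop iteration pays the inter-thread (and possibly socket) round trip described above, which is exactly the latency that PyRep's modifications remove.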

3 Modifications
Below we outline the modifications that were made to V-REP.

Speed. The 6 remote APIs offered by V-REP suffer from 2 sources of communication delay. The first comes from the socket communication between the remote API and the simulation environment (though this can be decreased considerably by using shared memory). The second, and most notable, is the inter-thread communication between the main thread and the various communication threads. This latency becomes significant when the environment needs to be queried synchronously at each timestep (which is often the case in RL). To remove these latencies, we have modified the open-source version of V-REP such that Python now has direct control of the simulation loop, meaning that commands sent from Python are executed directly on the same thread. With these modifications we were able to collect robot trajectories/episodes over 4 orders of magnitude faster than with the original remote Python API, making PyRep an attractive platform for the evaluation of robot learning methods.
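For comparison, a minimal sketch of the same measurement through PyRep follows. It uses only the calls shown later in Figure 2, plus the stop/shutdown teardown; the scene file name is a placeholder.

import time
from pyrep import PyRep

pr = PyRep()
pr.launch('my_scene.ttt', headless=True)  # placeholder scene file
pr.start()

num_steps = 1000
t0 = time.time()
for _ in range(num_steps):
    pr.step()  # one synchronous physics step, executed on the same thread
elapsed = time.time() - t0
print('%.1f steps/sec via PyRep' % (num_steps / elapsed))

pr.stop()      # stop the physics simulation
pr.shutdown()  # close V-REP

Because stepping no longer crosses a thread boundary, the per-step cost here is dominated by the physics engine itself rather than by communication.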

Figure 1: Example images of environments using the new renderer.

Renderer. V-REP ships with 2 main renderers: a default OpenGL 2.0 renderer and the POV-Ray ray-tracing renderer. POV-Ray produces high-quality images, but at a very low framerate. The OpenGL 2.0 renderer, on the other hand, performs basic shadow-free rendering and uses the old-style fixed-function OpenGL pipeline. As part of this report, we release a new OpenGL 3.0+ renderer which supports shadow rendering from all V-REP-supported light types: directional, spot, and point lights. Example renderings can be seen in Figure 1.
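As a sketch of how the new renderer might be selected for a given sensor, the snippet below assumes vision sensors expose a set_render_mode call taking a RenderMode enum (as in the released toolkit); treat the exact names as illustrative.

from pyrep import PyRep
from pyrep.objects import VisionSensor
from pyrep.const import RenderMode  # assumed enum of available renderers

pr = PyRep()
pr.launch('my_scene.ttt', headless=True)  # placeholder scene file
pr.start()

camera = VisionSensor('my_camera')
camera.set_render_mode(RenderMode.OPENGL3)  # opt in to the OpenGL 3.0+ renderer
rgb_obs = camera.capture_rgb()              # captured image now includes shadows

pr.stop()
pr.shutdown()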

PyRep API. The new API manages simulation handles and provides an object-oriented way of interfacing with the simulation environment. Moreover, we have made it easy to add new robots with motion-planning capabilities in only a few lines of Python code (a sketch of this pattern follows Figure 2). An example of the API in use can be seen in Figure 2.

import numpy as np

from pyrep import PyRep
from pyrep.objects import VisionSensor, Shape
from pyrep.arms import Franka

pr = PyRep()
pr.launch('my_scene.ttt', headless=True)  # Launch V-REP headless (no GUI)
pr.start()  # Start the physics simulation

# Grab robot and scene objects
franka = Franka()
camera = VisionSensor('my_camera')
target = Shape('target')

# `training` and `agent` are assumed to be supplied by the surrounding learning code
while training:
    # Randomly place the target at the start of each episode
    target.set_position(np.random.uniform(-1.0, 1.0, size=3))
    episode_done = False
    while not episode_done:
        # Capture observations from the vision sensor
        rgb_obs = camera.capture_rgb()
        depth_obs = camera.capture_depth()
        action = agent.act([rgb_obs, depth_obs])  # Neural network predicting actions
        franka.set_target_joint_velocities(action)  # Send actions to the robot
        pr.step()  # Step the physics simulation
        # Check if the agent has reached the target (within a small tolerance)
        episode_done = np.allclose(
            target.get_position(), franka.get_tip().get_position(), atol=0.01)

Figure 2: PyRep API Example. Many more examples can be seen on the GitHub page.
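As noted above, new robots with motion-planning support can be added in a few lines. The following is a hypothetical sketch of that pattern: the Arm base class, its constructor signature, the get_path planner call, and the scene object 'my_arm' (with seven joints) are all assumptions made for illustration.

from pyrep import PyRep
from pyrep.arms import Arm  # assumed base class for arm robots in this API


class MyArm(Arm):
    # A hypothetical 7-DoF arm: the base class is assumed to locate the scene
    # object 'my_arm' and its joints from the name and joint count alone.
    def __init__(self, count=0):
        super().__init__(count, 'my_arm', num_joints=7)


pr = PyRep()
pr.launch('my_scene.ttt', headless=True)  # scene assumed to contain 'my_arm'
pr.start()

# Assumed usage: plan a collision-free path to a target pose and follow it.
arm = MyArm()
path = arm.get_path(position=[0.5, 0.0, 0.5], euler=[0.0, 0.0, 0.0])
done = False
while not done:
    done = path.step()  # move the arm one increment along the planned path
    pr.step()           # advance the physics simulation

pr.stop()
pr.shutdown()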

4 Conclusion
V-REP has been used extensively over the years in more traditional robotics research and development, but it has been overlooked by the growing robot learning community. The new PyRep toolkit brings the power of V-REP to this community by providing a simple and flexible API, a significant run-time speedup, and the integration of an OpenGL 3.0+ renderer into V-REP. We are eager to see the tasks that will be solved by new and exciting robot learning methods.

References
[1] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, p. 436, 2015.

[2] A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo, et al., "Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching," International Conference on Robotics and Automation, 2018.

[3] D. Morrison, A. W. Tow, M. McTaggart, R. Smith, N. Kelly-Boxall, S. Wade-McCue, J. Erskine, R. Grinover, A. Gurman, T. Hunn, et al., "Cartman: The low-cost Cartesian manipulator that won the Amazon Robotics Challenge," International Conference on Robotics and Automation, 2018.

[4] K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, et al., "Using simulation and domain adaptation to improve efficiency of deep robotic grasping," International Conference on Robotics and Automation, 2018.

[5] S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell, and K. Bousmalis, "Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks," Conference on Computer Vision and Pattern Recognition, 2019.

[6] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, "Domain randomization for transferring deep neural networks from simulation to the real world," International Conference on Intelligent Robots and Systems, 2017.

[7] S. James, A. J. Davison, and E. Johns, "Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task," Conference on Robot Learning, 2017.

[8] J. Matas, S. James, and A. J. Davison, "Sim-to-real reinforcement learning for deformable object manipulation," Conference on Robot Learning, 2018.

[9] E. Coumans, "Bullet physics simulation," in ACM SIGGRAPH 2015 Courses, SIGGRAPH '15, (New York, NY, USA), ACM, 2015.

[10] E. Todorov, T. Erez, and Y. Tassa, "MuJoCo: A physics engine for model-based control," International Conference on Intelligent Robots and Systems, 2012.

[11] E. Rohmer, S. P. Singh, and M. Freese, "V-REP: A versatile and scalable robot simulation framework," International Conference on Intelligent Robots and Systems, 2013.
