RL-Glue (Reinforcement Learning Glue) Home

Update (Sept 2018)

This site has been down and out of service for a while, but I wanted it to stay alive and perhaps be a useful record for future generations.

-- Brian Tanner

Introduction

RL-Glue (Reinforcement Learning Glue) provides a standard interface that allows you to connect reinforcement learning agents, environments, and experiment programs together, even if they are written in different languages.

To use RL-Glue, first install the RL-Glue Core, and then the codec (listed in the left navigation bar) for each language you want to use; the codecs let agents, environments, and experiments written in different languages talk to the core. The core project and all of the codecs are cross-platform and can run on Unix, Linux, Mac, and Microsoft Windows (sometimes under Cygwin).
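To make the division of responsibilities concrete, here is a minimal conceptual sketch in Python. This is *not* the actual RL-Glue API (the real codecs communicate with the core over a socket protocol, and the real interface has more methods); all class and function names below are illustrative only.

```python
import random

# Illustrative sketch of the agent / environment / experiment separation
# that RL-Glue enforces. Names here are invented for illustration.

class RandomAgent:
    """Picks one of two actions at random; sees only observations and rewards."""
    def agent_start(self, observation):
        return random.choice([0, 1])

    def agent_step(self, reward, observation):
        return random.choice([0, 1])

    def agent_end(self, reward):
        pass  # episode is over; a learning agent would update here


class CountdownEnv:
    """A toy episodic environment: the episode terminates after 10 steps."""
    def env_start(self):
        self.steps_left = 10
        return 0  # initial observation

    def env_step(self, action):
        self.steps_left -= 1
        terminal = self.steps_left == 0
        # returns (reward, next observation, terminal flag)
        return (-1.0, self.steps_left, terminal)


def run_episode(agent, env):
    """The 'glue': shuttles observations, actions, and rewards between
    agent and environment; neither side sees the other's internals."""
    total_reward, steps = 0.0, 0
    observation = env.env_start()
    action = agent.agent_start(observation)
    while True:
        reward, observation, terminal = env.env_step(action)
        total_reward += reward
        steps += 1
        if terminal:
            agent.agent_end(reward)
            return total_reward, steps
        action = agent.agent_step(reward, observation)
```

Because the agent and environment only meet through the glue layer, either one can be swapped out, or written in a different language behind a codec, without touching the other.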

Cite RL-Glue

If you use RL-Glue, we would appreciate it very much if you would cite it in your academic publications.

Brian Tanner and Adam White. RL-Glue: Language-Independent Software for Reinforcement-Learning Experiments. Journal of Machine Learning Research, 10:2133--2136, 2009. 
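For convenience, the same reference in BibTeX form (the entry key is our choice; the fields are taken directly from the citation above):

```bibtex
@article{tanner2009rlglue,
  author  = {Brian Tanner and Adam White},
  title   = {{RL-Glue}: Language-Independent Software for
             Reinforcement-Learning Experiments},
  journal = {Journal of Machine Learning Research},
  volume  = {10},
  pages   = {2133--2136},
  year    = {2009}
}
```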

I'll make this paper available soon.

Where to get Help/How to Contribute

There are several ways that you can help improve RL-Glue for the future.  First, if you run into problems, don't get frustrated; please post your questions to the rl-glue mailing list/Google Group: http://groups.google.com/group/rl-glue

We're very happy to help however we can.

If you'd like to make a suggestion or report a bug, you can do so on the appropriate project's issues page.

Link To Us

glue.rl-community.org 

List Your RL-Glue Compatible Project

If you have published a paper, taught a class, written an agent/environment, or made your reinforcement learning framework compatible with RL-Glue, we want to hear about it!  We would very much like to include your project on either our RL-Glue In Practice or Related Projects page.