A Simulator for Underwater Human-Robot Interaction Scenarios
Abstract—Divers work in dangerous environments that place severe constraints on the types of activities that divers can accomplish. The development of an underwater robotic assistant may help a human diver accomplish tasks more efficiently and safely. However, before Underwater Human-Robot Interaction (UHRI) systems can be deployed in the field, the UHRI algorithms must be tested and validated in a simulator. In order to test future UHRI algorithms and behaviors, an underwater simulator based on the Modular OpenRobots Simulation Engine (MORSE) was developed. The Mission Oriented Operating Suite (MOOS) autonomy framework was integrated with the Robot Operating System (ROS) and MORSE to demonstrate a basic UHRI scenario: the VideoRay Pro 4 Remotely Operated Vehicle (ROV) trailing a human diver. The UHRI simulator makes use of other open source projects to enable the programmer to easily incorporate new 3D models into MORSE and to adjust the fidelity of sensor and motion models based on scenario goals.

Keywords—Underwater Human-Robot Interaction, Robotics Simulator, MORSE, 2D Imaging Sonar

I. INTRODUCTION

Both commercial and military divers operate in harsh, unforgiving environments. Commercial divers are often required to remain at depth for long periods of time and can only resurface after completing a complex sequence of decompression stages in order to avoid decompression sickness. Similarly, military divers are required to complete a number of complex tasks, including the fabrication of underwater structures and the repair of underwater cables. Navy Explosive Ordnance Disposal divers are also required to work with extremely hazardous material. In addition to monitoring their own life support systems, divers are faced with the challenge of having to conduct operations in near-zero visibility conditions due to water depth and turbidity [1]. Robotic assistants have shown great promise in space environments [2]. This research builds upon this robotic assistant capability within the unstructured underwater environment. However, there are a number of machine vision and autonomy challenges posed in this complex environment that must be addressed before an effective assistant is deployed.

An effective underwater robotic assistant will have to be able to process both optical imagery and high-frequency sonar imagery in order to accurately assess the environment and interact with the diver. In [3], two-dimensional computer vision techniques were applied to real sonar imagery in order to detect and track a human diver. The sonar data was acquired with a BlueView P900 2D imaging sonar attached to a VideoRay Pro 4, as shown in Figure 1.

Fig. 1: VideoRay Pro 4 with attached BlueView P900 in water tank.

Through the initial study, fifty sonar data sets of a human diver moving through the water were acquired. In order to decrease autonomy development time, a means to simulate a greater variety of situations and environments was required. Thus, the next step in the development of the underwater robotic assistant was the implementation of a 2D imaging sonar simulation. This simulation allows the developer to process simulated sonar and optical imagery and to test various autonomy algorithms that command the motion of the autonomous underwater vehicle (AUV) or remotely operated vehicle (ROV). In addition, a model of a simulated diver was developed in order to test various Underwater Human-Robot Interaction (UHRI) scenarios.

II. RELATED WORK

The modeling of high-frequency sonar systems has been a topic of research for numerous researchers in academia and industry. Early computer simulations of sonar often focused on synthetic side scan sonar models, and the sonar images could only be visualized after the simulation was complete. This was the case in [4], where the authors also discussed the difficulties of comparing simulated results to actual sonar data. The authors of [4] utilized ray tracing in order to generate their sonar images, which was also discussed in [5]. The process of ray tracing involves advancing a simulated ray out of the sonar transmitter until the ray comes in contact with an object or boundary. If the temperature of the water is assumed to be constant, the ray is an adequate approximation
of a sonar beam. This is because the speed of sound varies with water temperature; if the temperature changes along the path of the sonar beam, the beam will bend. The process of projecting rays is repeated for an array until a three-dimensional point cloud is constructed from the ray detection points. The authors of [5] noted that ray tracing can provide highly accurate sonar images; however, ray tracing is a computationally expensive process. More recently, researchers have been looking for alternatives to ray tracing and image rasterization, which has led to a process called tube tracing [6]. Tube tracing involves using multiple rays to form a polygon, or footprint, on a detected boundary. Tube tracing has the advantage of being less computationally expensive than ray tracing, and the tube volume is more characteristic of sonar beams than ideal rays are. The capability of tube tracing was demonstrated by Gueriot for forward-looking sonar simulations [7]. Much of the same theory that applied to side scan sonar simulations also applied to forward-looking sonar simulations, but his research also focused on synthetically generating realistic sea floor images with the use of micro-textures. While previous research alluded to the computational complexity of these approaches, none provided a specific metric by which to measure the computational complexity or evaluate the simulation time. This is mostly due to the fact that most of the simulations in the literature were not close to real-time.
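To make the ray tracing procedure concrete, the following minimal Python sketch (illustrative only, not drawn from [4]-[7]) casts a fan of rays from a sonar origin and marches each ray until it crosses an assumed flat seafloor; the fan geometry, step size, tilt, and bottom depth are assumptions, with the beam count, field-of-view, and maximum range borrowed from the P900 specifications in Table I.

    # Illustrative sonar ray tracing sketch: march each ray in a fan
    # until it crosses an assumed flat seafloor. All parameters are
    # hypothetical, not taken from the cited papers.
    import numpy as np

    NUM_BEAMS = 256          # beams in the fan (P900 spec, Table I)
    FOV = np.radians(45.0)   # total horizontal field-of-view
    MAX_RANGE = 100.0        # meters
    STEP = 0.1               # march step (m); coarser than the real
                             # one-inch range resolution, for brevity
    SEAFLOOR_Z = -10.0       # assumed flat bottom depth (m)

    def trace_fan(origin, heading):
        """Return an (N, 3) array of ray hit points (NaN = no hit)."""
        angles = heading + np.linspace(-FOV / 2, FOV / 2, NUM_BEAMS)
        hits = np.full((NUM_BEAMS, 3), np.nan)
        for i, a in enumerate(angles):
            d = np.array([np.cos(a), np.sin(a), -0.1])  # slight down-tilt
            d /= np.linalg.norm(d)
            for r in np.arange(STEP, MAX_RANGE, STEP):
                p = origin + r * d
                if p[2] <= SEAFLOOR_Z:   # ray crossed the boundary
                    hits[i] = p
                    break
        return hits

    points = trace_fan(origin=np.array([0.0, 0.0, -5.0]), heading=0.0)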
The literature on UHRI is not nearly as extensive as the literature on the modeling of sonar systems. Most of the previous work on UHRI has focused on enabling track-and-trail behaviors for an autonomous agent through the detection and tracking of a cooperative diver. For example, in [8], researchers used the frequency at which the diver's fins undulated as a detection feature. Sattar and Dudek also published research on the use of bar-codes and a language they developed, called RoboChat, that enabled a human diver to command an underwater robot through its optical camera [9]. Finally, in [10], the authors used an ROV's optical camera and gestures from a diver to implement basic human-robot interaction. One of the main issues with the previously discussed works is that they relied heavily upon an optical camera for interpreting the environment and interacting with the diver. Unfortunately, in the underwater environment, the turbidity of the water renders optical detection of limited use beyond even a few feet, even if the vehicle is equipped with a quality light source. An additional sensor that can measure range and bearing to a target, such as a 2D imaging sonar, would tremendously improve an ROV's ability to detect and track a cooperating human diver.

III. SIMULATION ENGINE REQUIREMENTS AND SELECTION

There are a number of 3D simulation engines for robotics research and development. Three of the most popular simulation environments are Player/Stage/Gazebo [11] [12], the Microsoft Robotics Developer Studio (MRDS) [13], and the Urban Search And Rescue Simulator (USARSim) [14]. There are countless other simulation engines that rely upon Bullet Physics [15] or the Open Dynamics Engine (ODE) [16] for the simulator's physics engine and Open Scene Graph [17] for the simulator's visualization engine. In the selection of an appropriate simulation engine, the main requirements are:

• The simulator must provide hooks to common robotics middleware architectures (e.g., ROS, MOOS, etc.)

• It should be simple to build or add new 3D objects

• The simulator should provide a means to script events external to the target autonomous agent

• There must be a means for adding dynamical models to the physics engine

After comparing various robotics simulators, such as Player/Stage/Gazebo, the Microsoft Robotics Developer Studio (MRDS), and the Urban Search and Rescue Simulator (USARSim), it was concluded that the Modular OpenRobots Simulation Engine (MORSE) provided the most modern and highly customizable robotics simulation environment [18].

A. The Modular OpenRobots Simulation Engine

MORSE was initially developed at LAAS-CNRS, a public French robotics laboratory, and heavily utilizes the popular open source game engine and 3D visualizer, Blender [18]. Researchers have already developed robotics simulations with ground, water surface, and aerial robots in MORSE that utilize simulated SICK lasers, video cameras, GPS units, IMUs, depth cameras, etc. Since MORSE sits on top of Blender, it uses the Bullet physics engine. Finally, MORSE has already been adapted to work with several popular middleware architectures, such as the Robot Operating System (ROS), the Mission Oriented Operating Suite (MOOS), Yet Another Robot Platform (YARP) [19], and Pocolibs [20].

The most stable and extensible method of communicating data between the MORSE simulator and an external process is with the use of ROS. Thus, a new ROS package entitled "videoray" was created to encompass all of the individual processes that were used to simulate the VideoRay ROV's dynamics model, controller, and associated planning engines. Previously, a MOOS/ROS Bridge program was developed in [21] that shuffled data between a MOOS Community and a ROS Core based on a simple XML configuration file. Figure 2 shows how the MOOS/ROS Bridge sits between MOOS and ROS in order to pass information concerning desired speed, heading, and depth from the MOOS side to the ROS side.

Likewise, the ROV's pose and the human's pose are transferred from the ROS side to the MOOS side, so that the IvPHelm engine can generate a desired trajectory. An important aspect of the interface between MORSE and ROS is the type of information that is transferred between the two components. Internally, Blender performs collision detection, which affects the final poses of obstacles in the simulator. Also, Blender is capable of integrating linear and angular velocities in order to determine an object's pose. Since the VideoRay dynamics model in ROS does not have knowledge of obstacles in the World Model, it is not capable of performing collision detection, which could affect the VideoRay's final pose. Instead, Blender receives the velocity information, integrates the model, performs collision detection, and then feeds the full state model back to the VideoRay dynamics model for the next iteration in the simulation.
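As a concrete illustration of these middleware hooks, the following is a minimal builder script in the style of the MORSE tutorials. It is a sketch rather than the configuration used in this work: it uses a stock MORSE ground robot and the tutorial "sandbox" environment instead of the custom VideoRay model and underwater scene, and the exact builder calls may differ between MORSE versions.

    # Hypothetical MORSE builder script illustrating ROS middleware
    # hooks. Robot, components, and environment are MORSE stock items,
    # not the custom VideoRay model described in this paper.
    from morse.builder import *

    robot = ATRV()            # stock platform standing in for the ROV

    motion = MotionVW()       # consumes linear/angular velocity commands
    robot.append(motion)

    pose = Pose()             # reports the Blender-integrated pose back out
    robot.append(pose)

    # Expose both components as ROS topics, mirroring the velocity-in,
    # pose-out exchange between the external dynamics model and Blender.
    motion.add_stream('ros')
    pose.add_stream('ros')

    env = Environment('sandbox')   # MORSE tutorial environment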
Fig. 3: Quad view of the VideoRay Pro 4 3D model.
V. FORWARD-LOOKING SONAR MODEL

While robotics simulators commonly support planar sonar sensors, there are no open source implementations of 2D imaging sonars, such as the forward-looking sonars provided by BlueView [27]. An example of one of the imaging sonars provided by BlueView is shown in Figure 9(a). The imaging sonar produces an image similar to the image shown in Figure 9(b), where the angle of the sonar sweep is dependent upon the type of sonar. The features near the bottom of the image are closest to the sonar, while the features near the top of the image are farthest from the sonar. The specifications listed in Table I were obtained from BlueView's website and were used for the development of the sonar model [27].

Fig. 9: BlueView 2D imaging sonar. (a) BlueView P900 series. (b) 2D imaging sonar output.

TABLE I: Forward-Looking Sonar Model Specifications.

Operating Frequency: 900 kHz
Update Rate: Up to 15 Hz
Field-of-View: 45°
Max Range: 100 m
Beam Width: 1° x 20°
Number of Beams: 256
Beam Spacing: 0.18°
Range Resolution: 1.0 in.

A. Forward-Looking Sonar Development

The first step in developing the ROV simulation was building the model for the forward-looking sonar. Fortunately, the SICK laser model in MORSE, shown in Figure 10(a), provided a starting point. The SICK model uses ray tracing from the origin of the model to a detected obstacle in order to build an array of detected points.

In order to publish the sonar data from MORSE to the ROS framework, a new ROS sensor message had to be created. This was accomplished by creating the auv_msgs ROS package with the sonar_cloud message defined by:

Listing 1: Sonar Cloud Definition

uint8 NumPlanes
uint8 NumPointsPerPlane
float32 angle_min
float32 angle_max
float32 angle_increment
float32 range_min
float32 range_max
float32 vert_height
float32 vert_resolution
float32[] x
float32[] y
float32[] z

The Python script that is executed by the sonar model in MORSE generates the auv_msgs::sonar_cloud message, which is then converted to a true point cloud by the custom sonar_point_cloud node. The sonar_point_cloud node uses the sonar_cloud message's NumPlanes and NumPointsPerPlane variables to unravel the one-dimensional array of rays into the three-dimensional point cloud. The sonar_point_cloud node is part of the auv_morse package developed during this research to manage all sonar image creation, sonar image processing, obstacle detection, and ROV motion model nodes. The Point Cloud Library (PCL) was developed by Willow Garage specifically for ROS nodes. An overview of the sonar data flow from MORSE to ROS is shown in Figure 11. The sonar_point_cloud node also publishes an OpenCV image, which is subscribed to by the sonar_process node. This node uses the OpenCV library to detect corners and edges that are translated into obstacles and features for obstacle detection.

Fig. 11: MORSE and ROS node interactions.

The creation of the sonar image takes place in several steps. First, the auv_msgs::sonar_cloud message is converted into a 3D point cloud. Then the Z-component of each point in the point cloud is dropped and the point is drawn in a 2D OpenCV matrix, which is essentially a projection of the 3D point cloud into 2D space. The image is scaled up in size by multiplying all of the X and Y components of the rays in the image by a factor of ten. This helps to separate features from each other and produces an image that is closer to the image output from a real sonar. In order to simulate basic sonar noise, a Gaussian blur is convolved across the image. Finally, the white lines designating the sonar image boundaries are drawn after the Gaussian blur so that they are not affected by the noise.
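The steps above can be summarized in a short sketch. The following Python fragment is illustrative rather than the auv_morse source: the message is treated as a dictionary of arrays, and the image size, drawing geometry, and blur kernel are assumptions.

    # Illustrative sketch of the sonar_cloud -> sonar image pipeline.
    # Not the auv_morse implementation; names and sizes are assumed.
    import numpy as np
    import cv2

    def sonar_cloud_to_image(msg, size=500):
        # Unravel the flat x/y/z arrays into a (planes, points, 3) cloud.
        cloud = np.stack([msg['x'], msg['y'], msg['z']], axis=-1)
        cloud = cloud.reshape(msg['NumPlanes'],
                              msg['NumPointsPerPlane'], 3)

        img = np.zeros((size, size, 3), dtype=np.uint8)
        for point in cloud.reshape(-1, 3):
            if not np.isfinite(point).all():
                continue                      # skip rays that hit nothing
            # Drop Z, scale X-Y by ten, apex at the top-left origin.
            u, v = int(point[0] * 10), int(point[1] * 10)
            if 0 <= u < size and 0 <= v < size:
                cv2.circle(img, (v, u), 2, (0, 0, 255), -1)

        # A Gaussian blur stands in for basic sonar noise.
        img = cv2.GaussianBlur(img, (5, 5), 0)

        # Boundary lines are drawn last so the blur does not soften them.
        cv2.line(img, (0, 0), (size - 1, size // 2), (255, 255, 255), 1)
        cv2.line(img, (0, 0), (size // 2, size - 1), (255, 255, 255), 1)
        return img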
B. Sonar Model Simulation Results

As shown in Figure 12, several rectangles were added to the environment to simulate pier support beams. A torus was also added to the environment to provide an interesting object on the sea floor for the ROV's sonar to detect. After the simulation was initialized and executed, the ROV began to move forward at a constant velocity, while using its forward-looking sonar to avoid nearby obstacles. The external ROS nodes processed the published sonar images and detected features in real-time. The ROS nodes were able to process the sonar images faster than MORSE could generate them, due to the computational complexity of ray tracing.

Fig. 12: The underwater MORSE environment.

The forward-looking sonar on the ROV used Blender's built-in ray tracing functions to detect obstacles in the environment upon simulation execution. The ray tracing out of the sonar sensor can be seen in Figure 13, where the sonar rays are colliding with the pier support beams and some of the surrounding bathymetry. An example of the sonar image output from the sonar_point_cloud node and the MORSE ray tracing simulator is shown in Figure 14. It should be noted that this sonar image is rotated compared to standard sonar images. The choice to put the apex of the sonar image at the top-left of the image was due to the fact that the OpenCV drawing functions use a coordinate system that places the origin at the top-left of the matrix or image. A final note about Figure 14 is the sharp red corner that is seen closest to the sonar image's apex. This red corner is a rectangular pier support beam and an excellent candidate for a feature that could be fed to a navigation algorithm. The brightly colored areas to the sides of the beam are the detected sea floor, but they do not provide strong features for a navigation algorithm. It is also important to note the acoustic shadow behind the pier support beam, which appears because the pier beam extends all the way to the water's surface.

Fig. 13: Forward-looking sonar detecting obstacles.

Fig. 14: Screen capture of forward-looking sonar image.

In other papers, such as [6], [28], and [5], the authors demonstrated the sensing of known 3D objects with their sonar simulators. Thus, a torus was placed in the environment so that the sonar image of the torus could be examined. The 3D model of the torus is shown in Figure 15 and the sonar image output from sensing the torus is shown in Figure 16. The sonar image of the torus is interesting because the human eye can almost detect a 3D object in the 2D image. The acoustic shadows that are created behind the inner circle of the torus and behind the torus give the image depth.

Fig. 15: 3D model of torus.

Fig. 16: Image of torus structure via 2D imaging sonar.

VI. HUMAN DIVER MODEL

A human diver model is required for any type of UHRI scenario. The requirements for a basic human diver model consist of a realistic 3D model that can be imaged by the VideoRay's simulated sonar sensor and the simulated optical camera. Also, the diver's motion model must capture the diver's ability to rotate in place, move in the forward direction, and move vertically within the water column.
B. Human Diver Dynamics Model

The motion model for the human diver is fairly simple. In order to extend air supply, a diver will move in smooth, constant-velocity trajectories. Since a diver can essentially rotate in place, the diver's motion is also assumed to be holonomic. Given this description, the diver's motion on the X-Y plane can be described by the motion model of a differential drive robot. The system equations for a differential drive robot are defined by:

\dot{x} = \frac{r}{2}(u_l + u_r)\cos(\theta)    (10)

\dot{y} = \frac{r}{2}(u_l + u_r)\sin(\theta)    (11)

\dot{\theta} = \frac{r}{L}(u_r - u_l)    (12)

where r is the radius of the wheel, L is the distance between the two differential wheels, x is the x-position, y is the y-position, θ is the heading, u_l is the angular velocity of the left wheel, and u_r is the angular velocity of the right wheel. The differential drive motion model was adapted to the diver model by assigning the values L = 0.5 and r = 1. Heading control for the diver model was accomplished by proportionally driving the u_l and u_r inputs. Similar to the VideoRay model, the motion in the Z-direction is decoupled from the motion in the X-Y plane, and the velocity in the Z-direction is given by:

\dot{z} = u_z    (13)
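As a worked illustration of equations (10)-(13), the following small sketch (not the simulator's code) Euler-integrates the diver model with r = 1 and L = 0.5 and drives u_l and u_r proportionally from a heading error; the gain and time step are illustrative assumptions.

    # Illustrative Euler integration of the diver motion model,
    # equations (10)-(13). Gain and step size are assumptions.
    import math

    R, L = 1.0, 0.5          # wheel radius and separation used for the diver
    DT = 0.1                 # integration step (s), illustrative
    K_HEADING = 0.5          # proportional heading gain, illustrative

    def step(state, desired_heading, forward, u_z):
        x, y, z, theta = state
        err = math.atan2(math.sin(desired_heading - theta),
                         math.cos(desired_heading - theta))  # wrapped error
        # Split the heading correction proportionally across the inputs.
        u_l = forward - K_HEADING * err
        u_r = forward + K_HEADING * err
        x += DT * (R / 2.0) * (u_l + u_r) * math.cos(theta)   # eq. (10)
        y += DT * (R / 2.0) * (u_l + u_r) * math.sin(theta)   # eq. (11)
        theta += DT * (R / L) * (u_r - u_l)                   # eq. (12)
        z += DT * u_z                                         # eq. (13)
        return (x, y, z, theta)

    state = (0.0, 0.0, -5.0, 0.0)
    for _ in range(100):
        state = step(state, desired_heading=math.pi / 4,
                     forward=0.5, u_z=0.0)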
The IvPHelm process within MOOS is the behavior fusion engine. The main advantage of a behavior fusion engine over a simple state machine is that it is able to resolve conflicting goals in an uncertain and changing World Model. For example, a behavior fusion engine can be used to avoid obstacles while transiting to a goal location by resolving the conflicting goals of the transit behavior and the avoid-obstacle behavior. In this scenario, the diver's IvPHelm configuration is fairly simple. It consists of the BHV_ConstantDepth and BHV_Loiter behaviors. In this case, the diver's BHV_AvoidCollision behavior is not enabled because it is assumed that the diver trusts the ROV to avoid the diver. The VideoRay ROV's IvPHelm configuration is slightly more complicated. The ROV also executes the BHV_ConstantDepth behavior, but that behavior receives diver contact reports from MORSE that update the desired depth. In order to avoid collisions with the diver, the ROV's BHV_AvoidCollision behavior is enabled, which is updated based on diver contact reports. Finally, the ROV's BHV_Trail behavior allows the ROV to follow the diver. The trail behavior is configured to follow the diver at a 180° angle with a trail range of 2 m.
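A MOOS-IvP behavior-file entry for the trail behavior described above might look like the following sketch; it is illustrative rather than the configuration used in this work, and the behavior name, contact label, and priority weight are assumptions.

    // Hypothetical IvPHelm behavior-file entry for trailing the diver.
    // Name, contact label, and pwt are assumptions; trail_angle and
    // trail_range follow the values given in the text.
    Behavior = BHV_Trail
    {
      name        = trail_diver
      pwt         = 100          // priority weight, illustrative
      contact     = diver        // contact label, assumed
      trail_angle = 180          // follow from directly behind
      trail_range = 2            // meters
    }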
B. Mission Launch Configuration

The overall mission consists of three main components: the MORSE simulator, the diver system, and the VideoRay system. Both the diver and VideoRay systems consist of similar sub-components that could conflict with each other if not configured correctly. Fortunately, ROS provides namespaces to separate nodes of similar names, and the MOOS framework provides network separation through MOOS communities and MOOS Bridges that can shuttle data between separate communities. Thus, through proper configuration, a single launch file was used to launch both the VideoRay and diver systems at the same time, both of which individually communicated with the MORSE simulator, as depicted in Figure 18.
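A launch file in this style might separate the two systems with ROS namespaces as in the following sketch; the node and namespace names are hypothetical, not taken from the videoray or auv_morse packages.

    <launch>
      <!-- Hypothetical roslaunch file: namespaces keep identically
           named diver and VideoRay nodes from colliding. -->
      <group ns="videoray">
        <node pkg="videoray" type="dynamics_node" name="dynamics" />
        <node pkg="videoray" type="controller_node" name="controller" />
      </group>
      <group ns="diver">
        <node pkg="diver" type="dynamics_node" name="dynamics" />
      </group>
    </launch>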
Upon execution of the mission, the diver began to perform loiter patterns based on the coordinates defined in the IvPHelm behavior file. Also, the VideoRay began to transit towards the diver based on contact reports it received from MORSE. In this mission example, the diver contact report was generated from ground-truth simulation data from MORSE, rather than from processing of the simulated sonar data, in order to test the behavioral algorithms rather than the machine vision algorithms. Based on previous work [3], it is assumed that the diver's position can be detected and tracked from forward-looking sonar data. The simulated X-Y trajectories of the diver and VideoRay are plotted in Figure 19. From the trajectories, it can be seen that the ROV transits towards the diver and then proceeds to track-and-trail the diver throughout the duration of the mission.

A third MOOS community was also configured to act as a base station for the VideoRay and diver. This MOOS community consisted of the pMarineViewer application and several MOOS bridges to gather position information from the diver and VideoRay. During mission execution, the positions and headings of the diver and VideoRay were displayed in pMarineViewer. An instance of the VideoRay trailing the diver in pMarineViewer is shown in Figure 20. Finally, a 3D screen capture of the track-and-trail mission in progress is shown in Figure 21.

Fig. 20: Visualization of ROV trailing diver in pMarineViewer. The diver is represented by the red UUV symbol and the VideoRay is represented by the yellow UUV symbol.

Fig. 21: The VideoRay trailing the diver in the MORSE simulator.

IX. DISCUSSION

The track-and-trail demonstration between the ROV and the diver was successful in testing the flow of data between MOOS, ROS, and MORSE. The obstacle avoidance and contact management, which were required in the track-and-trail example, could be used for various other and more complicated scenarios. For example, with just the trail and obstacle avoidance behaviors, vessels can move in formation or navigate busy shipping lanes.

The use of MORSE as the main simulation engine has some major benefits that are not necessarily obvious at first. Although not exploited in this demonstration, robots and human avatars within MORSE can pick up and move objects within the scene. Also, since MORSE is built upon the Blender Game Engine, events can be triggered when certain conditions within the environment are satisfied. The fact that Blender is capable of importing, manipulating, and building complete 3D models means that with just basic Blender experience, any user can quickly and easily add new robots, scenes, and objects to the MORSE simulation environment.

Another interesting aspect of this simulation environment is that the programmer can choose to either use Blender's internal integration engine for basic physics models, such as the differential drive robot, or integrate complex models with the use of an external C++ program, which was done for the VideoRay model. This allows for adjusting the fidelity of the simulation without major changes to the simulation environment. While most of the physics models in MORSE utilize the internal Blender physics engine, it will be useful for the community to develop a standard messaging scheme for using external physics models.

X. CONCLUSIONS AND FUTURE WORK

While the development of the UHRI simulator is still in progress, the system has proven to be successful for testing basic UHRI scenarios. With an active developer community, MORSE is currently under heavy development. As MORSE improves and becomes easier to use, so will the UHRI simulator.

The UHRI simulator requires further development before it can fully simulate human-robot interactions. First, the diver model must be adapted to allow the diver to grasp and "use" tools. Likewise, the ROV must be able to use tools, which will result in some sort of change in the environment. This would allow the simulation of a human-robot team for underwater construction or salvage. Second, controlling the human via script files would decrease development time. Currently, the human is controlled through IvPHelm, which is specialized for controlling the movement of ship vessels. It is important that the VideoRay is controlled by a full autonomy architecture because it has to be able to adapt to a changing environment, but the diver is a human that could be scripted. Finally, either IvPHelm will have to be augmented or a new autonomy architecture focused on human-robot interaction will have to be developed. Since IvPHelm is specialized for vessel movements, it does not have behaviors for grasping tools, using tools, or order-based planning. An autonomy architecture for human-robot interactions will require a different set of input behaviors than are currently provided by IvPHelm.

ACKNOWLEDGMENT

The authors would like to thank the Georgia Tech Research Institute (GTRI) for supporting this work. Also, the authors would like to thank VideoRay for providing the 3D model for the VideoRay Pro 4.
REFERENCES

[1] D. G. Gallagher, "Diver-based rapid response capability for maritime-port security operations," in OCEANS 2011, 2011, pp. 1–10.
[2] W. Bluethmann, R. Ambrose, M. Diftler, S. Askew, E. Huber, M. Goza, F. Rehnmark, C. Lovchik, and D. Magruder, "Robonaut: A robot designed to work with humans in space," Autonomous Robots, vol. 14, no. 2-3, pp. 179–197, 2003.
[3] K. DeMarco, M. E. West, and A. M. Howard, "Sonar-based detection and tracking of a diver for underwater human-robot interaction scenarios," in SMC 2013. IEEE, Oct. 2013, pp. 1–8 (accepted July 2013).
[4] J. M. Bell and L. M. Linnett, "Simulation and analysis of synthetic sidescan sonar images," in Radar, Sonar and Navigation, IEE Proceedings-, vol. 144, 1997, pp. 219–226.
[5] E. Coiras and J. Groen, "Simulation and 3D reconstruction of side-looking sonar images," Advances in Sonar Technology, pp. 1–14, 2009.
[6] D. Gueriot, C. Sintes, and B. Solaiman, "Sonar data simulation & performance assessment through tube ray tracing."
[7] D. Gueriot and C. Sintes, "Forward looking sonar data simulation through tube tracing," in OCEANS 2010 IEEE - Sydney, 2010, pp. 1–6.
[8] J. Sattar and G. Dudek, "Where is your dive buddy: tracking humans underwater using spatio-temporal features," in Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, 2007, pp. 3654–3659.
[9] ——, "A vision-based control and interaction framework for a legged underwater robot," in Computer and Robot Vision, 2009. CRV '09. Canadian Conference on, 2009, pp. 329–336.
[10] H. Buelow and A. Birk, "Gesture-recognition as basis for a human robot interface (HRI) on a AUV," in OCEANS 2011, 2011, pp. 1–9.
[11] B. Gerkey, R. T. Vaughan, and A. Howard, "The player/stage project: Tools for multi-robot and distributed sensor systems," in Proceedings of the 11th International Conference on Advanced Robotics, vol. 1, 2003, pp. 317–323.
[12] N. Koenig and A. Howard, "Design and use paradigms for gazebo, an open-source multi-robot simulator," in Intelligent Robots and Systems, 2004 (IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on, vol. 3, 2004, pp. 2149–2154.
[13] "Microsoft robotics developer studio." [Online]. Available: http://www.microsoft.com/robotics/
[14] S. Carpin, M. Lewis, J. Wang, S. Balakirsky, and C. Scrapper, "USARSim: a robot simulator for research and education," in Robotics and Automation, 2007 IEEE International Conference on, 2007, pp. 1400–1405.
[15] E. Coumans et al., "Bullet physics library," Open source: bulletphysics.org, 2006.
[16] R. Smith, Open Dynamics Engine (ODE), 2006.
[17] R. Osfield, D. Burns et al., Open Scene Graph, 2004.
[18] G. Echeverria, N. Lassabe, A. Degroote, and S. Lemaignan, "Modular open robots simulation engine: MORSE," in Robotics and Automation (ICRA), 2011 IEEE International Conference on, 2011, pp. 46–51.
[19] G. Metta, P. Fitzpatrick, and L. Natale, "YARP: yet another robot platform," International Journal on Advanced Robotics Systems, vol. 3, no. 1, pp. 43–48, 2006.
[20] "pocolibs - openrobots." [Online]. Available: http://www.openrobots.org/wiki/pocolibs
[21] K. DeMarco, M. E. West, and T. R. Collins, "An implementation of ROS on the Yellowfin autonomous underwater vehicle (AUV)," in OCEANS 2011, 2011, pp. 1–7.
[22] T. I. Fossen, Guidance and Control of Ocean Vehicles. West Sussex, England: John Wiley & Sons, 1994.
[23] W. Wang and C. M. Clark, "Modeling and simulation of the VideoRay Pro III underwater vehicle," in OCEANS 2006 - Asia Pacific, 2007, pp. 1–7.
[24] P. M. Newman, "MOOS - mission orientated operating suite," Massachusetts Institute of Technology, Tech. Rep. 2299-08, 2008.
[25] "Robot operating system (ROS)." [Online]. Available: http://www.ros.org/wiki/
[26] K. Ahnert and M. Mulansky, "Odeint - solving ordinary differential equations in C++," arXiv preprint arXiv:1110.3397, 2011.
[27] "High resolution MultiBeam imaging sonar - BlueView Technologies." [Online]. Available: http://www.blueview.com/products.html
[28] D. Gueriot, C. Sintes, and R. Garello, "Sonar data simulation based on tube tracing," in OCEANS 2007 - Europe, 2007, pp. 1–6.