Python SDK 2.0
Evolution Robotics and the Evolution Robotics logo are trademarks of Evolution Robotics, Inc. Microsoft DirectX, Microsoft Speech SDK 5.1, and Microsoft Windows are trademarks of Microsoft Corporation. Other product and brand names may be trademarks or registered trademarks of their respective owners.
Table of Contents
Chapter 1: Introduction
    Task Overview
    Running Tasks
    Customer Support
    Registration
    ER1 Community
    Accessories
Chapter 3
    Units
        setDefaultUnits
        getDefaultUnits
    About X, Y Coordinates
    Overview of Tasks
    Tasks in Detail
        DetectColor
        DetectMotion
        DetectObject
        DetectSound
        DoAtATime
        DoPeriodically
        DoWhen
        DriveStop
        GetImage
        GetPosition
        Move
        MoveRelative
        MoveTo
        PlaySoundFile
        RecognizeSpeech
        SendMail
        SetDriveVelocities
        Speak
        SpeakFromFile
        Stop
        Turn
        TurnRelative
        TurnTo
        Wait
        WaitUntil
        WatchInbox
Chapter 4: Tutorials
    Object Sonar
        Imports
Chapter 1
Introduction
The ER1 Software Development Kit (ER1 SDK) for Python was created around a powerful, flexible API that empowers you to create intricate robot behaviors that build on the capabilities of the ER1. (Python is especially suited for this application due to its versatility and ease of use.) The ER1 SDK is organized as a series of tasks. Each task describes a discrete action that the robot can perform, such as moving forward or playing a sound. These tasks can be used singly, strung together in sequence, or executed in parallel. Tasks communicate with each other using pieces of information called Events. If you want your robot to perform a task that is not provided with the API, you can write your own, and you can mix tasks you've written yourself with the provided ER1 tasks.
Task Overview
Tasks are divided into two distinct groups: Action Tasks and Monitoring Tasks. Both task types can produce Events. Action tasks perform a specific action and then exit. These actions are usually short in duration. An example of an action task is PlaySoundFile, which simply plays a sound file and then exits.
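Invoking an action task is just a function call. For instance, a minimal sketch (the sound file name here is hypothetical):

from ersp.task import util

# Play a short sound; the task returns once playback completes.
util.PlaySoundFile("chime.wav")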
Monitoring tasks check the status of their environment until they are terminated. An example of a monitoring task is DetectMotion. This task runs continuously, raising MotionDetection events whenever motion is detected. Other interested tasks can then use the information contained in the MotionDetection events. Events are pieces of information that are published by one task and are received by other interested tasks.
Running Tasks
There are two ways to run multiple tasks: sequentially and in parallel. Each is suited to different situations and different task types.

Sequential Tasks - Action tasks are the only type of task that can be run sequentially. In this scenario, the tasks are performed one after the other, with no two overlapping. An example of this would be using the MoveRelative task to move forward five feet and then the Speak task to say "Hello". Because monitoring tasks never self-terminate, they cannot be used in a sequential construct; in that case the parallel construct is needed.

Parallel Tasks - The parallel construct is used to run two or more tasks at the same time. You will need this construct every time you run a monitoring task. The parallel construct can be used to mix and match any of the different action or monitoring tasks in order to create an unlimited number of robot behaviors. (Note that a group of parallel tasks can be run in two different ways: they can wait for one task to terminate, or they can wait for all the tasks to terminate.)

The following is an example of using a parallel construct. Imagine that we want to make the robot listen for voice commands and respond to them. This goal requires at least two tasks running simultaneously: one to perform speech recognition, and another to respond. We can use the built-in RecognizeSpeech task to generate events when a voice command is heard. RecognizeSpeech is a monitoring task, which means that it will never complete on its own, but instead keeps running and generating speech events until it is explicitly terminated. Now we need to run another task in parallel with RecognizeSpeech to wait for the speech events. The built-in WaitUntil task waits for an event and runs some other task when it receives that event. So we could have WaitUntil wait for speech recognition events, then run the Speak task with an argument of "hello". WaitUntil would then terminate immediately after running Speak. When we add RecognizeSpeech and WaitUntil to the same parallel construct, they run simultaneously. As soon as a speech recognition event is raised, WaitUntil catches the event and runs Speak, causing the robot to respond. When WaitUntil terminates, the parallel construct terminates all remaining tasks in the construct, in this case RecognizeSpeech, and we are done.
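Here is a minimal sketch of the sequential example above, assuming default units of centimeters and omitting the optional velocity parameters:

from ersp.task import navigation
from ersp.task import speech

# Move forward five feet (about 152 cm), then speak.
navigation.MoveRelative(0, [152, 0])
speech.Speak("Hello")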
Customer Support
Evolution Robotics customer support is available by email at [email protected] or by filling out the form at www.evolution.com/support/. Customer Service representatives are available by calling toll free at 866-ROBO4ME or, for international customers, at 626-229-3198, Monday through Friday, 9 A.M. to 5 P.M. Pacific Time.
Registration
Remember to register your robot online at www.evolution.com. By registering you get the following benefits:

World class customer support
RobotMail email account
Membership in the ER1 SDK development community
Software updates and bug fixes
Information on new accessories and software
ER1 Community
The ER1 community is a place to share development ideas, lessons learned, shortcuts, pictures and applications. After you have completed registration, visit the ER1 SDK section of the ER1 community at www.evolution.com.
Accessories
Evolution Robotics is continually developing new accessories for your ER1. Each of these accessories is designed to expand the number and variety of applications you can perform with your ER1.

XBeams Expansion Pack - The XBeams Expansion Pack allows you to change the configuration of your robot. For example, you can create your own SnakeBot, add a backpack, or design something unique.

ER1 Gripper Arm - The ER1 Gripper is Evolution Robotics' latest addition to our ER1 accessories. The gripper arm allows your ER1 to grasp, carry, and release objects.

IR Sensors - The IR sensors enhance the ER1's obstacle avoidance capabilities. The IR Sensor Pack will be available for sale later this year.
Chapter 2
Software Installation
Resource Configuration
A resource describes a software or hardware interface that provides additional capabilities to the ER1 SDK. For the ER1 SDK for Python to load resources correctly, it must be told which resources to load and how. The resource configuration system provides this and other information about resources and the external environment.

Resource information is stored in XML format. The primary resource file is named resource-config.xml and is located in the config directory in the ER1 SDK installation directory. This file lists the resources present on the system, with parameters to the appropriate drivers, and provides additional information about the physical configuration of the system. The file is shipped with a default hardware setup for the ER1. In addition, a number of optional hardware resources are listed in the resource config, but are commented out (i.e., <!--comment-->). To enable one or more of these resources, simply remove the comments from around the appropriate section. Optional resources listed in the file include:

A second camera
ER1 Gripper Arm
ER1 IR sensors
ViaVoice speech recognition and text-to-speech (not included)
setUserConfig
This function sets the value of a user config setting.

Usage
ersp.task.setUserConfig(key, value, 1|0)
Parameters
key - This parameter defines the name of the config setting.
value - This specifies the value of the config setting.
1|0 - (Optional) A value of 1 saves the parameters to the user config file; a value of 0 does not. By default, any values set are not saved and are only used during that Python session.
Returns
Nothing.
Example
The following example shows how to set your mail server:

ersp.task.setUserConfig("MailHostname", "mail.yourmail.com")
getUserConfig
This function is used to see any user config values that have been set.
Usage
ersp.task.getUserConfig(key)
Parameters
key - This parameter specifies the name of the config setting.
Returns
This function returns all of the settings in the user-config.xml file.
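For example, a sketch that looks up the mail server value set earlier:

import ersp.task
print ersp.task.getUserConfig("MailHostname")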
Assigning a Camera
The Camera Chooser GUI is used to select a camera to use for obstacle avoidance and a camera to use for object recognition. It is started by clicking on camerachooser.exe in the installation directory. Click on the Swap Camera button to change the camera used for each function.
Chapter 3
The following sections go into detail about the concepts and tasks you will need to use. These sections are:

Using Tasks
Parallel Tasks
Defining New Tasks
TaskContext
Events
Units
About X, Y Coordinates
Using Tasks
Tasks are, at their heart, a lot like functions. To have the robot execute a task, you call it with some arguments. For example, to make the robot move forward 10 centimeters, you can use the following command in a Python script:
from ersp.task import navigation
navigation.MoveRelative(0, [10, 0, 5, 4])
Some tasks also return useful values. For example, to determine the location and orientation of the robot, you can use the GetPosition task:
from ersp.task import resource
(x, y, theta, timestamp) = resource.GetPosition()
print "I am at (%s, %s)." % (x, y)
Parallel Tasks
For a robot that lives in a complex, changing environment, sometimes simple sequences of actions aren't enough; sometimes you want your robot to do more than one thing at a time. To make the robot perform multiple tasks in parallel, you can use the Parallel class. For example, the following script causes your robot to rotate 360 degrees while simultaneously speaking to you:
from ersp import task
from ersp.task import speech
from ersp.task import navigation
import math
p = task.Parallel()
p.addTask(navigation.TurnRelative, [math.pi * 2])
p.addTask(speech.Speak, ["I'm starting to feel a little dizzy."])
p.waitForAllTasks()
You use addTask to specify each task you want the robot to perform, along with its arguments, and you can add as many tasks as you want. Remember that adding a task doesn't start it running. The waitForAllTasks method starts up all the tasks at the same time, then waits until all the tasks have completed.
TaskContext
There are a few things you have to know when writing a function that you plan to register as a task. First, the function must take a single argument. When the task is executed, that argument will be bound to the TaskContext object, which represents the task context. One of its most important functions is packaging up the task arguments. To extract the task arguments you can use the context's getArguments method, which returns a tuple containing the arguments. For example, here's a task that adds its two arguments together and returns the sum:
from ersp import task

def AddFun(context):
    # a will be set to the first argument and b to the second.
    (a, b) = context.getArguments()
    return a + b

Add = task.registerTask("Add", AddFun)
print Add(5, 4)
Events
Some tasks are designed to watch for particular events or situations, for example a loud noise, a recognizable object in front of the camera, or a spoken command. These tasks need a way to signal when the situation they have been monitoring for occurs, and the way they signal other tasks is by raising events. You can make your tasks wait for these events, and take appropriate actions.
Example Program
Here is an example that waits for the speech recognition system to hear a spoken command:
from ersp.task import speech
from ersp import task

def WaitForCommandFun(context):
    task = context.getTask()
    event = task.waitForEvent(speech.EVENT_SPEECH_RECOGNIZED)
    text = event.getProperty("text")
    print "You said '%s'." % (text,)

WaitForCommand = task.registerTask("WaitForCommand", WaitForCommandFun)

p = task.Parallel()
p.addTask(WaitForCommand)
p.addTask(speech.RecognizeSpeech, ["mygrammar.something.something"])
p.waitForFirstTask()
First, here's an overview of what this program does. It starts up two tasks in parallel: the custom WaitForCommand task, and RecognizeSpeech. RecognizeSpeech is a task that runs forever and raises events when the speech recognition system recognizes speech. WaitForCommand waits for one of the speech recognition events, prints the text that was recognized, then returns. Because we call Parallel's waitForFirstTask method, the program ends as soon as the WaitForCommand task returns. The first thing WaitForCommand needs to do is get a handle on the Task object that represents the currently executing instance of the WaitForCommand task. This is required in order to call the Task's waitForEvent method. The waitForEvent method waits for an event to occur and returns the event. You specify which events you're waiting for by passing the type as the first argument (EVENT_SPEECH_RECOGNIZED in the example). The method does not return until an event of that type is raised by another task. Once a speech recognition event is raised and waitForEvent returns, we can look up the value of the "text" property, which for recognition events contains the text that was recognized.
You can even write tasks that raise their own events. It's as easy as 1-2-3:
1. Create the event.
2. Set any properties.
3. Raise the event.
Here's an updated version of the Add task that, instead of returning the sum of its two arguments, raises an event that contains the sum. This is just a fictional example of how to use events:
from ersp import task

def AddFun(context):
    (a, b) = context.getArguments()
    event = task.Event("Sum")
    event["sum"] = a + b
    manager = context.getTaskManager()
    manager.raiseEvent(event)
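Continuing the sketch, the consuming side below is hypothetical but uses only the calls shown earlier (waitForEvent and getProperty): a task that blocks until a "Sum" event arrives and prints the result.

def PrintSumFun(context):
    t = context.getTask()
    # Block until some other task raises a "Sum" event.
    event = t.waitForEvent("Sum")
    print "The sum is %s." % (event.getProperty("sum"),)

PrintSum = task.registerTask("PrintSum", PrintSumFun)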
Units
The ER1 SDK uses a certain set of default units for its functions. These are centimeters for forward and backward motion, radians for rotation, and seconds for time. However, you may change these units to inches, feet, or meters for movement, degrees for rotation, or minutes for time. You use the setDefaultUnits function to set the units, or the getDefaultUnits function to find out how your units are set.

setDefaultUnits
Usage
import ersp.task
ersp.task.setDefaultUnits(UNIT_type, unit)
Parameters
UNIT_type - This parameter specifies the UNIT_type: UNIT_DISTANCE, UNIT_ANGLE, and/or UNIT_TIME.
unit - This parameter sets the units to be used for each UNIT_type. These are:
DISTANCE - This parameter can be set to cm (centimeters), ft (feet), m (meters), or in (inches).
ANGLE - The ANGLE parameter can be set to rad (radians) or deg (degrees).
TIME - This can be set to sec (seconds) or min (minutes).
Returns
Nothing.
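For example, a minimal sketch, assuming the unit names are passed as the strings listed above:

import ersp.task

# Use inches for distances and degrees for rotation.
ersp.task.setDefaultUnits(ersp.task.UNIT_DISTANCE, "in")
ersp.task.setDefaultUnits(ersp.task.UNIT_ANGLE, "deg")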
getDefaultUnits
Usage
import ersp.task
ersp.task.getDefaultUnits(UNIT_type)
Parameters
UNIT_type - This parameter specifies which setting to query: UNIT_DISTANCE, UNIT_ANGLE, and/or UNIT_TIME.
Returns
This function returns the distance, angle and/or time setting requested.
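For example, a sketch that checks the current distance units:

import ersp.task
print ersp.task.getDefaultUnits(ersp.task.UNIT_DISTANCE)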
About X, Y Coordinates
The ER1 SDK's coordinate system has the positive x-axis pointing forward and the positive y-axis pointing to the left. This is the coordinate system we all know and love (positive x-axis pointed to the right, positive y-axis pointed forward, +x, +y values in the forward-right quadrant), rotated 90 degrees counter-clockwise. The reason for the rotation is that we want the 0-degree mark (i.e., the positive x-axis) to point forward. This coordinate system, with the x-axis pointed forward, is the standard in robotics, and that is why the ER1 SDK uses it.
1. Robot starting position (0, 0), with the front of the robot pointing along the positive x-axis.
2. Robot path to the new relative position x=10, y=20.
3. Robot position after the first relative move of x=10, y=20. The axes are redrawn so that the robot is again at position (0, 0), with the front of the robot pointing along the positive x-axis.
4. Robot path to the new relative position x=10, y=-30.
5. Robot position after the relative move of x=10, y=-30. The robot faces the direction it would have been facing if it had traveled in a straight line to its new position.
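In code, the two relative moves illustrated above look like this (a sketch, assuming default units and omitting the optional velocity parameters):

from ersp.task import navigation

navigation.MoveRelative(0, [10, 20])   # forward 10, left 20
navigation.MoveRelative(0, [10, -30])  # forward 10, right 30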
Overview of Tasks
The following are the tasks with brief descriptions:

DetectColor - Begins watching for a specified color and raises events when it is seen.
DetectMotion - Returns the amount of motion between consecutive frames from the camera.
DetectObject - Begins watching for a specified object and raises events when it is seen.
DetectSound - Returns the perceived volume level from the microphone.
DoAtATime - Performs a task at a specific time.
DoPeriodically - Performs a task periodically, at a regular interval.
DoWhen - Waits for an event, and executes a task when the event is raised.
DriveStop - Sends a stop command to the drive system.
GetImage - Returns an image from the camera.
GetPosition - Returns the robot's position in the global coordinate system (x, y, theta).
Move - Moves at the given velocity, with obstacle avoidance.
MoveRelative - Moves a set of relative distances.
MoveTo - Moves to a set of absolute points.
PlaySoundFile - Plays a sound file.
RecognizeSpeech - Recognizes speech according to the specified grammar, raising events when speech is detected.
SendMail - A task that sends mail and waits for its completion.
SetDriveVelocities - Sets the drive system to move at a given velocity.
Speak - "Speaks" the given text using TTS, waiting until completed.
SpeakFromFile - "Speaks" the specified file using TTS, waiting until completed.
Stop - Stops the robot.
Turn - Moves at the given angular velocity, with obstacle avoidance.
TurnRelative - Turns through a relative angle.
TurnTo - Turns to an absolute heading.
Wait - Waits for the specified amount of time.
WaitUntil - Waits for an event.
WatchInbox - A task that waits for new mail and raises an event when it is available.
Tasks in Detail
DetectColor
When the color is recognized, an Evolution.ColorDetected event is raised. The event will have a "label" property corresponding to the label passed to this task, and it will have a "heading" property whose value is the angle (in radians) to the color. This task does not terminate.
Usage:
DetectColor(label, color)
Parameters:
label [string] - The label to use in the event.
color [DoubleArray] - The color to watch for. This should be a DoubleArray of size 5 containing values generated by the color training application.
Returns:
None.
Events Raised
ersp.task.vision.EVENT_COLOR_DETECTION ("Evolution.ColorDetected")
Raised when the specified color is detected.
label [string] - The label argument given to the task.
heading [double] - The angle to the color (0 is straight ahead).
DetectMotion
Returns the amount of motion between consecutive frames from the camera.

Usage:
DetectMotion(threshold, min_change_per_px, min_event_interval, resource)
Parameters:
threshold [integer, double] - Motion threshold above which a MotionDetected event is raised.
min_change_per_px [integer, double] - (Optional) Minimum delta per pixel across the entire visual field. The default value is 1.
min_event_interval [integer, time] - (Optional) Minimum interval, in seconds, between MotionDetected events. The default value is .1 seconds.
resource [integer, string] - (Optional) Camera resource to use.
Events Raised
ersp.task.vision.EVENT_MOTION_DETECTION_STARTED ("Evolution.MotionRecognitionStarted")
Raised once DetectMotion has finished initializing and has started watching for motion. Has no properties.
ersp.task.vision.EVENT_MOTION_DETECTED ("Evolution.MotionDetected")
Raised when motion over the specified threshold is detected.
diff_per_pixel [double] - The amount of motion.

DetectObject
When the object is recognized, an Evolution.ObjectDetected event is raised. The event will have a "label" property whose value is the name of the object recognized, and a "heading" property whose value is the angle (in radians) to the object.
DetectObject may have to load a large model set file before it can begin performing recognition; if you're interested in knowing exactly when loading completes and recognition starts, you can wait for an Evolution.ObjectRecognitionStarted event (which has no properties). This task does not terminate.
Usage
DetectObject(model_set, min_time)
Parameters:
model_set [string] - The filename of the model set to use.
min_time [double] - An optional minimum time between detection events for each model.
More parameters may be given to restrict which models within the model set will be recognized.
Returns:
Nothing.
Events Raised
ersp.task.vision.EVENT_OBJECT_RECOGNITION_STARTED ("Evolution.ObjectRecognitionStarted")
Raised when DetectObject has finished initializing and has started watching for objects. Has no properties.
ersp.task.vision.EVENT_OBJECT_DETECTION ("Evolution.ObjectDetected")
Raised when an object is detected.
label [string] - Identifies the object that was recognized.
heading [double] - The heading to the object (0 is straight ahead).
elevation [double] - The elevation of the object (0 is level).
distance [double] - The distance to the object.
period [double] - The minimum time between events.

DetectSound
Returns the perceived volume level from the microphone.
Usage
DetectSound(threshold, min_time_between_events, resource_id)
Parameters:
threshold [double] - Sound level threshold (0 ≤ threshold ≤ 1).
min_time_between_events [time] - (Optional) Event interval.
resource_id [string] - (Optional) Microphone resource to use.
Returns:
NULL; this task raises events
Events Raised
ersp.task.net.EVENT_SOUND_DETECTED ("Evolution.SoundDetected")
Raised when the audio level peaks above the threshold level. loudness [double] Loudness of the sound (between 0.0 and 1.0).
DoAtATime
Performs a task at a specific time.

Usage
DoAtATime(time, task)
Parameters:
time [double] - The time at which to execute the given task, given as the number of seconds since the "Epoch". You can use the Python time function to determine the current time and adjust it. For example:

import time
now = time.time()
# Add 10 seconds.
future = now + 10
DoAtATime(future, MyTask)
task [pointer] - Task to be run at the specified time. Any additional arguments are passed to the task when it is run.
Returns:
Nothing.

DoPeriodically
Performs a task periodically, at a regular interval.
Usage
DoPeriodically(time, task)
Parameters:
time [double] - Seconds between task invocations.
task [pointer] - Task to be run each interval. Any additional arguments are passed to the task when it is run.
Returns:
Return value of task.
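For example, a minimal sketch of a periodic ping (like the one in the Object Sonar tutorial), assuming DoPeriodically and PlaySoundFile are imported from ersp.task.util and that extra arguments are passed through to the task:

from ersp.task import util

# Play a ping sound every five seconds.
util.DoPeriodically(5.0, util.PlaySoundFile, "ping.wav")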
DoWhen
Waits for an event, and executes a task when the event is raised.
Usage
DoWhen(timeout, eventspec, task)
Parameters:
timeout [double] - (Optional) Time, in seconds, to wait for the event. Defaults to infinity.
eventspec - The event to wait for.
task - (Optional) If specified, a task to run if the event is raised. The event will be passed to the task as the first argument. Any additional arguments are passed to the task when it is run.
Returns:
Return value of task.
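For example, a hedged sketch that reacts to recognized speech. It assumes DoWhen lives in ersp.task.util, omits the optional timeout, and uses a hypothetical grammar file name:

from ersp import task
from ersp.task import speech, util

def OnSpeechFun(context):
    # DoWhen passes the triggering event as the first argument.
    (event,) = context.getArguments()
    print "Heard: %s" % (event["text"],)

OnSpeech = task.registerTask("OnSpeech", OnSpeechFun)

p = task.Parallel()
p.addTask(speech.RecognizeSpeech, ["commands.grammar"])
p.addTask(util.DoWhen, [speech.EVENT_SPEECH_RECOGNIZED, OnSpeech])
p.waitForFirstTask()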
DriveStop
Sends a stop command to the drive system.

Usage
DriveStop()
Parameters
None.
Returns
Nothing.

GetImage
Returns an image from the camera.
Usage
GetImage(resource_id)
Parameters:
resource_id [string] - (Optional) Camera resource to use.
Returns:
image [out, Image]; image data
GetPosition
Returns the robot's position in the global coordinate system (x, y, theta).

Usage

GetPosition(resource_id)

Parameters:

resource_id [string] - (Optional) Resource ID.

Returns

(x, y, theta, timestamp); the robot's x and y coordinates, its heading theta, and the timestamp of the measurement.
Move
Moves at the given velocity, with obstacle avoidance.

Usage
Move(v, w)
Parameters
v [double] - Linear velocity.
w [double] - Angular velocity.
Returns
Nothing.
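For example (a sketch using default units; the Object Sonar tutorial uses the same call to rotate slowly in place):

from ersp.task import navigation

# No linear velocity, small angular velocity: rotate slowly in place.
navigation.Move(0.0, 0.1)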
MoveRelative
Moves a set of relative distances. See About X, Y Coordinates for details about specifying x, y coordinates.

Usage
MoveRelative(timeout, [dx, dy, v, w, eps, final_heading])
Parameters:
timeout [double] - Seconds to wait for completion; zero means no timeout.
dx - (Required) Distance to move forward.
dy - (Required) Distance to the left.
v - (Optional) Velocity.
w - (Optional) Angular velocity.
eps - (Optional) How close to stop from object.
final_heading - (Optional) Direction the robot should face when finished.
Returns
Nothing.
MoveTo
Moves to a set of absolute points.

Usage
MoveTo(timeout, [dx, dy, v, w, eps, final_heading])
Parameters
timeout [double] - Seconds to wait for completion; zero means no timeout.
dx - (Required) Distance to move forward.
dy - (Required) Distance to the left.
v - (Optional) Velocity.
w - (Optional) Angular velocity.
eps - (Optional) How close to stop from object.
final_heading - (Optional) Direction the robot should face when finished.
Returns
Nothing.

PlaySoundFile
Plays a sound file.
Usage
PlaySoundFile(file, async)
Parameters:
file [string] - Path to the sound file.
async [boolean] - (Optional) Boolean indicating whether the sound should be played asynchronously, meaning that the task returns immediately but the sound keeps playing. True means async. The default is false (sync).
Returns:
Nothing.
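For example, a sketch of asynchronous playback (the file name is hypothetical):

from ersp.task import util, speech

# Start the sound, then speak over it while it is still playing.
util.PlaySoundFile("fanfare.wav", True)
speech.Speak("Ta da!")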
RecognizeSpeech
Recognizes speech according to the specified grammar, raising events when speech is detected. This task runs until terminated.
Usage
RecognizeSpeech(grammar, resource)
Parameters:
grammar [string] - Grammar file to use.
resource [string] - (Optional) ASR (recognizer) resource to use.
Returns
Nothing
Events Raised
ersp.task.speech.EVENT_SPEECH_RECOGNIZED ("Evolution.SpeechRecognized")
Raised when speech was recognized and accepted.
text [string] - The text of the speech.
confidence [double] - The confidence value.
ersp.task.speech.EVENT_SPEECH_REJECTED ("Evolution.SpeechRejected")
Raised when speech was possibly recognized, but rejected.
text [string] - The text of the speech.
confidence [double] - The confidence value.

SendMail
A task that sends mail and waits for its completion.
Usage
SendMail(to, from, subject, message, attachment)
Parameters:
to [string] - Email address.
from [string] - (Optional) Sender name.
subject [string] - (Optional) Subject.
message [string] - (Optional) Text message.
attachment [string] - (Optional) File to attach.
Returns
Returns immediately if an error occurs; otherwise returns when the mail is sent.
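For example (a sketch; the recipient address is hypothetical, and the module placement, here assumed to be ersp.task.net where the mail events live, is an assumption):

from ersp.task import net

net.SendMail("[email protected]", "er1", "Status report",
             "Motion was detected in the living room.")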
SetDriveVelocities
Sets the drive system to move at the given velocity.

Usage
SetDriveVelocities(v, w)
Parameters
v [double] - Linear velocity.
w [double] - Angular velocity.
Returns
Nothing.

Speak
"Speaks" the given text using TTS, waiting until completed.
Usage
Speak(text, voice, resource)
Parameters:
text [string] - Text to speak.
voice [string] - (Optional) Default voice to use (the text can override).
resource [string] - (Optional) TTS resource to use.
Returns
Nothing.
SpeakFromFile
"Speaks" the specified file using TTS, waiting until completed.
Usage
SpeakFromFile(file, voice, resource)
Parameters:
file [string]
voice [string]
File from which to read. (Optional) Default voice to use (the file can override).
resource [string] - (Optional) TTS resource to use.
Returns:
Nothing.

Stop
Stops the robot.
Usage
Stop(stop_type)
Parameters:
stop_type [double] - (Optional) This parameter specifies the type of stop. STOP_OFF, STOP_SMOOTH, and STOP_ABRUPT are allowed values.
Returns:
Nothing.

Turn
Moves at the given angular velocity, with obstacle avoidance.
Usage
Turn(w)
Parameters:
w [double] Angular velocity in default angular units.
Returns:
Nothing.

TurnRelative
Turns through a relative angle.
Usage
TurnRelative(angle, w)
Parameters:
angle [double] - The angle to turn through, in radians.
w [double] - (Optional) The desired angular velocity (defaults to π/6 rad/s).
Returns:
Nothing.
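For example (a sketch; the Parallel example earlier in this chapter invokes TurnRelative the same way):

import math
from ersp.task import navigation

# Turn a quarter turn to the left at the default angular velocity.
navigation.TurnRelative(math.pi / 2)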
TurnTo
Turns to an absolute heading.

Usage
TurnTo(timeout, eps, angle, angular_velocity)
Parameters:
timeout [double] - Seconds to wait for completion; zero means no timeout.
eps - How close to get to the target.
angle - Angle in radians.
angular_velocity - Velocity.
Returns
Nothing.

Wait
Waits for the specified amount of time.
Usage
Wait(duration)
Parameters:
duration [double] - Seconds to wait.
Returns:
Nothing.
WaitUntil
Waits for an event.

Usage
WaitUntil(eventspec, timeout)
Parameters:
eventspec [string] - Event to wait for.
timeout [double] - (Optional) Time to wait for the event, in seconds. Defaults to infinity.
Returns:
Nothing.
WatchInbox
A task that waits for new mail and raises an event when it is available.
Usage
WatchInbox(pop_server, user_name, password, from_filter, to_filter, ...)
Parameters:
pop_server [string] - (Optional) Incoming server name (will search for it in the user config file).
user_name [string] - (Optional) User name (will search for it in the user config file).
password [string] - (Optional) Password (will search for it in the user config file).
from_filter [string] - (Optional) Sender name filter.
to_filter [string] - Receiver name filter.
subject filter [string] - (Optional) Subject line filter.
body filter [string] - (Optional) Mail body filter.
check interval - (Optional) Time interval (seconds) to check for new mail. The default value is 30 seconds.
Returns:
Returns immediately if an error occurs; otherwise runs until the task is killed.
Events Raised
ersp.task.net.EVENT_MAIL_RECEIVED ("Evolution.MailReceived")
Raised when new mail is received. Depending on the filter criteria, the event contains one or all of the following properties: * to [string]; address of the receiver * from [string]; address of the sender * subject [string]; the subject line * body [string]; the contents of the mail
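For example, a hedged sketch pairing WatchInbox with a small handler task. It assumes WatchInbox lives in ersp.task.net (where its event is defined) and that the mail server settings are already in the user config file:

from ersp import task
from ersp.task import net

def OnMailFun(context):
    t = context.getTask()
    # Block until WatchInbox raises a mail event, then report it.
    event = t.waitForEvent(net.EVENT_MAIL_RECEIVED)
    print "New mail: %s" % (event.getProperty("subject"),)

OnMail = task.registerTask("OnMail", OnMailFun)

p = task.Parallel()
p.addTask(net.WatchInbox)
p.addTask(OnMail)
p.waitForFirstTask()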
Chapter 4
Tutorials
Object Sonar
This program makes the robot act as if it has sonar, and it demonstrates the techniques you will need to write your own programs. The program waits for a person to say the name of an object the robot can recognize ("box", "dollar bill", "logo"), then turns slowly in place, making a sonar ping noise every 5 seconds. If the object is spotted, it makes the ping noise more often. It doesn't stop until a person says "stop", and a person can say a new object to look for at any time. Say "quit" to quit the program completely. The program exercises object recognition, speech recognition, repeating at intervals, playing audio files, and turning at a given velocity.

Imports
# The first thing we need to do is import the ersp.task module so we # can use the API. import ersp.task
# We know we want to be able to use some movement tasks, the speech
# recognition task, and some general utility tasks, so we have to
# import those modules. We use the 'from module import submodule'
# syntax so we can refer to the tasks like 'speech.DetectSpeech'.
from ersp.task import navigation
from ersp.task import speech
from ersp.task import vision
from ersp.task import util
# We'll also be using the sleep function from the standard Python time
# module, and pi from the standard math module.
import time
import math
## -- Constants

# Here we define the list of objects that the robot can recognize.
KNOWN_OBJECTS = {"cereal": "cranberries.png"}
# This is the audio file containing the sonar ping sound.
PING_SOUND_FILE = "ping.wav"
# The speed at which the robot should rotate, in radians/s.
ANGULAR_VELOCITY = 10.0 * (math.pi / 180.0)
## -- Custom Tasks and Utility Functions

# Here we define a function for a task that will be in charge, and
# figure out what to do based on what commands we hear.
def MainSonarLoopFun(context):
    timeToQuit = 0
    task = context.getTask()
    # Enable the speech recognition event so that we don't accidentally
    # miss one while we're in the body of the loop.
    task.enableEvent(speech.EVENT_SPEECH_RECOGNIZED)
    # Run in a loop until either this task is terminated from outside,
    # or we decide it's time to quit (a person gave us the 'quit'
    # command).
    while not (context.terminationRequested() or timeToQuit):
        # Wait for a speech recognition event from the DetectSpeech task.
        event = task.waitForEvent(speech.EVENT_SPEECH_RECOGNIZED)
        # Check to make sure we really got an event (waitForEvent will
        # unblock and return None when this task is terminated).
        if event != None:
            # Grab the value of the event property that contains the
            # actual text that was recognized.
            command = event["text"]
            # Check whether we got the pause command, and if so then
            # pause the sonar.
            if command == "pause":
                print "Pausing."
                stopSonar()
            # Check whether we got the quit command, and if we did, then
            # stop the sonar and set a flag that lets us know we can end
            # this while loop.
            elif command == "quit" or command == "exit":
                print "Exiting."
                stopSonar()
                timeToQuit = 1
            # Check whether the word we heard was the name of an object
            # we know about. If it was, then look for that object.
            elif command in KNOWN_OBJECTS:
                scanMessage = "Scanning for %s." % (command,)
                print scanMessage
                speech.Speak(scanMessage)
                runSonar(KNOWN_OBJECTS[command])
            # Whatever it was the person said, it wasn't something we
            # were expecting or know how to handle. Let the person know.
            else:
                print "Sorry, I don't know what you mean."
                speech.Speak("Sorry, I don't know what you mean.")
    # Don't forget to disable the event.
    task.disableEvent(speech.EVENT_SPEECH_RECOGNIZED)
# Now that we've defined the MainSonarLoop function, the way we use it
# as a task is to call registerTask (and we assign the resulting task
# to MainSonarLoop).
MainSonarLoop = ersp.task.registerTask("MainSonarLoop", MainSonarLoopFun)
# This task just starts up the DetectObject task and the
# RecognitionHandler task.
def SonarFun(context):
    # objectName should be a string containing the name of the object to
    # look for.
    [objectName] = context.getArguments()
    # We want to run the two tasks in parallel. Note here that we pass
    # our context in to create this Parallel object. This is always good
    # practice if you're in a task that has been passed its own context
    # object. For this task it is required so that task termination
    # happens properly.
    p = ersp.task.Parallel(context)
    # Tell DetectObject to look for the object specified by objectName.
    detector = p.addTask(vision.DetectObject, ("base.dla", 2.5, objectName))
    # Add the RecognitionHandler task (which doesn't need any arguments).
    handler = p.addTask(RecognitionHandler)
    # Neither of the tasks we just added will ever end, so this really
    # just waits until this task is explicitly terminated.
    p.waitForAllTasks()
    # When we do finally get terminated, we want to terminate these
    # tasks as well.
    detector.terminate()
    handler.terminate()
    return None

Sonar = ersp.task.registerTask("Sonar", SonarFun)
# This task does the actual pinging.
def RecognitionHandlerFun(context):
    task = context.getTask()
    while not context.terminationRequested():
        # Wait for an object recognition event, then ping. Or if 4
        # seconds go by without an event, just ping anyway.
        event = task.waitForEvent(vision.EVENT_OBJECT_DETECTION, 4000.0)
        if event != None:
            print event
        util.PlaySoundFile(PING_SOUND_FILE)

RecognitionHandler = ersp.task.registerTask("RecognitionHandler",
                                            RecognitionHandlerFun)
# This function starts the Sonar task.
def runSonar(objectName):
    global sonarTask
    print "Starting sonar. It can take about 10-15 " + \
          "seconds before the sonar is ready to detect things."
    # If there's already a Sonar task running, stop it.
    if sonarTask != None:
        stopSonar()
    else:
        util.PlaySoundFile(PING_SOUND_FILE)
    # Start a new Sonar task in the background.
    sonarTask = ersp.task.getTaskManager().installTask(Sonar, [objectName])
    navigation.Move(0.0, 0.1)

# This function stops the Sonar task.
def stopSonar():
    global sonarTask
    navigation.Stop()
    if sonarTask != None:
        print "Terminating %s." % (sonarTask,)
        # Terminate the task, then sleep for .2 seconds to give the task
        # a chance to exit cleanly.
        sonarTask.terminate()
        time.sleep(.200)
    else:
        print "No sonarTask to terminate."
# This task fakes speech recognition: it reads commands typed at the
# keyboard and raises them as speech recognition events.
def FakeASRFun(context):
    manager = context.getTaskManager()
    while not context.terminationRequested():
        print "enter a command:"
        line = raw_input()
        if line != "":
            event = ersp.task.Event(speech.EVENT_SPEECH_RECOGNIZED)
            event["text"] = line
            manager.raiseEvent(event)

FakeASR = ersp.task.registerTask("FakeASR", FakeASRFun)
# No Sonar task is running yet; runSonar and stopSonar check this.
sonarTask = None

log = ersp.task.Log("Evolution.Tasks.Vision")
log.setPriority(ersp.task.LOG_DEBUG)

p = ersp.task.Parallel()
p.addTask(MainSonarLoop)
# If you want to use real speech recognition, uncomment this line and
# comment out the line that adds FakeASR.
#p.addTask(speech.RecognizeSpeech, ["sonar.grammar"])
p.addTask(FakeASR)
# We use try and finally here to make sure that no matter how this
# program ends, whether you say "quit" or you press Control-Break, the
# robot stops moving.
try:
    p.waitForFirstTask()
finally:
    navigation.Stop()
    print "all done."