2005-12
Yang, Chuan-Hao
Monterey California. Naval Postgraduate School
http://hdl.handle.net/10945/1798
THESIS
by
Chuan-Hao Yang
December 2005
11. SUPPLEMENTARY NOTES The views expressed in this thesis are those of the author and do not reflect the official
policy or position of the Department of Defense or the U.S. Government.
12a. DISTRIBUTION / AVAILABILITY STATEMENT 12b. DISTRIBUTION CODE
Approved for public release, distribution is unlimited.
13. ABSTRACT (maximum 200 words)
It is desirable in many applications that a mobile robot be able to track and follow a person. There have been various efforts in the literature to create person-tracking robots. However, current person-tracking robots are not capable of operating in unstructured environments.
The problem of creating a person-tracking mobile robot has been studied by many researchers in the literature. There are two main issues associated with this problem. The first issue is to equip a robot with proper sensory devices so that it is able to
identify and locate the target person in a crowd in real time. Various approaches have been investigated, including vision,
infrared sensors, ultrasonic sensors, and other approaches. The second issue is to control and navigate the robot so that it
follows the target person within a certain distance. This seems to be simple, but in reality it is a fairly difficult task. For
example, if the target person is in a busy corridor with many people standing and walking, the robot has to constantly avoid
other people while following the target. There is still no reported evidence that a person-tracking robot has been implemented
that is able to track a person in arbitrary environmental conditions.
In this research, by using an innovative RF/ultrasonic sensor system, an intelligent person-tracking mobile robot is implemented that is able to follow the target person in unstructured, practical environments. The main focus of the thesis is the development and implementation of control algorithms.
Approved for public release, distribution is unlimited
Chuan-Hao Yang
Captain, Taiwan Army
B.S., National Defense University, 2001
from the
Marcello Romano
Second Reader
Jeffrey B. Knorr
Chairman, Department of Electrical and Computer Engineering
ABSTRACT
It is desirable in many applications for a mobile robot to track and follow a person.
There have been various efforts in the literature to create person-tracking robots. However,
current person-tracking robots are not capable of operating in unstructured environments.
The problem of creating a person-tracking mobile robot has been studied by many researchers in the literature. There are two main issues associated with this problem. The first
issue is to equip a robot with proper sensory devices so that it is able to identify and
locate the target person in a crowd in real time. Various approaches have been
investigated, including vision, infrared sensors, ultrasonic sensors, and other approaches.
The second issue is to control and navigate the robot so that it follows the target person
within a certain distance. This seems simple, but in reality it is a fairly difficult task. For
example, if the target person is in a busy corridor with many people standing and walking,
the robot has to constantly avoid other people while following the target. There is still no
reported evidence that a person-tracking robot has been implemented that is able to track
a person in arbitrary environmental conditions.
TABLE OF CONTENTS
I. INTRODUCTION........................................................................................................1
A. PERSON-TRACKING MOBILE ROBOT ...................................................1
B. MOTIVATION ................................................................................................2
C. SEVERAL APPROACHES TO THE PERSON-TRACKING ROBOT....2
1. Vision-Based Approach .......................................................................2
2. Non-vision Based Approach................................................................3
3. Transmitter-and-receiver Based Approach ......................................3
a. Person Tracking Using Blinking LED Devices .......................3
b. Person Tracking Using an Ultrasonic Positioning System .....4
4. Intelligent Space Approach.................................................................4
5. Combined/Multi-Modal Approach.....................................................4
D. THESIS OBJECTIVES...................................................................................6
E. THESIS OUTLINE..........................................................................................6
II. SYSTEM ARCHITECTURE .....................................................................................7
A. ULTRASONIC POSITIONING SYSTEM ...................................................7
B. ROBOT SYSTEM..........................................................................................10
1. Bumper Sensors .................................................................................11
2. Sonar Sensors .....................................................................................11
3. Motor/Motion Sensors .......................................................................13
C. INTERACTION BETWEEN ULTRASONIC POSITIONING
SYSTEM AND SONAR SENSORS .............................................................13
D. SUMMARY ....................................................................................................15
III. ALGORITHM............................................................................................................17
A. POTENTIAL FIELD ALGORITHM ..........................................................17
1. Robot Coordinate...............................................................................17
2. Attractive Forces Derived From the Readings of the Ultrasonic
Sensor ..................................................................................................17
3. Attractive Forces Derived From the Readings of Sonar Sensors ..18
4. Potential Field Motion Planning From Combined Forces .............20
5. Resultant Forces to Translation and Steering Velocities
Conversion ..........................................................................................21
B. ALGORITHM USED AS THE TARGET IS IN A CERTAIN RANGE..22
C. OBSTACLE AVOIDANCE ALGORITHM ...............................................23
1. Motion Planning for Obstacles on the Right Forward of the
Robot ...................................................................................................24
2. Motion Planning for Obstacles on the Left Forward of the
Robot ...................................................................................................25
3. Motion Planning for Obstacles in Front of the Robot ....................26
D. OVERALL ALGORITHM OF A PERSON-TRACKING MOBILE
ROBOT ...........................................................................................................28
E. SUMMARY ....................................................................................................29
IV. EXPERIMENTS AND RESULTS ...........................................................................31
A. PERSON-TRACKING IMPLEMENTATION WITHOUT
OBSTACLES..................................................................................................31
B. PERSON-TRACKING IMPLEMENTATION WITH AN
OBSTACLE BETWEEN THE ROBOT AND THE TARGET
PERSON .........................................................................................................32
C. PERSON-TRACKING IMPLEMENTATION WHEN THE TARGET
PERSON MAKES A TURN AT A CORNER.............................................33
D. PERSON-TRACKING IMPLEMENTATION IN AN
UNSTRUCTURED ENVIRONMENT ........................................................35
E. SUMMARY ....................................................................................................36
V. CONCLUSION AND FUTURE WORK .................................................................39
A. CONCLUSION ..............................................................................................39
B. FUTURE WORK ...........................................................................................40
APPENDIX.............................................................................................................................41
LIST OF REFERENCES ......................................................................................................49
INITIAL DISTRIBUTION LIST .........................................................................................53
LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGMENTS
EXECUTIVE SUMMARY
The problem of creating a person-tracking mobile robot has been studied by many researchers in the literature. There are two main issues associated with this problem. The first
issue is to equip a robot with proper sensory devices so that it is able to identify and
locate the target person in a crowd in real time. Various approaches have been
investigated, including vision, infrared sensors, ultrasonic sensors, and other approaches.
The second issue is to control and navigate the robot so that it follows the target person
within a certain distance. This seems simple, but in reality it is a fairly difficult task. For
example, if the target person is in a busy corridor with many people standing and walking,
the robot has to constantly avoid other people while following the target person. There is
still no reported evidence that a person-tracking robot has been implemented that is able
to track a person in arbitrary environmental conditions.
The mobile robot has a sonar system that includes 16 sonar sensors arranged in a ring. The function of this system is to measure the distance and the direction of obstacles. The obstacle information from the sonar sensors forms another part of the inputs. Using this information, the robot is able to avoid the obstacles encountered during the person-tracking task.
The overall algorithm used in this thesis includes two major sub-algorithms, the potential field algorithm and the obstacle avoidance algorithm. By regarding the readings from the RF/ultrasonic positioning system and the sonar system as attractive forces, the potential field algorithm computes the resultant force from those attractive forces and converts it into a translation velocity and a steering velocity, which control the motion of the robot. The obstacle avoidance algorithm is executed when the robot is too close to the obstacles.
Four main experiments are conducted to validate the person-tracking ability of the mobile robot using an RF/ultrasonic positioning system. The first experiment verifies the normal function of the mobile robot in a direct person-tracking condition without any obstacle between the target person and the robot. The second experiment introduces an obstacle during the person-tracking task and examines the ability of the robot to implement obstacle avoidance and person-tracking simultaneously. The third experiment examines the ability of the robot in a situation where it must maintain tracking of the target person during a turn at a corner. The fourth experiment examines the behavior of the robot in a general, unstructured environment.
I. INTRODUCTION
B. MOTIVATION
It is desirable in many applications that a mobile robot be able to track and follow a person. There have been various efforts in the literature to create person-tracking robots. However, current person-tracking mobile robots are not capable of operating in unstructured environments. Because several main approaches, such as vision and infrared sensors, are not fully reliable in all situations, it is necessary to explore other methods.
The main objective of this research is to investigate the feasibility of developing a
person-tracking robot system using an RF/ultrasonic positioning system.
C. SEVERAL APPROACHES TO THE PERSON-TRACKING ROBOT
1. Vision-Based Approach
This approach uses a camera to capture images of the target person, updated in real time. The method assumes that detection of the target person is successful, although this may always be a challenge. After detecting the target person in an image, the control information, including directions and distances, is computed from the variations of the target's position and size in the image. The robot should then be able to move toward the target person based on this information. Numerous studies [2-15] have adopted and adapted this approach to develop person-following mobile robots. However, several uncertainties can still be significant enough to affect the efficiency of target detection. One factor is the lighting condition: identifying the target person in the image is more difficult when the color or brightness of the target does not stand out from the background or other obstacles. Another factor is the simultaneous motion of both person and robot: the vision sensor can easily lose the target person when the target moves too fast. Some researchers have used active cameras, which reduces the problem of losing the target person but increases the difficulty of the algorithm design. This approach is also not well suited to obstacle avoidance, because it is difficult for the robot to tell the difference between the target person and other obstacles. The situation only worsens when several persons are moving around in the same environment. The robot is likely to lose the target person if the environment is unstructured.
2. Non-vision Based Approach
A non-vision based approach uses rangefinders of several kinds, such as sonar sensors and infrared sensors. Each rangefinder on the robot can determine the distance between itself and the nearest object. Because the robot is not able to distinguish between an arbitrary object and the target person, this approach can only be adopted to implement either obstacle avoidance, by regarding all objects as obstacles, or person-tracking, when the target person is always the nearest object to the robot with no obstacle in between. Using a Nomad 200 mobile robot equipped with 16 sonar rangefinders, the distance to the nearest object can be computed by the sonar unit that reports the smallest range, and the approximate direction can be determined from the relative location of that sonar unit. The robot can be efficiently programmed to implement obstacle avoidance. However, additionally implementing the person-tracking task, even in a fixed, controlled environment, remains difficult and impractical.
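The nearest-object computation described above can be sketched as follows; this is an illustrative Python sketch, not the actual Nomad 200 software, and it assumes the 16 sonar units are evenly spaced 22.5° apart, numbered counter-clockwise from the front:

```python
def nearest_object(sonar_readings, sector_deg=22.5):
    """Given 16 sonar range readings (inches), indexed counter-clockwise
    from the front of the robot, return the smallest reading and the
    approximate direction of that object in degrees counter-clockwise
    from the front."""
    i = min(range(len(sonar_readings)), key=lambda k: sonar_readings[k])
    return sonar_readings[i], i * sector_deg
```

With all other sectors clear, an object 30 inches away seen by sonar unit 14 is reported at 315° counter-clockwise, i.e., 45° to the robot's right.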
3. Transmitter-and-receiver Based Approach
Using a transmitter-and-receiver approach, transmitters located on the target person emit signals, such as ultrasonic waves or blinking LED light, and receivers located on the robot receive those signals. After computing the distance and the angle of the target person from those signals, the robot knows where to move in order to turn itself toward the target person and decrease the distance between them. In [16], two transmitter-and-receiver based approaches are discussed.
a. Person Tracking Using Blinking LED Devices
This approach equips the target person with two infrared LED devices at a fixed distance from each other and uses a camera on the robot to detect the two devices. It is similar to the vision-based approach; the main difference is that the signals from the infrared LED devices are more robust and are not affected by disturbances in the environment, as long as they are not blocked by any obstacle. The camera is able to rotate to keep the target person in the middle of the image. By computing the distance between the two LED lights and their deviation from the central vertical axis in the image, the robot can obtain the range and the bearing of the target person, respectively.
b. Person Tracking Using an Ultrasonic Positioning System
This approach equips the target person with ultrasonic transmitters and the robot with receivers. By computing the time interval between transmitting and receiving the ultrasonic signal, the distance between the target person and the robot can be determined. The angle can also be computed from the time delay between several receivers.
These approaches are straightforward for person-tracking, but they are not suitable when there are obstacles between the target person and the robot, because the detection of obstacles becomes a problem. Without an additional mechanism, the robot is not able to implement obstacle avoidance.
4. Intelligent Space Approach
The intelligent space approach [17,18] utilizes several sensors, visual or non-visual, located in the environment to detect both the robot and the target person. The positions of the robot and the target person are therefore expressed in a global coordinate frame and determined by the sensors in the intelligent space. From the relative positions of the robot and the target person, the robot motion is planned by the intelligent space and controlled through the network. However, the desired approach in this research is to design an autonomous robot that performs its tasks in unstructured environments, so this approach is unsuitable here even though it may function well.
5. Combined/Multi-Modal Approach
A combined/multi-modal approach [19] combines several approaches and is able to gather the advantages of each one. This is also the key subject of this thesis. By using an ultrasonic positioning system along with the sonar rangefinders, this research combines the transmitter-and-receiver based approach with the non-vision based approach. A suitable algorithm is also designed to adapt the robot to the various situations that may occur during person-tracking. The robot can then accomplish person-tracking tasks, which include person-following and obstacle avoidance, in unstructured environments. Figure 1 shows the person-tracking mobile robot using an ultrasonic positioning system. Figure 2 shows the specially made vest equipped with ultrasonic transmitters.
Figure 1. Person-Tracking Mobile Robot using an Ultrasonic Positioning System.
D. THESIS OBJECTIVES
The main idea of this thesis is to investigate the feasibility of developing a person-tracking robot system using an ultrasonic positioning system in addition to the 16 sonar sensors equipped on the Nomad 200 mobile robot. Furthermore, its reliability as a means of creating a person-tracking mobile robot will be demonstrated by completing the following steps.
1. Create the interface between the ultrasonic positioning system and the robot
system in the operating program.
5. Complete the task of person-tracking when the target person makes a turn at
a corner.
II. SYSTEM ARCHITECTURE
A. ULTRASONIC POSITIONING SYSTEM
Besides the ultrasonic transmitters and receivers, the ultrasonic positioning system also includes an RF (radio frequency) transmitter and receiver. The RF transmitter and receiver are mounted near the ultrasonic receivers and transmitters, respectively, as shown in Figure 3. The RF transmitter sends an electromagnetic signal to the RF receiver to request the ultrasonic signals from the ultrasonic transmitters. As soon as the RF receiver gets the RF signal, the ultrasonic transmitters transmit ultrasonic signals. Meanwhile, the ultrasonic receivers start to wait for the ultrasonic signals. Because the RF signal travels at the speed of light, the time spent for the RF signal to travel from the RF transmitter to the RF receiver is relatively short and can be neglected.
D1 = v t1 (2.1)
D2 = v t2 (2.2)
where v is the speed of sound in air and t1 and t2 are the travel times of the ultrasonic signal to the two ultrasonic receivers. As a result, D1, D2, and the receiver separation, d, are regarded as known. From the side-angle relations, the parameters in Figure 3 can be computed as follows:
α = cos⁻¹( (d² + D2² − D1²) / (2 d D2) ) (2.4)
[Figure 3. Geometry of the ultrasonic positioning system: ultrasonic receivers A and B are separated by the distance d, with the RF transmitter mounted between them; D1 and D2 are the measured distances from the target to the two receivers, D* is the target range measured from the midpoint of the baseline, and α, β, and γ are the angles defined on the resulting triangle.]
D* = √( D2² + (d/2)² − 2 D2 (d/2) cos α ) (2.5)
β = cos⁻¹( ( (D*)² + (d/2)² − D2² ) / (2 (d/2) D*) ) (2.6)
The ultrasonic positioning system is configured to measure the angle of the target between −90° and 90°. The value is positive when the target is on the left-hand side of the central vertical axis, and negative on the right-hand side. Therefore, the angle, γ, is obtained by
γ = 90° − β. (2.7)
By combining Equation 2.4 with Equation 2.5, the target distance, D*, can be computed as
D* = √( (2 D1² + 2 D2² − d²) / 4 ). (2.8)
By combining Equation 2.6, Equation 2.7, and Equation 2.8, the target bearing, γ, can be computed by
γ = 90° − cos⁻¹( (D1² − D2²) / (d √(2 D1² + 2 D2² − d²)) ). (2.9)
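The range and bearing recovery of Equations 2.8 and 2.9 can be sketched in Python as follows (the function and variable names are illustrative assumptions, not part of the thesis software; lengths may be in any consistent unit):

```python
import math

def target_range_bearing(D1, D2, d):
    """Compute the target range D* (Eq. 2.8) and bearing gamma in degrees
    (Eq. 2.9) from the receiver distances D1 and D2 and the receiver
    separation d.  gamma is positive when the target is on the left."""
    s = 2 * D1**2 + 2 * D2**2 - d**2
    D_star = math.sqrt(s) / 2.0                      # Eq. 2.8
    beta = math.acos((D1**2 - D2**2) / (d * math.sqrt(s)))
    gamma = 90.0 - math.degrees(beta)                # Eq. 2.9
    return D_star, gamma
```

For D1 = D2 the bearing is 0°, since the target then lies on the central vertical axis.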
The maximum distance that can be measured to the target depends on the maximum length of the time interval during which the receivers wait for the signal to arrive. After this interval, the receivers will no longer receive signals until they are triggered again for the next cycle. The maximum time interval is called the “window.” The maximum window size in this research has been configured as 20 milliseconds, in which the ultrasonic wave can travel approximately 270 inches at room temperature (20 °C). When the signal arrives in less than 20 milliseconds, the system shuts the window immediately after the first signal has been received. Otherwise, the system waits the full 20 milliseconds and then closes the window,
whether the signal has been received or not. When no signal is received, the values of the
distance and the angle are not updated and remain the same as the last values.
B. ROBOT SYSTEM
The Nomad 200 mobile robot was made by Nomadic Technologies, Inc. This kind
of robot uses a multiprocessor as a low level control system to control the sensing,
communications, and motors. A remote workstation running the Linux operating system is used as a high-level control system to communicate with the robot multiprocessor and the ultrasonic positioning system through the wireless network. A laptop mounted on the robot can also be used in place of the remote workstation.
The robot system is controlled using the C/C++ programming language. The information about the sensor systems and the motor state is stored in a global array called the “State Vector” [20, 21]. The reference to the states is shown in Table 1. In this section, several sensor systems equipped on the robot are explained.
1. Bumper Sensors
The bumper sensors provide a mechanism that can be used to prevent damage to
the robot motors when the robot runs into something. There are six individual bumper
sensors arranged in a ring located on the robot. The nth bumper sensor corresponds to the nth bit in the STATE_BUMPER vector, where the 0th bit is the least significant one. A bit of
the vector is set to 1 when the corresponding bumper is hit. In the robot algorithm, the
robot simply stops when any of those bumpers is hit; that is, when the STATE_BUMPER
vector is greater than 0. In Figure 4, the arrangement of the bumper sensors is presented.
[Figure 4: the six bumper sensors arranged in a ring around the robot, numbered 0 through 5.]
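The bit test described above can be sketched as follows (a Python illustration of the logic; the thesis software itself is written in C/C++):

```python
def bumpers_hit(state_bumper):
    """Decode the STATE_BUMPER vector: bit n is set to 1 when the n-th of
    the six bumper sensors is hit, bit 0 being the least significant.
    Returns the list of indices of the bumpers currently hit."""
    return [n for n in range(6) if (state_bumper >> n) & 1]

def must_stop(state_bumper):
    """The robot stops when any bumper is hit, i.e., STATE_BUMPER > 0."""
    return state_bumper > 0
```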
2. Sonar Sensors
There are 16 sonar units arranged in a ring located on the robot. In Figure 5, the
arrangement of the sonar sensors is presented. The sonar units are numbered in counter-
clockwise order beginning with the front of the robot. They emit sonar waves and receive
echoes consecutively in this order, with a blanking period between each cycle of emitting
and receiving. Note that the blanking period starts after the end of the processing of the
previous sonar sensor and ends before the beginning of the next one. The firing rate of the sonar sensors can then be adjusted by varying this period. In this research, the blanking period has been configured to be 50 ms. From the time interval between the transmission
of the sonar wave and the receiving of the echo, the distance between the robot and
obstacles can be determined. The sonar sensors can measure distances from 6 inches to
255 inches. If an echo is not received, the sensor will regard the distance as 255 inches.
The distance information will be stored in the state vectors, STATE_SONAR_0 to
STATE_SONAR_15.
[Figure 5: the 16 sonar sensors arranged in a ring, numbered 0 through 15 counter-clockwise starting from the front of the robot.]
[Figure 6: a sonar sensor emits a wave toward an obstacle and receives the echo; the round-trip time gives the distance L.]
The method used by the sonar sensor to compute the distance between the robot and the obstacle is more straightforward than the algorithm used in the ultrasonic positioning system. Figure 6 shows a scenario in which the distance, L,
needs to be computed. Assume the time spent for the sonar sensor to receive the echo
after the transmission of the sonar wave is T , which is a round-trip time period. The
distance, L , is obtained by the following equation.
L = v T / 2. (2.10)
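Equation 2.10, together with the 6 to 255 inch measurement span, can be sketched as follows (a Python illustration; the default speed of sound, about 343 m/s expressed in inches per second, is an assumption and not a value from the thesis):

```python
def sonar_distance(round_trip_s, v_in_per_s=13504.0):
    """Distance L = v*T/2 (Eq. 2.10) from the round-trip echo time T in
    seconds.  Readings are limited to the sensor's 6..255 inch span, and
    a missing echo (None) is reported as 255 inches."""
    if round_trip_s is None:                 # no echo received
        return 255.0
    L = v_in_per_s * round_trip_s / 2.0
    return max(6.0, min(L, 255.0))
```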
3. Motor/Motion Sensors
The motor and power status bits are laid out as follows:

  Bit:   7    6    5    4    3    2    1    0
  Flag:  L    ES   CH   AC   B1   B0   L    R

  R:      set when the right wheel is in motion
  L:      set when the left wheel is in motion
  AC:     set when the Scout is plugged into an AC source
  CH:     set when the Scout is plugged into an AC source and the batteries are charging
  B1, B0: 0,0 Low Battery; 0,1 Med Battery; 1,0 High Battery; 1,1 Reserved
  ES:     set when the Emergency-Stop is down (always 0 for robots without E-Stop)
C. INTERACTION BETWEEN ULTRASONIC POSITIONING SYSTEM AND SONAR SENSORS
[Timing diagram: a sonar transmission window (t_sonar_window1), a wait interval (t_wait1), the ultrasonic transmission window (t_ultrasonic_window), a second wait interval (t_wait2), and the next sonar transmission window (t_sonar_window2), shown together with the sonar init pulses, the sonar echo pulses, and the ultrasonic init pulse.]
The sonar init pulse starts when the sonar wave is being transmitted, and is terminated by the rising edge of the echo pulse; that is, the
sonar transmission window will be closed as soon as the echo is received. As for the
transmission of the ultrasonic wave, after the falling edge of the sonar init pulse, the
ultrasonic positioning system waits 20 milliseconds for the sonar waves to die out, and then transmits an electromagnetic signal from the RF transmitter to the RF
receiver on the target to request ultrasonic signals. The time needed for the
electromagnetic signal to travel to the target is relatively short and can be neglected.
Simultaneously, the ultrasonic positioning system opens a window with a maximum
length of 20 milliseconds for the ultrasonic wave (transmitted from the ultrasonic
transmitters) to arrive. Therefore, there will be no overlap between the processes of both
sonar sensor and the ultrasonic sensor. The interference problem can be avoided.
Because of the longer time interval configured between the processes of any two sonar sensors, the time needed for all 16 sonar sensors to complete one full cycle is relatively long. As a result, the obstacle-distance information is not updated fast enough for the robot to react. An effective way to deal with this speed issue is to enable only the front five sonar sensors, since only the sonar sensors in the front of the robot are needed for this forward-motion-only implementation, and turning off the rear sonar sensors has no significant influence on the implementation. The time before the same sonar sensor transmits again is thereby sufficiently shortened, and the person-tracking implementation obtains a better result.
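The gain from enabling only the front five sensors can be checked with a quick calculation. The sketch below counts only the configured 50 ms blanking period per sensor and neglects the per-sensor echo wait, so it gives a lower bound on the revisit time:

```python
def revisit_time_s(n_sensors, blanking_s=0.050):
    """Lower bound on the time between two consecutive firings of the
    same sonar sensor when n_sensors fire in turn, each firing followed
    by the blanking period."""
    return n_sensors * blanking_s
```

With all 16 sensors enabled, each sensor fires at best every 0.8 s; with only the front five enabled, every 0.25 s, so the obstacle information is refreshed more than three times as often.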
D. SUMMARY
The system architecture, including the sonar sensor system, the motor system, the bumper sensor system, and the ultrasonic positioning system, is presented in this chapter. In order to adapt the ultrasonic positioning system to the original robot system, interference between the sonar sensor system and the ultrasonic positioning system must be avoided. An asynchronous handshake method is adopted to schedule the actions of these two systems to address this situation. Furthermore, to accelerate the reaction of the robot, only a limited number of sonar sensors, those indispensable for most person-tracking conditions, are used in the implementation.
III. ALGORITHM
A. POTENTIAL FIELD ALGORITHM
1. Robot Coordinate
[Figure 9: the robot coordinate frame; the x-axis points toward the front of the robot, the y-axis toward its left, and the origin, O, is at the center of the robot.]
2. Attractive Forces Derived From the Readings of the Ultrasonic Sensor
From the target range, D*, and bearing, γ, reported by the ultrasonic positioning system, the attractive force components in the robot coordinate frame are computed as follows:
Fultrasonic_x = D* cos(γ) (3.1)
Fultrasonic_y = D* sin(γ) (3.2)
[Figure: the target at range D* and bearing γ contributes the components Fultrasonic_x and Fultrasonic_y.]
Figure 10. Attractive Forces Derived From the Readings of the Ultrasonic Sensor.
3. Attractive Forces Derived From the Readings of Sonar Sensors
[Figure 11: the 16 sonar readings, d0 through d15, arranged counter-clockwise around the robot, with the angle φ between adjacent sonar sensors.]
[Figure: an obstacle on the right of the robot reduces the readings d14 and d15, producing a combined force, Fsonar_y, along the y-axis only.]
Figure 12. Combined Force Derived From the Readings of the Sonar Sensors.
Each sonar reading, di, is regarded as an attractive force acting along the direction of the i-th sonar sensor, i.e., at the angle iφ from the x-axis, where φ = 22.5°. Therefore, the combined force in the x-axis is taken as
Fsonar_x = 0, (3.3)
and the combined force in the y-axis is the sum of the y-components of the individual readings,
Fsonar_y = Σi di sin(iφ). (3.4)
Since only five of the sonar sensors in the front of the robot are used in the implementation, the equation becomes
Fsonar_y = d1 sin φ + d2 sin 2φ − d14 sin 2φ − d15 sin φ. (3.5)
Figure 12 shows how the attractive forces react when the robot encounters an obstacle. If the obstacle is on the right of the robot, the distance readings of the 14th and 15th sonar sensors will be smaller. As a result, the combined attractive force will be toward the left.
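The combined sonar force along the y-axis can be sketched as follows, assuming each reading d_i acts as an attraction along its sensor's direction iφ (an interpretation consistent with the behavior described above; the names are illustrative):

```python
import math

PHI = math.radians(22.5)           # angle between adjacent sonar sensors
FRONT_SENSORS = (0, 1, 2, 14, 15)  # only the front five sensors are enabled

def sonar_force_y(d):
    """Combined attractive force along the y-axis (positive = left) from
    the 16-element list of sonar readings d.  Sensor i points i*PHI
    counter-clockwise from the front; smaller readings on the right
    therefore yield a net force toward the left."""
    return sum(d[i] * math.sin(i * PHI) for i in FRONT_SENSORS)
```

With all readings equal the lateral terms cancel; shrinking the right-side readings d14 and d15 leaves a positive (leftward) net force.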
4. Potential Field Motion Planning From Combined Forces
[Figure 13: the robot, the target, and an obstacle, with the resultant force components Fresultant_x and Fresultant_y acting on the robot.]
Potential field motion planning uses the resultant of the attractive forces derived from the readings of the ultrasonic positioning system and the sonar rangefinders. Figure 13 presents an example of the robot motion corresponding to the resultant force at the moment of the relative positions shown in the figure. The resultant force tends to drive the robot toward the target and away from the obstacle at the same time. It is updated continuously to control the motion of the robot in real time.
The resultant force components are computed as weighted combinations of the attractive forces:
Fresultant_x = K1 Fultrasonic_x + K3 Fsonar_x (3.6)
Fresultant_y = K2 Fultrasonic_y + K4 Fsonar_y (3.7)
where K1, K2, K3, and K4 are adjustable weighting parameters. Since the force Fsonar_x is equal to 0, the parameter K3 has no effect on the robot motion. In this research, the parameters were chosen through numerous experiments, which included observing the behaviors of the robot in different situations and adjusting the values of the parameters. The resulting values are
K1 = 15, K2 = 20, K4 = 10. (3.8)
An additional limit on the force, Fsonar_y, prevents the combined force from the sonar sensors from becoming much larger than that from the ultrasonic sensor. That is, if Fsonar_y ≥ K5 Fultrasonic_y, the value Fsonar_y = K5 Fultrasonic_y is used instead.
5. Resultant Forces to Translation and Steering Velocities Conversion
The resultant force components, Fresultant_x and Fresultant_y, are converted to translation velocity and steering velocity, respectively:
Vtranslation = G1 Fresultant_x (3.9)
Vsteering = G2 Fresultant_y (3.10)
The values of the parameters, G1 and G2, are adjustable. In this research, those values are chosen as G1 = G2 = 0.1. In addition, a limit on the translation velocity prevents the robot from moving too fast: if Vtranslation ≥ 120 (in units of 1/10 inch/sec), the value Vtranslation = 120 is used, limiting the maximum absolute value of the translation velocity to 120 (1/10 inch/sec), i.e., 12 inches per second. The velocities, Vtranslation and Vsteering, are then the reference inputs to the robot motor system.
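The force-to-velocity conversion with the tuned gains and the translation limit can be sketched as follows. The combination Fresultant_x = K1·Fultrasonic_x and Fresultant_y = K2·Fultrasonic_y + K4·Fsonar_y is the weighted sum implied by the parameter discussion (K3 drops out because Fsonar_x = 0), and the K5 limit on Fsonar_y is omitted for brevity; function names are illustrative:

```python
K1, K2, K4 = 15.0, 20.0, 10.0   # weighting parameters (Eq. 3.8)
G1 = G2 = 0.1                   # force-to-velocity gains
V_MAX = 120.0                   # translation limit, in (1/10) inch/sec

def velocities(F_ultra_x, F_ultra_y, F_sonar_y):
    """Combine the attractive forces into resultant components and
    convert them to translation and steering velocities; the forward
    speed is clamped at V_MAX."""
    F_res_x = K1 * F_ultra_x
    F_res_y = K2 * F_ultra_y + K4 * F_sonar_y
    v_translation = min(G1 * F_res_x, V_MAX)
    v_steering = G2 * F_res_y
    return v_translation, v_steering
```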
B. ALGORITHM USED AS THE TARGET IS IN A CERTAIN RANGE
When the target is within a range of 65 inches, the translation velocity is set to zero, but the steering velocity of the robot remains active. With this steering velocity, the robot simply turns itself to face the target without any displacement in position. Again, to prevent oscillation while the robot adjusts itself to keep the angle reading of
the ultrasonic positioning system at zero, an elastic range from −10° to 10° is used. When the angle reading from the ultrasonic sensor is within this range, the robot sets its steering velocity, Vsteering, to zero. Therefore, the robot is physically motionless when both the range and bearing conditions are satisfied. When the target moves out of those ranges, the robot activates itself again to resume all the movements required to track the target.
When the target range D* < 65 inches, the velocities V_translation and V_steering in this near-target mode are given by
$$V_{translation} = 0 \tag{3.11}$$
$$V_{steering} = \begin{cases} G_3\,F_{ultrasonic\_y}, & |\gamma| > 10^\circ \\ 0, & |\gamma| \le 10^\circ \end{cases} \tag{3.12}$$
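The near-target behavior of Equations 3.11 and 3.12 amounts to a dead-band controller. A minimal sketch follows, with G3 = 2 taken from the gain_svel_near_target value in the appendix code:

```c
#include <stdlib.h>

/* Near-target steering (Equations 3.11 and 3.12): inside 65 inches the robot
 * stops translating and only turns to face the target, with a +/-10 degree
 * elastic range (dead band) to prevent oscillation.  G3 = 2 matches the
 * gain_svel_near_target value in the appendix code. */
#define DEAD_BAND_DEG 10
#define G3            2

/* Returns V_steering; V_translation is always 0 in this mode. */
int near_target_steering(int bearing_deg, double f_ultrasonic_y)
{
    if (abs(bearing_deg) <= DEAD_BAND_DEG)
        return 0;                       /* within the elastic range: hold */
    return (int)(G3 * f_ultrasonic_y);  /* otherwise turn toward target  */
}
```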
Figure 14. Motion Planning for Obstacles on the Right Forward of the Robot (left panel: obstacle-avoidance motion planning; right panel: potential field motion planning).
2. Motion Planning for Obstacles on the Left Forward of the Robot
When the obstacle is on the left forward of the robot, the situation is similar to the obstacle being on the right forward, as in the previous section. The only difference is that the robot makes a right turn when it encounters the obstacle, that is, when either the 1st or the 2nd sonar range is smaller than 12 inches. The potential field algorithm takes over the system as soon as this situation no longer exists. When implementing obstacle avoidance, the translation velocity, V_translation, is set to 0, and the steering velocity is a constant:
$$V_{steering} = -110 \tag{3.14}$$
The negative sign in Equation 3.14 is used because the robot is making a right turn; the positive sign is used when the robot is making a left turn.
Figure 15. Motion Planning for Obstacles on the Left Forward of the Robot.
3. Motion Planning for Obstacles in Front of the Robot
When there is an obstacle between the target and the robot, the left-side and right-side sonar ranges may both be smaller than the 12-inch threshold, and the robot needs to decide which direction to turn in order to escape. In this case, it is assumed that the obstacle is relatively small and does not obstruct the robot's line of sight, so that the target transmitter signal can still be received. To resolve the ambiguity, the target bearing is added to the algorithm. When the target bearing, γ, is negative, as shown in Figure 16, the robot regards a right turn as the better decision for carrying out the person-tracking task. Conversely, the robot makes a left turn when the target bearing, γ, is greater than or equal to zero, as shown in Figure 17. As soon as the robot has turned away from the obstacle (when no sonar range is smaller than 12 inches), the potential field algorithm takes over the system again to carry on the person-tracking task.
Figure 16. Motion Planning for Obstacles in Front of the Robot (γ < 0) (left panel: obstacle-avoidance motion planning; right panel: potential field motion planning).
Figure 17. Motion Planning for Obstacles in Front of the Robot (γ ≥ 0).
In this situation, the steering velocity is determined by the following equations:
$$V_{steering} = \begin{cases} 150, & \gamma \ge 0 \\ -150, & \gamma < 0 \end{cases} \tag{3.15}$$
The parameter γ is the target bearing obtained directly from the readings of the ultrasonic positioning system.
Note that the motion planning methods used in these three situations are closely related in practice. For example, the third situation leads to the first or the second situation as soon as the robot starts to turn, which immediately changes the sonar statuses.
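The interplay among the three situations can be summarized as a single turn-direction decision. This sketch is a restatement of Equations 3.13 through 3.15 (positive steering is a left turn), not code taken from the thesis:

```c
/* Turn-direction decision combining the three obstacle situations described
 * above.  near_right and near_left are the 12-inch sonar threshold flags,
 * and gamma_deg is the target bearing from the ultrasonic positioning
 * system.  A positive steering velocity corresponds to a left turn. */
typedef enum { NO_TURN, TURN_LEFT, TURN_RIGHT } turn_t;

turn_t avoidance_turn(int near_right, int near_left, int gamma_deg)
{
    if (near_right && near_left)            /* obstacle in front (Eq. 3.15) */
        return (gamma_deg >= 0) ? TURN_LEFT : TURN_RIGHT;
    if (near_right)                         /* right forward: turn left     */
        return TURN_LEFT;
    if (near_left)                          /* left forward: turn right     */
        return TURN_RIGHT;
    return NO_TURN;                         /* potential field resumes      */
}
```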
D. OVERALL ALGORITHM OF A PERSON-TRACKING MOBILE ROBOT
By combining all the sub-algorithms described in this chapter, the simplified overall algorithm of the person-tracking mobile robot is shown in Figure 18. Note that the steps are carried out in a prioritized order; the idea is to protect the robot from collisions that could damage the robot system. Examining the bumper sensor status has priority over everything else: when the STATE_BUMPER vector is set, the process goes directly to "stop." If STATE_BUMPER is not set, the main algorithm is executed. The near-target behavior can be carried out before the obstacle avoidance algorithm, since the robot does not change position in that mode, so collision is not a concern.
Figure 18. Simplified overall algorithm of the person-tracking mobile robot (flowchart), in order of priority:
1. If BumperHit = 1: stop.
2. Else, if target range < 65 inches (near target): V_translation = 0; if |γ| ≤ 10°, V_steering = 0; otherwise V_steering = G3·F_ultrasonic_y (turn to face the target).
3. Else, if near obstacles on the right forward: V_translation = 0, V_steering = 110 (turn away from the obstacles).
4. Else, if near obstacles on the left forward: V_translation = 0, V_steering = −110.
5. Else, if near obstacles in front: V_translation = 0; V_steering = 150 if target bearing γ ≥ 0, otherwise −150.
6. Else, run the potential field algorithm: V_translation = G1·F_resultant_x, V_steering = G2·F_resultant_y.
IV. EXPERIMENTS AND RESULTS
B. PERSON-TRACKING IMPLEMENTATION WITH AN OBSTACLE BETWEEN THE ROBOT AND THE TARGET PERSON
In this section, two main experiments verify the robot's ability to implement obstacle avoidance during person-tracking. The first experiment, corresponding to Figure 20, tests the robot's behavior when it encounters an obstacle in the midst of person-tracking. The second experiment, corresponding to Figure 21, is similar; the only difference is that the obstacle avoidance must be completed before the robot can carry out the person-tracking task. The results show clearly that the robot is able to avoid the obstacle and accomplish the person-tracking task, which is actually difficult for most practical applications of person-tracking robots.
Figure 20. Robot trajectory when the robot encounters an obstacle in the midst of the person-tracking task (panels at t = T through t = 5T).
Figure 21. Robot trajectory when there is an obstacle between the robot and the target person at the beginning of the person-tracking task (panels at t = T through t = 5T).
Figure 22. Robot trajectory when the target person makes a turn at a corner (panels at t = T through t = 5T).
Turning a corner is a common issue for most applications of the person-tracking robot. When the target person moves too fast at a corner, the corner itself becomes an obstacle that the robot must avoid during the person-tracking task, as shown in Figure 22. The feasibility of the robot's turning a corner was demonstrated in this experiment.
D. PERSON-TRACKING IMPLEMENTATION IN AN UNSTRUCTURED ENVIRONMENT
Since several sample situations have been handled successfully, a more practical validation is performed in this section: person-tracking in an unstructured environment. An unstructured environment has the following characteristics. First, there are relatively more obstacles in the environment. Second, the target person does not move along a fixed route; the route is arbitrary. Third, there are persons other than the target person, possibly unexpected ones, wandering around the environment. The objective of this examination is to verify the robot's ability to carry out the person-tracking task in a more practical environment with unanticipated conditions.
Figure 23 shows the procedure of this examination and the robot trajectory. The robot demonstrates the ability to implement obstacle avoidance and person-following through several turns and even through a narrow corridor. In addition, the robot follows only the target person and is not affected by the movements of other persons in the environment. Note that the robot treats persons other than the target person as obstacles when they come too close in range. As the robot trajectory in the figure shows, following the wrong person is not an issue in this work, whereas this situation has always been a challenge for most applications of vision-based mobile robots.
Figure 23. Robot trajectory in an unstructured environment (panels at t = T through t = 6T). Legend: robot position, target position, obstacle, non-target person position.
E. SUMMARY
In this chapter, four main experiments examining the person-tracking ability are presented. The first experiment verifies the normal function of the mobile robot in a direct person-tracking condition, without any obstacle between the target person and the robot. The second adds an obstacle to the person-tracking task and examines the robot's ability to implement obstacle avoidance and person-tracking simultaneously. The third examines the robot in a common situation in which it must maintain tracking while the target turns at a corner. The fourth examines the robot's behavior in a general, unstructured environment. The results of these experiments confirm the robot's ability to handle most general situations.
V. CONCLUSION AND FUTURE WORK
A. CONCLUSION
The main objective of this research was to investigate the feasibility of developing
a person-tracking robot system using an RF/ultrasonic positioning system. In order to
accomplish this objective, the following goals have been achieved in this thesis.
1. Created the interface between the ultrasonic positioning system and the robot
system in the operating program.
For the first goal, a TCP/IP interface allows the data produced by the ultrasonic positioning system to be used by the robot and analyzed from a remote workstation over the network. The interference issue between the ultrasonic positioning system and the sonar sensor system was efficiently resolved by sequencing the execution order of those two systems.
The second goal was achieved by designing the potential field algorithm along
with the obstacle avoidance algorithm, which was developed from several main situations.
In addition, the parameters used in the algorithm have been adjusted during various
experiments.
The remaining goals were to investigate the ability of the mobile robot to accomplish the person-tracking task using the algorithm designed in Chapter III, which includes person-following and obstacle avoidance. According to the results of the experiments in Chapter IV, these goals have been reached. Since all the goals were achieved, the feasibility of the main idea of this thesis is verified.
B. FUTURE WORK
Since the feasibility of developing a person-tracking mobile robot using an ultrasonic positioning system in unstructured environments has been confirmed, the next step is to improve the efficiency of this system.
The ultrasonic positioning system and the sonar sensor system both operate on acoustic principles. Therefore, the robustness of the tracking system in acoustically noisy environments should be examined in future work.
APPENDIX
In this appendix, the C code used to operate the overall system is presented below.
/************************************************************
* *
* PROGRAM: tracking_robot.c *
* *
* PURPOSE: For the robot to follow the specific person and *
* to implement obstacle avoidance concurrently. *
* *
* Edited by Chuan-Hao Yang *
* *
************************************************************/
/* Configure timeout (given in seconds). This is how long the robot will keep moving if
   it becomes disconnected. */
conf_tm(1);
/* Sonar setup: configure the order in which the individual sonar units fire. In this
   case, fire the units in counter-clockwise order (units are numbered counter-clockwise,
   starting with the front sonar as zero). The conf_sn() function takes an integer and an
   array of at most 16 integers. If fewer than 16 units are used, the list must be
   terminated by an element of value -1. The single integer value controls the time delay
   between units in multiples of two milliseconds. Only the front 5 sonar units are used
   in this case. */
for (i=0; i<=2; i++)
{ order[i] = i; }          /* front sonars 0, 1, 2 -> slots 0, 1, 2 */
for (i=14; i<=15; i++)
{ order[i-11] = i; }       /* sonars 14 and 15 -> slots 3 and 4 */
order[5] = -1;             /* terminate the list right after the 5 units used */
conf_sn(25,order);
/* Zero the robot. This aligns the turret and steering angles. The repositioning is
necessary to allow the user to position the robot where it was. */
oldx = State[34]; /* remember position */
oldy = State[35];
zr(); /* tell robot to zero itself */
ws(1,1,1,20); /* wait until done with zeroing */
place_robot(oldx, oldy, 0, 0); /* reposition simulated robot */
/* Main loop. */
while (!BumperHit)
{
GetSensorData();
GetUltrasonicData();
Movement();
}
/* Disconnect. */
close(sock);
disconnect_robot(1);
}
/* Movement(). This function is responsible for using the sensor data to direct the robot's
motion appropriately. */
void Movement (void)
{
int m,i;
int nearsomething_right,nearsomething_left,nearsomething_front;
int tvel, svel;
double F_target[2],F_sonar[2],F_total[2];
int k1 = 15;
int k2 = 20;
int k3 = 1;
int k4 = 10;
float k5 = 0.75;
double gain_tvel = 0.1;
double gain_svel = 0.1;
int gain_svel_near_target = 2;
float theta;
/* Make sure we are not about to run into something; check the front sonar sensors. If it
   looks bad, set the corresponding nearsomething_* flag. */
nearsomething_right = FALSE;
nearsomething_left = FALSE;
nearsomething_front = FALSE;   /* initialize before use */
for (i = 14; i <= 15; i++)
    if (SonarRange[i] < 12) nearsomething_right = TRUE;
for (i = 1; i <= 2; i++)
    if (SonarRange[i] < 12) nearsomething_left = TRUE;
if (SonarRange[0] < 12)        /* single front unit; no loop needed */
    nearsomething_front = TRUE;
/* Decide how to move. There are five situations: 1) near target, 2) near something on
the right, 3) near something on the left, 4) in front of something, 5) clear to move. */
if (distance < 65) /* Equation 3.11, 3.12 */
{
tvel = 0;
svel = (int) (gain_svel_near_target*F_target[1]);
if (abs(angle) <= 10) { svel = 0; }
}
else if (nearsomething_right && !nearsomething_left) /* Equation 3.13 */
{
tvel = 0; /* stop moving, and make a turn. */
svel = 110;
}
else if (nearsomething_left && !nearsomething_right) /* Equation 3.14 */
{
tvel = 0; /* stop moving, and make a turn. */
svel = -110;
}
else if (nearsomething_front && nearsomething_right && nearsomething_left)
{ /* Equation 3.15 */
if (angle >= 0)
{
tvel = 0; /* stop moving, and make a turn. */
svel = 150;
}
else
{
tvel = 0; /* stop moving, and make a turn. */
svel = -150;
}
}
else /* it is clear to move */
{
svel = (int) (gain_svel*F_total[1]); /* Equation 3.9 */
tvel = (int) (gain_tvel*F_total[0]); /* Equation 3.10 */
}
/* limit the translation velocity */
if(abs(tvel)>120) {tvel=120*tvel/abs(tvel);}
/* Set the robot's velocities. The first parameter is the robot's translation velocity, in
tenths of an inch per second. This velocity can be between -240 and 240. The second
parameter is the steering velocity, in tenths of a degree per second, and can be
between -450 and 450. */
scout_vm(tvel,svel);
printf("%d,%d\n",distance,angle);
}
/* GetSensorData(). Read all sensors and load the data into the State array. */
void GetSensorData (void)
{
gs();
/* Read State array data and put readings into individual arrays. */
for (i=0; i<16; i++)
{
/* Sonar ranges are given in inches, and can be between 6 and
255, inclusive. */
SonarRange[i] = State[17+i];
}
/* Check for bumper hit. If a bumper is activated, the corresponding bit in State[33] will
be turned on. Since we don't care which bumper is hit, we only need to check if
State[33] is greater than zero. */
if (State[33]>0)
{
BumperHit = 1;
tk("Ouch.");
printf("Bumper hit!\n");
}
}
if (buff[j] == ' ')
{
distance = atoi(&buff[i+1]);
angle = atoi(&buff[j+1]);
return;
}
}
break;
}
}
}