Article

Obstacle Avoidance of Two-Wheel Differential Robots Considering the Uncertainty of Robot Motion on the Basis of Encoder Odometry Information

Jiyong Jin and Woojin Chung *
School of Mechanical Engineering, Korea University, Seoul 02841, Korea
* Author to whom correspondence should be addressed.
Sensors 2019, 19, 289; https://doi.org/10.3390/s19020289
Submission received: 10 December 2018 / Revised: 7 January 2019 / Accepted: 9 January 2019 / Published: 12 January 2019
(This article belongs to the Special Issue Mobile Robot Navigation)

Abstract

It is important to overcome different types of uncertainties for the safe and reliable navigation of mobile robots. Uncertainty sources can be categorized into recognition, motion, and environmental uncertainties. Although several challenges of recognition uncertainty have been addressed, little attention has been paid to motion uncertainty. This study shows how the uncertainties of robot motions can be quantitatively modeled through experiments. Although practical motion uncertainties are affected by various factors, this research focuses on the velocity control performance of the wheels obtained by encoder sensors. Experimental results show that the velocity control errors of practical robots are not negligible. This paper proposes a new motion control scheme for reliable obstacle avoidance that reflects the experimentally-measured motion uncertainties. The presented experimental results clearly show that the consideration of the motion uncertainty is essential for successful collision avoidance. The presented simulation results show that a robot cannot move through narrow passages when the uncertainty of motion is high, owing to the risk of collision. This research shows that the proposed method accurately reflects the motion uncertainty and balances the collision safety with the navigation efficiency of the robot.

1. Introduction

It is important to overcome different types of uncertainties for the safe and reliable navigation of mobile robots. Three main categories of uncertainties can be identified: uncertainties in recognition, motion, and the environment [1,2,3,4]. Recognition uncertainties are caused by the practical limitations of sensors or algorithms; for instance, the position uncertainty of obstacles caused by sensor errors. Environmental uncertainties arise from the inaccurate representation or dynamic change of the environment. Changes in the environment, such as large parking spaces or exhibition halls with dynamic obstacles, can make the localization uncertain [5]. Sources of motion uncertainties include controller errors, latency, disturbances, and modeling errors [6]. Motion uncertainties arise, for example, when robotically steering flexible medical needles to clinical targets in soft tissue [7] or when controlling the motion of service robots passing between narrow, long obstacles. Thus far, several challenges in recognition and environmental uncertainties have been addressed, but little attention has been paid to motion uncertainties.
Several studies have focused on collision avoidance problems. Fox proposed the dynamic window approach (DWA), which is widely used owing to its simplicity and smooth motions in dynamic environments [8]. Brock extended the conventional DWA to the global dynamic window approach (GDWA) in order to guide robots in complex environments [9]. Minguez proposed the nearness diagram (ND) method, with which robots exhibited good collision avoidance performance in cluttered environments [10]. Borenstein proposed the vector field histogram (VFH) method, which enabled robust performance with respect to sensor errors [11]. Zi proposed a collision avoidance method for an omni-directional mobile robot in Ref. [12] and a collision avoidance method for a cable parallel robot for multiple mobile cranes (CPRMC) in Ref. [13]. These conventional collision avoidance algorithms are still widely used in applications.
Control strategies toward safe navigation have been extensively studied. In the authors’ prior work [14], collisions with dynamic obstacles from occluded regions were considered. It was shown that the limitations of visibility can be overcome by appropriate path planning and speed control strategies. Roy proposed an intelligent navigation scheme that can improve performance by learning the collision probability [15]. A sampling-based planner has been developed that achieves safety by maximizing the margin to obstacles in the input space [16]. A speed control strategy under the consideration of map and motion uncertainties has also been proposed [17].
Some obstacle avoidance schemes quantitatively consider the risk of collision, and various indices have been developed for quantitative collision risk evaluation. Kuffner proposed the region of inevitable collision (RIC) scheme [18], which extends the obstacle region with respect to the robot motion. Fraichard proposed an inevitable collision area for a mobile robot [19]. Zucker introduced a relative collision risk [20]. Chung proposed the collision risk index (CRI), which represents the margin of velocity control in the input space [14]. Van der Horst proposed a method to define an appropriate time-to-collision (TTC) [21]. ISO 17387 specifies the TTC level for the collision warning function of a vehicle-mounted crash avoidance system (CAS) [22].
The defining point of this study is that it is important to quantitatively model the motion uncertainties of practical robots. Motion uncertainties may vary with the type of robot used. Although accurate control of the actuator velocity is no longer a difficult problem, many commercially-available robots still show unsatisfactory velocity control performance. Many studies recommend the intentional inflation of the uncertainty in order to account for various unknown uncertainties [1,16,17,23]; previous research extended the obstacle area to reduce the risk of collision. However, excessively expanding the obstacles degrades the performance and usability of the robot.
It is clear that the inflation should start from the estimated uncertainty of the given robot. Only after the uncertainty is modeled accurately can the controller determine how much the obstacles should be enlarged. Therefore, it is better to model the uncertainty of the robot accurately and to expand the obstacle area by the size of the model. The key idea of this paper is that modeling and exploiting the motion uncertainties is extremely significant for practical applications. The aim of this paper is to achieve the safe and efficient navigation of mobile robots under the consideration of the motion uncertainties.
This study shows how the uncertainty of robot motions can be experimentally modeled. The motion uncertainty is assumed to be represented by the velocity control error of a wheel; in other words, the velocity control error is assumed to be the dominant source of motion uncertainty. The modeled uncertainty is then reflected in the design of a motion control scheme, and the proposed scheme is verified through simulations and experiments. The presented results clearly indicate that the resulting movements of a robot exhibit significant differences depending on the uncertainty conditions considered. It can be concluded that it is essential to model and exploit motion uncertainties.

2. Experimental Modeling of the Motion Uncertainty Using Encoder Odometry Information

This section describes how to model the uncertainty of robot motions. Few conventional collision avoidance algorithms explicitly consider the motion uncertainty of a robot. The uncertainty of the robot position increases with increasing velocity [17]. The motion uncertainty should be modeled with consideration of the type of robot and its motion control performance. Sources of motion uncertainty include unmodeled latency, inaccurate parameters, and disturbances. In this study, the velocity control error obtained by an encoder sensor is assumed to be the dominant source of motion uncertainty. Figure 1 illustrates the effect of motion uncertainties when a robot moves around an obstacle. If the motion uncertainty is low, as shown in Figure 1a, the robot travels close to the obstacle. As shown in Figure 1b, if the uncertainty is high, the robot travels away from the obstacles. It may be safe to navigate at distances sufficiently far from the obstacles; however, maintaining an excessive distance from an obstacle reduces the robot's traveling efficiency. Therefore, accurate modeling of the motion uncertainty is needed to maintain both the navigation efficiency and safety.
The translational and rotational velocities of a two-wheeled mobile robot can be given as follows:
$$v = \frac{v_l + v_r}{2} \qquad (1)$$
$$\omega = \frac{v_r - v_l}{b} \qquad (2)$$
The translational velocity is denoted by v, and the rotational velocity is denoted by ω. The tread b is assumed to be constant, and v_l and v_r represent the velocities of the left and right wheels, respectively. The velocity control errors of both wheels represent the motion uncertainty of a two-wheeled differential robot. The velocity error can be obtained from the difference between the reference and experimental velocities. Thus, the motion uncertainty of a robot can be obtained through practical experiments.
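As a minimal illustration of Equations (1) and (2), the following C++ helper converts wheel velocities to body velocities; the struct and function names are illustrative assumptions and do not appear in the paper.

// Differential-drive forward kinematics (Equations (1) and (2)).
// b is the tread (wheel separation) in meters.
struct BodyVelocity {
    double v;      // translational velocity [m/s]
    double omega;  // rotational velocity [rad/s]
};

BodyVelocity wheelToBody(double v_l, double v_r, double b) {
    BodyVelocity vel;
    vel.v = (v_l + v_r) / 2.0;    // Equation (1)
    vel.omega = (v_r - v_l) / b;  // Equation (2)
    return vel;
}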

3. Motion Controller Considering the Uncertainty of Robot Motion

It was assumed that the velocity error follows a Gaussian distribution. For a reference velocity x = [v_l_ref; v_r_ref], a mean experimental velocity M = [v_l_exp; v_r_exp], and a velocity error covariance matrix Σ = diag(σ_l², σ_r²), the multivariate Gaussian probability is given by Equation (3) [24]. σ_l and σ_r denote the standard deviations of the velocity errors of the left and right wheels obtained through the encoders, respectively.
$$p(x \mid M, \Sigma) = \frac{1}{2\pi \sqrt{\left| \begin{matrix} \sigma_l^2 & 0 \\ 0 & \sigma_r^2 \end{matrix} \right|}} \exp\!\left( -\frac{1}{2} \begin{bmatrix} v_{l\_err} \\ v_{r\_err} \end{bmatrix}^{T} \begin{bmatrix} \sigma_l^2 & 0 \\ 0 & \sigma_r^2 \end{bmatrix}^{-1} \begin{bmatrix} v_{l\_err} \\ v_{r\_err} \end{bmatrix} \right) = \frac{1}{2\pi \sigma_l \sigma_r} \exp\!\left( -\frac{1}{2} \begin{bmatrix} v_{l\_err} \\ v_{r\_err} \end{bmatrix}^{T} \begin{bmatrix} 1/\sigma_l^2 & 0 \\ 0 & 1/\sigma_r^2 \end{bmatrix} \begin{bmatrix} v_{l\_err} \\ v_{r\_err} \end{bmatrix} \right) \qquad (3)$$
In Equation (3), v_l_err ≡ v_l_ref − v_l_exp and v_r_err ≡ v_r_ref − v_r_exp. Since the velocities of the two wheels of a two-wheeled mobile robot are independent, the covariance matrix is diagonal, and the inverse of Σ can simply be calculated by taking the reciprocal of each diagonal element. Equation (3) is expanded as follows:
$$p(x \mid M, \Sigma) = \frac{1}{2\pi \sigma_l \sigma_r} \exp\!\left( -\frac{1}{2} \begin{bmatrix} v_{l\_err} \\ v_{r\_err} \end{bmatrix}^{T} \begin{bmatrix} \frac{1}{\sigma_l^2} v_{l\_err} \\ \frac{1}{\sigma_r^2} v_{r\_err} \end{bmatrix} \right) = \frac{1}{2\pi \sigma_l \sigma_r} \exp\!\left( -\frac{(v_{l\_err})^2}{2\sigma_l^2} - \frac{(v_{r\_err})^2}{2\sigma_r^2} \right) = \frac{1}{\sqrt{2\pi}\,\sigma_l} \exp\!\left( -\frac{(v_{l\_err})^2}{2\sigma_l^2} \right) \cdot \frac{1}{\sqrt{2\pi}\,\sigma_r} \exp\!\left( -\frac{(v_{r\_err})^2}{2\sigma_r^2} \right) \qquad (4)$$
Here, the last expression is the product of two independent Gaussian distributions. If the input space consists of v_l and v_r and the velocity error distribution is represented by a covariance ellipse, the principal axes of the ellipse are parallel to the v_l and v_r axes. The velocity uncertainty around v_l_ref and v_r_ref in the input space can therefore be expressed by an elliptic inequality, which is determined by the confidence coefficient, the covariance matrix, and the velocity control error:
$$\frac{(v_{l\_err})^2}{\sigma_l^2} + \frac{(v_{r\_err})^2}{\sigma_r^2} \le s \qquad (5)$$
where s is the critical value of the chi-squared distribution. Figure 2 shows the uncertainty ellipse, which is the expansion of the input velocity under the consideration of velocity control errors. If a robot with a large motion uncertainty chooses a velocity near the obstacle area, the robot is more likely to collide with obstacles. Therefore, the magnitude of the uncertainty associated with the current traveling condition of the robot must be known. If this magnitude is known through experiments, a safe velocity can be selected by expanding the obstacle region by the amount of motion uncertainty. The shape of the motion uncertainty ellipse depends on the magnitudes of the velocity uncertainties of the left and right wheels. If the motion of the robot is accurate, the uncertainty ellipse is small; the risk of collision increases as the motion uncertainty increases.
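To make the inequality of Equation (5) concrete, a minimal C++ check is sketched below, assuming a chi-squared critical value s chosen for the desired confidence level (for example, about 5.99 for a 95% region with two degrees of freedom); the function name insideUncertaintyEllipse is an illustrative assumption, not the authors' implementation.

// Returns true if the wheel-velocity error pair lies inside the
// uncertainty ellipse of Equation (5).
// sigma_l, sigma_r: standard deviations of the left/right wheel velocity errors
// s: chi-squared critical value (e.g., 5.991 for 95% confidence, 2 DOF)
bool insideUncertaintyEllipse(double v_l_err, double v_r_err,
                              double sigma_l, double sigma_r, double s) {
    double d = (v_l_err * v_l_err) / (sigma_l * sigma_l)
             + (v_r_err * v_r_err) / (sigma_r * sigma_r);
    return d <= s;
}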
Therefore, the clearance considering the uncertainty of the robot motion (CURM) is proposed, as shown in Figure 3. Using the CURM, the experimental resultant wheel velocities remain inside the collision-free input region, regardless of the uncertainty. The CURM represents the clearance that can be expected under motion uncertainty and is defined as the smallest clearance value within the uncertainty ellipse of the reference velocity. The extent of the obstacle expansion depends on the type of robot and the navigation conditions.
To define the CURM, the range of the input space is set as in Equation (6):
$$V = \left\{ (v_l, v_r) \;\middle|\; v_l \in [\,v_{l\_exp} - a_l \Delta t,\; v_{l\_exp} + a_l \Delta t\,],\; v_r \in [\,v_{r\_exp} - a_r \Delta t,\; v_{r\_exp} + a_r \Delta t\,] \right\} \qquad (6)$$
$$\mathrm{CURM}(v_{l\_ref}, v_{r\_ref}) = \min_{(v_{l\_exp},\, v_{r\_exp}) \in V} \left\{ \mathrm{Clearance}(v_{l\_exp}, v_{r\_exp}) \;\middle|\; \frac{(v_{l\_err})^2}{\sigma_l^2} + \frac{(v_{r\_err})^2}{\sigma_r^2} \le s \right\} \qquad (7)$$
In Equation (6), a_l and a_r are the maximum acceleration values of the left and right wheels, respectively. In order to obtain the value of the CURM, Equations (5) and (7) can be used. Algorithm 1 explains how to compute the CURM from Equations (6) and (7); the getUncertainty function in Line 6 returns [σ_l, σ_r] from the experimentally-measured error model. A C++ sketch of this computation is given after the algorithm.
Algorithm 1: CURM()
1: for v_l_ref = (v_l − a_l·Δt) to (v_l + a_l·Δt) do
2:  for v_r_ref = (v_r − a_r·Δt) to (v_r + a_r·Δt) do
3:   min = ∞
4:   for v_l_exp = (v_l − a_l·Δt) to (v_l + a_l·Δt) do
5:    for v_r_exp = (v_r − a_r·Δt) to (v_r + a_r·Δt) do
6:     [σ_l, σ_r] = getUncertainty(v_l_exp, v_r_exp, v_l_err, v_r_err)
7:     if (v_l_err²/σ_l² + v_r_err²/σ_r² < s and min > Clearance(v_l_exp, v_r_exp)) then
8:      min = Clearance(v_l_exp, v_r_exp)
9:     end if
10:    end for
11:   end for
12:   CURM[v_l_ref, v_r_ref] = min
13:  end for
14: end for
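As a minimal illustration of Algorithm 1, the following C++ sketch computes the CURM for a single reference wheel-velocity pair, reusing the insideUncertaintyEllipse check sketched after Equation (5); the grid step, the std::function-based Clearance and getUncertainty interfaces, and the name computeCURM are assumptions for illustration rather than the authors' implementation.

#include <algorithm>
#include <functional>
#include <limits>

// Sketch of Algorithm 1 for one reference wheel-velocity pair (v_l_ref, v_r_ref).
// clearance(v_l, v_r): clearance objective for a candidate velocity pair.
// getUncertainty(v_l, v_r, sigma_l, sigma_r): measured error model (cf. Table 2).
// a_l, a_r: maximum wheel accelerations; dt: control period; s: chi-squared critical value.
double computeCURM(double v_l_ref, double v_r_ref,
                   double a_l, double a_r, double dt, double s, double step,
                   const std::function<double(double, double)>& clearance,
                   const std::function<void(double, double, double&, double&)>& getUncertainty) {
    double minClearance = std::numeric_limits<double>::infinity();
    // Scan the wheel velocities reachable around the reference command (Equation (6)).
    for (double v_l = v_l_ref - a_l * dt; v_l <= v_l_ref + a_l * dt; v_l += step) {
        for (double v_r = v_r_ref - a_r * dt; v_r <= v_r_ref + a_r * dt; v_r += step) {
            double sigma_l = 0.0, sigma_r = 0.0;
            getUncertainty(v_l, v_r, sigma_l, sigma_r);
            double v_l_err = v_l - v_l_ref;
            double v_r_err = v_r - v_r_ref;
            // Keep only velocities inside the uncertainty ellipse (Equation (5)).
            if (insideUncertaintyEllipse(v_l_err, v_r_err, sigma_l, sigma_r, s)) {
                minClearance = std::min(minClearance, clearance(v_l, v_r));
            }
        }
    }
    return minClearance;  // worst-case clearance under the modeled motion uncertainty
}

In a full controller, this value would be tabulated over the admissible reference velocities, as indicated by Lines 1–2 and 12 of Algorithm 1.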
Algorithm 2 shows the A*-based path planner considering the motion uncertainty of mobile robots. The algorithm follows the structure of A*. The OPEN queue is a priority queue ordered by the cost f, i.e., the distance traveled from the start node plus the heuristic, arranged in ascending order; it contains the candidate nodes of the trajectory. Certain steps are undertaken before a node is placed in the OPEN queue. First, the trajectory to a candidate node is obtained through the motion controller (Line 11). Then, a sampling-based forward simulation is carried out (Line 13). OPEN.Push(node) places the node in the OPEN queue only if the collision probability is lower than the threshold K. The CLOSED queue stores the nodes whose children have already been expanded, so that the trajectory can be obtained by backtracking after arrival at the goal. The rest of the process proceeds as in the basic A* algorithm to obtain an appropriate trajectory. A sketch of the node bookkeeping and the OPEN queue handling is given after Algorithm 2.
Algorithm 2: MakeTrajectoryBasedOnAStar ().
1:OPEN.Init()
2:CLOSED.Init()
3:if (isGoal(start) = true) then
4: return MakeTrajectory(start)
5:end if
6:OPEN.Push(start)
7:while OPEN.Size() ≠ 0 do
8: n = OPEN.Pop()
9: nodes = Expand(n)
10:for all the node∈nodes do
11:  node.trajectory = GenTrajectory(n, node)
12:  node.f = n.f + GetLength(node.trajectory) + h(node)
13:  collisionProb = SimulationWithMotionUncertainty (node.trajectory)
14:  if (collisionProb ≥ K) then
15:   continue
16:  end if
17:  if (isGoal(node) = true) then
18:   return MakeTrajectory(node)
19:  end if
20:  if (node ∩ CLOSED = ⌀) then
21:   OPEN.Push(node) with node.f as priority
22:  end if
23:end for
24: CLOSED.Push(n)
25:end while
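As an illustration of the data structures described above, the following C++ sketch shows one possible form of the search node and the OPEN priority queue used in Algorithm 2; the struct members, the pair-based trajectory representation, and the helper pushIfSafe are assumptions for illustration, not the authors' implementation.

#include <queue>
#include <utility>
#include <vector>

// One possible form of the search node of Algorithm 2 (member names are illustrative).
struct Node {
    double f;                                           // cost: distance traveled plus heuristic h(node)
    std::vector<std::pair<double, double>> trajectory;  // wheel commands (v_l, v_r) from the parent node
    // ... parent pointer, grid coordinates, etc.
};

// Order the OPEN queue by ascending f (Line 21 of Algorithm 2 uses node.f as the priority).
struct CompareByF {
    bool operator()(const Node& a, const Node& b) const { return a.f > b.f; }
};
using OpenQueue = std::priority_queue<Node, std::vector<Node>, CompareByF>;

// A node is pushed only if its sampled collision probability is below the threshold K
// (Lines 14-16 and 21 of Algorithm 2).
void pushIfSafe(OpenQueue& open, const Node& node, double collisionProb, double K) {
    if (collisionProb < K) {
        open.push(node);
    }
}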
Algorithm 3 corresponds to Line 13 of Algorithm 2. Forward simulations are carried out considering the uncertainty of the robot motion while the motion controller drives the robot from the parent node to the candidate child node.
Algorithm 3: SimulationWithMotionUncertainty ().
1:collision = 0
2: for k = 1 to N do
3:  [x, y, θ] = Trajectory[0].pose
4:  for j = 1 to Trajectory.Size do
5:   [v_l, v_r] = Trajectory[j].velocity
6:   v̂_l = v_l + (1/2) Σ_{i=1}^{12} rand(−α₁·v_l², α₁·v_l²)
7:   v̂_r = v_r + (1/2) Σ_{i=1}^{12} rand(−α₂·v_r², α₂·v_r²)
8:   v̂ = (v̂_l + v̂_r)/2
9:   ω̂ = (v̂_r − v̂_l)/b
10:  x = x − (v̂/ω̂)·sin θ + (v̂/ω̂)·sin(θ + ω̂Δt)
11:  y = y + (v̂/ω̂)·cos θ − (v̂/ω̂)·cos(θ + ω̂Δt)
12:  θ = θ + ω̂Δt
13:  if (CollisionCheck(x, y, θ) = true) then
14:   collision = collision + 1
15:   break
16:  end if
17:end for
18:end for
A velocity motion model [1] is used as the kinematic model of the two-wheeled mobile robot, as shown in Lines 6–12 of Algorithm 3. The trajectory generated in Line 11 of Algorithm 2 is evaluated by forward simulation using a number of samples (N in Algorithm 3), and the probability of collision with the obstacles is calculated and returned. The returned value is used to determine whether the node will be placed in the OPEN queue at Line 14 of Algorithm 2.
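For reference, the following C++ sketch implements the sampling step of Algorithm 3 under the velocity motion model of [1], approximating a zero-mean Gaussian sample by half the sum of twelve uniform samples; the noise coefficients alpha1 and alpha2, the sample count, and the collisionCheck callback are assumptions for illustration.

#include <cmath>
#include <cstdlib>
#include <functional>
#include <vector>

struct Pose { double x, y, theta; };
struct WheelCmd { double v_l, v_r; };

// Uniform random number in [-bound, bound].
static double randUniform(double bound) {
    return (2.0 * std::rand() / RAND_MAX - 1.0) * bound;
}

// Approximate zero-mean Gaussian sample: half the sum of twelve uniform samples, as in [1].
static double sampleNoise(double bound) {
    double sum = 0.0;
    for (int i = 0; i < 12; ++i) sum += randUniform(bound);
    return 0.5 * sum;
}

// Estimate the collision probability of a trajectory by forward simulation.
// trajectory: wheel-velocity commands per control step; start: initial pose; b: tread; dt: control period.
// alpha1, alpha2: velocity-dependent noise coefficients (assumed bound alpha * v^2, as in Lines 6-7).
double collisionProbability(const std::vector<WheelCmd>& trajectory, Pose start,
                            double b, double dt, double alpha1, double alpha2,
                            int numSamples,
                            const std::function<bool(const Pose&)>& collisionCheck) {
    int collisions = 0;
    for (int k = 0; k < numSamples; ++k) {
        Pose p = start;
        for (const WheelCmd& cmd : trajectory) {
            double v_l = cmd.v_l + sampleNoise(alpha1 * cmd.v_l * cmd.v_l);
            double v_r = cmd.v_r + sampleNoise(alpha2 * cmd.v_r * cmd.v_r);
            double v = (v_l + v_r) / 2.0;
            double w = (v_r - v_l) / b;
            if (std::fabs(w) < 1e-9) w = 1e-9;  // avoid division by zero for straight motion
            // Velocity motion model [1].
            p.x += -(v / w) * std::sin(p.theta) + (v / w) * std::sin(p.theta + w * dt);
            p.y +=  (v / w) * std::cos(p.theta) - (v / w) * std::cos(p.theta + w * dt);
            p.theta += w * dt;
            if (collisionCheck(p)) { ++collisions; break; }
        }
    }
    return static_cast<double>(collisions) / numSamples;
}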
As the motion uncertainty increases in magnitude, the sample distribution widens, and the number of colliding samples near the obstacles increases. With Algorithms 2 and 3, paths with a short travel distance are created while obstacles are safely avoided. As a result, a safe and efficient path can be generated by reflecting the uncertainty of the robot motion.

4. Simulation and Experimental Results

4.1. Measuring the Motion Uncertainty

The measurement range of the motion uncertainty of the mobile robot was based on the indoor service robots used in previous studies. Based on the service robots listed in Table 1, the maximum velocity and acceleration were set to 0.5 m/s and 0.5 m/s², respectively. The standard deviation of the velocity control error with respect to the velocity and acceleration was obtained. Figure 4 shows the robot used in this experiment. The experiment was repeated using the DWA, collecting approximately 5000 velocity and acceleration samples.
The parameters of the motion uncertainty of a two-wheeled mobile robot can also be found without actual navigation, as follows. The maximum linear velocity and acceleration of a wheel are determined, and a triangular velocity wave that satisfies them is generated. One wheel of the robot is held stationary while the other rotates the robot with the triangular-wave input; the opposite wheel is then used to obtain the motion uncertainty parameters in the same way. The motion uncertainty parameters of the robot are obtained from the input velocity at each time step and the actual velocity measured through the wheel encoder. When the operating conditions of the robot, such as the motor or the controller, are changed, the motion uncertainty parameters are measured again.
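A minimal sketch of this identification step is given below, assuming the logged data are available as reference/measured wheel-velocity pairs together with the commanded acceleration; the C++ function bins the samples by acceleration and computes the standard deviation of the velocity control error per bin, in the spirit of Table 2. The data structure, function name, and bin width are illustrative assumptions.

#include <cmath>
#include <map>
#include <vector>

// One logged control cycle for a single wheel.
struct VelocitySample {
    double v_ref;   // commanded wheel velocity [m/s]
    double v_exp;   // wheel velocity measured by the encoder [m/s]
    double accel;   // commanded acceleration magnitude [m/s^2]
};

// Standard deviation of the velocity control error, binned by acceleration.
// binWidth: e.g., 0.1 m/s^2 to reproduce the acceleration bins of Table 2.
std::map<double, double> errorStdDevByAcceleration(
        const std::vector<VelocitySample>& samples, double binWidth) {
    std::map<double, std::vector<double>> errors;
    for (const VelocitySample& s : samples) {
        double bin = std::round(s.accel / binWidth) * binWidth;
        errors[bin].push_back(s.v_ref - s.v_exp);
    }
    std::map<double, double> stddev;
    for (const auto& kv : errors) {
        double mean = 0.0;
        for (double e : kv.second) mean += e;
        mean /= kv.second.size();
        double var = 0.0;
        for (double e : kv.second) var += (e - mean) * (e - mean);
        var /= kv.second.size();
        stddev[kv.first] = std::sqrt(var);
    }
    return stddev;
}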
Table 2 shows the results of the motion control error collection with respect to velocity and acceleration. Figure 5 shows the experimental velocity control errors measured by the encoder sensors; the x-axis represents v_l_ref − v_l_exp, and the y-axis represents v_r_ref − v_r_exp. The reference velocity is the commanded wheel velocity, and the experimental velocity is the wheel velocity read from the encoder after one control cycle. Figure 5a shows the velocity control error of the low-uncertainty (LU) robot, and Figure 5b shows that of the high-uncertainty (HU) robot.
In order to investigate the effect of motion uncertainties, two parameter sets of the wheel velocity controller were tested; the motion uncertainty changes according to the PI gains of the velocity controller. One parameter set showed satisfactory velocity control performance after tuning and is called the LU case. Its parameters were [P, I] = [1, 100] for the current controller and [P, I] = [0.5, 0.05] for the velocity controller. The second parameter set, called the HU case, showed less satisfactory control performance. Its parameters were [P, I] = [0.00005, 5.0] for the current controller and [P, I] = [1.0, 25.0] for the velocity controller.
From Table 2, it is clear that the control error showed a strong correlation with the desired acceleration, whereas the desired velocities were independent of the experimental control errors. The measured velocity control error was used in Line 7 of Algorithm 1 and Lines 6–7 of Algorithm 3.

4.2. CURM and One-Step Simulation

This section provides a comparison of the conventional and proposed approaches through simulations. The simulations were performed by applying Algorithm 1. Figure 6a shows the simulation environment, in which the robot was located at A. Figure 6b–d shows the local paths generated by the conventional (blue) and proposed (red) schemes. It can be seen that the conventional path was closer to the obstacle than the proposed path, and the clearance objective of the conventional path was higher than that of the proposed path. However, when the motion uncertainty is taken into account, the conventional path becomes risky because there is minimal clearance in the lateral direction of the robot.
The CURM of Equation (7) provides a way of reflecting the motion uncertainty in the computation of the clearance objective, and the proposed path was generated by computing the CURM. The proposed path is safer than the conventional path when motion uncertainty exists. When the CURM was obtained by the method presented in Figure 7, the objective values of velocities near the obstacle decreased once the motion uncertainty was considered; thus, the probability of collision with the obstacle decreased.
Figure 7 shows the clearance objective computed using the conventional scheme and using Equation (7). The CURMs at the three locations A, B, and C of Figure 6 are shown in Figure 7b,d,f, respectively. The red lines in Figure 6b–d are the paths selected by the proposed scheme. Figure 7a,c,e shows the clearance objectives computed without considering the motion uncertainty.
Sharp peaks signify that the robot can apparently be driven safely at the selected input velocities. However, the collision risk increases dramatically when motion uncertainty exists: dramatic decreases in the clearance objectives of Figure 7b,d,f are clearly observed, and a decrease in the clearance objective implies an increase in the collision risk. Therefore, the computation of the CURM is essential in order to guarantee collision-free navigation in practical applications.

4.3. Reactive Motion Controller

Algorithm 1 was experimentally tested. In the cluttered environment of Figure 8, the collision risks of the conventional and proposed methods were investigated. The navigation system was implemented in C++ on a Core-Duo 2.53-GHz laptop using the ROS platform [30]. The ground truth for the robot pose was obtained by Adaptive Monte-Carlo Localization (AMCL) [31]. For comparison, four indices were used to evaluate the risk of collision: the minimum distance, the TTC, the minimum distance in the input space, and the CRI [14]. The maximum translational velocity of the robot was 0.5 m/s.
Figure 9 shows three paths of the robot in a static cluttered environment, and Figure 10 compares their navigation safety indices. The conventional path was assumed to be the ideal case in which there is no motion uncertainty. For the LU robot, the safety indices of the proposed method were similar to those of the conventional method. For the HU robot, the proposed control method was more cautious than the conventional method. From the comparison of the collision risk indices, which indicate the collision risk in the admissible velocity space, it can be concluded that the proposed method navigated more safely.

4.4. Path Planner

Figure 11 shows the simulation results of the proposed path planner in an environment with narrow passages ranging from 0.35 m to 0.85 m in width. The simulator was developed by the authors using the MFC library.
The width of the passages increases with the y-coordinate in Figure 11. The purple lines indicate the resultant paths of the simulations using the conventional method, and the red and blue lines indicate the paths of the LU and HU robots, respectively. A total of 100 simulations was carried out for each method. A trajectory can be updated online using the modeled motion uncertainty information; in an environment with fixed obstacles, a Core-Duo 2.53-GHz laptop can generate approximately one trajectory per second online.
Table 3 shows the quality of the paths [32] presented in Figure 11 and provides a quantitative comparison between the conventional and proposed schemes. The collision probability was calculated along each trajectory, and the middle and right columns of Table 3 present the approximations that can be computed given the probability distributions along the trajectories. Since the simulations were based on the practical motion uncertainty, the conventional method's assumption of perfect motion was not applied when evaluating collisions.
When the motion uncertainty was applied excessively in the path planner, the quality of paths was 98.7%. When the motion uncertainty of the robot was applied exactly to the path generation, the quality of paths of the LU and HU robots was 97.0% and 89.3%, respectively. When the motion uncertainty was applied insufficiently, the quality of paths was under 80%. It can be seen that the proposed scheme showed superior performance from the viewpoint of collision safety.
Figure 12 shows the distance traveled, the time taken, and the width of the passage used for the 100 simulations. The conventional method tended to generate shorter paths through narrower passages and required less time. The HU robot traveled longer distances and took more time. This result signifies that the HU robot moved through wider passages in order to avoid collisions under highly uncertain conditions.
Figure 13 shows the simulation results for the safety indices: the minimum distance to the obstacles, the TTC, the minimum distance from the obstacle in the input space, and the CRI when the robot is in a dangerous situation. For all indices, the conventional approach exhibited the highest collision risk, while the HU robot took the safest paths. This result implies that the consideration of motion uncertainty is extremely significant in real-world applications, where uncertainty is not negligible.
The simulation results show that the motion uncertainty must be explicitly applied to the controller or path planner. The safety of the robot improved when the motion uncertainty was considered. Applying an excessively-extended uncertainty also reduced the collision risk, but at the cost of navigation efficiency. Therefore, it is possible to balance the collision safety and navigation efficiency by accurately applying the motion uncertainty of practical robots.

5. Conclusions

In this study, a new motion control scheme for reliable obstacle avoidance that reflects the experimentally-measured motion uncertainties was proposed. It was shown how the uncertainty of robot motions can be quantitatively modeled on the basis of the velocity control performance of the wheels. A controller was proposed in which obstacles are extended by the amount of the motion uncertainty modeled in the input space. The usefulness of the proposed approach was experimentally verified.
The experimental results clearly show that the consideration of the motion uncertainty is essential for successful collision avoidance. A path planner in which the uncertainty of motion is quantitatively reflected was also proposed. In an environment with multiple narrow passages, the paths generated by the proposed method were compared with those of the conventional method. The conventional method generated shorter paths; however, under the actual motion uncertainty, it showed a high risk of collision in simulation. The path generated by the proposed method may not be the fastest, but it was generated with both safety and efficiency in consideration. The presented simulation results demonstrated that the proposed method accurately reflects the motion uncertainty and balances the collision safety with the navigation efficiency of the robot.

Author Contributions

Conceptualization, J.J. and W.C.; Methodology, J.J. and W.C.; Validation, J.J.; Investigation, J.J.; Writing-Original Draft Preparation, J.J.; Writing-Review & Editing, J.J. and W.C.; Supervision, W.C.

Funding

This work was supported in part by the NRF, MSIP (NRF-2017R1A2A1A17069329), and in part by the Agriculture, Food and Rural Affairs Research Center Support Program (Project No. 714002-07), Ministry of Agriculture, Food and Rural Affairs.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  2. Timcenko, A.; Allen, P. Modeling Uncertainties in Robot Motions. Available online: https://pdfs.semanticscholar.org/c1fa/b2b61dc33c4db945a690c74af68e2e2c4250.pdf (accessed on 10 January 2019).
  3. LaValle, S.M. Planning Algorithms; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  4. Hunter, A.; Parsons, S.D. Applications of Uncertainty Formalisms; Springer: Berlin, Germany, 2003. [Google Scholar]
  5. Kim, J.; Park, J.; Chung, W. Self-Diagnosis of Localization Status for Autonomous Mobile Robots. Sensors 2018, 18, 3168. [Google Scholar] [CrossRef] [PubMed]
  6. Kelly, A. Mobile Robotics: Mathematics, Models, and Methods; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  7. Van Den Berg, J.; Patil, S.; Alterovitz, R. Motion planning under uncertainty using iterative local optimization in belief space. Int. J. Robot. Res. 2012, 31, 1263–1278. [Google Scholar] [CrossRef] [Green Version]
  8. Fox, D.; Burgard, W.; Thrun, S. The dynamic window approach to collision avoidance. IEEE Robot. Autom. Mag. 1997, 4, 23–33. [Google Scholar] [CrossRef] [Green Version]
  9. Brock, O.; Khatib, O. High-speed navigation using the global dynamic window approach. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation, Detroit, MI, USA, 10–15 May 1999; Volume 1, pp. 341–346. [Google Scholar]
  10. Minguez, J.; Montano, L. Nearness diagram (ND) navigation: Collision avoidance in troublesome scenarios. IEEE Trans. Robot. Autom. 2004, 20, 45–59. [Google Scholar] [CrossRef]
  11. Borenstein, J.; Koren, Y. The vector field histogram-fast obstacle avoidance for mobile robots. IEEE Trans. Robot. Autom. 1991, 7, 278–288. [Google Scholar] [CrossRef] [Green Version]
  12. Qian, J.; Zi, B.; Wang, D.; Ma, Y.; Zhang, D. The design and development of an omni-directional mobile robot oriented to an intelligent manufacturing system. Sensors 2017, 17, 2073. [Google Scholar] [CrossRef] [PubMed]
  13. Zi, B.; Lin, J.; Qian, S. Localization, obstacle avoidance planning and control of a cooperative cable parallel robot for multiple mobile cranes. Robot. Comput.-Integr. Manuf. 2015, 34, 105–123. [Google Scholar] [CrossRef]
  14. Chung, W.; Kim, S.; Choi, M.; Choi, J.; Kim, H.; Moon, C.B.; Song, J.B. Safe navigation of a mobile robot considering visibility of environment. IEEE Trans. Ind. Electron. 2009, 56, 3941–3950. [Google Scholar] [CrossRef]
  15. Richter, C.; Ware, J.; Roy, N. High-speed autonomous navigation of unknown environments using learned probabilities of collision. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–5 June 2014; pp. 6114–6121. [Google Scholar]
  16. Park, J.; Iagnemma, K. Sampling-based planning for maximum margin input space obstacle avoidance. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 2064–2071. [Google Scholar]
  17. Miura, J.; Negishi, Y.; Shirai, Y. Adaptive robot speed control by considering map and motion uncertainty. Robot. Auton. Syst. 2006, 54, 110–117. [Google Scholar] [CrossRef]
  18. LaValle, S.M.; Kuffner, J.J., Jr. Randomized kinodynamic planning. Int. J. Robot. Res. 2001, 20, 378–400. [Google Scholar] [CrossRef]
  19. Fraichard, T.; Asama, H. Inevitable collision states? A step towards safer robots? Adv. Robot. 2004, 18, 1001–1024. [Google Scholar] [CrossRef]
  20. Chan, N.; Kuffner, J.; Zucker, M. Improved motion planning speed and safety using regions of inevitable collision. In Proceedings of the 17th CISM-IFToMM Symposium on Robot Design, Dynamics, and Control, Tokyo, Japan, 5–9 July 2008; pp. 103–114. [Google Scholar]
  21. Van der Horst, R.; Hogema, J. Time-to-collision and collision avoidance systems. In Proceedings of the 6th ICTCT Workshop—Safety Evaluation of Traffic Systems: Traffic Conflicts and Other Measures, Salzburg, Austria, 27–29 October 1993; pp. 109–121. [Google Scholar]
  22. ISO. Intelligent Transport Systems—Lane Change Decision Aid Systems (LCDAS)—Performance Requirements and Test Procedures; International Organization for Standardization: Geneva, Switzerland, 2008. [Google Scholar]
  23. Moon, C.B.; Chung, W.; Doh, N.L. Observation likelihood model design and failure recovery scheme toward reliable localization of mobile robots. Int. J. Adv. Robot. Syst. 2010, 7, 24. [Google Scholar] [CrossRef]
  24. Do, C.B. The Multivariate Gaussian Distribution. Available online: http://cs229.stanford.edu/section/gaussians.pdf (accessed on 10 January 2019).
  25. Burgard, W.; Cremers, A.B.; Fox, D.; Hähnel, D.; Lakemeyer, G.; Schulz, D.; Steiner, W.; Thrun, S. The Interactive Museum Tour-Guide Robot. Available online: https://www.aaai.org/Papers/AAAI/1998/AAAI98-002.pdf (accessed on 10 January 2019).
  26. Thrun, S.; Beetz, M.; Bennewitz, M.; Burgard, W.; Cremers, A.B.; Dellaert, F.; Fox, D.; Haehnel, D.; Rosenberg, C.; Roy, N.; et al. Probabilistic algorithms and the interactive museum tour-guide robot minerva. Int. J. Robot. Res. 2000, 19, 972–999. [Google Scholar] [CrossRef]
  27. Kim, G.; Chung, W.; Kim, K.R.; Kim, M.; Han, S.; Shinn, R.H. The autonomous tour-guide robot jinny. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; Volume 4, pp. 3450–3455. [Google Scholar]
  28. Murai, R.; Sakai, T.; Kawano, H.; Matsukawa, Y.; Kitano, Y.; Honda, Y.; Campbell, K.C. A novel visible light communication system for enhanced control of autonomous delivery robots in a hospital. In Proceedings of the 2012 IEEE/SICE International Symposium on System Integration (SII), Fukuoka, Japan, 16–18 December 2012; pp. 510–516. [Google Scholar]
  29. Moon, C.B.; Chung, W. Kinodynamic planner dual-tree RRT (DT-RRT) for two-wheeled mobile robots using the rapidly exploring random tree. IEEE Trans. Ind. Electron. 2015, 62, 1080–1090. [Google Scholar] [CrossRef]
  30. ROS: Robot Operating System. Available online: http://wiki.ros.org/wiki (accessed on 12 January 2019).
  31. AMCL. Available online: http://wiki.ros.org/amcl (accessed on 12 January 2019).
  32. Van Den Berg, J.; Abbeel, P.; Goldberg, K. LQG-MP: Optimized path planning for robots with motion uncertainty and imperfect state information. Int. J. Robot. Res. 2011, 30, 895–913. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Illustration of the motion uncertainties during obstacle avoidance.
Figure 2. Uncertainty ellipse around the reference velocity in the input space.
Figure 3. Clearance considering the uncertainty of the robot motion (CURM).
Figure 4. Stella B3.
Figure 5. Uncertainty of the experimental resultant velocity in the input space. LU, low uncertainty; HU, high uncertainty.
Figure 6. Simulation environment and path results.
Figure 7. The clearance of the simulation environments.
Figure 8. The experimental environment.
Figure 9. Resultant path.
Figure 10. Comparisons of the navigation safety indices.
Figure 11. Resultant paths.
Figure 12. Comparisons of navigated distance, navigated time, and passage width.
Figure 13. Navigation results: comparison of the safety indices.
Table 1. Maximum translational velocities and accelerations of previous service robots.

Year | Robot or Paper       | Max. Velocity | Max. Acceleration
1998 | Rhino [25]           | 0.36 m/s      | -
2000 | MINERVA [26]         | 0.38 m/s      | -
2004 | Jinny [27]           | 1.0 m/s       | 0.5 m/s²
2009 | Safe Navigation [14] | 0.5 m/s       | 0.8 m/s²
2013 | HOSPY [28]           | 1.0 m/s       | -
2015 | Dual-Tree RRT [29]   | 1.5 m/s       | -
Table 2. Standard deviation of the control error with respect to the velocity and acceleration.

Acc. (m/s²)    | 0     | 0.1   | 0.2   | 0.3   | 0.4   | 0.5
Std. dev. (LU) | 0.002 | 0.005 | 0.017 | 0.02  | 0.031 | 0.036
Std. dev. (HU) | 0.011 | 0.017 | 0.074 | 0.072 | 0.101 | 0.109
Vel. (m/s)     | 0     | 0.1   | 0.2   | 0.3   | 0.4   | 0.5
Std. dev. (LU) | 0.005 | 0.002 | 0.006 | 0.006 | 0.008 | 0.003
Std. dev. (HU) | 0.018 | 0.010 | 0.021 | 0.02  | 0.02  | 0.017
Table 3. The quality of paths.

           | LU Robot (Avg.) | HU Robot (Avg.)
Conv. Path | 72.4%           | 27.7%
LU Path    | 97.0%           | 72.0%
HU Path    | 98.7%           | 89.3%
