
UAV and UGV State-Based Behavioral Formation Control For Landmine Operations

Moustafa M. Kurdi
Belarusian National Technical University (BNTU)
Minsk, Belarus
[email protected]

Alex K. Dadykin
Belarusian National Technical University (BNTU)
Minsk, Belarus
[email protected]

Imad Elzein
Belarusian National Technical University (BNTU)
Minsk, Belarus
[email protected]

Abstract—Robot formation control has drawn considerable attention for a long time and is by now a well-established field. First, this paper deals with planning the movement of air and ground robots under Behavior Formation Control, in which behavioral constraints are imposed on the relative positions and orientations of the robots throughout their travel. Second, it describes the projection of landmine detections onto a GIS map based on the ground penetrating radar (GPR) attached to an unmanned aerial vehicle (UAV). Third, it shows how the unmanned ground vehicle (UGV) uses the GIS map to analyze its path in real time and to remove landmines. Fourth, it focuses on the cooperation between the air and ground robots through efficient wireless transmission via a base station. The Behavior-State Formation System uses a Finite State Automata control strategy for landmine detection, GIS-map uploading/downloading, and landmine removal. The proposed system demonstrates how this class of behavior formations of hybrid robots (air-ground vehicles) can be used effectively for landmine operations.

Keywords—unmanned aerial vehicle (UAV); unmanned ground vehicle (UGV); Behavior Formation Control; hybrid robots; landmine operations
I. INTRODUCTION

Robot formation control [1] has drawn significant attention for many years, especially for ground mobile robots and quadcopters. The ability to keep a specific formation of hybrid robots is of particular interest where information must be gathered from various inputs or resources. A controller is implemented for every robot to guarantee that it tracks its planned path. Robot formation control is carried out by setting up an xyz coordinate system rather than an xy system [2].

Researchers classify formation control strategies [3] into three categories: i) behavior-based; ii) virtual-structure; and iii) leader-follower. Our approach for hybrid robots uses the Behavior-Based (B-B) formation and starts by designing simple behaviors or motion primitives for each individual robot, e.g., formation keeping, trajectory tracking, goal seeking, landmine detection, and obstacle avoidance.

The approach is potentially applicable in many other domains such as search and rescue, agricultural coverage tasks, and security patrols. Behavior-based approaches are also useful for guiding a multi-robot system in an unknown or dynamically changing environment using a vision processing system.

This article discusses the Behavior-Based (B-B) formation of a UAV (Phantom-4 Pro Vision) and a UGV (Belarus-132N) as follows (Fig. 1); a minimal code sketch of this workflow is given at the end of this section:

1. The quadcopter takes off, moves over the region, photographs the area, and searches for and detects landmines using the ground penetrating radar.

2. The quadcopter projects the landmine findings into Geographic Information System (GIS) mapping.

3. The quadcopter transmits the GIS images and collected data to the base station located near the field of operation.

4. The Central Unit Base Station uploads the GIS landmine findings into the digital map, and then sends the updated digital map to the ground robot.

5. The ground robot uses the digital map to move through the operational area and remove the landmines.

This paper is organized as follows: Section II reviews state-based behavioral formation control, Section III presents UAV-UGV behavioral formation control with DFAs for UAV and UGV control, Section IV details the UGV landmine-removal state, and our conclusions and thoughts on future extensions are summarized in Section V.
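The five-step workflow above can be read as a simple coordination pipeline between the quadcopter, the base station, and the ground robot. The following Python sketch is purely illustrative: every function and method name (survey_area, project_to_gis, and so on) is a hypothetical placeholder for a subsystem named in the list, not part of the authors' implementation.

    # Illustrative sketch of the five-step UAV/UGV landmine workflow.
    # All method names are hypothetical placeholders for the subsystems
    # described in the Introduction; none of them come from the paper.

    def run_mission(uav, base_station, ugv, region):
        # 1. UAV surveys the region and detects landmines with the GPR.
        detections = uav.survey_area(region)

        # 2. UAV projects the findings into GIS mapping.
        gis_layer = uav.project_to_gis(detections)

        # 3. UAV transmits GIS images and collected data to the base station.
        base_station.receive(gis_layer)

        # 4. Base station merges the findings into the digital map and
        #    sends the updated map to the ground robot.
        digital_map = base_station.update_digital_map(gis_layer)
        ugv.load_map(digital_map)

        # 5. UGV moves through the operational area and removes the landmines.
        ugv.clear_landmines()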
II. STATE-BASED BEHAVIORAL FORMATION CONTROL

State-based modeling is a modeling concept used to specify complex hybrid control systems in which the overall objective of the system is divided into a set of functional, task-achieving behaviors or motion states that work on their individual goals concurrently and asynchronously and, upon integration, yield the global objective of the system.

The behaviors or motion states of the robots are modeled and expressed in the state-based model as:

ρ = C(G * B(S))    (1)

where ρ is the vector encoding the global response undertaken by the robot; G = [g1, g2, g3, ...]^T is the vector encoding the gain of each behavior βi; B = [β1, β2, β3, ...]^T is the vector of all active behaviors βi at time t; S = [s1, s2, s3, ...]^T is the vector of all stimuli si for each behavior at time t; and C is the coordination/arbitration function, which may be competitive or cooperative.
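As a concrete reading of (1), the short sketch below scales each behavior's response by its gain and coordinates the results cooperatively by summation. The 2-D response vectors, the example behaviors, and the choice of summation for C are illustrative assumptions, not details given in the paper.

    import numpy as np

    # Minimal sketch of Eq. (1), rho = C(G * B(S)), assuming each behavior maps
    # its stimulus to a 2-D motion vector and C is a cooperative (summing) function.

    def coordinate(behaviors, gains, stimuli):
        """behaviors: callables beta_i(s_i) -> shape-(2,) array;
        gains: scalars g_i; stimuli: stimulus values s_i."""
        responses = np.array([beta(s) for beta, s in zip(behaviors, stimuli)])  # B(S)
        weighted = np.array(gains)[:, None] * responses                         # G * B(S)
        return weighted.sum(axis=0)                                             # C(...)

    # Example: a goal-seeking behavior and an obstacle-avoidance behavior.
    goal_seek = lambda goal_vec: np.asarray(goal_vec, dtype=float)
    avoid = lambda obstacle_vec: -np.asarray(obstacle_vec, dtype=float)

    rho = coordinate([goal_seek, avoid], gains=[1.0, 2.0],
                     stimuli=[[1.0, 0.0], [0.2, 0.1]])
    print(rho)  # net motion response vector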
Autonomous robotic systems present several difficult challenges. An autonomous robotic system must be extremely self-reliant in order to operate in complex, partially known, and challenging environments using its limited physical and computational resources. In spite of these difficulties, the control system must ensure in real time that the robot achieves its tasks.

A. Finite State Automata Approach

A set of behaviors that is adequately competent to handle the situation corresponding to the given state is selected. Using this formalism, systems are modeled in terms of Finite State Automata (FSA), where states correspond to the execution of actions/behaviors, and events, corresponding to observations and actions, cause transitions between the states.

FSA come in two types: i) Deterministic Finite Automata (DFA), which have a fixed number of states and can be in only one state at a time; and ii) Nondeterministic Finite Automata (NFA), which have a fixed number of states but can be in multiple states at one time. A DFA is represented as A = (Q, Σ, δ, q0, F).
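For readers who prefer code, the 5-tuple A = (Q, Σ, δ, q0, F) can be transcribed directly into a small Python class. This is a generic textbook construction used only to make the later transition tables concrete; it is not code from the paper.

    # Generic DFA A = (Q, Sigma, delta, q0, F) as a small helper class.

    class DFA:
        def __init__(self, states, alphabet, delta, start, finals):
            self.states, self.alphabet = set(states), set(alphabet)
            self.delta = delta                  # dict: (state, symbol) -> state
            self.start, self.finals = start, set(finals)

        def run(self, inputs):
            """Feed a sequence of input symbols and return the resulting state."""
            state = self.start
            for symbol in inputs:
                state = self.delta[(state, symbol)]
            return state

        def accepts(self, inputs):
            return self.run(inputs) in self.finals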

III. UAV-UGV BEHAVIORAL FORMATION CONTROL

The system describes a formation of two robots [8]: the ground robot Belarus-132N and a ground penetrating radar (GPR) attached to the quadcopter Phantom-4, as shown in Fig. 1. The proposed control algorithm applies state-based behavioral formation control [9].

Fig. 1. General Architecture of the Proposed System

A. Quadcopter Phantom-4 Pro Control

UAVs are commonly used for military and non-military operations such as landmine detection. The Phantom-4 Vision quadcopter is a light-weight, multi-functional aircraft equipped with a Ground Penetrating Radar (GPR).

a) UAV Deterministic Finite Automata

The proposed UAV system is depicted in Fig. 2, where the mission consists of finding landmines, locating their positions on the GIS map, and saving the data to the base station sitting near the operational region, all while avoiding obstacles. The agent and its interaction with the environment are modeled by an FSA and denoted the plant.

Figure 2 shows the transition diagram of the DFA for UAV landmine detection. Node A represents "Start"; Node B represents "Find a landmine"; Node C represents "Locate landmine object on the GIS map". A Start arrow enters the start state, A, and the final state, B, is drawn as a double circle. Input 0 represents cancelling an action (landmine detection), while 1 represents performing the action (landmine detection).

Fig. 2: UAV Transition Diagram of the DFA for Landmine Detection

The DFA formal definition for UAV landmine detection, transcribed into a transition table after this list, is:

1. Finite set of states Q = {A, B, C}
2. Input symbols Σ = {0, 1}
3. Start state A
4. Final state F = {B}
5. A transition function δ:
   • δ(A, 0) = A // do not start
   • δ(A, 1) = B // go to the "Find landmine" state
   • δ(B, 0) = B // cannot find landmines
   • δ(B, 1) = C // a new landmine was found, so locate it on the GIS map
   • δ(C, 0) = C // the new landmine object cannot be added (located) on the GIS map due to technical or communication errors
   • δ(C, 1) = B // search for another landmine
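Reusing the hypothetical DFA helper class sketched in Section II-A, the UAV landmine-detection automaton is just the δ entries above written as a dictionary; nothing beyond that transcription is implied.

    # UAV landmine-detection DFA, transcribed from the definition above and
    # reusing the DFA helper class sketched in Section II-A.
    uav_delta = {
        ('A', 0): 'A',  # do not start
        ('A', 1): 'B',  # go to "Find landmine"
        ('B', 0): 'B',  # cannot find landmines
        ('B', 1): 'C',  # landmine found, locate it on the GIS map
        ('C', 0): 'C',  # could not add the object to the GIS map
        ('C', 1): 'B',  # search for another landmine
    }
    uav_dfa = DFA(states={'A', 'B', 'C'}, alphabet={0, 1},
                  delta=uav_delta, start='A', finals={'B'})

    # Example run: start, find a landmine, locate it, then resume searching.
    print(uav_dfa.run([1, 1, 1]))      # -> 'B'
    print(uav_dfa.accepts([1, 1, 1]))  # -> True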
b) UAV Landmine Detection

The UAV projects each landmine found as a vertex on the GIS map and transfers it to the base station. Each landmine found has the following properties (sketched as a data record after this list):

1) ID: a unique number assigned to the new landmine.
2) Position of the new object (xr, yr).
3) Dimensions of the new object (lr, wr, dr).
4) Object type (anti-personnel, anti-tank).
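These four properties can be held in a small data record. The sketch below is a hypothetical transcription; the field names and types are illustrative, and the paper does not define a concrete storage format.

    from dataclasses import dataclass

    # Hypothetical record for one landmine finding, transcribing the four
    # properties listed above; field names and types are illustrative only.
    @dataclass
    class LandmineFinding:
        uid: int        # 1) unique ID assigned to the new landmine
        x: float        # 2) position x_r on the GIS map
        y: float        #    position y_r
        length: float   # 3) dimensions l_r, w_r, d_r of the object
        width: float
        depth: float
        kind: str       # 4) object type: "anti-personnel" or "anti-tank"

    finding = LandmineFinding(uid=1, x=53.9, y=27.6, length=0.3, width=0.3,
                              depth=0.1, kind="anti-personnel")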
B. Ground Robot Belarus-132N Control

The ground robot Belarus-132N consists of the four-wheeled, serial chassis of the Belarus-132N tractor, with a length of 120 cm, a width of 120 cm, a height of 180 cm, and a weight of about 500 kg.

a) UGV Deterministic Finite Automata

The proposed UGV system is depicted in Fig. 3, where the mission consists of uploading the GIS map from the base station, removing landmines, updating the GIS map, and saving the updated map back to the base station, all while avoiding obstacles.

Figure 3 shows the transition diagram of the DFA for UGV landmine removal. Node A represents "Start"; Node B represents "Upload GIS map to UGV"; Node C represents "Remove landmine from the ground"; Node D represents "Update GIS map after one landmine removal". A Start arrow enters the start state, A, and the final state, C, is drawn as a double circle. Input 0 represents cancelling an action (remove landmine or upload GIS), while 1 represents performing the action (remove landmine or upload GIS).

Fig. 3: UGV Transition Diagram of the DFA for Landmine Removal

The DFA formal definition for UGV landmine removal, transcribed into a transition table after this list, is:

1. Finite set of states Q = {A, B, C, D}
2. Input symbols Σ = {0, 1}
3. Start state A
4. Final state F = {C}
5. A transition function δ:
   • δ(A, 0) = A // do not start
   • δ(A, 1) = B // upload the GIS map from the base station in order to start landmine removal
   • δ(B, 0) = B // cannot upload the GIS map
   • δ(B, 1) = C // the GIS map was uploaded; go to the "remove landmine" state
   • δ(C, 0) = C // cannot remove (extract) the landmine due to technical errors
   • δ(C, 1) = D // update the GIS map after one landmine object is removed
   • δ(D, 0) = D // cannot update the GIS map at the base station due to technical and communication errors
   • δ(D, 1) = C // the GIS map was updated; go back to the "remove landmine" state to continue removing all landmines
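The UGV removal automaton is likewise just the δ entries above as a dictionary, again reusing the hypothetical DFA helper class from Section II-A.

    # UGV landmine-removal DFA, transcribed from the definition above.
    ugv_delta = {
        ('A', 0): 'A',  # do not start
        ('A', 1): 'B',  # upload the GIS map from the base station
        ('B', 0): 'B',  # cannot upload the GIS map
        ('B', 1): 'C',  # map uploaded, go to "remove landmine"
        ('C', 0): 'C',  # removal failed due to technical errors
        ('C', 1): 'D',  # update the GIS map after one removal
        ('D', 0): 'D',  # cannot update the map at the base station
        ('D', 1): 'C',  # map updated, continue removing landmines
    }
    ugv_dfa = DFA(states={'A', 'B', 'C', 'D'}, alphabet={0, 1},
                  delta=ugv_delta, start='A', finals={'C'})

    # Example: upload the map, remove one landmine, update the map, resume.
    print(ugv_dfa.run([1, 1, 1, 1]))  # A -> B -> C -> D -> C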
IV. UGV REMOVE LANDMINE STATE

State C, which represents "Remove landmine from the ground" as shown in Fig. 3, consists of two sub-finite automata (Navigation and Obstacle Avoidance; Tracking).

A. Scanning and Analysis of the Selected Area

The three stages of processing the robot's field of view with obstacle avoidance strategies, illustrated by the sketch after this list, are:

1. Based on the distance to the closest object (obstacle) in the robot's path, we compute a suitable turning rate, which is proportional to the ratio of the obstacle distance to the velocity of the robot.

2. While the UGV robot is steering away from the obstacle, the system shows the location of the obstacle in the field of view.

3. When the field of view is free of obstacles, the UGV robot starts correcting its angle of deviation in order to reach the desired goal and path.
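Stage 1 is stated only qualitatively, so the following sketch is one possible reading of it: the turn-rate command is proportional to the ratio of obstacle distance to robot velocity, with an assumed gain and saturation limit that are not given in the paper.

    # One possible reading of stage 1: a turning rate proportional to the ratio
    # of obstacle distance to robot velocity. The gain k and the saturation
    # limit are illustrative assumptions, not values from the paper.

    def turning_rate(obstacle_distance, robot_velocity, k=0.5, max_rate=1.0):
        """Return a turn-rate command magnitude (direction handled elsewhere)."""
        if robot_velocity <= 0.0:
            return 0.0                      # robot at rest: no steering needed
        rate = k * (obstacle_distance / robot_velocity)
        return min(rate, max_rate)          # saturate to the vehicle's limit

    print(turning_rate(obstacle_distance=1.2, robot_velocity=0.8))  # 0.75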
B. Obstacle Avoidance Behavior

The obstacle avoidance behavior system monitors the field of view and the free space in front of the mobile robot (Belarus-132N). When an obstacle is labeled, the avoidance strategy is applied. The obstacle avoidance behavior is modeled as follows (Fig. 4), with a transition-table sketch after this list:

Fig. 4: UGV Model for Obstacle Avoidance

1. States Q = {S1; S2; S3; S4}, where S1 represents "Rest", S2 represents "Moving", S3 represents "Steer Away", and S4 represents "Following the path".

2. Events Σ = {a; b; c; d; e; f; g; h}, where a = motion detected; b = motionless; c = obstacle detected; d = move and avoid; e = stopped; f = path finished; g = path evaluated; h = clear and free the path.

3. Initial state q0 = {rest}.

4. State S1 = rest: the mobile robot is stable (at rest).

5. State S2 = moving: the mobile robot is moving ahead and its sensors are continuously monitoring the free space in front of the robot.

6. State S3 = steer away: the mobile robot is moving away from the obstacle with a particular velocity.

7. State S4 = following the path: the path is being followed using a feed-forward control strategy.

8. Event a = motion detected: asserted when motion has been detected.

9. Event b = motionless: asserted when the vehicle becomes stationary.

10. Event c = obstacle detected: asserted when an obstacle is present in the vehicle's path.

11. Event d = move and avoid: changes the heading and/or speed of the vehicle.

12. Event e = stopped: the vehicle has stopped.

13. Event f = path finished: asserted when a given path is completed.

14. Event g = path evaluated: asserted when a given path is computed.

15. Event h = clear and free the path: asserted when there is no obstacle in the vehicle's field of view.
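The exact arcs of this automaton appear only in Fig. 4, so the transition table below is a plausible mapping assembled from the state and event descriptions above; it should be read as an assumption, not as the figure itself.

    # One plausible transition table for the obstacle-avoidance FSA, assembled
    # from the state/event descriptions above; the exact arcs appear only in
    # Fig. 4, so this particular mapping is an illustrative assumption.
    oa_transitions = {
        ('S1', 'a'): 'S2',   # rest -> moving, once motion is detected
        ('S2', 'c'): 'S3',   # moving -> steer away, when an obstacle is detected
        ('S2', 'b'): 'S1',   # moving -> rest, when the vehicle becomes motionless
        ('S3', 'h'): 'S4',   # steer away -> following the path, once the view is clear
        ('S4', 'c'): 'S3',   # following -> steer away, if a new obstacle appears
        ('S4', 'f'): 'S1',   # following -> rest, when the path is finished
    }

    def oa_step(state, event):
        """Advance the obstacle-avoidance behavior by one event (no-op if undefined)."""
        return oa_transitions.get((state, event), state)

    state = 'S1'
    for event in ['a', 'c', 'h', 'f']:
        state = oa_step(state, event)
    print(state)  # -> 'S1' once the path is finished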
C. Tracking

During the tracking mode, we provide the robot with the capability of tracking the map in order to reach the mine (target). This requires choosing the right features to track and thereby uniquely describing the motion of the target in front of the robot. The tracking behavior is modeled as follows (Fig. 5), with a small transition-table sketch after this list:

Fig. 5: UGV Model for Tracking

1. Set of states Q = {S1; S2}, where S1 stands for "Searching" and S2 represents "Tracking the target".

2. Set of events Σ = {a; b; c}, where a = target detected; b = target lost; c = move and follow the track.

3. Σu = {target and goal detected; target lost}.

4. Initial state q0 = {searching}.

5. State S1 = searching: the vehicle is stationary or moving while looking for the determined target.

6. State S2 = tracking the target: the target is in the field of view of the vehicle, and the process generates commands to keep the target in the center of the field of view and within a constant distance from the vehicle.

7. Event a = target and goal detected: asserted when the target to be tracked is detected.

8. Event b = target lost: asserted when the target disappears from the field of view.

9. Event c = move and follow the track: a command to the actuators to keep the target in the center of the field of view.
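The two-state tracking automaton can be written the same way; since its arcs are shown only in Fig. 5, the small mapping below is again an assumption that simply follows the event descriptions above.

    # Two-state tracking automaton following the event descriptions above;
    # the arcs are given only in Fig. 5, so this mapping is an assumption.
    track_transitions = {
        ('S1', 'a'): 'S2',   # searching -> tracking, when the target is detected
        ('S2', 'b'): 'S1',   # tracking -> searching, when the target is lost
        ('S2', 'c'): 'S2',   # tracking: keep following, re-centering the target
    }

    state = 'S1'
    for event in ['a', 'c', 'b']:
        state = track_transitions.get((state, event), state)
    print(state)  # -> 'S1' after the target is lost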
[7] L. Parker, G. Antonelli, F. Caccavale, “A Decentralized Architecture for
1. Set of states Q = {S1; S2} where S1 stands for Multi-Robot Systems Based on the Null-Space-Behavioral Control with
“Searching”; while S2 represents “Tracking the Application to Multi-Robot Border Patrolling”, Journal of Intelligent
target”. & Robotic Systems, 2013.
[8] S. Chiaverini, “The null-space-based behavioral control for soccer-
2. Set of events Σ={a; b} where a = target detected; playing mobile robots”, IEEE/ASME International Conference on
b= target lost; c = move and follow the track; Advanced Intelligent Mechatronics, 2005
[9] R.C. Arkin, “Motor schema based mobile robot navigation”, The
3. Where Σu = {target and goal detected; target lost} International Journal of Robotics Research, 8(4):92–112, 1989.
4. Initial state q0 = {searching} [10] J. Bryson, “Action selection and individuation in agent based
modelling”, Agent 2003: Challenges of Social Simulation, pp. 317–330,
5. State S1 = searching. The vehicle is stationary or 2003.
moving looking for the determined target. [11] Y. Cao, A. Fukunaga, A. Kanhg, “Cooperative mobile robotics:
Antecedents and directions”, Autonomous Robots, 4:7–27, 1997.
6. State S2 = tracking the target. The target is in the [12] M. Kurdi, A. Dadykin, I. Elzein, “Model Predictive Control for
field of view of the vehicle and the process Positioning and Navigation of Mobile Robot with Cooperation of
generates commands to keep the target in the
UAV”, Communications on Applied Electronics (CAE), vol.6, no.7, pp. [14] J. Carvalho, E. Paiva, J. Ramos, A. Elfes, M. Bergerman, S. Bueno,
17-25, February 2017. “Air-ground robotic ensembles for cooperative applications: concepts
[13] J. Kosecka, R. Bajcsy, “Discrete event systems for autonomous mobile and preliminary results”, International Conference on Field and Service
agents”, Robotics and Autonomous Systems, 12:187–198, 1993. Robotics, pages 75–80, 1999.
