Vision-Based Guidance and Navigation For Autonomous MAV in Indoor Environment
Abstract— The paper presents an autonomous vision-based guidance and mapping algorithm for the navigation of drones in a GPS-denied environment. We propose a novel algorithm that uses OpenCV ArUco markers as an accurate reference for path detection and guidance with a stereo camera, enabling the drone to navigate and map an environment using vision-based path planning. Special attention has been given to the robustness of the guidance and control strategy, the accuracy of the vehicle pose estimation, and real-time operation. The proposed algorithm is evaluated in a 3D simulated environment using ROS and Gazebo, and results are presented for drone navigation in a maze-pattern indoor scenario. Evaluation of the guidance system in the simulated environment suggests that the proposed system can generate a 2D/3D occupancy grid map autonomously, without high-level algorithms or expensive sensors such as lidars.
I. INTRODUCTION
Micro Aerial Vehicles (MAVs), commonly known as drones, play an important role in a variety of tasks in both civilian and military areas because of their autonomy, high versatility, low cost, extreme agility, ease of deployment, and capability to perform complex maneuvers in both structured and unstructured environments [1]. Although much work has progressed during recent years, research on their deployment and control is still focused on achieving fully autonomous drones that can perform high-level missions, particularly in GPS-denied environments [2]. Estimating and mapping the surrounding 3D environment is one of the key challenges for fully autonomous MAVs to navigate safely and carry out high-level tasks [3].

The proposed research work aims at developing a vision-based path-following and navigation algorithm for indoor applications. In this paper, we deploy an online ArUco-based OpenCV tracker, which helps the MAV autonomously track, detect, and find its position with reference to the ArUco marker for online navigation. Whenever an ArUco tracker is detected, the controller flag is set high, which causes the position controller module to receive the sensor feedback data and generate the (x, y) relative position of the MAV with respect to the tracker (shown in Fig. 1). The reference value r of the controller, which is the relative distance to the marker, is set to zero. To achieve the proposed task, localization relies on the drone's vision-based technique using ArUco markers. Finally, a simultaneous localization and mapping (SLAM) model based on Cartographer ROS has been implemented for mapping the 2D/3D surroundings. The test is performed in a virtual environment created with Gazebo and RViz by placing different dynamic ArUco markers for different destinations on the map.

Figure 1. A vision-based navigation control law
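As an illustration of the detection-to-controller handoff just described, the following minimal Python sketch (not the implementation used in the paper) shows how a marker detection can raise the controller flag and yield the (x, y) relative position. The calibration matrix, distortion coefficients, dictionary choice, and marker size are placeholder assumptions, and the sketch relies on the classic cv2.aruco API from opencv-contrib (pre-4.7).

```python
import cv2
import numpy as np

# Hypothetical calibration values; real ones come from camera calibration.
CAMERA_MATRIX = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)      # assume an undistorted simulated camera
MARKER_LENGTH = 0.20           # marker side length in metres (assumed)

DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
PARAMS = cv2.aruco.DetectorParameters_create()

def detect_relative_position(frame):
    """Return (flag, (x, y)) for the first detected marker, as in Fig. 1.

    flag is True when a marker is seen; (x, y) is the MAV position
    relative to the marker, taken from the translation vector.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICTIONARY,
                                              parameters=PARAMS)
    if ids is None:
        return False, None     # no marker: controller flag stays low
    # Pose of each marker in the camera frame (deprecated in OpenCV >= 4.7,
    # where cv2.solvePnP on the marker corners is used instead).
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH, CAMERA_MATRIX, DIST_COEFFS)
    x, y, _ = tvecs[0][0]      # relative offset to the first marker
    return True, (x, y)
```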
This research aims to contribute to the emerging field of MAV research across a wide range of applications by introducing a low-cost, vision-based intelligent indoor pathfinding and mapping technique. The novelty of the developed algorithm lies in its simple yet accurate autonomous real-time online navigation and mapping, with no prior information from external infrastructure.

II. RELATED WORK

Real-time vision-based navigation and mapping is of research interest in many areas, from augmented reality to intelligent robotics and autonomous aerial vehicles. One class of approaches collects visual information from the vehicle using path trackers such as ArUco [4], [5]. The other type works without such markers but considers the 3D surrounding information [6]. Many techniques have already been implemented to control MAV attitude/altitude and position [7], including PID, LQR, H∞, and model predictive controllers. The ideal purpose of such a controller is to maintain an accurate position of the vehicle; the position control is used to drive the …
… points (X, Y) in a camera matrix. Rotation and translation are arbitrarily selected, and the obtained image points are linked to the projected X-Y coordinates.
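The projection step sketched above corresponds to the standard pinhole model. Below is a minimal example using OpenCV's cv2.projectPoints with an arbitrarily chosen rotation and translation, as in the text; the intrinsic matrix K and the 3D point are illustrative values, not those of the paper.

```python
import cv2
import numpy as np

# Assumed intrinsics; in practice these come from calibration.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Arbitrarily selected rotation (Rodrigues vector) and translation.
rvec = np.array([0.0, 0.1, 0.0])
tvec = np.array([0.0, 0.0, 1.5])   # 1.5 m in front of the camera

# A 3D point on the marker, projected to its (X, Y) pixel coordinates.
object_point = np.array([[0.1, 0.05, 0.0]])
image_points, _ = cv2.projectPoints(object_point, rvec, tvec, K, dist)
print(image_points.ravel())        # projected (X, Y) in the image plane
```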
C. Intelligent controller design
In this section, a vision-based hybrid intelligent controller is designed and developed to provide an accurate position control mechanism for the MAV. The PID controller takes the difference between the pose feedback information and the generated reference trajectory to calculate the vehicle's 3D velocity.
A set of PID controllers was implemented to control the vehicle position by sending thrust and attitude setpoints to the flight controller. The controller uses three constant parameters, Kp, Ki, and Kd, which tune the proportional, integral, and derivative units respectively. The control output equation is given in continuous form in (3) and in discrete form in (4), and the controller structure is shown in Fig. 3.
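Equations (3) and (4) themselves are not reproduced in this excerpt; the standard continuous and discrete PID output forms consistent with the description above, with e taken as the difference between the reference trajectory and the pose feedback, are:

```latex
% Continuous-time PID output (the form referenced as (3)):
u(t) = K_p \, e(t) + K_i \int_0^{t} e(\tau)\, d\tau + K_d \, \frac{d e(t)}{d t}

% Discrete-time PID output with sample time \Delta t (the form referenced as (4)):
u[k] = K_p \, e[k] + K_i \, \Delta t \sum_{i=0}^{k} e[i] + K_d \, \frac{e[k] - e[k-1]}{\Delta t}
```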
ROS and Gazebo integrate with the autopilot to communicate with the drone platform: the autopilot receives camera output together with IMU, barometer, magnetometer, and GPS sensor data with noise from the simulated world, and uses this data to generate motor/actuator thrust commands that are sent back to the Gazebo physics simulator. It can also communicate with a Ground Control Station and the Offboard API to exchange telemetry data with the simulated Gazebo world and accept commands. The comprehensive configuration model of the proposed SITL is shown in Fig. 4.

Figure 4. SITL configuration for MAV navigation and visualization
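The offboard command path in this SITL configuration can be exercised with a short sketch. The example below assumes a PX4-style SITL reachable through the standard MAVROS topics and services (/mavros/setpoint_position/local, /mavros/set_mode, /mavros/cmd/arming); the paper does not specify its exact offboard interface, so this pairing is an assumption for illustration.

```python
import rospy
from geometry_msgs.msg import PoseStamped
from mavros_msgs.srv import CommandBool, SetMode

# Minimal offboard position-setpoint loop (assumes a MAVROS-connected
# PX4 SITL instance; topic and service names are the standard MAVROS ones).
rospy.init_node("offboard_guidance")
setpoint_pub = rospy.Publisher("/mavros/setpoint_position/local",
                               PoseStamped, queue_size=10)
rospy.wait_for_service("/mavros/cmd/arming")
arming = rospy.ServiceProxy("/mavros/cmd/arming", CommandBool)
set_mode = rospy.ServiceProxy("/mavros/set_mode", SetMode)

target = PoseStamped()
target.pose.position.z = 1.5       # take off to 1.5 m and hold position

rate = rospy.Rate(20)              # PX4 requires a steady setpoint stream
for _ in range(100):               # stream setpoints before the mode switch
    setpoint_pub.publish(target)
    rate.sleep()

set_mode(custom_mode="OFFBOARD")
arming(True)
while not rospy.is_shutdown():
    setpoint_pub.publish(target)
    rate.sleep()
```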
The obtained 3D (x, y, z) pose is used to help the drone approach the trackers with the correct orientation. An intelligent PID controller is designed to reduce the error on the x-, y-, and z-axes along with the yaw angle. The drone flies continuously and keeps tracking the marker within a given threshold (a distance of 50 cm and an angle of ±5°). Every tracker is coded with a pre-defined task (the next movement) to be performed by the MAV: as the MAV tracks the ArUco marker and comes within the fixed distance, it reads the coded information and performs the given task, exploring the pre-defined path created in the indoor environment by controlling the drone's position. While tracking the ArUco markers, the MAV creates a SLAM-based map of the surrounding environment in RViz.
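The threshold test and marker-coded task dispatch can be summarized in a few lines. The sketch below uses the 50 cm / ±5° values from the text; the marker-ID-to-task mapping is hypothetical.

```python
import math

# Hypothetical mapping from marker ID to the pre-defined task it encodes.
MARKER_TASKS = {0: "turn_left", 1: "turn_right", 2: "climb", 3: "land"}

DIST_THRESHOLD = 0.50               # 50 cm, as in the text
YAW_THRESHOLD = math.radians(5.0)   # +/- 5 degrees, as in the text

def next_task(marker_id, rel_x, rel_y, yaw_error):
    """Return the task coded by the marker once the MAV is close enough.

    rel_x, rel_y: relative position to the marker (m); yaw_error in rad.
    Returns None while the vehicle is still approaching the marker.
    """
    distance = math.hypot(rel_x, rel_y)
    if distance <= DIST_THRESHOLD and abs(yaw_error) <= YAW_THRESHOLD:
        return MARKER_TASKS.get(marker_id)   # read the coded information
    return None                              # keep tracking the marker
```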
… in a ROS-based Gazebo simulation implementing the proposed vision Algorithm 1. The obtained results show that the quadcopter recognizes ArUco markers accurately and navigates the 3D maze scenario from its initial take-off position (marked in green) to the final desired location (marked in red).