
Edge AI and Robotics Teaching Kit

Lecture 4.4
Robot Navigation
The Edge AI and Robotics Teaching Kit is licensed by NVIDIA and UMBC under the
Creative Commons Attribution-NonCommercial 4.0 International License.

2
Topics

List the components involved in Robot Navigation

Describe and demonstrate using camera images to perform Robot Navigation

Describe what Mapping is and how it is essential to Robot Navigation

Describe other methods for position estimation and path planning

Demonstrate integration of basic robot navigation and collision avoidance in Python

3
Robot Navigation

4
JetBot Components for Navigation

Motors

Motor Controller

Camera

Power

Jetson Nano

Deep Learning Programming Framework

5
Navigation Methods

You can make use of the Jetson Nano GPU, NVIDIA's CUDA-accelerated deep learning frameworks, and the
camera to provide sophisticated autonomous navigation capabilities.

The Jetson uses camera images to detect obstacles and determine the path forward.

Certain use cases, such as drones, need more than a camera for navigation. You can upgrade
the basic JetBot hardware to include LiDAR and other sensors that provide more advanced localization
and mapping capabilities. We will mention some of those methods, but they are beyond the scope
of this Teaching Kit.

6
Steps for Navigation and Obstacle Avoidance - Jetbot

The JetBot starter notebooks provide exercises for the four required steps (a sketch of the training step follows this list):

• Collect data on free and blocked paths

• Train a classification model using PyTorch and a neural network such as ResNet-18

• Optimize the model using TensorRT

• Load the model and demonstrate
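
A minimal training sketch for step 2, assuming images were collected into dataset/free and dataset/blocked folders as in the JetBot notebooks; the folder layout, hyperparameters, and output filename here are illustrative rather than the kit's exact code:

    import torch
    import torchvision
    from torchvision import datasets, transforms

    # Load the collected images; ImageFolder assumes one subfolder per class
    # (here: dataset/free and dataset/blocked).
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    data = datasets.ImageFolder('dataset', transform=transform)
    loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

    # ResNet-18 backbone with its final layer replaced for 2 classes.
    model = torchvision.models.resnet18(pretrained=True)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(5):  # a handful of epochs is enough for a demo
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

    torch.save(model.state_dict(), 'best_model.pth')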

7
Mapping

8
Mapping in Robotics

– Maps are essential for robots to devise plans for navigating

– Robots use maps to avoid obstacles and find shortest paths

– Often robots update their own maps as they explore and learn

9
Aerial Maps

– A top-down view of an environment provides useful information that may be impossible to gather by a ground robot

– Sources of aerial maps:
– satellites
– drones
– planes

10
Point Clouds

– A point cloud is a set of coordinates with x, y, z values

– Sometimes points also have color data associated with them (a small array sketch follows the source link below)

http://grail.cs.washington.edu/rome/dense.html
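
As a concrete illustration (not from the kit), a point cloud can be stored as a plain array with one row per point, holding x, y, z coordinates and optional per-point color:

    import numpy as np

    # One row per point: x, y, z in metres plus optional r, g, b colour.
    cloud = np.array([
        [0.0, 0.0, 1.5, 255,   0,   0],
        [0.1, 0.0, 1.5,   0, 255,   0],
        [0.0, 0.1, 1.6,   0,   0, 255],
    ])
    xyz = cloud[:, :3]  # coordinates
    rgb = cloud[:, 3:]  # per-point colour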
11
Generating Point Clouds

– Stereo Cameras
– LiDAR
– Radar
– Structure From Motion
– Image Alignment and Stitching

12
Other Path Planning Methods

13
Simultaneous Localization and Mapping

– JetBot does not use SLAM

– SLAM is used by robots that must construct a map of their environment while localizing themselves within it

– SLAM is used by self-driving cars, agricultural robots, and underwater robots

– GPS can be used to aid SLAM, but GPS alone is insufficient for SLAM

14
Solution Likelihood

– SLAM operates by maximizing two likelihoods:
– the likelihood of the map given the pose and sensor readings of the robot
– the likelihood of the pose of the robot given the map and the sensor readings

– Simultaneously optimizing both lets the robot produce a map while estimating its pose within that map (see the formulation below)
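
In standard textbook notation (general SLAM background, not specific to this Teaching Kit), the joint posterior over the map m and the robot trajectory x_{1:t}, given observations z_{1:t} and controls u_{1:t}, factors into exactly these two terms:

    p(m, x_{1:t} \mid z_{1:t}, u_{1:t})
        \propto p(m \mid x_{1:t}, z_{1:t}) \, p(x_{1:t} \mid z_{1:t}, u_{1:t})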

15
Kalman Filters

– Extended Kalman Filters can be used to estimate the pose of the robot and the map

– The state vector contains the landmarks and the pose of the robot

– The map transforms are non-linear, so the basic Kalman Filter is insufficient

– Complexity is quadratic in the number of landmarks (a small sketch follows)
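
A minimal EKF-SLAM prediction step, assuming a unicycle motion model and static landmarks; the landmark count and noise values are illustrative:

    import numpy as np

    N = 3                        # hypothetical number of landmarks
    state = np.zeros(3 + 2 * N)  # [x, y, theta, l1x, l1y, ..., lNx, lNy]
    P = np.eye(3 + 2 * N)        # covariance is (3+2N)x(3+2N): quadratic in N

    def predict(state, P, v, w, dt, q=0.01):
        """EKF prediction: move the robot pose; landmarks stay put."""
        x, y, theta = state[:3]
        state = state.copy()
        state[0] = x + v * np.cos(theta) * dt
        state[1] = y + v * np.sin(theta) * dt
        state[2] = theta + w * dt
        # Jacobian of the non-linear motion model w.r.t. the full state;
        # this linearization is why the *extended* Kalman Filter is needed.
        F = np.eye(len(state))
        F[0, 2] = -v * np.sin(theta) * dt
        F[1, 2] = v * np.cos(theta) * dt
        Q = np.zeros_like(P)
        Q[:3, :3] = q * np.eye(3)  # process noise only on the robot pose
        P = F @ P @ F.T + Q        # update touches the full (3+2N)^2 matrix
        return state, P

    state, P = predict(state, P, v=0.2, w=0.1, dt=0.1)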

16
Monocular SLAM

– Recent research has demonstrated successful SLAM with a single monocular camera:
– LSD-SLAM
– ORB-SLAM

– Monocular cameras are still not as reliable as depth data from LiDAR or stereo cameras, but SLAM is still possible with a single camera

17
Dead Reckoning

– Estimates a position based on the change from a previous position

– Does not require maps or outside references

– Utilizes sensor data and the precise physical measurements of the robot (a sketch follows)
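
A minimal dead-reckoning sketch for a differential-drive robot with wheel encoders; the encoder resolution and robot dimensions are made-up values, not JetBot specifications:

    import math

    TICKS_PER_REV = 360   # assumed encoder resolution
    WHEEL_RADIUS = 0.03   # wheel radius in metres (assumed)
    WHEEL_BASE = 0.12     # distance between wheels in metres (assumed)

    def update_pose(x, y, theta, left_ticks, right_ticks):
        """Integrate one encoder reading into the running pose estimate."""
        dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
        dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
        d = (dl + dr) / 2                # distance moved by the robot centre
        dtheta = (dr - dl) / WHEEL_BASE  # change in heading
        x += d * math.cos(theta + dtheta / 2)
        y += d * math.sin(theta + dtheta / 2)
        return x, y, theta + dtheta

    # Each reading is applied to the previous estimate; errors accumulate,
    # which is the classic weakness of dead reckoning.
    pose = (0.0, 0.0, 0.0)
    pose = update_pose(*pose, left_ticks=100, right_ticks=110)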

18
Humans and Dead Reckoning
– Humans can estimate their position by counting their steps and
multiplying by the size of each step
– Most people, however, estimate their distance traveled based on
how long they have been walking and how fast they typically walk

19
Jetbot Path Planning

– We can collect camera images, label the point in each image that the JetBot should steer toward, and then train a regression model
– This is useful for paths, racetracks, and room navigation
– See the Road Following section in the JetBot notebooks. Work through the notebooks to:
– Collect and label the data; be sure to collect data off the path, pointing back to the path, so that the JetBot knows how to get back on track
– Train the model
– Optimize the model with TensorRT
– Load and run the model to see the JetBot in action
– The road following notebooks can be accessed directly on the JetBot image or at the GitHub site https://github.com/NVIDIA-AI-IOT/jetbot/tree/master/notebooks/road_following (a hedged inference sketch follows)
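
A rough sketch of the inference side, modelled on the road-following notebooks: a ResNet-18 regressor predicts an (x, y) target point in the image, which is turned into a steering value. The checkpoint filename, gain, and preprocessing here are assumptions, not the notebooks' exact code:

    import math
    import torch
    import torchvision

    # ResNet-18 with a 2-output head regressing the (x, y) target point.
    model = torchvision.models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load('best_steering_model_xy.pth'))  # assumed filename
    model.eval()

    def steering_from_image(image_tensor, gain=0.8):
        """image_tensor: a preprocessed 3x224x224 camera frame."""
        with torch.no_grad():
            x, y = model(image_tensor.unsqueeze(0)).squeeze().tolist()
        angle = math.atan2(x, y)  # angle of the target point from straight ahead
        return gain * angle       # proportional steering; the notebooks add PID terms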

20
Lab Community Project - Combine Collision Avoidance
and Road Following

– This community project combines the Collision Avoidance and Road Following Jupyter Notebooks and was contributed by community member Abu Abuelgasimsaadeldin

– You can access this notebook at https://github.com/abuelgasimsaadeldin/Jetbot-Road-Following-and-Collision-Avoidance/blob/main/combine_scripts/RoadFollowing%2BCollisionAvoidance.ipynb

21
Navigation Using ROS 2 and Gazebo

ROS 2 acts as middleware for robots such as the JetBot, enabling a greater degree of control and automation.
For those who do not have a JetBot but do have a Jetson Nano, ROS 2 and Gazebo can be used to work with a virtual JetBot (a minimal node sketch follows).
For more information, refer to https://github.com/dusty-nv/jetbot_ros.
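
For orientation, here is a minimal ROS 2 node in Python that publishes velocity commands; the '/cmd_vel' topic name is a common convention and the values are illustrative, so check the jetbot_ros repository for the actual topics it uses:

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist

    class Driver(Node):
        """Publish a constant forward velocity at 10 Hz."""
        def __init__(self):
            super().__init__('jetbot_driver')
            self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
            self.timer = self.create_timer(0.1, self.tick)

        def tick(self):
            msg = Twist()
            msg.linear.x = 0.1   # forward speed in m/s
            msg.angular.z = 0.0  # turn rate in rad/s
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(Driver())

    if __name__ == '__main__':
        main()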

22
Thank You
Edge AI and Robotics Teaching Kit
