BBGN Minor
On
SURVEILLANCE”
SUBMITTED BY:
SUBMITTED TO:
AUGUST, 2022
LALITPUR, NEPAL
NATIONAL COLLEGE OF ENGINEERING (NCE)
(Affiliated to Tribhuvan University)
Talchikhel, Lalitpur
On
SURVEILLANCE”
SUBMITTED BY:
ENGINEERING.
AUGUST, 2022
LALITPUR, NEPAL
ABSTRACT
TABLE OF CONTENTS
ABSTRACT....................................................................................................................... iii
TABLE OF CONTENTS ................................................................................................... iv
LIST OF FIGURES ............................................................................................................ v
LIST OF ABBREVIATIONS ............................................................................................ vi
1. INTRODUCTION ....................................................................................................... 1
1.1 Background .......................................................................................................... 1
1.2 Problem statement ................................................................................................ 2
1.3 Aims and Objectives ............................................................................................ 3
1.3.1 Aims .............................................................................................................. 3
1.3.2 Objectives ............................................................................................................... 3
1.4 Scope .................................................................................................................... 4
2. LITERATURE REVIEW ............................................................................................ 5
3. METHODOLOGY ...................................................................................................... 7
3.1 Diagram of Proposed System ............................................................................... 7
3.2 Tool used ............................................................................................................ 10
3.2.1 Hardware requirements..................................................................................... 10
3.2.2 Software requirements ...................................................................................... 10
3.3 SYSTEM REQUIREMENTS ............................................................................ 11
3.4 FEASIBILITY STUDY ..................................................................................... 14
4. EPILOGUE ................................................................................................................ 15
4.1 EXPECTED OUTPUT....................................................................................... 15
4.2 GANTT CHART ................................................................................................ 16
REFERENCES AND BIBLIOGRAPHY ........................................................................... 17
LIST OF FIGURES
LIST OF ABBREVIATIONS
1. INTRODUCTION
1.1 Background
The second edition of the autonomous vehicle “Grand Challenge” of the U.S. Defense Advanced Research Projects Agency (DARPA), in 2005, and the Urban Challenge two years later, revived interest by making the technology for self-driving cars seem within reach. That stimulated technology companies to jump in, notably Google, which launched its program in 2009. Major programs have followed at giant, traditional carmakers including General Motors, Ford, and Toyota. But what does “self-driving” really mean? The Society of Automotive Engineers (SAE) defines five levels of automation beyond fully manual control (Level 0). These range from Level 1 (“feet off”) automation, typified by cruise control; to Level 2 (“hands off”) systems such as Tesla’s “Autopilot,” which assume the driver is poised to take immediate control; all the way to Level 4 (“mind off”), which can stop and hand control back to the driver when needed, and completely automated Level 5 vehicles requiring no driver at all. Level 4 and Level 5 constitute the Holy Grail of autonomous-vehicle development, and also the most difficult targets to reach.
Recently, the rapid development of artificial intelligence has greatly promoted the progress of unmanned driving, such as self-driving cars, unmanned aerial vehicles, and so on [1, 2]. Among these unmanned driving technologies, self-driving cars have attracted more and more attention because of their significant economic impact. However, many challenges remain. More and more VSLAM-based solutions for self-driving cars have been presented, covering obstacle detection, scene recognition, lane detection, and so on.
1.2 Problem statement
We focus on self-driving cars which are categorized as Level 3 or above. The environment perception system uses prior knowledge of the environment, together with the surrounding environmental information it gathers, to establish an environmental model that includes obstacles, road structures, and traffic signs.
The main function of the environment perception system is to realize functions such as lane detection, traffic signal detection, and obstacle detection, using hardware devices such as cameras and laser radar. The main function of the autonomous decision system is to make decisions for the self-driving car, including obstacle avoidance, path planning, navigation, and so on.
For example, in path planning the autonomous decision system first plans a global path from the current location to the target location, and then plans a reasonable local path for the self-driving car by combining the global path with the local environment information provided by the environment perception system.
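As a rough illustration of this global/local split, the Python sketch below plans a global path on a small occupancy grid and then re-plans when the perception system reports a new obstacle. It is only a minimal sketch under assumed simplifications (a 4-connected grid, unit step costs, re-planning from the start of the stored path); none of the names or numbers come from the proposal itself.

import heapq

def global_path(grid, start, goal):
    # A* search over a 4-connected occupancy grid (0 = free, 1 = obstacle).
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

def local_replan(grid, path, sensed_obstacles):
    # Fold newly perceived obstacles into the map, then re-run the planner
    # from the start of the stored path (a stand-in for the local-planning step).
    for r, c in sensed_obstacles:
        grid[r][c] = 1
    return global_path(grid, path[0], path[-1])

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
route = global_path(grid, (0, 0), (2, 3))
print(route)                                  # global path around the known obstacles
print(local_replan(grid, route, [(2, 1)]))    # detour once a new obstacle is perceived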
1.3 Aims and Objectives
1.3.1 Aims
▪ To design and run an autonomous robot for rescue and surveillance using
VSLAM.
1.3.2 Objectives
1.4 Scope
Visual SLAM systems are also used in a wide variety of field robots. For example, rovers and landers exploring Mars use visual SLAM systems to navigate autonomously. Field robots in agriculture, as well as drones, can use the same technology to travel independently around crop fields. Autonomous vehicles could potentially use visual SLAM systems to map and understand the world around them for rescue and surveillance.
One major potential opportunity for visual SLAM systems is to replace GPS tracking and navigation in certain applications. GPS systems aren’t useful indoors or in big cities where the view of the sky is obstructed, and they’re only accurate within a few meters. Visual SLAM systems solve these problems, as they aren’t dependent on satellite information and they take accurate measurements of the physical world around them.
2. LITERATURE REVIEW
Kim et al. applied the SSD algorithm to LIDAR 3D data, together with an RGB dataset, for an autonomous driving application. Their gated fusion unit (GFU) SSD method showed better results than the baseline SSD.
In [1], Kim et al. used the SSD algorithm for general object detection in autonomous driving applications. LIDAR 3D point clouds were converted into 2D images, and these images were then used along with RGB images as inputs for two separate SSD networks. Finally, gated fusion units (GFU) were used to assign selective weights to fuse the feature maps produced by the two SSD networks at a feature fusion level. The experimental results showed that the proposed GFU SSD method outperformed the baseline SSD [1].
Singandhupe et al. reviewed SLAM algorithms for localization and mapping in their paper presented at an IEEE conference. SLAM mainly relies on vision-based sensors, which sets it apart from approaches built purely around GPS or LIDAR; however, those sensors can also be incorporated into SLAM.
SLAM is an algorithm that combines a set of sensors to build a map of the AV and its surroundings, while simultaneously keeping track of the vehicle's current position with reference to the built map. Although SLAM algorithms were initially applied in the field of mobile robots, researchers have put a noticeable effort into adjusting the algorithms to suit autonomous vehicle applications. This was done by taking into consideration different key challenges, such as the need for faster processing, the outdoor lighting conditions, and the dynamic road obstacles. It is important to point out that while SLAM mainly relies on vision-based sensors, other sensors such as GPS, LIDAR, and sonar have also been used to implement SLAM algorithms. Surveys on recent SLAM methods have been done [2].
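To make the “map while you localize” idea above concrete, the toy Python sketch below keeps one robot pose and a dictionary of landmark estimates, predicts the pose from odometry, and refines each landmark from range/bearing observations. It is only a conceptual illustration under assumed simplifications (no noise model, no loop closure); the pose, landmark IDs, and blending factor are hypothetical, and real systems such as ORB-SLAM2 or OpenVSLAM differ greatly in detail.

import math

pose = [0.0, 0.0, 0.0]          # robot state: x, y, heading (radians)
landmarks = {}                  # map: landmark id -> estimated (x, y)

def predict(pose, distance, turn):
    # Dead-reckoning step: advance the pose using odometry alone.
    x, y, th = pose
    th += turn
    return [x + distance * math.cos(th), y + distance * math.sin(th), th]

def update(pose, landmarks, observations, alpha=0.3):
    # Map update step: convert each range/bearing observation into a world
    # position and blend it with the current estimate of that landmark.
    x, y, th = pose
    for lid, (rng, bearing) in observations.items():
        obs = (x + rng * math.cos(th + bearing), y + rng * math.sin(th + bearing))
        if lid not in landmarks:
            landmarks[lid] = obs                         # first sighting: initialise
        else:
            ox, oy = landmarks[lid]
            landmarks[lid] = (ox + alpha * (obs[0] - ox),
                              oy + alpha * (obs[1] - oy))  # smooth refinement

# One simulated step: drive 1 m straight, then observe landmark 7 two metres ahead.
pose = predict(pose, 1.0, 0.0)
update(pose, landmarks, {7: (2.0, 0.0)})
print(pose, landmarks)   # pose near (1, 0, 0); landmark 7 estimated near (3, 0)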
Htisham Ali et al. in their paper have proposed using DOT to improve accuracy in dynamic environments. The paper also discusses edge-assisted SLAM for localization and mapping.
Localization and navigation play a key role in many location-based services and have attracted numerous research efforts from both the academic and industrial community. However, the ever-growing computation resources demanded by SLAM impede its application on resource-constrained mobile devices. The design, implementation, and evaluation of edge SLAM, an edge-assisted real-time semantic visual SLAM service running on mobile devices, is provided. It presents DOT (Dynamic Object Tracking), a front end that, added to existing SLAM systems, can significantly improve their robustness and accuracy in highly dynamic environments [3].
Irene Ballester et al. have developed DOT for visual SLAM. Evaluated with ORB-SLAM2 on public datasets, the results were shown to improve significantly.
DOT combines segmentation and multi-view geometry to generate masks for dynamic objects, in order to allow SLAM systems based on rigid scene models to avoid such image areas in their optimizations. This short-term tracking improves the accuracy of the segmentation with respect to other approaches, and in the end only actually dynamic masks are generated. DOT was evaluated with ORB-SLAM2 on three public datasets. The results show that the approach significantly improves the accuracy and robustness of ORB-SLAM2, especially in highly dynamic scenes [4].
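The masking step described above can be pictured with a few lines of OpenCV in Python: pixels belonging to a tracked dynamic object are zeroed out of a mask so the feature extractor only keeps points on the static background. This is a hedged sketch, not DOT itself; the synthetic frame, the bounding box, and the use of ORB as the front end are all assumptions for illustration.

import cv2
import numpy as np

# Synthetic stand-in for a camera frame (a real system would use the live image).
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Mask convention in OpenCV: keypoints are only detected where the mask is non-zero.
mask = np.full(frame.shape, 255, dtype=np.uint8)
x, y, w, h = 120, 80, 200, 150        # hypothetical bounding box of a dynamic object
mask[y:y + h, x:x + w] = 0            # exclude the moving object from extraction

orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(frame, mask)

# Only features on the (presumed) static background survive, which is what lets a
# rigid-scene SLAM back end stay accurate in a dynamic environment.
print(len(keypoints), "features kept outside the dynamic-object mask")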
Lu et al. in their research have shown the disadvantages of visual SLAM, in particular that it is prone to problems with ambient lighting.
Visual SLAM is sensitive to ambient lighting and optical texture and is not stable in outdoor environments, so it cannot be used for all-weather unmanned vehicles in the short term [5].
3. METHODOLOGY
3.1 Diagram of Proposed System
Figure 1: Block diagram of the proposed robot. Legend: PI = Raspberry Pi 4, CAMERA = Pi camera (with its field of vision at the front and rear), W = wheels, TCRT = line-following sensors, U = ultrasonic sensors.
Figure 2: System architecture and automation levels. Pi camera data is processed on the Raspberry Pi 4 running ROS and can be visualized in RViz (simulation); the robot's wheels are controlled through an Arduino and motor driver. The accompanying diagram of automation levels and their features ranges from Level 0 (manual driving) through Level 2 (some automation) and Level 4 (high automation) up to Level 5 (fully autonomous, intelligent features).
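As one way to read the architecture in Figure 2, the Raspberry Pi side might poll an ultrasonic sensor and forward simple drive commands to the Arduino over serial. The Python sketch below is only a hedged illustration of that split: the GPIO pin numbers, the serial port, and the one-character command protocol are assumptions, not part of the proposal.

import time
import RPi.GPIO as GPIO
import serial

TRIG, ECHO = 23, 24          # hypothetical BCM pin numbers for an HC-SR04-style sensor
SAFE_DISTANCE_CM = 20.0      # stop if an obstacle is closer than this

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)   # assumed port and baud rate

def read_distance_cm():
    # Trigger the ultrasonic sensor and convert the echo time to centimetres.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)              # 10 us trigger pulse
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2    # speed of sound: 343 m/s

try:
    while True:
        distance = read_distance_cm()
        # 'F' = drive forward, 'S' = stop; the Arduino sketch would decode these.
        arduino.write(b"S" if distance < SAFE_DISTANCE_CM else b"F")
        time.sleep(0.1)
finally:
    GPIO.cleanup()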
3.2 Tool used
S.N.  Component        Quantity  Cost
1     Raspberry Pi 4   1         9000
6     Pi Camera        2         1500
9     ATMEGA328        1         700
      TOTAL                      18050
3.3 SYSTEM REQUIREMENTS
✓ OpenVSLAM
✓ Raspberry Pi 4
✓ Ubuntu
Ubuntu is a Linux distribution based on Debian, composed mostly of free and open-source software. Due to compatibility issues, we use LTS version 18.04 or 20.04. It is officially released in three editions: Desktop, Server, and Core for Internet of Things devices and robots. All editions can run on a computer alone or in a virtual machine.
Figure 3. Source: https://logo-worlds.net
✓ RViz
RViz is a 3D visualization software tool for robots, sensors, and algorithms. It lets you see the robot's perception of its world (real or simulated); its purpose is to let you visualize the state of a robot. If an actual robot is communicating with a workstation that is running RViz, it will display the robot's current configuration on the virtual robot model. ROS topics will be displayed as live representations based on the sensor data published by any cameras, infrared sensors, and laser scanners that are part of the robot's system. This can be useful for development and debugging; a small example of such a topic publisher is sketched after this list.
✓ ATMEGA328
Figure 5. Source: www.microchip.com
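The sketch below illustrates, in Python with rospy, the kind of live ROS topic that RViz can display, as mentioned in the RViz item above: an ultrasonic reading published as a sensor_msgs/Range message. The node name, topic name, frame id, and the constant fake reading are assumptions for illustration; on the actual robot the value would come from the sensor.

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Range

def main():
    rospy.init_node("ultrasonic_publisher")
    pub = rospy.Publisher("/ultrasound", Range, queue_size=10)
    rate = rospy.Rate(10)                         # publish at 10 Hz
    while not rospy.is_shutdown():
        msg = Range()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "ultrasonic_link"   # must match the robot model / TF tree
        msg.radiation_type = Range.ULTRASOUND
        msg.field_of_view = 0.26                  # ~15 degrees, sensor dependent
        msg.min_range = 0.02
        msg.max_range = 4.0
        msg.range = 0.5                           # placeholder distance in metres
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()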
3.4 FEASIBILITY STUDY
➢ One major potential opportunity for visual SLAM systems is to replace GPS tracking and navigation in certain applications. GPS systems aren’t useful indoors or in big cities where the view of the sky is obstructed, and they’re only accurate within a few meters. Visual SLAM systems solve each of these problems, as they’re not dependent on satellite information and they take accurate measurements of the physical world around them.
4. EPILOGUE
4.1 EXPECTED OUTPUT
• Rich and mature visual SLAM approaches have been provided across many operating domains of service robots. The adoption of VSLAM approaches on service robots is a critical evolution to move the industry away from prohibitively expensive LIDARs. Service robots are deployed in many environments, with many different sensors, on the ground and in the air.
• The goal of this analysis is to identify general-purpose techniques which may be used to support innumerable service robot applications. Through this experimentation, it was concluded that OpenVSLAM was the overall best general-purpose technique for the broadest range of service robot types, environments, and sensors. It performed well in all three studies, showcasing superior re-localization and variable-lighting performance with high reliability.
4.2 GANTT CHART
Gantt chart (timeline in days, from day 1 to beyond day 101) covering the project phases: requirement gathering, analysis, design, coding, prototyping, implementation, and documentation.
REFERENCES AND BIBLIOGRAPHY
[1] Kim, J.; Choi, J.; Kim, Y.; Koh, J.; Chung, C.C.; Choi, J.W. Robust Camera Lidar Sensor Fusion Via Deep Gated Information Fusion Network. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1620–1625.
[2] Singandhupe, A.; La, H.M. A Review of SLAM Techniques and Security in Autonomous Driving. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 602–607.
[3] Htisham Ali, Ahmed Durmush, Olli Suominen, Jari Yli-Hietanen, Sari Peltonen, Jussi Collin, Atanas Gotchev. Robotics and Autonomous Systems 132 (2020) 103610.
[4] Xu, J.; Cao, H.; Li, D.; Huang, K.; Qian, C.; Shangguan, L.; Yang, Z. Edge Assisted Mobile Semantic Visual SLAM. 2020.
[4] Ballester, I.; Fontan, A.; Civera, J.; Strobl, K.H.; Triebel, R. DOT: Dynamic Object Tracking for Visual SLAM. 2021.
[5] Lu, W.; Zhou, Y.; Wan, G.; Hou, S.; Song, S. L3-Net: Towards Learning Based LiDAR Localization for Autonomous Driving. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–21 June 2019; pp. 6382–6391.