
Spydobot: AI-Based Autonomous Spider-Like Robot for Spying

2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT) | 978-1-6654-9781-7/22/$31.00 ©2022 IEEE | DOI: 10.1109/CONECCT55679.2022.9865761

Jamal Pasha
Department of Electronics and Communication Engineering
PES University, Bengaluru, Karnataka
[email protected]

Karpagavalli S
Department of Electronics and Communication Engineering
PES University, Bengaluru, Karnataka
[email protected]

Abstract— Autonomous companion robots have proven particularly beneficial for gathering information in areas where people are restricted. Controlling them is often a challenging feat due to the environment's ambiguity and the nonlinear dynamics of the ground. Although a variety of controller designs are feasible, and some are documented in the literature, it is unknown which designs are best suited to a given context. In this paper, we attempted to design a robot that can be adapted for use in any environment with only skeleton alterations. We designed the controller by integrating neural network nodes with a Q-learning algorithm to regulate the movement of the robot using LIDAR samples. With military applications in mind, we implemented encryptors to send and receive data, and we distributed all dumps to the controller so that a connection is needed only when delivering data to the owner. As all this requires high processing speed and storage, we recommend the ESP32-S2 for its high clock speed.

I. INTRODUCTION

Over the past few years, robot technology has grown quickly across many areas and applications, for example in clinical applications, industrial automation, navigation and rescue operations. Controlling a robot is still considered one of the major challenges: dynamics and autonomous operation pose complex control problems, particularly when arbitrary walking motion is required. In conventional approaches, the controlling parameters of such a robot are set manually in a target program after broad iterative testing. An AI controller, by contrast, is capable of improving its control mechanism autonomously over time, in some sense tending towards an objective. Recently, learning-based control methods have come to be viewed as the most promising and challenging direction for making walking robots robust, flexible and adaptable to their conditions; consequently, building intelligent dynamic walking robots has become one of the principal research interests in robotics.

Machine learning algorithms give the model the capacity to automatically take input and improve from experience without being explicitly programmed. Q-learning algorithms have been used to make robots self-sustaining [10] and [4].

Fig 1.1 Designed model of robot

Many robots with animal abilities have been developed, however they are either drone-based, environment-specific, or lack visual input [11]. To address all of these issues, we used machine learning and AI to make our robot self-contained, built buffers with encryptors to delay data transmission when not connected to the internet, used up-to-date APIs and utilities for data transmission, and used LIDAR for a 360-degree view with a supplementary ultrasonic sensor for any unplanned obstacles when the lidar scan is facing the reverse direction [9].

This robot can perform object discovery and helps in tracing and finding missing things, which removes the need for human involvement in extreme places and addresses the weak adaptive capability of typical existing robots. This spider robot functions without human interference [5]; it can readily adjust to new circumstances or obstacles, and it also helps in monitoring harmful or nuclear environments [1]. The scope of this work includes the design and working of the leg mechanism, motor linkages, and the software required to control the robot autonomously [6]. By reducing the size and weight of the robot, and with a few other alterations, this work could be further upgraded to allow the robot to move vertically. Adjustments to the algorithm can be made to anticipate obstacle movements and to navigate more effectively in dynamic conditions.

This robot comes with the feature of detecting and locating objects, which limits human intervention in such places [3] and [8].

The effectiveness of this robot is measured by its performance on rough terrain using six legs. Algorithms are built for adaptable movement across varied conditions such as rough terrain and slopes [2].

This robot is modelled to look like an arachnid (a class of joint-legged invertebrate animals) in design and construction.

978-1-6654-9781-7/22/$31.00 ©2022 IEEE

Authorized licensed use limited to: VTU Consortium. Downloaded on September 29,2024 at 04:10:02 UTC from IEEE Xplore. Restrictions apply.
The entire process was broken down into blocks with several tests; once each test passed, the blocks were deployed and integrated to create a successful run [7].

The paper is organized as follows: the approach used to make this robot work can be found in Section II, Section III is a visual depiction of the infrastructure setup for this project, design and operation are discussed in Section IV and its subparts, followed by a conclusion in Section V.
II. METHODOLOGY

The robot is initialized with the source point (to start from), target data, and the destination location or distance in metres; on completion, the robot indicates that the job is done. A GPS module is used to find the location of the vehicle. The 2D 360-degree LiDAR is used to obtain a sampled image of points from the surroundings to calculate the next step. An ESP32-CAM board handles the obstacle detection algorithm, the motor control and the UART communication with the LIDAR.

Fig 3.1 Block diagram
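The initialize-sense-step cycle described in this section can be sketched as follows. This is an illustrative simulation only, not the authors' ESP32 firmware: the sensor classes, method names and the straight-line stepping stand in for the real GPS module, LiDAR sampling and learned controller.

```python
import math

class StubGPS:
    """Stand-in for the GPS module: tracks the robot's current position."""
    def __init__(self, start):
        self.position = list(start)

    def read(self):
        return tuple(self.position)

class SpiderRobot:
    """Illustrative control loop: init with source and target, step until done."""
    def __init__(self, source, target, arrival_radius_m=0.5):
        self.gps = StubGPS(source)
        self.target = target
        self.arrival_radius_m = arrival_radius_m
        self.steps_taken = 0

    def distance_to_target(self):
        x, y = self.gps.read()
        tx, ty = self.target
        return math.hypot(tx - x, ty - y)

    def step_towards_target(self, step_m=1.0):
        # In the real robot, LiDAR samples feed the Q-learning controller here;
        # this sketch simply moves one step straight toward the goal.
        x, y = self.gps.read()
        tx, ty = self.target
        d = math.hypot(tx - x, ty - y)
        frac = min(step_m / d, 1.0)
        self.gps.position = [x + (tx - x) * frac, y + (ty - y) * frac]
        self.steps_taken += 1

    def run(self, max_steps=1000):
        """Loop until within the arrival radius, then report job completed."""
        while self.distance_to_target() > self.arrival_radius_m:
            if self.steps_taken >= max_steps:
                return False
            self.step_towards_target()
        return True

robot = SpiderRobot(source=(0.0, 0.0), target=(5.0, 5.0))
print(robot.run())
```

The `run` flag plays the role of the "job completed" indication mentioned above.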

The robot decides what step should be taken next, choosing the optimal move to reach the destination in the fewest steps, and also balances itself on different terrains.

Learning Algorithm: Q-learning is implemented to solve the obstacle avoidance problem in the self-directed mobile robot. Q-learning is a reinforcement learning algorithm that learns by trial and error: the robot attempts to learn an optimal way to navigate its environment by estimating which of the immediate actions available to it at any point is most likely to lead to the best long-term outcome, defined by the rewards received on reaching some goal state or states.

The neural model created has 4 input ports, 2 output ports and 5 hidden layers; this helps in finding the right path to move along. The basic idea of reinforcement learning is that the learning system can learn to solve a difficult task through repeated interactions with its environment. The object detection model is created using TensorFlow and OpenCV (cv2), which helps in identifying and locating malicious objects or a person.

III. BLOCK DIAGRAM

The overall system is shown in the block diagram (Fig 3.1).

IV. DESIGN AND WORKING

Software called Fusion360 was used to design the model (the robot's body frame). After creating sketches, extruding 2D shapes and editing the model to the required dimensions, an STL file is ready for printing. This STL file was loaded into a slicer, software that converts 3D models into printing instructions for the 3D printer, and 3D printed.

Fig 4.1 Robot design

Fig 4.2 Printed body

OBSTACLE AVOIDANCE

To start with, the program is initialized with the target location. The GPS sensor is used to determine the robot's current position. The 2D LiDAR is used to obtain information about the robot's immediate surroundings in order to explore a known bounded area. By using a median filter, pre-processing of the LiDAR data was performed to eliminate

noise points from time to time. To comprehend the captured data in terms of 2D space, a point cloud is formed. To detect obstacles and predict their geometrical shape, the Obstacle Detection module analyses laser-point clouds and distance measurements.

LiDAR serves as the eye of this robot and is capable of measuring ranges of 0.2-8 m. Let Di be the range of laser point i. The coordinates of this laser point in the rectangular coordinate system with the lidar at the origin are:

(Xi, Yi): Xi = Di·cosθ·cosα, Yi = Di·sinθ·cosα (1)

where i indexes the laser data points and θ is the horizontal scan angle.

LIDAR data filtering: For filtering, 3×3 median filtering masks are applied to the laser point cloud. The distance between the robot and the laser points is calculated using an average of nine values to minimise the effect of noise. Vectors hold the range values of the laser points. For the present time t and laser index i, the mask shown below is applied to the distance data:

D(t−1,i−1) D(t−1,i) D(t−1,i+1)
D(t,i−1)   D(t,i)   D(t,i+1)
D(t+1,i−1) D(t+1,i) D(t+1,i+1) (2)

Fig 4.3 Lidar Scanning

SEGMENTATION:
Segmentation is done to improve the grouping and performance of Obstacle Detection. The process keeps only the laser-point cloud data that encounters an obstacle, discarding points that never return to the robot. The incoming LIDAR laser-point cloud data is separated into groups of laser points. With i the index value of a laser point and D(i) the range of laser point i, the first and last indices of each block of free laser-point cloud data, start(i) and stop(i), satisfy respectively:

(D(i−1)=0) && (D(i)!=0)

and

(D(i)!=0) && (D(i+1)=0)

The segmentation measure is applied to every point obtained: the laser points are split into N blocks, each containing the location details of the obstructions. The condition for partition is met when the distance between point i (x(i), y(i)) and point i+1 (x(i+1), y(i+1)) is greater than the robot's width; the enhancement factor k is then increased:

d = √([x(i) − x(i+1)]² + [y(i) − y(i+1)]²) (3)

If this distance is greater than the width of the robot (d > k·Width), the blocks are split; if the distance between two blocks is less than the width, they are merged. Thus, after grouping, we obtain N blocks such that each block represents the obstacle information and the distance to the next block, so the robot can safely pass between them. Let L be the spatial interval between the end of block i (index value p) and the start of block i+1 (index value q).

To make the merged laser-point cloud blocks spatially continuous, the laser points between p and q must be filled in. Assuming num laser points are needed for this, so that i ∈ [1, num], D(i) can be expressed as:

D(i) = D(p) + i·(D(q) − D(p))/num (4)

Fig 4.4 Flow Diagram

The sensors are initialized to capture light samples at an interval that depends on the motors' RPM and the camera frame quality. The point cloud is generated after pre-processing the distance data (light samples) acquired by the LiDAR, and serves as input to the obstacle detection algorithm as shown in Fig 4.4. The point cloud file can be viewed in a web point viewer, a 360-degree object viewer (Fig 4.5), for 3D monitoring.

An ultrasonic sensor is also used to prevent crashes into accidental obstacles (e.g. animals). The reason: whenever the lidar is scanning at a point that is not the forward position of the robot and an obstruction appears, as an exception we can move the bot to another position.
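The preprocessing pipeline above can be sketched in a few small functions: polar-to-Cartesian conversion (eq. 1), median filtering of the range data (the role of the mask in eq. 2, simplified here to a single scan line), and splitting the scan into blocks wherever the gap between consecutive points exceeds the robot's width (eq. 3). The function names, the 1-D simplification of the filter, and the parameter defaults are our assumptions, not the paper's implementation.

```python
import math
from statistics import median

def to_cartesian(ranges, angles, tilt=0.0):
    """Eq. (1): (Xi, Yi) = (Di cos(theta) cos(alpha), Di sin(theta) cos(alpha))."""
    return [(d * math.cos(t) * math.cos(tilt),
             d * math.sin(t) * math.cos(tilt))
            for d, t in zip(ranges, angles)]

def median_filter(ranges, k=3):
    """1-D median filter over neighbouring beams to suppress noise points."""
    half = k // 2
    out = []
    for i in range(len(ranges)):
        window = ranges[max(0, i - half): i + half + 1]
        out.append(median(window))
    return out

def segment_blocks(points, robot_width, k=1.0):
    """Split ordered scan points into obstacle blocks: start a new block
    whenever the gap between consecutive points exceeds k * robot_width
    (the partition condition of eq. 3)."""
    blocks, current = [], [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        gap = math.hypot(x1 - x0, y1 - y0)
        if gap > k * robot_width:
            blocks.append(current)
            current = []
        current.append((x1, y1))
    blocks.append(current)
    return blocks
```

For example, a scan with three points near the origin and two points three metres away yields two blocks for a 0.5 m wide robot, since only the 2.8 m gap exceeds the width threshold.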

This determines the geometrical structure of the obstacle. A geometrical representation of the obstacle is obtained using a convex hull algorithm; to produce the obstacle structure as a polygon, the Graham scan algorithm, based on polar angle, is used.

Learning Rate = 0.3, Momentum = 0.9, InitialWeightMax = 0.5, Success = 0.0015.

Fig 4.5 Lidar Scanning Output

Q LEARNING FOR OBSTACLE AVOIDANCE

Q-learning is a form of reinforcement learning that works in a similar way to dynamic programming, and the neural network has a large capacity for storing values.

• These two approaches are combined with the goal of ensuring autonomous robot behaviour in a cluttered and unpredictable environment.
• Reinforcement learning is a learning procedure based on trial and error. The essential idea behind reinforcement is that the learning system can learn to solve a complex task through repeated interaction with the environment.
• The learning system receives state information about the environment through its sensors, and this state information is used by a reasoning process to decide the action for the given state. After the action is executed, the learning system receives a reinforcement signal from the environment indicating the outcome of its action.

Fig 4.6 Decision Making Algorithm

The learner aims to maximise the amount of reward it receives in the long run. Q-learning is a reinforcement learning algorithm that attempts to learn the state-action value, whose value is the maximum discounted reward that can be achieved starting from that state.

Fig 4.7 Q-Learning Algorithm Flow Chart

A standard reinforcement learning setup consists of an agent interacting with the environment in discrete timesteps. At each timestep the agent receives an observation xt, takes an action at and receives a scalar reward rt. In the environment considered, the actions belong to a predefined set of actions. In general, the environment may be only partially observed; however, in this case study the environment is fully observed, so st = xt.

An agent's behaviour is defined by a policy, which maps states to a probability distribution over the actions: π: S → P(A). The environment may also be stochastic. It is therefore modelled as a Markov Decision Process (MDP), with a state space S, an action space A, an initial distribution p(s1), transition dynamics P(st+1|st, at) and a reward function r(st, at).

The return from a state is defined as the sum of discounted future rewards, Gt = Σ (τ = t to T) γ^(τ−t) r(sτ, aτ), with a discounting factor γ ∈ [0, 1].

As can be seen, the return depends on the actions chosen, and therefore on the policy π, and may be stochastic. The objective in reinforcement learning is to learn a policy that maximises the expected return from the start distribution:

J = Eπ[Gt] (5)
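The Q-learning scheme described above can be illustrated with a minimal tabular sketch. The paper combines Q-learning with a neural network function approximator and LiDAR-derived states; the 1-D grid world, reward values and ε-greedy exploration below are illustrative assumptions (only the learning rate of 0.3 matches the paper's stated parameters).

```python
import random

# Minimal tabular Q-learning for obstacle avoidance on a 1-D strip:
# the agent starts at cell 0 and must reach cell 4; stepping off the strip
# is the "obstacle" and is penalised. Q(s, a) estimates the maximum
# discounted return achievable from state s after taking action a.

ALPHA, GAMMA, EPSILON = 0.3, 0.9, 0.1   # learning rate / discount / exploration
ACTIONS = [-1, +1]                      # step left / step right
GOAL, N_STATES = 4, 5

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(s):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)                 # explore
    return max(ACTIONS, key=lambda a: Q[(s, a)])      # exploit

def step(s, a):
    """Environment: +10 at the goal, -5 for hitting a boundary 'obstacle'."""
    s2 = s + a
    if s2 < 0 or s2 >= N_STATES:
        return s, -5.0          # bumped an obstacle, stay in place
    if s2 == GOAL:
        return s2, 10.0
    return s2, -0.1             # small per-step cost encourages short paths

random.seed(0)
for _ in range(500):            # training episodes
    s = 0
    while s != GOAL:
        a = choose_action(s)
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right from every cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The update line is the core of the algorithm: each experienced transition pulls Q(s, a) toward the reward plus the discounted best value of the next state, which is exactly the "maximum discounted reward achievable from that state" described above.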

Fig 4.8 Deep Neural Network and Lidar Preprocessing Flow

OBJECT DETECTION

Object recognition is a computer vision technique that allows one to recognize and locate features in a photograph or film. Combining identification and localisation, object detection can be used to count objects in a scene and specifically mark them.

The Object Detection API: an open-source system built on top of TensorFlow, Keras and OpenCV that makes it simple to develop, train and deploy object detection models.

Fig 4.9 Object Detection Flow Diagram

Procedure to compile:

• Get the images from the ANN model into the deep learning model with libraries.

Fig 4.9.1 Frame capture from live camera

• The classifier segments the image into bounding boxes.

Fig 4.9.2 Pixel Matching with objects database (ex: explosive)

explosive="ff8f57a8-aa20-4e1b-90c2-418c2998c2f2"
if [ $objectt = $explosive, $guilty ]
here,
objectt = processed image
explosive = pictorial dataset of explosives to match with objects

• Those segments are processed and filtered for the best match.

Fig 4.9.3 Filtering of accurately matched segments

• If the instance and target are a match, the target data with location is compiled and mailed to the admin.

Fig 4.10 Mail sent when person is detected

As soon as the object is matched, the robot captures a live image and sends it through a mail to the verified user. This is the image captured and sent to the user by the robot after detecting the person and the object it was trained for. It also sends the live location to the user by making use of the Geolocation API and an AWS server. The pictures below show the location details that the user received from the system when it detected an object. We can also access the live view through admin commands.

LOCATION CAPTURE AND MAILING SERVER:

With the system's permission, the Geolocation API is used to work out where the robot is currently located. This function can also be used to direct a user to their desired location and to geotag user-generated content, such as a picture's location. The Geolocation API also enables us to track the robot while it is in motion while keeping the same level of accuracy, only while the page is open and only when the user's privacy is respected. This brings up a lot of interesting

possibilities, such as integrating with backend systems to arrange an order for pickup when the customer is close. When using the Geolocation API, there are a few things to keep in mind. Browser compatibility: while the majority of browsers already support the Geolocation API, it is always a good idea to double-check before relying on it.

Fig 4.11 Location sent when object is detected

The cases where we have used geolocation to keep an eye on the robot's location are:
• When a more precise lock on the robot's area is needed.
• When the application needs to refresh the UI based on new location data.
• When the application must update its logic as the client reaches a specified location.

The Geolocation API returns a location and accuracy radius based on information about cell towers and Wi-Fi nodes that the mobile client can detect. This section describes the protocol used to send this data to the server and to return a response to the client. Communication is done over HTTPS using POST. Both request and response are formatted as JSON, and the content type of both is application/json.

Both location and target data are compiled in a mail and sent through AWS SES. Amazon SES is a convenient and cost-effective email gateway that allows users to send and receive email using their own email addresses and domains.

V. CONCLUSION

In this work, the robot is designed and developed for both flat ground and rough terrain. Several issues, such as not occupying a larger footprint while remaining widely usable for different purposes, were considered while designing the robot. For communication purposes a GPS module is integrated, and for communication from the PC to the system an ESP32-CAM board is used for control. Footage from the Pi camera located on the robot may be transferred wirelessly to a PC in real time.

The mechanical components, the design of the robot and the control framework can also be improved. Additionally, safeguards against falling, crashing and overturning may be added. Even if the robot moves onto another surface, continuous movement may be provided by developing appropriate software and hardware. As the simplest hardware revision, the motors can be replaced with ones that can be rotated 180 degrees. Wheels can be incorporated to speed up the robot on flat terrain; nonetheless, legs give greater adaptability. These enhancements may be developed in both mechanics and software. When the defence mechanism of the robot is considered, to recognize a person, animal or obstacle, the images taken from the IP camera ought to be processed using the required algorithms.

This paper gives an overview of the state of the art on six-legged walking robots. Careful consideration is paid to the design issues and constraints that influence the technical feasibility and performance of these systems. A design procedure is outlined to systematically design a six-legged walking robot. Specifically, the proposed design procedure considers the mechanical structure and leg configuration, the actuation and drive mechanisms, payload, movement conditions, and walking gait, giving a useful tool for the deliberate choice of the main design characteristics. A case study has been reported to show the effectiveness and feasibility of the proposed methodology.

VI. REFERENCES

[1] M. Nandhini, V. Krithika, K. Chittal, "Design of four pedal quadruped robot", 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI).
[2] Muddasar Naeem, Syed Tahir Hussain Rizvi, Antonio Coronato, "A Gentle Introduction to Reinforcement Learning and its Application in Different Fields", IEEE Access (Volume: 8), 17 November 2020.
[3] Lanxiang Zheng, Ping Zhang, Jia Ta, Fang Li, "The Obstacle Detection Method of UAV Based on 2D Lidar", IEEE Access (Volume: 7), 07 November 2019.
[4] Beakcheol Jang, Myeonghwi Kim, Gaspard Harerimana, Jong Wook Kim, "Q-Learning Algorithms: A Comprehensive Classification and Applications", IEEE Access (Volume: 7), 13 September 2019.
[5] Metin Toz, "Design and Kinematic Analysis of a 6-DOF Asymmetric Parallel Robot Manipulator with 4-SPS and 2-CPS Type Legs", 2021 International Conference in Advances in Power, Signal, and Information Technology (APSIT), 21 December 2021.
[6] Fahad Alaieri, André Vellino, "Ethical Decision Making in Robots: Autonomy, Trust and Responsibility", 2016 The Eighth International Conference on Social Robotics.
[7] Dekui Lv, Xiaxin Ying, Yanjun Cui, Jianyu Song, Kuidong Qian, Maolin Li, "Research on the technology of LIDAR data processing", 2017 First International Conference on Electronics Instrumentation & Information Systems (EIIS).
[8] Angelo Nikko Catapang, Manuel Ramos, Jr., "Obstacle Detection using a 2D LIDAR System for an Autonomous Vehicle", 2016 6th IEEE International Conference on Control System, Computing and Engineering.
[9] Deepali Ghorpade, Anuradha D. Thakare, "Obstacle Detection and Avoidance Algorithm for Autonomous Mobile Robot using 2D LiDAR", International Conference on Computing Communication Control and Automation (ICCUBEA), 2017.
[10] Kristijan Macek, Nedjeljko Peric, "A reinforcement learning approach to obstacle avoidance of mobile robots", 7th International Workshop on Advanced Motion Control (AMC).
[11] Bing-Qiang Huang, Guang-Yi Cao, Min Guo, "Reinforcement learning neural network to the problem of autonomous mobile robot obstacle avoidance", Fourth International Conference on Machine Learning and Cybernetics (ICMLC), 21 August 2005.
