Hexapod Ideation Document

The document outlines the design and methodology for building a hexapod robot, detailing its components, electrical design, and programming for locomotion using reinforcement learning. It highlights the use of advanced technologies like LiDAR for mapping and Kalman filters for sensor fusion to enhance the robot's adaptability across various terrains. The hexapod aims to provide a versatile solution for surveillance and exploration, capable of walking, climbing, flying, and swimming by utilizing modular attachments.

HEXAPOD ROBOT

1. Type of Robot:
Hexapod

2. Robot Assembly Design:

Figure 1 U-joint
Figure 2 Servo mount
Figure 1 Bottom Cover
Figure 2 Leg
Figure 3 Link2
Figure 4 MG995 servo motor
Components used
TYPE             NAME / SPEC                  QUANTITY
Computation      Raspberry Pi 4 (8GB RAM)     1
                 Coral TPU                    1
Actuation        SC15 servo motors            18
                 ESP32 driver                 2
Power            3S LiPo, 2,300 mAh           1
                 Buck converter               1
Sensor modules   Camera (RPi cam)             1

The methodology of making the robot:

Electrical and electronics:


Power Supply:
First comes the design of all the internal electronics. Since we are going for
custom-made electronics and parts, we need a power supply that can handle
roughly 15 A of current. To tackle this, we use a 4S LiPo battery and step its
voltage down to 5 V. Because stepping down a battery output of up to 200 W
requires a converter rated for that power, we designed a custom converter that
can handle it. The schematic of the PCB is given below. The design uses an
asynchronous topology and reaches an efficiency of more than 95%.
Figure 5 converter design
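As a rough sanity check on the converter sizing, the arithmetic below works through the power budget. It is an illustrative sketch only: the 14.8 V nominal pack voltage, the 15 A peak draw on the 5 V rail, and the 95% efficiency are taken as assumptions from the description above.

    # Rough power-budget check for the 5 V rail (illustrative values only).
    V_BATT_NOMINAL = 14.8   # V, 4S LiPo nominal (assumed)
    V_OUT = 5.0             # V, servo/logic rail
    I_OUT_PEAK = 15.0       # A, rough peak draw for 18 servos plus compute (assumed)
    EFFICIENCY = 0.95       # claimed converter efficiency

    p_out = V_OUT * I_OUT_PEAK        # output power, about 75 W
    p_in = p_out / EFFICIENCY         # power drawn from the pack
    i_batt = p_in / V_BATT_NOMINAL    # current seen by the 4S pack
    p_loss = p_in - p_out             # heat dissipated in the converter

    print(f"Output power  : {p_out:.1f} W")
    print(f"Input power   : {p_in:.1f} W")
    print(f"Battery draw  : {i_batt:.2f} A")
    print(f"Converter loss: {p_loss:.1f} W")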

General electronics:
The main challenge of making a hexapod is actuation: driving 18 servos is
difficult without a hardware abstraction layer. To solve this, we made another
custom PCB that repurposes a display PWM driver as a servo driver. This
simplifies the entire circuit and makes it easy to actuate all 18 servos over
I2C, which keeps the footprint of the bot smaller and more efficient while
adding back-EMF protection.
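A minimal sketch of how the host could set one servo channel over I2C. The bus address, register layout, per-channel stride, and pulse-width encoding below are hypothetical placeholders, not the actual register map of our driver board:

    # Illustrative I2C write to a PWM/servo driver channel (register map is hypothetical).
    from smbus2 import SMBus

    I2C_BUS = 1            # Raspberry Pi's default I2C bus
    DRIVER_ADDR = 0x40     # assumed driver address (placeholder)
    CH0_PULSE_REG = 0x06   # assumed register for channel 0 pulse width (placeholder)

    def set_servo_us(channel, pulse_us):
        """Write a pulse width in microseconds (500-2500) to one servo channel."""
        pulse_us = max(500, min(2500, pulse_us))
        reg = CH0_PULSE_REG + 4 * channel                 # assumed 4-byte stride per channel
        data = [pulse_us & 0xFF, (pulse_us >> 8) & 0xFF]  # little-endian pulse width
        with SMBus(I2C_BUS) as bus:
            bus.write_i2c_block_data(DRIVER_ADDR, reg, data)

    # Example: centre the leg-0 coxa servo (channel 0) at 1500 us
    set_servo_us(0, 1500)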

For computation we have picked both a Raspberry Pi and a Coral TPU. This
combination is powerful enough to run all the ML applications that enable path
planning and obstacle avoidance.
Figure 7 Servo driver
Figure 6 Servo driver board
Figure 8 Servo driver PCB
Calculation
Figure 9 Inverse Kinematics

Figure 10 Trajectory planning (a)


Figure 11 Trajectory planning (b)
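Alongside the derivation in Figure 9, here is a minimal sketch of the standard three-joint (coxa/femur/tibia) leg inverse kinematics commonly used for hexapods. The link lengths and the example target are placeholder values, not the dimensions of our CAD model:

    # Illustrative 3-DOF leg inverse kinematics (coxa/femur/tibia), planar-hinge model.
    import math

    L1, L2, L3 = 0.045, 0.080, 0.120   # coxa, femur, tibia lengths in metres (assumed)

    def leg_ik(x, y, z):
        """Joint angles (coxa, femur, tibia) in radians for a foot target (x, y, z)
        in the coxa frame: x outward, y sideways, z up (z < 0 below the hip)."""
        coxa = math.atan2(y, x)                    # rotate the whole leg toward the target
        r = math.hypot(x, y) - L1                  # horizontal reach past the coxa link
        d = math.hypot(r, z)                       # femur-joint-to-foot distance
        if d > L2 + L3 or d < abs(L2 - L3):
            raise ValueError("foot target out of reach")
        alpha = math.atan2(z, r)                                   # elevation toward the foot
        beta = math.acos((L2**2 + d**2 - L3**2) / (2 * L2 * d))    # law of cosines at the femur
        femur = alpha + beta                       # femur angle from horizontal
        gamma = math.acos((L2**2 + L3**2 - d**2) / (2 * L2 * L3))  # knee angle
        tibia = gamma - math.pi                    # 0 = leg fully extended
        return coxa, femur, tibia

    # Example: a foot point 10 cm out, 2 cm sideways, 8 cm below the hip
    print(leg_ik(0.10, 0.02, -0.08))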

Programming
Locomotion using Reinforcement Learning
The proposed hexapod robot aims to be a versatile and efficient traverser of
terrain thanks to state-of-the-art reinforcement learning. Our current
prototype, which includes a Time-of-Flight (ToF) depth camera for depth
sensing, an Inertial Measurement Unit (IMU) for robot positioning and
orientation, and conventional servo motors for leg movement, points to the
technology's future possibilities: BLDC motors with advanced FOC control are
set to be incorporated in the final version, along with motor drivers for more
accurate control. RL is incorporated in the locomotion controller of the
autonomous hexapod so that it can learn its basic environment.

To train the robot, we use Proximal Policy Optimization (PPO), an advanced
reinforcement learning algorithm, within the NVIDIA Isaac Gym environment. PPO
works well in multi-dimensional control spaces, which suits our hexapod given
the complex movements needed for walking.

PPO (Proximal Policy Optimization) is an enhanced reinforcement learning
approach developed to fine-tune policies for choosing actions in highly
uncertain settings. PPO works by continuously updating the policy to obtain
the maximum expected reward while ensuring stability. It does this using a
clipped objective function that limits the size of each policy adjustment, so
the new policy does not make large updates that could cause fluctuations
during learning. On the one hand it promotes the search for new effective
actions within a given state through the exploration-exploitation trade-off;
on the other hand it constrains the difference from the current policy via a
surrogate objective that contains a clipping function. This makes sure the new
policy does not stray too far from the old one, so there is steady progress in
learning. PPO can also adjust the learning rate dynamically depending on the
agent's performance. Reusing the collected experience for minibatch updates
over multiple epochs lets the algorithm use its data effectively and converge
far better than many other techniques in high-dimensional control problems
such as robotic gait.
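To make the clipped objective concrete, the snippet below is a minimal sketch of the PPO clipped surrogate loss in PyTorch. The function name, the 0.2 clip range, and the tensor inputs are illustrative assumptions rather than an excerpt from our training code:

    import torch

    def ppo_clipped_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
        """PPO clipped surrogate loss (to be minimised).

        log_probs_new : log pi_theta(a|s) under the policy being updated
        log_probs_old : log pi_theta_old(a|s) from the rollout (detached)
        advantages    : advantage estimates A(s, a), e.g. from GAE
        """
        ratio = torch.exp(log_probs_new - log_probs_old)   # pi_new / pi_old
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        # take the pessimistic bound, then negate because optimisers minimise
        return -torch.min(unclipped, clipped).mean()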

Key Features of PPO:

1. Clipped Objective Function: PPO employs a clipped objective function to
regulate the size of each policy update, which gives stable learning and
progressive improvement in the robot's movement skills.

2. Adaptive Learning Rate: The learning rate is adapted during training so
that exploration and exploitation are balanced in the right proportion for the
terrain being traversed.

3. Parallel Training: Isaac Gym facilitates parallel training across different
scenarios, which shortens the training phase and improves how efficiently the
robot adapts.

Building on the above, two additions give the robot a more accurate
navigational route: the first introduces LiDAR mapping, and the second applies
Kalman filters for precise movement estimation by combining data from several
different sensors.
Mapping with LiDAR
A LiDAR (Light Detection and Ranging) system helps the robot map its
environment, detect obstacles, and analyse the ground. This makes it possible
for the robot to adjust its movements to changes in the ground, reducing
instability as well as the time needed to accomplish a given task.
• Environmental Awareness: LiDAR gives detailed maps, which are important for
identifying and avoiding obstacles in real time.
• Terrain Adaptation: The generated maps allow the robot to adapt its gait and
movements to the characteristics of the terrain; a small obstacle-detection
sketch follows this list.
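As a small illustration of how a planar LiDAR scan can be turned into obstacle points in the robot frame, the sketch below assumes a simple range/angle scan format and a 0.5 m proximity threshold; both are placeholder choices, not parameters from our mapping stack:

    import numpy as np

    def obstacles_from_scan(ranges, angle_min, angle_increment, max_dist=0.5):
        """Convert a planar LiDAR scan into (x, y) obstacle points closer than max_dist.

        ranges          : 1-D array of range readings in metres
        angle_min       : angle of the first beam in radians
        angle_increment : angular step between beams in radians
        """
        ranges = np.asarray(ranges, dtype=float)
        angles = angle_min + angle_increment * np.arange(len(ranges))
        valid = np.isfinite(ranges) & (ranges > 0.0) & (ranges < max_dist)
        xs = ranges[valid] * np.cos(angles[valid])
        ys = ranges[valid] * np.sin(angles[valid])
        return np.stack([xs, ys], axis=1)   # N x 2 array of nearby obstacle points

    # Example: a fake 360-beam scan with one close return straight ahead
    scan = np.full(360, 4.0)
    scan[0] = 0.3
    print(obstacles_from_scan(scan, angle_min=0.0, angle_increment=np.deg2rad(1.0)))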
Sensor Fusion using Kalman Filters
Locomotion estimation is then calculated from IMU, encoded motor data, and
LiDAR data with the help of Kalman filters. This integration gives a detailed
state estimate, which is essential for the robot.
• IMU Data: Provides acceleration and angular velocity, which are key to
tracking dynamic motion.
• Encoded Motor Data: Reports the joint rotations, which are important for
calculating the robot's position and velocity.
• LiDAR Integration: Adds spatial context, improving the position estimate and
correcting sensor drift.
Advantages of Kalman Filters:
1. Noise Reduction: They decrease sensor noise, providing accurate state
estimates.
2. Data Integration: They combine data from multiple sensors, enabling
precise, real-time state estimation.
3. Dynamic Adaptation: They adapt to the incoming raw data, giving the robot
precise, well-measured movements as conditions change. A small fusion sketch
follows this list.
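A minimal one-dimensional sketch of the predict/update cycle behind this fusion, where an IMU-derived acceleration drives the prediction and a LiDAR-derived position drives the correction. The constant-velocity model, loop period, and noise values are illustrative placeholders, not tuned values from our filter:

    import numpy as np

    # State: [position, velocity]; constant-velocity model with placeholder noise.
    dt = 0.02                                  # control loop period in seconds (assumed)
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    B = np.array([[0.5 * dt**2], [dt]])        # how IMU acceleration enters the state
    H = np.array([[1.0, 0.0]])                 # LiDAR measures position only
    Q = np.diag([1e-4, 1e-3])                  # process noise (placeholder)
    R = np.array([[4e-4]])                     # LiDAR measurement noise (placeholder)

    x = np.zeros((2, 1))                       # initial state estimate
    P = np.eye(2)                              # initial covariance

    def kalman_step(x, P, accel, lidar_pos):
        # Predict using the IMU acceleration as a control input
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        # Update with the LiDAR-derived position measurement
        y = np.array([[lidar_pos]]) - H @ x    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = kalman_step(x, P, accel=0.1, lidar_pos=0.002)
    print(x.ravel())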

Simulation and Real-World Integration

The training is carried out in a virtual environment that mimics real-world
physics in the robot's movements. Combined with the Isaac Gym environment, ROS
makes it possible to smoothly carry the behaviours learned in simulation over
to the real robot, so the hexapod will be able to function on different
surfaces without many modifications.
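As an illustration of the ROS side of this hand-off, here is a minimal node that publishes joint angles produced by a trained policy at a fixed rate. The topic name, message type, and 18-element layout are assumptions made for this sketch, not our actual interface:

    #!/usr/bin/env python3
    # Illustrative ROS 1 node: publish policy outputs as joint angles at 50 Hz.
    import rospy
    from std_msgs.msg import Float32MultiArray

    def run():
        rospy.init_node("hexapod_policy_bridge")
        pub = rospy.Publisher("/hexapod/joint_angles", Float32MultiArray, queue_size=1)
        rate = rospy.Rate(50)
        while not rospy.is_shutdown():
            angles = [0.0] * 18            # replace with the trained policy's output
            pub.publish(Float32MultiArray(data=angles))
            rate.sleep()

    if __name__ == "__main__":
        run()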
GitHub link

https://github.com/saieswaramurali/HEXA_1

Application of the proposed Robot

Existing surveillance and exploration technologies face limitations in terrain
adaptability, often requiring multiple robots for varied environments. Drones,
for example, can fly over obstacles but lack ground or aquatic capabilities,
while rovers are restricted to ground terrain and cannot fly or float. We
propose Hexapod, a modular, six-legged robot designed to adapt to various
terrains. It can walk, climb, fly, and swim by attaching different modules,
such as drone or aquatic modules, enhancing its versatility. Hexapod enables
efficient surveillance and exploration across multiple terrains with a single
device, reducing the power consumption involved in deploying multiple robots.

Timeline for Robot Making with milestones

ACTIVITY          DAYS
DESIGNING         5
MANUFACTURING     3
PROGRAMMING       5
TESTING           3
FIELD TESTING     2
Proposed Photo
Figure 12 Render of the Hexapod

Figure 13 Actual assembly
Link for other videos and photos
https://photos.app.goo.gl/geKAumizGeE7DUQb8
