
TRIBHUVAN UNIVERSITY

INSTITUTE OF ENGINEERING
PASHCHIMANCHAL CAMPUS

A Major Project Proposal


On
MULTI AGENT ROBOT SYSTEM (MARS) FOR WAREHOUSE
AUTOMATION

Submitted By:
AMRIT KUMAR BANJADE PAS078BEI003
LAXMI PRASAD UPADHYAYA PAS078BEI018
SIDDARTHA GUPTA PAS078BEI038
SUMIT SIGDEL PAS078BEI042

Submitted To:
Department of Electronics and Computer Engineering
Pashchimanchal Campus

June, 2025
ABSTRACT

The increasing demand for efficient, scalable, and low-cost logistics solutions has highlighted the limitations of traditional, labor-intensive warehouse operations. This project introduces M.A.R.S (Multi-Agent Robot System), a modular and economically viable approach to warehouse automation utilizing a network of autonomous mobile robots. The system integrates hybrid sensor fusion techniques, combining data from IMUs, encoders, and cameras, for robust and accurate localization in dynamic indoor environments. Built on the Robot Operating System (ROS), M.A.R.S enables real-time communication, coordination, and parallel task execution. Using ESP32 microcontrollers, OpenCV, and grid-based path planning algorithms such as A*, the system ensures optimized navigation and collision avoidance. The prototype will be tested in a controlled warehouse-like environment, demonstrating its potential to reduce operational errors, increase throughput, and offer a scalable solution adaptable to various warehouse sizes.
Keywords: Localization, Multi-Agent Robotics, OpenCV, Path Planning, ROS, Sensor Fusion, Warehouse Automation

TABLE OF CONTENTS
ABSTRACT

LIST OF FIGURES

LIST OF TABLES

LIST OF ABBREVIATIONS

1 INTRODUCTION
  1.1 Background
  1.2 Problem Statement
  1.3 Objectives
  1.4 Feasibility Analysis
    1.4.1 Technical Feasibility
    1.4.2 Economic Feasibility
    1.4.3 Operational Feasibility
  1.5 Scope of the Project

2 LITERATURE REVIEW

3 BACKGROUND THEORY
  3.1 Localization
  3.2 Sensor Fusion
  3.3 Computer Vision
  3.4 Path Planning Algorithms
  3.5 Odometry

4 METHODOLOGY
  4.1 Hardware Design and Fabrication
    4.1.1 Mechanical Design
    4.1.2 Electronic Architecture
  4.2 Software Framework
    4.2.1 Robot Firmware
    4.2.2 Central Controller
  4.3 System Design
    4.3.1 System Setup
    4.3.2 System Block Diagram
    4.3.3 Flowchart

5 TOOLS AND TECHNIQUES
  5.1 Hardware Tools
  5.2 Software Requirements

6 EPILOGUE
  6.1 Expected Output
  6.2 Budget Analysis
  6.3 Work Schedule

REFERENCES

List of Figures
4.1 Electronic Architecture
4.2 Image Processing
4.3 Experimental Setup
4.4 Block Diagram
4.5 Flowchart
5.1 ESP-32 Wroom
5.2 IMU
5.3 Encoder
5.4 Encoder Motor
5.5 Motor Driver
6.1 Gantt Chart

List of Tables
6.1 Budget Analysis

LIST OF ABBREVIATIONS

ARUCO Augmented Reality University of Córdoba (marker system)
CAD Computer-Aided Design
DRL Deep Reinforcement Learning
ESP Espressif Module
ILP Integer Linear Programming
IMU Inertial Measurement Unit
LTL Linear Temporal Logic
MAPF Multi-Agent Path Finding
M.A.R.S Multi-Agent Robot System
MILP Mixed Integer Linear Programming
MPU Motion Processing Unit
MRTA Multi-Robot Task Allocation
OpenCV Open Source Computer Vision Library
PCB Printed Circuit Board
PDR Pedestrian Dead Reckoning
QR Quick Response
ROS Robot Operating System
RViz ROS Visualization Tool
SCT Supervisory Control Theory
SLAM Simultaneous Localization and Mapping

1. INTRODUCTION

1.1. Background

The first uses of automation in warehouse management began in the 1960s with the introduction of basic conveyor belts and automated storage and retrieval systems. Despite these early innovations, the majority of warehouses around the world continued to rely heavily on manual labor for tasks such as inventory management, order picking, and goods sorting. Even as technological advancements have accelerated, many warehouses, especially in small and medium-sized businesses, still operate with minimal automation, leading to time-consuming, error-prone, and labor-intensive processes.
In most warehouses around the world, manual workers are required to physically move products, pick items, and transport them to packing or dispatch areas. These manual processes increase errors and injuries while decreasing productivity and creating costly bottlenecks. While a few leading global companies have adopted advanced robotics and multi-robot systems, widespread adoption remains limited by high costs and integration challenges. As global e-commerce and supply chains grow more complex, efficient, scalable, and reliable warehouse operation is essential to stay competitive. This problem can be addressed through the integration of advanced technologies such as robotics, sensor fusion, and intelligent coordination algorithms.
This project aims to address these challenges by developing a cost-effective multi-robot warehouse automation system that leverages hybrid sensor fusion, combining IMU, encoder, and camera data for precise robot localization and coordination. By enabling multiple robots to work in parallel with minimal human intervention, the system aims to reduce human workload, minimize errors, and greatly enhance operational efficiency. This shift from manual to intelligent, automated systems built on modern robotics and sensor technologies is a step toward demonstrating the real-world benefits and feasibility of multi-robot automation in warehouse environments.

1.2. Problem Statement

Despite advancements in robotics and automation, most warehouses around the world still rely heavily on manual labor for inventory management, order picking, and goods handling. This manual approach leads to slow operations, frequent errors, increased labor costs, and limited scalability, even as the rise of e-commerce increases the need for rapid, accurate order fulfillment. While multi-robot automated systems have the potential to address these challenges, their widespread adoption is hindered by the difficulty of achieving low-cost, accurate, and reliable localization and coordination among multiple robots, particularly in dynamic warehouse environments. Traditional localization methods that rely on a single sensor, such as a camera, an IMU, or an encoder alone, often suffer from occlusion, drift, or wheel slippage, resulting in inefficient robot movements and an increased risk of collisions. There is a pressing need for a cost-effective, robust solution that enables precise multi-robot localization and coordination, allowing efficient parallel task execution with minimal human intervention. Addressing this gap is essential for modernizing warehouse operations, reducing human workload, and significantly improving efficiency and competitiveness in the logistics sector.

1.3. Objectives

• To design and implement a cost-effective multi-robot warehouse automation system and evaluate how this system reduces human workload and improves operational efficiency.

1.4. Feasibility Analysis

The feasibility analysis evaluates whether the proposed multi-robot warehouse automation project is practical and viable. For the proposed project, feasibility is assessed along the following dimensions:

1.4.1. Technical Feasibility

Advancements in robotics, automation, sensor fusion, and real-time control make it possible to implement reliable multi-robot systems in warehouse environments. The hardware components required for our project, such as IMUs, encoders, and cameras, are widely available and compatible with open-source software platforms like KiCad, ROS, FreeCAD, and OpenCV. The integration of these technologies allows accurate localization and coordination among multiple robots, ensuring the technical feasibility of the project.

1.4.2. Economic Feasibility

Our project is designed to be as cost-effective as possible through the use of affordable sensors and open-source robotic platforms. By automating routine warehouse tasks, the system can significantly reduce labor expenses and improve operational efficiency, offering business owners a favorable return on investment within a few years.

1.4.3. Operational Feasibility

The system is designed for dynamic and structured warehouse environments and can be scaled to match warehouse size and business needs. Its user-friendly nature reduces the need for specialized training and allows easy integration with existing workflows. The system lets business owners reduce manual labor and increase warehouse productivity while making operations more reliable and efficient.

1.5. Scope of the Project

The proposed system aims to develop a cost-effective and reliable multi-robot warehouse automation system that leverages hybrid sensor fusion. We will combine various sensors to enable accurate localization and coordination of multiple robots.
The project mainly focuses on an affordable and easily available hardware-software combination to ensure practical implementation even for small business owners. The scope of the project includes designing, implementing, and experimentally validating the system in a small-scale warehouse setup.

2. LITERATURE REVIEW

The field of warehouse automation using multi-agent robotic systems has rapidly advanced through innovations in task planning, motion coordination, and system safety. These developments form the foundation of the proposed M.A.R.S (Multi-Agent Robot System for Warehouse Automation), integrating safe control, efficient scheduling, and scalable coordination for dynamic warehouse environments.

Konishi et al. [1] proposed a hybrid approach that combines Deep Reinforcement Learning (DRL) with Supervisory Control Theory (SCT) to ensure scalable and formally verified safe control for multi-agent systems. While SCT provides correct-by-construction guarantees, it suffers from state-space explosion in large systems. Their proposed method extracts near-optimal DRL policies and integrates them into an SCT framework, achieving certified collision-free control for fleets of up to 20 robots.

Hustiu et al. [2] developed a Mixed Integer Linear Programming (MILP)-based strategy for parallel motion execution and path rerouting. Their approach introduces σ-vectors and parallel-move checks for coordinated path planning, reducing mission makespan by over 50% without additional sensing. The model is grounded in MAPF (Multi-Agent Path Finding) techniques with Boolean and LTL (Linear Temporal Logic) formulations and extends Petri-net frameworks with global rerouting and deadlock-avoidance algorithms such as a modified Banker's Algorithm.

Gong et al. [3] focused on the trajectory planning of omni-directional mobile robots (OMRs) and introduced the Orientation-Aware Timed Elastic Band (OATEB) framework. This system simultaneously optimizes position and orientation, enabling parallel task execution and reducing mission duration by more than 50% in both simulated and physical environments. The study highlights how integrating MPC (Model Predictive Control) and fuzzy-PI (Fuzzy Proportional-Integral) strategies enhances trajectory smoothness and robot adaptability.

Miloradović et al. [4] addressed gaps in the MRTA (Multi-Robot Task Allocation) taxonomy, particularly under Time-Extended Single-Robot Multi-Task Allocation (MT-SR-TA) scenarios. Their work proposes ILP (Integer Linear Programming) and CP (Constraint Programming) models that allow parallel task execution and formal representation of physical and virtual tasks, reducing overall makespan and advancing MRTA theory for complex mission planning.

Pecora et al. [5] proposed a loosely-coupled control framework that enables centralized coordination while treating planning, coordination, and control as modular processes. The system supports heterogeneous robot fleets, allowing dynamic goal re-assignment and path sharing using spatial envelopes and critical sections. It ensures deadlock-free navigation through dynamic prioritization heuristics and supports up to 17 robots in industrial scenarios.

Ghassemi and Chowdhury [6] tackled decentralized task allocation in disaster response by introducing online algorithms that use weighted bipartite matching. These methods handle asynchronous task arrivals, deadlines, and heterogeneous robot capabilities (e.g., payload and range constraints), achieving near-optimal task completion rates and demonstrating scalability in dynamic, real-time scenarios.

Kato and Kamoshida [7] developed a simulation environment for logistics warehouse design using self-contained agents. This multi-agent simulation supports large-scale experimentation with distributed decision-making and task execution, providing a valuable testbed for coordination and logistics strategies before physical deployment.

Ikumapayi et al. [8] reviewed the role of swarm robotics in sustainable warehouse automation. Inspired by biological collectives, swarm systems offer energy efficiency, fault tolerance, and adaptability. The review identifies challenges in communication, coordination, and system integration while outlining future directions for improving real-time responsiveness and scalability in swarm-based solutions.

Poulose and Han [9] reviewed indoor localization methods using IMU (Inertial Measurement Unit) sensors and smartphone cameras, noting that IMU-based systems suffer from drift and cumulative errors, while camera-based systems are vulnerable to lighting conditions and rapid motion. They examine prior PDR (Pedestrian Dead Reckoning) and SLAM (Simultaneous Localization and Mapping) based approaches, as well as hybrid systems that attempt to mitigate individual sensor weaknesses. Building on this, they propose a hybrid localization system that fuses IMU data with ORB-SLAM and UcoSLAM using a Kalman filter, achieving significantly improved accuracy over standalone methods.

3. BACKGROUND THEORY

3.1. Localization

Localization is the process by which a robot determines where it is within its environment. In simple terms, it gives the robot a sense of its own position. For warehouse robots to navigate efficiently and perform tasks like picking and placing items, knowing their exact position and orientation is crucial.
There are different ways for robots to localize themselves. One common method is the use of wheel encoders and IMUs (Inertial Measurement Units) that track movement and orientation. While this is a good starting point, their data alone is not always reliable because errors accumulate over time. To improve accuracy, robots can use cameras to recognize visual markers placed around the warehouse. Combining data from these sensors and cameras through a process called sensor fusion corrects errors and provides a more accurate and reliable position estimate. This hybrid approach is not only cost-effective but also a practical way for robots to determine their position, supporting effective coordination and overall efficiency.

3.2. Sensor Fusion

Sensor fusion is the process of combining data from multiple sensors to obtain a clearer and more reliable picture of where a robot is and how it is moving. Since no sensor is perfect, and each has its own strengths and weaknesses, sensor fusion is an essential part of our project.
Cameras can pinpoint the robot's location using visual markers, but they have higher latency and struggle in low light or when the view is obstructed. IMUs track movement and direction instantly, but their readings drift and become less accurate over time. Encoders measure how far the robot's wheels have turned but cannot tell whether the robot slips or gets stuck.
By using the strengths of each sensor to cancel out the others' weaknesses, the system becomes more reliable and accurate, making operations smoother, safer, and more efficient.
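
To make this concrete, the following is a minimal Python sketch of one of the simplest fusion schemes, a complementary filter for the robot's heading. The yaw-only scope and the blend factor ALPHA are illustrative assumptions; the actual system may use a more capable estimator such as a Kalman filter.

import math

# Assumed blend factor: trust the integrated gyro at short time scales and
# let the encoder-derived heading correct its long-term drift.
ALPHA = 0.98

def fuse_heading(prev_heading, gyro_rate, encoder_heading, dt):
    """Blend integrated gyro yaw with encoder heading (all angles in radians)."""
    gyro_heading = prev_heading + gyro_rate * dt  # responsive, but drifts
    fused = ALPHA * gyro_heading + (1 - ALPHA) * encoder_heading
    # Wrap the result back into [-pi, pi]
    return math.atan2(math.sin(fused), math.cos(fused))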

3.3. Computer Vision

Computer vision is a field of technology that enables computers and robots to see and understand images or videos, much as humans do with their eyes. It uses cameras and algorithms to detect, recognize, and interpret relevant information from the environment. In our project, computer vision helps robots spot visual markers like QR codes so that they know exactly where they are. While it gives accurate information, computer vision can struggle with bad lighting or blocked views and has higher latency than IMUs and encoders, which is why we combine it with other sensors for the best results.

3.4. Path Planning Algorithms

Path planning algorithms are an essential part of our project: they determine how robots use sensor data to find fast, efficient routes from one point to another while avoiding obstacles. For our project, we are going to use the A* (A-star) algorithm, which is widely recognized for its ability to find the shortest path in a grid-like environment. The algorithm is efficient, easy to implement, and ensures efficient navigation of the robots. To further enhance our system, we will extend A* to handle multiple robots moving at the same time by treating time as an additional planning dimension, which reduces the risk of collisions and improves the productivity of the whole system.
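
As a reference point, the following is a minimal Python sketch of grid-based A* with a Manhattan-distance heuristic on a 4-connected occupancy grid. The grid encoding and function signature are illustrative assumptions; our planner will extend this idea with time as an extra dimension for multi-robot coordination.

import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid.
    grid[r][c] == 0 means free, 1 means obstacle; start and goal are
    (row, col) tuples. Returns a list of cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # entries are (f, g, cell, parent)
    parent, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, par = heapq.heappop(open_set)
        if cur in parent:
            continue  # already expanded via a cheaper route
        parent[cur] = par
        if cur == goal:
            path = []
            while cur is not None:  # walk parents back to the start
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None  # no path exists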

3.5. Odometry

Odometry is a technique used by robots to estimate their position and orientation by measuring their own movement over time. Most commonly, robots use wheel encoders to count wheel rotations and infer the distance and direction of motion. This allows our robots to keep track of their motion in the warehouse. However, odometry is not perfect: errors build up over time due to wheel slip, uneven floors, and other sensor inaccuracies. To compensate, odometry is combined with other sensors such as cameras and IMUs to correct those errors and keep the robot's location estimate accurate. In our project, odometry provides a continuous estimate of each robot's movement, forming the basis of reliable navigation when combined with other sensor data.
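
For illustration, the standard differential-drive odometry update is only a few lines of Python. The wheel radius, wheel base, and encoder resolution below are hypothetical placeholder values, not our final hardware parameters.

import math

WHEEL_RADIUS = 0.015   # metres; assumed N20 wheel size
WHEEL_BASE = 0.10      # metres between the drive wheels; assumed
TICKS_PER_REV = 420    # encoder counts per wheel revolution; assumed

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Advance the pose estimate from incremental encoder tick counts."""
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * per_tick
    d_right = d_ticks_right * per_tick
    d_center = (d_left + d_right) / 2           # distance travelled by centre
    d_theta = (d_right - d_left) / WHEEL_BASE   # change in heading
    # Midpoint approximation of the arc travelled during this interval
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    theta += d_theta
    return x, y, math.atan2(math.sin(theta), math.cos(theta))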

4. METHODOLOGY

4.1. Hardware Design and Fabrication

4.1.1. Mechanical Design

For the mechanical design, we'll use FreeCAD to create the robot chassis, which gives us the flexibility to tweak the design as needed. The robot will be a differential-drive type, featuring two powered wheels at the back and a free-moving caster wheel at the front, which keeps it balanced and facilitates easy turning. We'll start by 3D printing the chassis for quick prototyping, but if needed, we can switch to laser-cut acrylic or other options later for a stronger, more durable build. The design will also include space to mount the motors and sensors, plus a small platform to carry packages, keeping everything compact so the robot can move smoothly around the warehouse setup.

4.1.2. Electronic Architecture

Figure 4.1: Electronic Architecture

The electronic architecture centers around the ESP32-WROOM microcontroller, which manages all processing and communication tasks. Movement is driven by N20 DC motors equipped with encoders to provide precise control of speed and position. An L293D motor driver handles power and direction control for the motors. To help with orientation and motion sensing, an MPU6050 IMU is included, providing data for better navigation. A Li-Po battery powers the electronic components. Together, these components form an agent that can receive commands from the central controller and execute tasks.

4.2. Software Framework

The software architecture is divided into two main components: the robot-side firmware
and the central controller logic.

4.2.1. Robot Firmware

Each robot runs embedded firmware on the ESP32-WROOM that handles low-level tasks such as motor control, encoder feedback processing, IMU reading, and command execution. To enable structured and scalable communication in a multi-robot setup, the firmware integrates micro-ROS, a lightweight version of the Robot Operating System (ROS) tailored for microcontrollers. Through micro-ROS, each robot operates as a ROS 2 node, capable of publishing sensor data and subscribing to task commands over a Wi-Fi network. This setup allows seamless communication with a central controller running full ROS 2, simplifying coordination and data exchange. The firmware is organized into non-blocking tasks, enabling real-time responsiveness and reliable operation in a parallel multi-agent system.
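
To illustrate the interface this creates, the sketch below shows a controller-side counterpart to one robot's micro-ROS node, written as a minimal rclpy node. The topic names (/robot_1/cmd_vel, /robot_1/pose) and message types are illustrative assumptions, not the final interface.

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Pose2D, Twist

class RobotBridge(Node):
    """Controller-side bridge to one robot's assumed micro-ROS topics."""

    def __init__(self, robot_name):
        super().__init__(f'{robot_name}_bridge')
        # Velocity commands the firmware is assumed to subscribe to
        self.cmd_pub = self.create_publisher(Twist, f'/{robot_name}/cmd_vel', 10)
        # Pose estimates the firmware is assumed to publish
        self.create_subscription(Pose2D, f'/{robot_name}/pose', self.on_pose, 10)

    def on_pose(self, msg):
        self.get_logger().info(
            f'pose: x={msg.x:.2f} y={msg.y:.2f} theta={msg.theta:.2f}')

def main():
    rclpy.init()
    rclpy.spin(RobotBridge('robot_1'))

if __name__ == '__main__':
    main()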

4.2.2. Central Controller

The central controller is deployed on a PC running the ROS 2 framework, serving as the central hub for communication, task management, and coordination of multiple autonomous robots in the warehouse. Its modular structure allows easy scalability and integration of additional functionality. Below are the key components of the central controller:

1. ROS Communication
The Robot Operating System (ROS) provides the communication backbone for our multi-agent system. The central controller communicates with each robot over Wi-Fi using micro-ROS, allowing the robots to operate as lightweight ROS 2 nodes. An overhead camera continuously monitors the warehouse, detecting and publishing robot positions to the system. The controller subscribes to the sensor and status topics published by the robots and to the position data from the camera, then publishes command messages to assign tasks such as pickup, delivery, and navigation. Further, the system leverages ROS 2 visualization tools like RViz2, enabling real-time observation of robot positions, paths, and sensor data.

2. Image Processing
Each robot and load is tagged with a unique ArUco marker, allowing reliable visual identification within the workspace. A fixed overhead camera continuously captures footage of the area, and the central controller processes these images using the OpenCV ArUco module, which performs fast and efficient marker detection in real time within our ROS 2 pipeline. By detecting and decoding the ArUco markers, the system determines the positions and orientations of both robots and loads. This information is crucial for assigning tasks, tracking progress, and ensuring smooth coordination in the multi-agent environment (a minimal detection sketch follows this list).

Figure 4.2: Image Processing

3. Motion Planning
Motion planning in the system is based on a virtual grid overlaid onto the workspace, generated from the field of view of the overhead camera. This grid divides the area into uniform cells, enabling the central controller to abstract the environment for pathfinding. Each robot's and load's position is mapped to a corresponding grid cell based on the ArUco marker detection. The planner then uses grid-based algorithms like A* or Dijkstra's algorithm to compute optimal, collision-free paths from the robot's current location to the assigned pickup or delivery point. The resulting path is translated into waypoints and movement commands, which are sent to the respective robot for execution.

4. Task Allocation
Task allocation is the mechanism by which the central controller assigns delivery tasks to individual robots based on their current position and proximity to loads. Using a cost-based greedy algorithm, the controller allocates tasks to robots in a way that minimizes travel time (a minimal sketch follows this list).
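
Two minimal, illustrative Python sketches of the ideas above follow. The first recovers marker positions and headings in pixel coordinates with OpenCV's ArUco module (assuming the OpenCV 4.7+ ArucoDetector API and a 4x4 marker dictionary); in the full system these pixel coordinates would be mapped onto the virtual grid. The second is a toy greedy allocator; function names and data layouts are our assumptions, not the final design.

import cv2
import numpy as np

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

def detect_poses(frame):
    """Return {marker_id: (x_px, y_px, yaw_rad)} for every visible marker."""
    corners, ids, _ = detector.detectMarkers(frame)
    poses = {}
    if ids is None:
        return poses
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        quad = marker_corners[0]       # the marker's 4 corner points, (4, 2)
        cx, cy = quad.mean(axis=0)     # marker centre in pixels
        dx, dy = quad[1] - quad[0]     # top edge gives the heading
        poses[int(marker_id)] = (float(cx), float(cy), float(np.arctan2(dy, dx)))
    return poses

def allocate_tasks(robot_cells, load_cells):
    """Greedy nearest-robot assignment: returns {load_id: robot_id}.
    Positions are grid cells (row, col); cost is Manhattan distance."""
    free = dict(robot_cells)
    assignment = {}
    for load_id, (lr, lc) in load_cells.items():
        if not free:
            break  # more loads than idle robots; the rest wait
        best = min(free, key=lambda rid: abs(free[rid][0] - lr)
                                         + abs(free[rid][1] - lc))
        assignment[load_id] = best
        del free[best]  # each robot takes at most one task per round
    return assignment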

4.3. System Design

4.3.1. System Setup

Figure 4.3: Experimental Setup

The presented experimental setup consists of a platform designed to simulate a miniature warehouse environment. On this platform, multiple mobile robots operate in coordination to perform pick-and-place tasks involving lightweight loads, which are represented by colored blocks or markers. An overhead camera is mounted directly above the platform, providing a top-down view of the entire workspace. This camera continuously captures real-time footage and feeds it to a central controller, which processes the images to identify and track the positions of robots and loads using ArUco markers.

4.3.2. System Block Diagram

Figure 4.4: Block Diagram

4.3.3. Flowchart

Figure 4.5: Flowchart

5. TOOLS AND TECHNIQUES

5.1. Hardware Tools

1. ESP32 WROOM
The ESP32 is a highly popular and versatile system-on-a-chip (SoC) that combines a microcontroller unit (MCU) with Wi-Fi and Bluetooth capabilities. It is developed by Espressif Systems and has gained significant attention in the IoT (Internet of Things) and embedded systems communities. Some of the key features of the ESP32 microcontroller are:

• It has a dual-core processor.
• It has built-in security features.
• It needs low power to operate.
• It has a dedicated development environment in which the microcontroller can be programmed to our requirements.

Figure 5.1: ESP-32 Wroom

2. Inertial Measurement Unit (IMU)
An IMU is an electronic device that measures and reports a body's specific force, angular rate, and sometimes orientation, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. Two main components of an IMU are:

(a) Accelerometer
(b) Gyroscope

Figure 5.2: IMU

3. Encoder Motor
Motor encoders are rotary encoders adapted to provide information about an electric motor shaft's speed and position. Incremental motor encoders are generally used to provide information about a motor shaft's speed and direction by means of two clock channels. Absolute motor encoders indicate the shaft's angular position as well as the direction and speed of its movement.

Figure 5.3: Encoder
Figure 5.4: Encoder Motor

4. Motor Driver (L293D)
The L293D is a dual H-bridge motor driver IC that allows you to control the direction and speed of two DC motors or one stepper motor using logic-level inputs. It can drive motors up to 600 mA per channel and supports bidirectional control with built-in diodes for back-EMF protection. It's widely used in robotics for controlling small motors in projects like line-followers, robotic arms, or mobile robots.

Figure 5.5: Motor Driver

5.2. Software Requirements

1. KiCad
KiCad is a free and open-source Electronic Design Automation (EDA) software suite used for designing and simulating electronic hardware, specifically printed circuit boards (PCBs). It encompasses tools for schematic capture, PCB layout, and 3D visualization.

2. FreeCAD
FreeCAD is a free, open-source 3D CAD (Computer-Aided Design) software that allows users to create, modify, and analyze 3D models. It is widely used for product design, engineering, and architecture, offering parametric modeling, which means you can easily edit your designs by changing parameters. FreeCAD supports a wide range of file formats and is suitable for beginners and professionals alike.

3. Programming Languages
Programming languages are the backbone of the software. In our project we use C++ and Python. C++ is used for embedded systems programming, while Python serves as the primary language for agent detection and collision avoidance, enabling efficient collaboration through computer vision. Python provides a wide range of libraries and frameworks that facilitate image processing, deep learning, and communication with hardware components.

4. ROS (Robot Operating System)
ROS is an open-source framework of tools, libraries, and conventions that standardizes communication between software components, abstracts hardware interfaces, and streamlines building and sharing robot code. It is widely used for perception and sensing, motion planning and control, and realistic simulation. Beyond individual robots, ROS powers multi-robot coordination, data logging, diagnostics, and rapid prototyping across industry and research.

6. EPILOGUE

6.1. Expected Output

After the completion of the project, three agents will execute tasks in parallel within the indoor warehouse. Each agent will be controlled and monitored by a central controller with a camera setup positioned to oversee the entire arena. Each agent will autonomously pick up objects from their current locations and accurately deliver them to predefined drop zones within the warehouse, ensuring a collision-free environment.

6.2. Budget Analysis

Component                      Quantity   Price (NRs.)
ESP32-WROOM Module             3          3,000
N20 Gear Motor with Encoder    6          4,723
L293D Motor Driver             3          250
MPU6050 IMU                    3          1,380
Web Camera                     1          2,353
Servo Motor                    6          1,500
Power Supply / Battery         6          1,000
Miscellaneous                  –          3,000
Total                                     17,206

Table 6.1: Budget Analysis

6.3. Work Schedule

Figure 6.1: Gantt Chart

It is estimated that the project will take 8–9 months to complete. This estimate may change depending on the complexity of the project and the availability of hardware components.

REFERENCES

[1] H. Konishi, Y. Takahashi, and K. Ozawa, "Efficient supervisory control of multi-agent systems using deep reinforcement learning," IEEE Access, vol. 10, pp. 49656–49669, 2022. [Online]. Available: https://doi.org/10.1109/ACCESS.2022.3173523

[2] I. Hustiu, O. Picard, J. Stuber, M. Faure, A. Rivière, O. Simonin, and F. Charpillet, "Parallel motion execution and rerouting for multi-robot systems: A MILP-based unified framework," Robotics and Autonomous Systems, vol. 153, p. 104100, 2022. [Online]. Available: https://doi.org/10.1016/j.robot.2022.104100

[3] T. Gong, X. Tang, J. Li, Y. Meng, and Y. Chen, "Orientation-aware timed elastic band for improved trajectory planning of omnidirectional mobile robots," IEEE Transactions on Industrial Informatics, vol. 17, no. 10, pp. 6820–6830, 2021. [Online]. Available: https://doi.org/10.1109/TII.2020.3042445

[4] B. Miloradović, B. Çürüklü, M. Ekström, and A. V. Papadopoulos, "Optimizing parallel task execution for multi-agent mission planning," IEEE Access, vol. 11, pp. 24367–24381, 2023. [Online]. Available: https://doi.org/10.1109/ACCESS.2023.3254900

[5] F. Pecora, H. Andreasson, M. Mansouri, and V. Petkov, "A loosely-coupled approach for multi-robot coordination, motion planning and control," Proceedings of the International Conference on Automated Planning and Scheduling, vol. 28, pp. 485–493, 2018. [Online]. Available: https://ojs.aaai.org/index.php/ICAPS/article/view/13923

[6] P. Ghassemi and S. Chowdhury, "Multi-robot task allocation in disaster response: Addressing dynamic tasks with deadlines and robots with range and payload constraints," Robotics and Autonomous Systems, vol. 147, p. 103905, 2021. [Online]. Available: https://doi.org/10.1016/j.robot.2021.103905

[7] T. Kato and R. Kamoshida, "Multi-agent simulation environment for logistics warehouse design based on self-contained agents," Applied Sciences, vol. 10, no. 21, p. 7552, 2020. [Online]. Available: https://doi.org/10.3390/app10217552

[8] O. Ikumapayi, O. Laseinde, R. Elewa, T. Ogedengbe, and E. Akinlabi, "Swarm robotics in a sustainable warehouse automation: Opportunities, challenges and solutions," E3S Web of Conferences, vol. 552, p. 01080, 2024. [Online]. Available: https://doi.org/10.1051/e3sconf/202455201080

[9] A. Poulose and D. Han, "Hybrid indoor localization using IMU sensors and smartphone camera," Sensors, vol. 19, no. 23, p. 5084, 2019. [Online]. Available: https://doi.org/10.3390/s19235084

