Multi Tasking Robot Synopsis
6] "Research and Design of Robot Obstacle Avoidance Strategy Based on Multi-Sensor and Control." IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA). IEEE, 2022. Cited by: Papers (3).
7] Shervin Shirmohammadi; Fahimeh. "Design and Implementation of a Line Follower Robot." 2024 10th International Conference on Artificial Intelligence and Robotics (QICAR). IEEE, 2024.
LITERATURE SURVEY:
1. Zhihao Chen et al. [1] implement a framework for object identification, localization and monitoring for smart-mobility applications such as road traffic and railway environments. An object detection and tracking approach was first carried out with two deep learning models: You Only Look Once (YOLO) v3 and the Single Shot Detector (SSD).
2. Zhong-Qiu Zhao et al. [2] present an analysis of deep learning frameworks for object detection. Generic object detection architectures are addressed in the context of convolutional neural networks (CNNs), along with modifications and useful tricks to boost detection efficiency.
3. Licheng Jiao et al. [3] highlight the rapid growth of deep learning networks for detection tasks, through which the efficiency of object detectors has been greatly enhanced.
4. Yakup Demir et al. [4] address autonomous driving, which requires reliable and accurate detection and identification of surrounding objects in real drivable environments. While numerous object detection algorithms have been proposed, not all are robust enough to detect and identify occluded or truncated objects. A new hybrid Local Multiple System (LMCNNSVM) based on Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) is proposed in this paper, owing to the powerful feature-extraction capability of CNNs and the robust classification property of SVMs.
5. Mukesh Tiwari et al. [5] discuss that the identification and tracking of objects are important research areas due to daily change in object motion and variance in scene size, occlusions, variations in appearance, and changes in ego-motion and illumination. In particular, the selection of features is a vital part of object tracking.
PROBLEM DEFINITION:
The goal is to design and develop a multitasking robot that can perform multiple tasks simultaneously. The
robot should be able to:
Primary Tasks
1. Line Following: Follow a predetermined path using sensors.
2. Obstacle Avoidance: Detect and avoid obstacles using ultrasonic sensors.
3. Object Detection: Use computer vision to detect and recognize objects.
Performance Requirements
1. Multitasking Capability: Perform multiple tasks simultaneously without significant delays.
2. Efficient Performance: Optimize performance to minimize processing time and maximize efficiency.
3. Reliability: Ensure reliable operation, minimizing errors and failures.
Environmental Considerations
1. Operating Environment: The robot will operate in a controlled environment with minimal obstacles.
2. Lighting Conditions: The robot will operate in a well-lit environment with minimal variations in lighting.
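As a sketch of how the line-following task above can be realized, the following standard C++ function (hypothetical, not the project's actual firmware) maps the two IR sensor readings to a steering decision; on the Arduino the same logic would drive the motors through the L293D driver. Sensor polarity (whether "true" means "on the line") is an assumption here.

```cpp
#include <string>

// Line-following decision: each IR sensor is assumed to report true when it
// sees the black line. Hypothetical sketch; polarity may differ on real hardware.
std::string lineFollowStep(bool leftOnLine, bool rightOnLine) {
    if (leftOnLine && rightOnLine)  return "forward";    // line under both sensors
    if (leftOnLine && !rightOnLine) return "turn_left";  // line drifting to the left
    if (!leftOnLine && rightOnLine) return "turn_right"; // line drifting to the right
    return "search";                                     // line lost: rotate to re-acquire
}
```

The same four-way decision can be extended with PID control over an analog sensor array for smoother tracking, but a two-sensor bang-bang scheme is sufficient for the controlled environment described above.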
PROBLEM SOLUTION:
Hardware Components
1. Arduino Board: Arduino Mega 2560
2. Sensors:
- Infrared sensors for line following.
- Ultrasonic sensors for obstacle avoidance.
- Camera module for object detection.
3. Actuators:
- DC motors for movement.
- LEDs for status indication.
4. Power Supply: battery pack with voltage regulator.
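A minimal sketch of how the ultrasonic sensor listed above yields an obstacle distance: the sensor reports the round-trip echo time, which converts to centimetres via the speed of sound. The 15 cm avoidance threshold is an assumed value, not one taken from this build.

```cpp
// Ultrasonic ranging (HC-SR04-style): the sensor reports the round-trip echo
// time in microseconds. Sound travels ~0.0343 cm/us, and the pulse covers the
// distance twice, so distance = duration * 0.0343 / 2.
double echoToCm(double echoMicros) {
    return echoMicros * 0.0343 / 2.0;
}

// Decision threshold: treat anything nearer than 15 cm (an assumed value)
// as an obstacle that should trigger avoidance.
bool obstacleAhead(double echoMicros, double thresholdCm = 15.0) {
    return echoToCm(echoMicros) < thresholdCm;
}
```

For example, a 1000 µs echo corresponds to roughly 17 cm, which would not trigger avoidance at the assumed threshold.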
OBJECTIVE:
The objectives of the multi-tasking robot project focus on integrating manual control, obstacle detection,
voice control, and line-following capabilities into a single platform. This involves creating an intuitive user
interface that facilitates seamless interaction, enhancing accessibility for users with varying technical
expertise. A key aim is to implement an effective obstacle detection system that ensures safe navigation in
dynamic environments, alongside developing a reliable line-following algorithm for autonomous operation.
The project will also conduct thorough testing and evaluation of the robot's functionalities to ensure
reliability and efficiency in real-world scenarios.
DEVELOPMENT:
[Block diagram] The Arduino Nano reads the left and right IR sensors and an ultrasonic sensor mounted on a servo motor. Through the L293D motor driver it drives the left DC motors (M1, M2) and the right DC motors (M3, M4). Separate power supplies feed the controller and the motor driver, and a mobile device provides the control link.
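The block diagram above can be sketched in code: the L293D sets each motor's direction from a pair of input pins, and differential drive steers by running the two sides in opposite directions. The pin logic below follows the L293D's standard truth table, but the mapping to this build's wiring is an assumption.

```cpp
#include <array>

// L293D direction logic for one side of the robot (two motors in parallel).
// IN1=HIGH, IN2=LOW -> forward; IN1=LOW, IN2=HIGH -> backward; both LOW -> stop.
// Returns {IN1, IN2} as booleans (HIGH = true). Wiring is illustrative.
std::array<bool, 2> l293dInputs(int direction) { // +1 forward, -1 backward, 0 stop
    if (direction > 0) return {true, false};
    if (direction < 0) return {false, true};
    return {false, false};
}

// Differential drive: a right turn runs the left side forward and the right
// side backward, pivoting the chassis in place.
struct DriveCommand { int left; int right; };
DriveCommand turnRight() { return {+1, -1}; }
```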
RESOURCES REQUIRED FOR PROPOSED WORK:-
HARDWARE USED:-
SR. NO  Component Name          Requirement
1       Hard board sheet        15 x 10 cm
2       BO motors               4
3       Wheels                  4
4       Arduino Nano            1
5       L293D motor driver      1
6       Breadboard / PCB        1
7       IR sensor               2
8       Jumper wires            As required
9       Single-strand wires     As required
10      Mini servo (MG90S)      1
11      2x 3.7 V Li-ion battery 3
12      ON-OFF switch           1
13      33 pF capacitor         3
14      102 pF capacitor        1
CIRCUIT DIAGRAM:
SOFTWARE USED:
The software used to program a multitasking robot with manual control, obstacle detection, voice control, and line-following can vary depending on the microcontroller or platform chosen. Regardless of the environment, the software follows the workflow below:
➢ Component Integration: It uses multiple sensors (IR for line detection, ultrasonic for distance
measurement, and cameras for object recognition) alongside a processing unit (like Arduino or
Raspberry Pi).
➢ Data Processing: The processing unit continuously analyzes sensor data to interpret the
environment and make real-time decisions.
➢ Task Scheduling: It prioritizes tasks based on urgency, allowing simultaneous execution of
functions, like navigation and object recognition.
➢ Actuation: Commands are sent to motors and servos based on sensor inputs, enabling precise
movements (e.g., avoiding obstacles).
➢ Feedback Mechanism: A feedback loop allows the robot to adjust its actions dynamically in
response to environmental changes.
➢ Communication and Control: It may include features for voice recognition and communication
with external devices for user control.
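The task-scheduling and feedback steps above can be sketched as one prioritized control iteration (standard C++, hypothetical structure): obstacle avoidance pre-empts everything, a manual command overrides autonomous behavior when no hazard is present, and line following runs otherwise. The exact priority of manual control relative to line following is an assumption.

```cpp
#include <string>

// One iteration of the control loop. Priority order (assumed): obstacle
// avoidance first, then user command, then autonomous line following.
std::string controlStep(bool obstacleNear, bool onLine, const std::string& manualCmd) {
    if (obstacleNear) return "avoid";         // safety-critical: always wins
    if (!manualCmd.empty()) return manualCmd; // user override when no hazard
    if (onLine) return "follow_line";         // autonomous navigation
    return "idle";                            // nothing to do this cycle
}
```

Calling this function every loop cycle gives the simultaneous-task behavior described above without a real-time operating system: each cycle re-evaluates all sensor inputs and picks the highest-priority action.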
EXPECTED OUTCOMES:-
A multitasking robot that incorporates manual control, obstacle detection, voice control, and line-
following functionality is expected to operate seamlessly across a variety of scenarios. The robot
should respond quickly and accurately to user inputs, whether from a remote control or voice
commands, and demonstrate smooth, precise movement. In obstacle detection mode, it will avoid
collisions by stopping or changing direction upon detecting objects in its path. The robot will also be
able to follow a line with minimal deviation, making real-time adjustments to stay on track. It will
prioritize critical tasks, such as obstacle avoidance, over others like line-following if necessary,
ensuring safety and reliability. Additionally, the robot should be power-efficient, allowing for
extended operating times, and robust enough to handle typical environmental challenges without
frequent malfunctions. Ultimately, the robot will deliver smooth, real-time performance while
offering intuitive control and adaptability to various tasks.
ESTIMATED EXPENDITURE:
Hardware Components
1. Microcontroller (e.g., Arduino Mega)
2. Sensors (e.g., ultrasonic, infrared, camera)
3. Actuators (e.g., motors, LEDs)
4. Power Supply
5. Chassis and Mechanical Components
6. Wheels and Movement System
7. Communication Module (e.g., Wi-Fi, Bluetooth)
[2] Chaudhry, Aditya, et al. "Arduino Based Voice Controlled Robot." 2019 International Conference on
Computing, Communication, and Intelligent Systems (ICCCIS). IEEE, 2019.
[3] Srivastava, Deeksha, Awanish Kesarwani, and Shivani Dubey. "Measurement of Temperature and
Humidity by using Arduino Tool and DHT11." International Research Journal of Engineering and
Technology (IRJET) 5.12 (2018): 876- 878.
[4] Prity, Sadia Akter, Jannatul Afrose, and Md Mahmudul Hasan. "RFID Based Smart Door Lock Security System." American Journal of Sciences and Engineering Research, E-ISSN 2348-703X, 4.3 (2021).
[5] Srivastava, Shubh, and Rajanish Sing.
CONCLUSION:
The developed model detects and identifies objects using the YOLO algorithm, measures the distance of obstacles using ultrasonic sensors, and at the same time provides information on ambient conditions, such as temperature and the level of flammable gases like LPG, using temperature and gas sensors respectively. The use of the deep-learning-based object detection algorithm YOLO increases the efficiency of the system compared with systems that use conventional image processing (the SVM approach). The developed model can be used in industrial process automation, exploration of complex places, and the development of self-driving cars, as per the set goals. Not only is it efficient, but it also reduces human intervention and does not create pollution.