Kshitij Synopsis

The document discusses developing a multi-sensory object detection system for autonomous drones with the following objectives: 1. Implement robust object detection that can reliably identify objects in various conditions including low light and adverse weather. 2. Achieve real-time processing to allow the drone to react quickly to detected objects. 3. Attain high detection accuracy across multiple sensor modalities like cameras, LiDAR, and radar to minimize errors. 4. Develop effective sensor fusion techniques to combine data from different sensors and enhance detection capabilities. 5. Prioritize safety by reliably detecting critical objects like obstacles to prevent collisions.


Pimpri Chinchwad Education Trust's
Pimpri Chinchwad College of Engineering & Research, Ravet, Pune
IQAC PCCOER
Academic Year: 2023-24    STCL Term: I

Synopsis

Name of the Student: Kshitij M. Patil

Roll No: B29 Branch: Computer Engineering

Mobile no: 8669477501

Email ID: [email protected]

Seminar Guide: Mrs. Shailaja Lohar

Signature of Student Signature of seminar guide


1. Title of the topic: Multi-Sensory Object Detection In Autonomous Drones.

2. Area of topic: Computer Vision

3. Abstract:

Automatically detecting and recognizing objects in images or videos is a critical task in computer vision. It is necessary for a variety of applications, such as augmented reality, security systems, autonomous vehicles, and medical imaging. Object detection techniques typically employ convolutional neural networks (CNNs) to analyze visual data, recognize various objects, and precisely outline their positions with bounding boxes. This method involves categorizing each object via classification, then estimating its precise position using regression.

Recent advancements in object detection accuracy and speed have benefited numerous industries that depend on visual comprehension and interpretation.
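The classify-then-regress pattern described above can be sketched as a toy detection head. All values here (anchor boxes, box deltas, class logits) are invented toy inputs, not output from any real network; a minimal sketch assuming NumPy is available:

```python
import numpy as np

def softmax(logits):
    """Convert raw class logits into probabilities."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decode_detections(anchors, box_deltas, class_logits, score_thresh=0.5):
    """Toy detection head: classification picks a label per anchor;
    regression refines the anchor box via predicted (dx, dy, dw, dh)."""
    probs = softmax(class_logits)       # classification branch
    labels = probs.argmax(axis=1)
    scores = probs.max(axis=1)
    # regression branch: shift the centre and rescale width/height of each anchor
    cx = anchors[:, 0] + box_deltas[:, 0] * anchors[:, 2]
    cy = anchors[:, 1] + box_deltas[:, 1] * anchors[:, 3]
    w = anchors[:, 2] * np.exp(box_deltas[:, 2])
    h = anchors[:, 3] * np.exp(box_deltas[:, 3])
    boxes = np.stack([cx, cy, w, h], axis=1)
    keep = scores >= score_thresh       # drop low-confidence detections
    return boxes[keep], labels[keep], scores[keep]

# two toy anchors in (cx, cy, w, h) form, with made-up predictions
anchors = np.array([[50.0, 50.0, 20.0, 20.0], [120.0, 80.0, 40.0, 30.0]])
deltas = np.array([[0.1, 0.0, 0.0, 0.0], [0.0, 0.0, 0.2, 0.0]])
logits = np.array([[2.0, 0.1, 0.1], [0.1, 3.0, 0.1]])  # 3 classes
boxes, labels, scores = decode_detections(anchors, deltas, logits)
print(labels)  # highest-scoring class per kept anchor
```

Real detectors parameterize the regression targets in essentially this way; only the scale of the network producing the logits and deltas differs.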
Brief About Contents:

3.1 Introduction:

Identifying and localizing different objects within an image or video stream is a crucial component of the computer vision task known as object detection. This technology has revolutionized how machines perceive and interact with their environment, opening up a wide range of applications across several industries.

Unlike image classification, which assigns a single label to an entire image, object detection pinpoints the locations of individual objects within an image. This localization is commonly depicted by drawing bounding boxes around the detected objects. Numerous cutting-edge applications, including autonomous driving, surveillance, medical imaging, robotics, and more, depend on object detection as a fundamental building block.

Traditional object detection methodologies, such as sliding-window methods and manually extracted features, have over time been replaced by more advanced deep learning techniques. Convolutional Neural Networks (CNNs) have been essential in this evolution, allowing the development of highly precise and efficient object detection models. Methods like Faster R-CNN, YOLO (You Only Look Once), and SSD (Single Shot MultiBox Detector) have become well known for their capacity to perform real-time or near real-time object detection.
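Detectors such as YOLO and SSD emit many overlapping candidate boxes per object, which are conventionally filtered with non-maximum suppression (NMS). The boxes and scores below are illustrative values, not real detector output; a minimal pure-Python sketch:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, suppress boxes
    that overlap an already-kept box beyond the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return kept

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the second box overlaps the first and is suppressed
```

Production libraries implement the same idea with vectorized operations, but the greedy logic is identical.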
3.2 Literature Review:

[1] Assessing Distribution Shift in Probabilistic Object Detection Under Adverse Weather (M. Hildebrand et al., 2023)
Objective/Proposed work: Evaluate uncertainty as a metric for distinguishing true- and false-positive detections in object detection under adverse weather conditions.
Methods/Techniques: Bayesian neural network, uncertainty estimation, signal detection theory, probabilistic object detection; augmented datasets with simulated distortions.
Datasets: KITTI, Waymo, Canadian Adverse Driving Conditions (CADC).
Findings/Limitations: Uncertainty distribution differences analyzed for true-positive and false-positive detections. KL divergence used to quantify differences; AUC metrics calculated for ROC curves; ADR used for real-time performance assessment.

[2] A Fine-Grained Object Detection Model for Aerial Images Based on YOLOv5 Deep Neural Network (Zhang Rui et al., 2023)
Objective/Proposed work: Improve fine-grained target detection in aerial images using the YOLOv5 neural network.
Methods/Techniques: YOLOv5-based modification with CSL angle classification.
Datasets: FAIR1M, DOTA.
Findings/Limitations: The proposed modification enables precise recognition of remote sensing objects with arbitrary orientation. Achieved mAP of 39.2 on the FAIR1M dataset and 72.68 on the DOTA dataset.

[3] R-YOLOv5: A Lightweight Rotating Object Detection Algorithm for Real-Time Detection of Vehicles in Dense Scenes (Zheng Wei Li et al., 2023)
Objective/Proposed work: Enhance vehicle detection accuracy and application in vehicle-road cooperative systems.
Methods/Techniques: Rotation angle branch for orientation prediction, circular smooth labels for angle classification, Swin Transformer block for feature fusion, Feature Enhancement Attention Module, Adaptive Spatial Feature Fusion structure.
Datasets: DroneVehicle, UCAS-AOD.
Findings/Limitations: Improved accuracy in detecting rotated vehicles; fast inference speed; compression in model size, parameters, and operations.

[4] RGB-D Salient Object Detection Using Saliency and Edge Reverse Attention (Tomoki Ikeda et al., 2023)
Objective/Proposed work: Improve RGB-D salient object detection.
Methods/Techniques: Saliency and Edge Reverse Attention (SERA), Multi-Scale Interactive Module (MSIM).
Datasets: Mixed dataset for training (DUT-RGBD + NJUD + NLPR); evaluation on DUT-RGBD, NJUD, NLPR, STEREO, and SIP.
Findings/Limitations: Improved object detection accuracy, enhanced detection of important object parts, and sharper object boundaries. Future work: address limitations and explore lighter-weight networks.

[5] A CNN-Transformer Hybrid Model Based on CSWin Transformer for UAV Image Object Detection (Zhou et al., 2023)
Objective/Proposed work: Propose a CNN-transformer hybrid model for efficient object detection in UAV images.
Methods/Techniques: CSWin Transformer, Hybrid Patch Embedding Module (HPEM), Slicing-Based Inference (SI).
Datasets: VisDrone2021-DET, UAVDT.
Findings/Limitations: The proposed model achieves better results than classic models; the CSWin Transformer enhances multiscale object detection; HPEM improves feature extraction; the SI method enhances small object detection. Limitations include varying contributions of pretrained models and gaps from SOTA methods.
3.3 Objectives:

Multi-Sensory Object Detection in Drones:

1. Develop Robust Object Detection:


- Objective: Implement a multi-sensory object detection system that can reliably
identify and classify objects in various environmental conditions, including low light,
adverse weather, and complex backgrounds.

2. Enable Real-time Processing:


- Objective: Achieve real-time data processing to ensure that the drone can react
quickly to detected objects and make timely navigation decisions.

3. Achieve High Accuracy:


- Objective: Attain a high level of accuracy in object detection and tracking across
multiple sensor modalities, minimizing false positives and false negatives.

4. Implement Sensor Fusion:


- Objective: Develop effective sensor fusion techniques to combine data from
cameras, LiDAR, radar, and microphones, enhancing object detection capabilities by
leveraging the strengths of each sensor type.

5. Ensure Safety and Reliability:


- Objective: Prioritize safety by creating a system that can reliably detect critical
objects, such as obstacles, other aircraft, or emergency situations, to prevent collisions
and ensure safe drone operation.

6. Contextual Awareness:
- Objective: Enhance object detection with contextual awareness by integrating
audio data to better understand the environment and the behavior of detected objects.

7. Adaptability to Different Environments:


- Objective: Develop a system that can adapt to a variety of environments, including
urban, rural, indoor, and outdoor settings, without a significant drop in performance.
8. Multimodal Object Tracking:
- Objective: Implement robust multi-modal object tracking algorithms to maintain a
consistent understanding of object movement and position across sensor modalities.

9. Autonomous Navigation:
- Objective: Enable the drone to autonomously navigate complex scenarios,
avoiding obstacles, following waypoints, and responding to dynamic situations using
multi-sensory object detection.

10. Continuous Improvement:


- Objective: Establish a framework for continuous improvement, including regular
updates to algorithms, sensors, and software to keep the system up to date with
technological advancements.
3.4 Methodology:

This concept focuses on enhancing object detection capabilities for autonomous drones by integrating multiple sensory inputs beyond visual data alone. Drones equipped with a combination of cameras, LiDAR, radar, and microphone arrays can provide more comprehensive and accurate object detection in various scenarios:

1. Multi-Sensor Fusion: Combine data from cameras, LiDAR, radar, and microphones in real time to create a holistic perception of the environment. Each sensor type offers unique advantages, such as LiDAR's ability to detect depth and radar's capacity to detect objects through obstacles or in adverse weather conditions.
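One common way to realize this step is late fusion: each sensor runs its own detector, and detections whose positions agree in a shared world frame are merged. The sketch below is one such strategy under assumptions not fixed by the synopsis (a shared 2-D coordinate frame, a simple distance gate, and noisy-OR confidence combination); all detections are invented values:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str          # "camera", "lidar", or "radar"
    position: tuple      # (x, y) in a shared world frame, metres
    confidence: float

def fuse(detections, match_radius=2.0):
    """Late fusion: cluster detections whose positions agree across
    sensors, then combine their confidences with a noisy-OR rule."""
    clusters = []
    for det in detections:
        for cluster in clusters:
            cx, cy = cluster[0].position
            if abs(det.position[0] - cx) <= match_radius and \
               abs(det.position[1] - cy) <= match_radius:
                cluster.append(det)
                break
        else:
            clusters.append([det])
    fused = []
    for cluster in clusters:
        # noisy-OR: the object exists unless every sensor missed it
        p_missed = 1.0
        for det in cluster:
            p_missed *= (1.0 - det.confidence)
        fused.append((cluster[0].position, 1.0 - p_missed))
    return fused

dets = [
    Detection("camera", (10.0, 5.0), 0.70),
    Detection("lidar",  (10.5, 5.2), 0.80),   # same object, agrees with camera
    Detection("radar",  (40.0, -3.0), 0.60),  # a second, separate object
]
fused = fuse(dets)
print(fused)
```

Note how the camera and LiDAR detections of the same object reinforce each other: the fused confidence exceeds either sensor's individual score, which is the core benefit of fusing complementary modalities.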

2. Contextual Awareness: Develop algorithms that consider the context of detected objects. For instance, by analyzing audio data, the system can differentiate between a moving car and a parked car, making better decisions about potential obstacles or threats.

3. Machine Learning Integration: Train machine learning models to process and interpret data from multiple sensors simultaneously. Use deep learning techniques for efficient feature extraction and object classification across sensor modalities.
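The usual architecture for this is a per-modality feature extractor feeding a shared classification head over the concatenated features. In the sketch below, fixed random projections stand in for trained backbones, and all dimensions and inputs are invented; it only illustrates the data flow, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(data, weight):
    """Stand-in for a per-modality deep feature extractor:
    a single linear projection followed by ReLU."""
    return np.maximum(weight @ data, 0.0)

# toy raw inputs for one object seen by three sensors
camera_patch = rng.normal(size=64)   # flattened image crop
lidar_points = rng.normal(size=32)   # flattened point-cloud statistics
radar_signal = rng.normal(size=16)   # range-Doppler features

# fixed random projections play the role of trained backbones
w_cam = rng.normal(size=(8, 64))
w_lid = rng.normal(size=(8, 32))
w_rad = rng.normal(size=(8, 16))

# concatenate per-modality features into one joint representation
fused = np.concatenate([
    extract_features(camera_patch, w_cam),
    extract_features(lidar_points, w_lid),
    extract_features(radar_signal, w_rad),
])                                   # 24-dim joint feature vector

w_head = rng.normal(size=(3, 24))    # shared head over 3 object classes
logits = w_head @ fused
pred = int(np.argmax(logits))
print(fused.shape, pred)
```

Training would jointly optimize all three extractors and the head so that each modality learns features the others lack.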

4. Dynamic Object Tracking: Implement advanced object tracking algorithms that can handle objects moving across different sensor fields of view. This allows the drone to maintain consistent tracking of objects, even when transitioning between sensor types.
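The core of such tracking is data association: matching each new detection, whichever sensor produced it, to an existing track so the object's identity survives the hand-off. The greedy nearest-neighbour sketch below uses invented positions and a simple distance gate; real trackers add motion models (e.g. Kalman filters) on top of this step:

```python
import math

def associate(tracks, detections, max_dist=3.0):
    """Greedy nearest-neighbour association: each detection is matched
    to the closest existing track within max_dist, or starts a new one.
    Track IDs stay stable even when the detection comes from a
    different sensor than the previous update."""
    next_id = max(tracks, default=-1) + 1
    for pos in detections:
        best_id, best_d = None, max_dist
        for tid, tpos in tracks.items():
            d = math.dist(pos, tpos)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            tracks[next_id] = pos      # unmatched detection: new track
            next_id += 1
        else:
            tracks[best_id] = pos      # update track with latest position
    return tracks

tracks = {}
associate(tracks, [(0.0, 0.0)])               # frame 1: camera detection
associate(tracks, [(0.8, 0.4)])               # frame 2: LiDAR sees same object
associate(tracks, [(1.5, 0.9), (20.0, 5.0)])  # frame 3: plus a new object
print(tracks)
```

The object detected by the camera in frame 1 keeps its track ID when LiDAR picks it up in frame 2, which is exactly the cross-sensor continuity the objective describes.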

5. Adaptive Decision-Making: Develop a decision-making system that dynamically adapts to the reliability of each sensor's data. For example, if the camera is obscured by fog, the drone can rely more on LiDAR and radar data for navigation.
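The fog example can be sketched as a reliability-weighted average of per-sensor detection scores: when a sensor degrades, its weight drops and the remaining sensors are renormalised. The reliability values here are hypothetical placeholders for whatever health estimate the system computes:

```python
def adaptive_confidence(sensor_scores, reliability):
    """Weight each sensor's detection score by an externally estimated
    reliability (e.g. camera reliability drops in fog), renormalising
    so that degraded sensors contribute less to the final decision."""
    total_w = sum(reliability[s] for s in sensor_scores)
    if total_w == 0:
        return 0.0
    return sum(score * reliability[s]
               for s, score in sensor_scores.items()) / total_w

# the camera disagrees with LiDAR and radar about an obstacle
scores = {"camera": 0.2, "lidar": 0.9, "radar": 0.85}

clear = {"camera": 1.0, "lidar": 1.0, "radar": 1.0}
foggy = {"camera": 0.1, "lidar": 0.9, "radar": 1.0}  # camera obscured by fog

print(adaptive_confidence(scores, clear))  # camera drags the average down
print(adaptive_confidence(scores, foggy))  # fusion leans on LiDAR and radar
```

In fog, the same set of raw scores yields a higher fused obstacle confidence, because the unreliable camera's low score is discounted; this is the behaviour the methodology asks for.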
References:

[1] Hildebrand, M., Brown, A., Brown, S., & Waslander, S. L. (2023). Assessing distribution
shift in probabilistic object detection under adverse weather. IEEE Access, 11, 3270447-
3270462.

[2] Zhang, R., Zhang, J., Zhang, X., Zhao, J., & Zhang, M. (2023). A fine-grained object
detection model for aerial images based on YOLOv5 deep neural network. IEEE
Transactions on Geoscience and Remote Sensing, 61(6), 1-11.

[3] Li, Z. W., Liu, S., Chen, H., & Chen, C. S. (2023). R-YOLOv5: A lightweight rotating
object detection algorithm for real-time detection of vehicles in dense scenes. IEEE
Transactions on Intelligent Transportation Systems.

[4] Ikeda, T., Kondo, K., & Takiguchi, T. (2023). RGB-D salient object detection using
saliency and edge reverse attention. IEEE Transactions on Image Processing, 32(5), 2091-
2103.

[5] Zhou, Y., Hu, H., Wu, P., & Sun, J. (2023). A CNN-Transformer hybrid model based on
CSWin transformer for UAV image object detection. IEEE Transactions on Circuits and
Systems for Video Technology.

[6] Al-Kaff, A. H., et al. (2022). Multi-sensor object detection for autonomous drone navigation in GPS-denied environments. IEEE Access.

[7] Andra, K., et al. (2021). Multisensory surveillance drone for survivor detection and geolocalization in complex post-disaster environment. Semantic Scholar.

[8] Gupta, G. K., et al. (2022). Multi-sensor fusion for object detection in autonomous drones. 2022 IEEE Aerospace Conference.

[9] Liang, J., et al. (2022). Multi-sensor fusion for object detection in autonomous aerial vehicles: A survey. IEEE Sensors Journal.
