


2023 IEEE 8th International Conference for Convergence in Technology (I2CT)
Pune, India. Apr 7-9, 2023

Strategies for Improving Object Detection in Real-Time Projects that use Deep Learning Technology

Niloofar Abed
Amrita School for Sustainable Development
Amrita Vishwa Vidyapeetham
Amritapuri, India
[email protected]
https://orcid.org/0000-0002-3645-2469

Ramu Murugan
Associate Professor, Department of Mechanical Engineering,
Amrita School of Engineering
Amrita Vishwa Vidyapeetham
Coimbatore, India
https://orcid.org/0000-0002-4724-9052

Abstract— The popularity and prevalence of devices equipped with object detection technology and controllable via the Internet of Things (IoT) have increased, especially in the post-Corona era. The development of neural networks and artificial intelligence, combined with IoT systems, has achieved acceptable satisfaction among users in adverse conditions by reducing the need for manpower and increasing productivity. Therefore, the scope of using such mechanisms has expanded in most fields, from self-driving vehicles to agricultural crops. Beginners will be confronted with a massive amount of complex information as a result of the design and application of such technologies in interdisciplinary fields. Due to the popularity of the You Only Look Once (YOLO) object detection algorithm, this article provides a guideline, using traffic light classification as a practical subject, and offers suggested solutions and exclusive approaches for increasing the accuracy of object detection in real-time projects, with a practical orientation for enthusiasts and developers working on object detection scenarios that employ YOLO.

Keywords— Deep learning, Yolo, object detection, IoT

I. INTRODUCTION

The desire to use gadgets equipped with object detection technology in interdisciplinary fields is undeniable. As a result, the following article is offered as a "traffic light" to help readers select the most appropriate tool from the vast array of possibilities already available. Therefore, after introducing the principal concepts, the YOLO algorithm is described briefly, and finally, solutions to enhance the accuracy of object detection mechanisms based on deep learning are offered in this paper [1].

Techniques for detecting objects constitute the foundation of artificial intelligence (AI) [2]. Machine vision systems provide operational behavior by interpreting and processing visuals collected from their environment. They are collections of integrated computer hardware, electrical components, and software algorithms. The proposed process is controlled and automated through the data obtained from the vision system. Indeed, computer vision is a subset of Machine Learning (ML) that makes it possible for computers to process, analyze, and interpret the visual environment, with the purpose of analyzing data extracted from images and videos. In the fascinating world of ML, experts have gone further: by trying to bring the cognitive power of artificial intelligence as close as possible to the magnificent functioning of the human brain by means of algorithms, they have created the Deep Learning (DL) field. Deep Learning is a type of machine learning and artificial intelligence that utilizes artificial neural networks to develop solutions for tasks such as speech recognition, music composition, and pharmaceutical development.

A. Neural Networks

The concept of the neural network is taken from the network of the human brain, so its artificial neurons, or nodes, are likewise structured in three layers (Fig. 1):

- The input layer
- The hidden layer(s)
- The output layer

Fig. 1. Layers of a neural network

Each node passes the information it receives, multiplied by (initially random) weights and with a bias added, through a non-linear activation function that estimates whether and how strongly the neuron fires [3].
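As a small numerical sketch of this behavior (the layer sizes, the random weights, and the ReLU activation below are illustrative assumptions, not values from the paper), a single hidden layer can be computed as follows:

```python
import numpy as np

def relu(z):
    # non-linear activation deciding how strongly each neuron "fires"
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
x = rng.random(4)                 # input layer: 4 illustrative feature values
W = rng.standard_normal((3, 4))   # random initial weights of a 3-neuron hidden layer
b = rng.standard_normal(3)        # one bias per hidden neuron

hidden = relu(W @ x + b)          # weighted sum plus bias, passed through the activation
print(hidden)
```

During training, the weights and biases are adjusted iteratively so that the network's output approaches the desired labels.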
B. Deep Learning Algorithms

A deep learning algorithm uses unknown elements in the distribution of the input to extract features during self-learning; clustering objects and discovering efficient paths work in the same way. Deep learning algorithms use different layers for modeling, and it should be noted that these models include several algorithms. Among the deep learning algorithms, the following can be mentioned, each of which is used for one or more specific fields (Table I). Considering the focus of this article on object recognition, a brief overview of convolutional neural networks is given below.

TABLE I. DEEP LEARNING ALGORITHMS AND ACTIVITY FIELD

Deep Learning Algorithm | Supporting field
Convolutional Neural Networks (CNNs) | detecting anomalies, processing medical images, forecasting time series, analyzing satellite images
Autoencoders | pharmaceutical discovery, popularity prediction, image processing
Self-Organizing Maps (SOMs) | helping users to understand high-dimensional information
Generative Adversarial Networks (GANs) | rendering 3D objects, creating pictures of people's faces, making cartoon characters
Radial Basis Function Networks (RBFNs) | regression, time-series prediction, classification
Multilayer Perceptrons (MLPs) | image-recognition, speech-recognition, and machine-translation software
Long Short Term Memory Networks (LSTMs) | speech recognition, music composition, pharmaceutical development
Deep Belief Networks (DBNs) | image recognition, video recognition, motion-capture data
Restricted Boltzmann Machines (RBMs) [16] | collaborative filtering, classification, topic modeling, regression, dimensionality reduction, feature learning
Recurrent Neural Networks (RNNs) | image captioning, time-series analysis, natural-language processing, handwriting recognition, machine translation

C. Convolutional Neural Networks (CNNs)

Yann LeCun developed the CNN in 1988; it was originally named LeNet and was used for recognizing postal codes and digits. Convolutional Neural Networks (ConvNets) are multi-layered structures employed for image processing and object recognition. A CNN comprises a convolution layer with numerous filters to execute the convolution process and a ReLU layer to carry out element-wise operations. The image recognition process proceeds as follows: first, a rectified feature map is created, which is then fed to a pooling layer. The pooling layer reduces the feature map's dimensions and transforms it into a single linear vector. Finally, the flattened matrix from the pooling layer is given as input to a fully connected layer, which is responsible for classifying and identifying the images.
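A minimal PyTorch sketch of this convolution → ReLU → pooling → flatten → fully connected pipeline is shown below; the channel counts, kernel size, number of classes, and the 32x32 RGB input are illustrative assumptions rather than values taken from the paper.

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution layer with 16 filters
    nn.ReLU(),                                   # element-wise non-linearity
    nn.MaxPool2d(2),                             # pooling layer halves the feature map
    nn.Flatten(),                                # flatten into a single linear vector
    nn.Linear(16 * 16 * 16, 10),                 # fully connected layer for 10 classes
)

dummy = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image
logits = cnn(dummy)                # class scores used for classification
print(logits.shape)                # torch.Size([1, 10])
```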
D. Object detection algorithms

Object detection algorithms can be classified into two types: multi-stage and one-stage detectors. Multi-stage detectors, such as R-CNN [13], Fast R-CNN, Faster R-CNN, Spatial Pyramid Pooling (SPP) Networks, Feature Pyramid Networks (FPN), Mask R-CNN, Cascade R-CNN, Relation Networks for Object Detection, Deformable R-FCN, and PANet [2][3], first identify a subset of potential object regions in an image and then classify the object in each bounding-box region. On the contrary, one-stage detectors are able to predict bounding boxes in a single step without using region proposals. This approach involves using a grid of boxes and anchors to specify the area in the image where the object can be found and to constrain the shape of the object. Examples of one-stage detectors are the Single Shot Detector (SSD), Deconvolutional Single Shot Detector (DSSD), RetinaNet, You Only Look Once (YOLO), RefineDet, Fast-D, EfficientDet, CornerNet512, NAS-FPN, and M2Det. Of all of these, YOLO, as a single-stage detector, has been the most popular among users from different disciplines.

E. Deep learning framework

Deep learning (DL) frameworks provide a way to create, train, and check the accuracy of deep neural networks using a high-level programming language. Popular DL frameworks like TensorFlow, PyTorch, PyTorch Geometric, DGL, and others use GPU-accelerated libraries such as NCCL, DALI, and cuDNN to ensure high-speed, multi-GPU-accelerated training.
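To make the create/train/evaluate cycle of such a framework concrete, the minimal PyTorch sketch below runs one training step and one accuracy check on a random batch, using the GPU when one is available; the toy linear model, the random data, and the hyperparameters are illustrative assumptions, not part of the paper.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # use GPU acceleration when present

model = nn.Linear(20, 2).to(device)        # toy stand-in for a deep network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

inputs = torch.randn(8, 20, device=device)          # a random mini-batch of 8 samples
labels = torch.randint(0, 2, (8,), device=device)   # random binary labels

# one training step
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()

# check accuracy on the same batch (for illustration only)
with torch.no_grad():
    accuracy = (model(inputs).argmax(dim=1) == labels).float().mean()
print(f"loss={loss.item():.3f}, accuracy={accuracy.item():.2f}")
```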
F. Object detection and IoT

There are typically three methods used to detect objects in various projects. In some projects, detection is carried out by utilizing sensors, such as ultrasonic or infrared sensors, that rely on how the returned wave changes. This is done when only the presence or absence of something in a particular place, or at a specified time, is sought; its application in public-area lighting that switches on when a person's presence is recognized is a typical example. Another type of detection of objects or states is often used in sorting fruits or assessing the quality of their texture with non-destructive quality control methods (Fig. 2). For this example, we can refer to spectral cameras that extract images based on many wavelength bands and belong to the category of high-dimensional images, such as hyperspectral images (HSI) [5]. The third method is object detection proper, which involves classifying the items and pinpointing their precise position within the image [7]. It often deals with the familiar photos and videos that we interact with on a regular basis, which are captured with various cameras in different dimensions and sizes depending on the type of project, or taken from existing datasets used for training. Photos taken by thermal cameras are also often included in this category [8].

Fig. 2. Hyperspectral image in the detection of fruit texture quality [6]

The term "Internet of Things" (IoT) was first coined in 1998, and the field is divided into three categories: hardware, software, and cloud. The Internet, as a basic element of IoT, involves making communication between things or devices for processing and sensing, employing software and sensors, respectively. The Internet of Things includes different physical layers and data links; Figure 3 depicts an IoT network with seven layers. The technology is advancing and being used more often in today's society, with the aim of simplifying many processes and allowing for improved efficiency. IoT has enabled quick communication, with applications such as Ethernet, Wi-Fi, VoIP, instant messaging, and email. It has also expanded to multiple areas, such as agriculture, due to the increase in people and the commodities they require [9].

Fig. 3. Schematic of an IoT network

Wireless Sensor Networks (WSN) and the Internet of Things (IoT) have become powerful tools that allow businesses to develop more effective and sustainable strategies for their communities. These technological advancements offer a range of potential applications for establishing sustainable communities, since they provide a multitude of options, including environmental monitoring and structural health monitoring [10]. The combination of an IoT system with an object detection project helps users to put their profits on a sustainable track and to become self-dependent. An IoT system is an effective pathway in the sustainability assessment of object detection in complex and remote projects. By using an IoT monitoring system and retrieving optimal data from the cloud server, users are capable of making more knowledgeable choices and executing their projects effectively.

To improve the implementation of object recognition projects, users integrate them with Internet of Things technology through various platforms such as Google IoT, Microsoft Azure, and Amazon Web Services. In many real-time projects, very effective and practical gadgets are produced through model quantization and transfer of the model to embedded boards such as the Raspberry Pi, Arduino, or Tinker boards.
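As a loose illustration of this kind of integration (not a setup described in the paper), the sketch below sends a detection result from an edge device to a hypothetical cloud ingestion endpoint; the URL, device name, and payload fields are placeholders, and an MQTT client could be used instead of plain HTTP.

```python
import json
import requests  # simple HTTP client; pip install requests

# Hypothetical endpoint; replace with the ingestion URL of the chosen IoT platform.
ENDPOINT = "https://iot.example.com/api/detections"

# Example result produced by an object detector running on an embedded board.
detection = {
    "device": "raspberry-pi-01",
    "label": "traffic_light_red",
    "confidence": 0.91,
    "box": [120, 40, 180, 150],
}

response = requests.post(
    ENDPOINT,
    data=json.dumps(detection),
    headers={"Content-Type": "application/json"},
    timeout=5,
)
print(response.status_code)  # the cloud dashboard can then visualize or act on the data
```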
II. MATERIAL AND METHODS

Evaluating the performance of object detection and prediction models requires two criteria: Average Precision (AP) and Intersection over Union (IoU). IoU is employed to measure the accuracy of localization and to calculate the localization errors in object detection models. A prediction is considered positive when the IoU value is greater than 0.5 and negative when the IoU value is less than 0.5.

IoU = Area of Overlap / Area of Union      (1)

Precision measures how accurately true positives (TP) are identified among all positive predictions (TP + FP), while recall measures the share of true positives found among all actual positives (TP + FN):

Recall = TP / (TP + FN)      (2)

Precision = TP / (TP + FP)      (3)

F1 = 2 · (Precision · Recall) / (Precision + Recall)      (4)

The Average Precision (AP) metric is a measure of how accurate a set of predictions is. This metric is determined by calculating the area underneath the precision-versus-recall curve. This is achieved by taking the weighted average of the precision at each threshold, with the weight reflecting the amount of recall that was added from the preceding threshold [15]. The average of the per-class AP values, also known as output precision, is called the mean average precision:

mAP = (1/N) · Σ_{i=1}^{N} AP_i      (5)
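These definitions translate directly into code. The short Python sketch below implements Eqs. (1)-(5) for axis-aligned boxes; the example boxes and the TP/FP/FN counts are made-up values used only for illustration.

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    overlap = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return overlap / float(area_a + area_b - overlap)

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def mean_average_precision(per_class_ap):
    # mAP is the mean of the per-class average precision values (Eq. 5)
    return sum(per_class_ap) / len(per_class_ap)

# A detection counts as positive when its IoU with the ground truth exceeds 0.5
print(iou((0, 0, 10, 10), (5, 5, 15, 15)) > 0.5)    # False: IoU is about 0.14
print(precision_recall_f1(tp=80, fp=20, fn=10))      # (0.8, ~0.889, ~0.842)
print(mean_average_precision([0.62, 0.71, 0.55]))    # ~0.627
```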


In a deep learning network, passing through more layers means deeper processing that is closer to the human process, but we must always consider the optimal value of the learning rate, which governs how far the cost function is driven toward its minimum in each iteration. The learning rate should not be so high that it overshoots the optimal state, and it should not be so low that it takes too long for the network to converge.
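One common way to manage this trade-off in practice is to start from a moderate learning rate and let a scheduler shrink it when the validation loss stops improving. The PyTorch sketch below illustrates the idea; the placeholder model, the initial rate of 0.01, and the scheduler settings are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder for an actual detection network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=3)

for epoch in range(10):
    # ... run one training epoch here and compute the validation loss ...
    val_loss = 1.0 / (epoch + 1)   # dummy value standing in for the real cost
    scheduler.step(val_loss)       # shrink the learning rate only when progress stalls
    print(epoch, optimizer.param_groups[0]["lr"])
```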

III. RESULTS AND DISCUSSION

A. Yolo algorithm

YOLO was presented as a deep learning algorithm for the detection of objects in 2015 by Redmon and Farhadi, based on a mechanism of looking at an image only once to identify objects, as a human does. They improved this open-source algorithm until the third iteration. After that, it was modified by other developers and used in different fields: object detection, pose estimation, instance segmentation, and so on. The immediate detection of classification and position data via YOLO raises the detection rate above that of traditional methods. HOG (Histogram of Oriented Gradients), SIFT (Scale-Invariant Feature Transform), Haar (Haar-like Features), and R-CNN (with its two-stage operating scheme) are all examples of algorithms used to detect objects in images. These algorithms work by extracting features from the image and comparing them to a database of known objects; by doing so, they are able to identify objects in images with high accuracy [11]. In the YOLOv3-tiny version, processing speeds of up to 220 frames per second (FPS) were investigated, although accuracy was obviously sacrificed for speed; in the YOLOv3-SPP version, the mAP was increased to 60.6 [12][13].

YOLOv7 provided acceptable results in various tests, even in anchor-free detection. The well-known YOLOv7 has certain advantages, but it also has some weaknesses. When applied to real-world scenarios in which the lighting can vary, YOLOv7 may not be adequate; it is sometimes impractical to employ due to its dependence on fluctuations in lighting or other environmental circumstances. In addition, it struggles with small object detection (SOD) and is not well suited to detecting objects at multiple scales. In this regard, valuable research has been done and solutions have been proposed, such as the RFSOD model [7].
In January 2023, Ultralytics released YOLOv8, which is built on the PyTorch framework. It is a pioneering, current SOTA model that improves on the capability of its former iterations while introducing novel features and advancements to make the system more efficient and flexible. The models used for detection and segmentation are based on the COCO dataset, while those for classification are based on the ImageNet dataset. Figure 4 shows the comparison chart of YOLOv8 against other widely used YOLO models on the COCO dataset, and Table II summarizes the detection metrics of the different YOLOv8 models. YOLO can easily be optimized with well-known optimizer functions such as Stochastic Gradient Descent (SGD) with momentum and the Adam optimizer [14][15].

TABLE II. DETECTION METRICS OF THE DIFFERENT YOLOV8 MODELS [20]

Model | size (pixels) | mAPval 50-95* | Speed CPU ONNX (ms)** | Speed A100 TensorRT (ms) | params (M) | FLOPs (B)
YOLOv8n | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7
YOLOv8s | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6
YOLOv8m | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9
YOLOv8l | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2
YOLOv8x | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8

* mAPval values are for single-model, single-scale evaluation on the COCO val2017 dataset; reproduce with: yolo val detect data=coco.yaml device=0
** Speed averaged over COCO val images using an Amazon EC2 P4d instance; reproduce with: yolo val detect data=coco128.yaml batch=1 device=0/cpu

Fig. 4. Performance and inference speed comparison between YOLOv8 and the prior iterations [20]
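As a hedged sketch of this training workflow with the Ultralytics package, the snippet below fine-tunes a pretrained YOLOv8 checkpoint with SGD and momentum and then validates it. The dataset YAML name, epoch count, and hyperparameter values are placeholders chosen for illustration; the optimizer, lr0, and momentum arguments are the Ultralytics training settings that select the optimizer and its learning rate and momentum.

```python
from ultralytics import YOLO  # pip install ultralytics

# Start from the small pretrained checkpoint listed in Table II.
model = YOLO("yolov8s.pt")

# Fine-tune on a custom dataset described by a YAML file (hypothetical name).
model.train(
    data="traffic_lights.yaml",  # placeholder dataset configuration
    epochs=50,
    imgsz=640,
    optimizer="SGD",             # SGD with momentum; "Adam" is also accepted
    lr0=0.01,
    momentum=0.937,
)

# Validate to obtain COCO-style metrics such as mAP50-95.
metrics = model.val()
print(metrics.box.map)
```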
B. Enhancing strategies

Increasing the accuracy of different object recognition projects is a challenge [16] that has led to the presentation of different solutions throughout the application and development of this field. Among the methods used during the training of the algorithm, the following can be mentioned. Methods such as bag-of-freebies (BoF) and bag-of-specials (BoS) are divided into smaller sub-branches such as Cutout and Random Erasing, and this family of techniques has proven that it can improve the ability and performance of convolutional neural networks.

The Random Erasing technique, for instance, selects rectangular parts of an image and erases the pixels in that area. This category of methods, along with dozens of others, is often used by developers when upgrading different versions of detectors; yet even though such improvements are effective in increasing accuracy, the design of a new individual version or an ensemble model sometimes shows no significant advantage [18]. The methods mentioned below can easily be used by any user and result in an effective output.

1) Class balancing; Train the model on more samples of the rare classes to solve the problem of unbalanced data, because objects that are easier to access usually have more photos in the corresponding class. For example, if four main targets are pursued in the project, four groups of 100 photos should be prepared; if one of the groups contains only 25 photos, the accuracy will be greatly reduced.

2) Data augmentation; Change existing photos to create new images and add them to the corresponding class (a short augmentation sketch is given after this list).

3) Image duplication; Using the same image multiple times to train a model can better capture the data behavior of a particular class.

4) Ensemble; One object recognition model should be trained on the alternating classes in the dataset, and another model should be trained to recognize specific objects in images: objects that are difficult to recognize or for which there is little data to learn from.

5) Real image; In real-time projects, it is better not to rely merely on ready-made datasets and to use real photos of the target area in each class. For example, in animal recognition projects, do not use only the classes available on the internet; take pictures of the animals in the area against a natural background.
6) Different position; Humans are able to identify an object by seeing only a small part of it from different angles. To teach the machine, each class should be improved by using photos taken from different angles.

7) Different light saturation; As mentioned, detectors such as YOLOv7, for all their efficiency, are sensitive to changes in the amount and angle of light, and the detection accuracy decreases accordingly. However, in a real project, animals often attack the fields during darkness or at sunset, when it is difficult to recognize them even for a human observer. Therefore, to address this problem, with the tools available on most mobile phones we can modify a number of photos using different light and color filters and tones and add them to the corresponding class (Fig. 5).

Fig. 5. Different light saturation and color tone

8) Clearing; For training, avoid photos that contain partial views of several different objects, and remove unnecessary objects. If bounding-box methods are used, the most suitable anchor for the object in question should be chosen, together with an appropriate labeling tool that fits the coding framework. Making anchor boxes smaller so that they better fit different sizes of objects is a way to improve the accuracy of object detection models: it helps to reduce the false positives that can occur when the anchor boxes are too large or too small to accurately detect objects. The size of the anchor boxes can be adjusted so that it better fits the size of the objects being detected; by doing this, the object detection model can more accurately detect objects of various sizes, reducing the number of false positives and increasing the overall accuracy of the model [17]. It also helps to take classes from different datasets in order to avoid overlap between two different classes.

9) Controlling; In projects that include the Internet of Things, human users can be given permission to modify and control the detection system, in addition to monitoring it, using an application or any other method. This is effective for increasing the efficiency of systems that recognize objects but are only responsible for the identification part. In projects where the target object or the observer is moving, such as checking the status of an animal [19], the use of drones in surveillance systems has become increasingly popular in recent years, offering the ability to broaden the scope of data collection and improve the accuracy of the output. Alongside the use of the Internet of Things (IoT), GPS-based systems are also often employed to further enhance the overall effectiveness of the system.
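As a brief sketch of the augmentation-oriented suggestions above (items 2 and 7, together with the Random Erasing technique mentioned earlier), the torchvision pipeline below jitters brightness, saturation, and hue, flips the image, and randomly erases a rectangular patch; the probabilities and jitter ranges are illustrative, and the input file name is hypothetical.

```python
import torchvision.transforms as T
from PIL import Image

augment = T.Compose([
    T.ColorJitter(brightness=0.4, saturation=0.4, hue=0.1),  # different light and color tones
    T.RandomHorizontalFlip(p=0.5),                           # a different viewing position
    T.ToTensor(),
    T.RandomErasing(p=0.5, scale=(0.02, 0.2)),               # erase a random rectangular patch
])

image = Image.open("field_camera_001.jpg")  # hypothetical photo from the target area
augmented = augment(image)                  # tensor ready to be added to the training class
print(augmented.shape)
```

Each augmented tensor can be saved back to disk or fed directly into the training pipeline to enlarge the corresponding class.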
IV. CONCLUSION

Undoubtedly, deep learning detectors are one of the appropriate choices for object detection projects due to their trainability and customization capability. Although algorithms like YOLO are being developed to cover more and more positive attributes, some of the performance enhancement techniques outlined in this article can be applied to improve identification accuracy, depending on the peculiarities of the object being investigated. It should be kept in mind that the developers and creators of a program or algorithm act with the vision of covering general users, so to increase the accuracy and speed of an object recognition project, appropriate and personalized solutions should always be used.

ACKNOWLEDGMENT

This project has been funded by the E4LIFE International Ph.D. Fellowship Program offered by Amrita Vishwa Vidyapeetham. I extend my gratitude to the Amrita Live-in-Labs® academic program for providing all the support.

REFERENCES

[1] Yang J, Liu S, Su H, Tian Y. Driving assistance system based on data fusion of multisource sensors for autonomous unmanned ground vehicles. Computer Networks. 2021 Jun 19;192:108053.
[2] Jiang P, Ergu D, Liu F, Cai Y, Ma B. A review of Yolo algorithm developments. Procedia Computer Science. 2022 Jan 1;199:1066-73.

[3] L. Jiao et al., "A Survey of Deep Learning-Based Object Detection," in IEEE Access, vol. 7, pp. 128837-128868, 2019, doi: 10.1109/ACCESS.2019.2939201.
[4] Lohia A, Kadam KD, Joshi RR, Bongale AM. Bibliometric analysis of one-stage and two-stage object detection. Libr. Philos. Pract. 2021 Feb 1;4910:34.
[5] Lone, Zubair Ahmad, and Alwyn Roshan Pais. "Object detection in hyperspectral images." Digital Signal Processing (2022): 103752.
[6] Saha, Dhritiman, and Annamalai Manickavasagan. "Machine learning techniques for analysis of hyperspectral images to determine quality of food products: A review." Current Research in Food Science 4 (2021): 28-44.
[7] Amudhan, A.N., Vrajesh, S.R., Sudheer, A.P. and Lijiya, A., 2022. RFSOD: a lightweight single-stage detector for real-time embedded applications to detect small-size objects. Journal of Real-Time Image Processing, 19(1), pp. 133-146.
[8] R. Ippalapally, S. Harsha Mudumba, M. Adkay, and Nandi Vardhan H. R., "Object Detection Using Thermal Imaging," in 2020 IEEE 17th India Council International Conference (INDICON), New Delhi, India, 2020.
[9] Manne, Ravi, and Sneha Chowdary Kantheti. "Green IoT Towards Environmentally Friendly, Sustainable and Revolutionized Farming." Green Internet of Things and Machine Learning: Towards a Smart Sustainable World (2021): 113-139.
[10] Ramesh, Maneesha Vinodini, Rekha Prabha, Hemalatha Thirugnanam, Aryadevi Remanidevi Devidas, Dhanesh Raj, Sruthy Anand, and Rahul Krishnan Pathinarupothi. "Achieving sustainability through smart city applications: protocols, systems and solutions using IoT and wireless sensor network." CSI Transactions on ICT 8 (2020): 213-230.
[11] Cao, D., Chen, Z. and Gao, L. An improved object detection algorithm based on multi-scaled and deformable convolutional neural networks. Hum. Cent. Comput. Inf. Sci. 10, 14 (2020). https://doi.org/10.1186/s13673-020-00219-9
[12] Redmon J, Farhadi A. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767. 2018 Apr.
[13] Redmon J, Farhadi A. YOLO9000: better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017 (pp. 7263-7271).
[14] Zhang Z. Improved Adam optimizer for deep neural networks. In 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS) 2018 Jun 4 (pp. 1-2). IEEE.
[15] Aburaed, Nour, Mina Alsaad, Saeed Al Mansoori, and Hussain Al-Ahmad. "A Study on the Autonomous Detection of Impact Craters." In Artificial Neural Networks in Pattern Recognition: 10th IAPR TC3 Workshop, ANNPR 2022, Dubai, United Arab Emirates, November 24-26, 2022, Proceedings, pp. 181-194. Cham: Springer International Publishing, 2022.
[16] Subbiah, Uma, D. Kavin Kumar, Senthil Kumar Thangavel, and Latha Parameswaran. "An extensive study and comparison of the various approaches to object detection using deep learning." In 2020 International Conference on Smart Electronics and Communication (ICOSEC), pp. 183-194. IEEE, 2020.
[17] K. K. T R, S. Thiruvikkraman, G. R, N. A and K. R, "Evaluating the Scalability of a Multi-Object Detector Trained with Multiple Datasets," 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 2021, pp. 1359-1366, doi: 10.1109/ICICCS51141.2021.9432350.
[18] Allaparthi, Sree Roja Rani, and G. Jeyakumar. "An Investigational Study on Ensemble Learning Approaches to Solve Object Detection Problems in Computer Vision." Mathematical Statistician and Engineering Applications 71, no. 3s (2022): 399-412.
[19] Ramesh, Gowtham, Senthilkumar Mathi, Sini Raj Pulari, and Vidya Krishnamoorthy. "An automated vision-based method to detect elephants for mitigation of human-elephant conflicts." In 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 2284-2288. IEEE, 2017.
[20] https://github.com/ultralytics/ultralytics

