Review
A Review of Embedded Machine Learning Based on Hardware,
Application, and Sensing Scheme
Amin Biglari † and Wei Tang *,†
Klipsch School of Electrical and Computer Engineering, New Mexico State University,
Las Cruces, NM 88001, USA
* Correspondence: [email protected]
† These authors contributed equally to this work.
Abstract: Machine learning is an expanding field with an ever-increasing role in everyday life, with its
utility in the industrial, agricultural, and medical sectors being undeniable. Recently, this utility has
come in the form of machine learning implementation on embedded system devices. While there have
been steady advances in the performance, memory, and power consumption of embedded devices,
most machine learning algorithms still have a very high power consumption and computational
demand, making the implementation of embedded machine learning somewhat difficult. However,
different devices can be implemented for different applications based on their overall processing
power and performance. This paper presents an overview of several different implementations
of machine learning on embedded systems divided by their specific device, application, specific
machine learning algorithm, and sensors. We will mainly focus on NVIDIA Jetson and Raspberry
Pi devices with a few different less utilized embedded computers, as well as which of these devices
were more commonly used for specific applications in different fields. We will also briefly analyze
the specific ML models most commonly implemented on the devices and the specific sensors that
were used to gather input from the field. All of the papers included in this review were selected
using Google Scholar and published papers in the IEEExplore database. The selection criterion for
these papers was the usage of embedded computing systems in either a theoretical study or practical
implementation of machine learning models. The papers needed to have provided either one or,
preferably, all of the following results in their studies—the overall accuracy of the models on the
system, the overall power consumption of the embedded machine learning system, and the inference
time of their models on the embedded system. Embedded machine learning is experiencing an
explosion in both scale and scope, both due to advances in system performance and machine learning
models, as well as greater affordability and accessibility of both. Improvements are noted in quality,
power usage, and effectiveness.

Keywords: computer vision; embedded systems; Google Coral; machine learning; Nvidia Jetson; RGB camera; Raspberry Pi; sensors

1. Introduction
Machine learning has become a ubiquitous feature in everyday life. From self-driving
vehicles, facial recognition systems, and real-time interpretation of different languages,
to security surveillance, smart home applications, and health monitoring, artificial intelligence
has changed almost every society on earth [1–4]. Due to the extremely high computational
requirements of machine learning models, until recently, the majority of these breakthroughs
were implemented on high-power stationary computing systems. However, continuous
advancements in embedded system design have made the implementation of machine learning
models on embedded computing systems for a wide variety of mobile and low-power
applications viable. One example of such an application would
be [5], a 2020 paper by Ouyang et al., titled “Deep CNN-Based Real-Time Traffic Light De-
tector for Self-Driving Vehicles”, which proposes a method for recognizing traffic lights for
autonomous vehicles. This ever-expanding research field of machine learning implementa-
tion in limited environments of embedded systems has been titled “Embedded Machine
Learning” [6]. There are many considerations when choosing an embedded system for a
specific machine learning application, such as power limitations, specific sensor outputs,
model architecture, and monetary cost. In this review paper, we focus on the system
models and assess which systems are better suited for which specific applications and
sensing schemes.
As stated, machine learning algorithms are trained and used for many different
applications, such as hand gesture recognition [7] and speech source identification [8].
They usually have a very high performance and memory requirement for both training
and inference. Effective implementation would require the tuning and modification of the
machine learning model architecture as well as the selection of the appropriate system
depending on the priorities of the application. All machine learning applications aim to
consume as little power and computation as possible and to be as fast and accurate as possible;
however, improvement in one of these areas almost always comes at a relative cost to the others.
Since embedded systems can vary drastically in power consumption, processing power,
memory, storage, and pricing, it is prudent to select the appropriate system for each specific
application. As an example, a system for pedestrian detection for autonomous vehicles [9]
would prioritize performance speed and accuracy much more so than a system designed
for recognizing marine life [10], even if it comes at a much higher monetary cost.
Training a machine learning model for any task requires a dataset, which can consist
of megabytes to terabytes of images, video files, audio files, graphs, etc., and their corre-
sponding annotation files. The specific files of a dataset used for training depend on the
intended application of the machine learning model; an image classification model, for
example, would use a dataset of image files and label annotations associated with each
image. The sensing schemes used for collecting these files, both for the initial training and
testing datasets, as well as for the inference of the trained machine learning algorithm on an
embedded system, are varied. Another subject of analysis in this research was the correla-
tion between the type of sensor scheme used in each system and the overall implementation
of the system.
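To make the dataset-and-annotation step above concrete, the sketch below shows one common way an image classification dataset, organized as one sub-folder per class label, might be loaded for training. The directory layout, image size, and use of TensorFlow utilities are illustrative assumptions rather than details taken from any of the reviewed papers.

```python
# Hypothetical sketch: loading an image classification dataset whose label
# annotations are encoded as one sub-folder per class. Paths, image size,
# and batch size are illustrative placeholders.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train",          # assumed layout: dataset/train/<class_name>/*.jpg
    image_size=(224, 224),    # resize every image to the model's input size
    batch_size=32,
    label_mode="int",         # integer class labels derived from folder names
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val",
    image_size=(224, 224),
    batch_size=32,
    label_mode="int",
)
print("Classes found:", train_ds.class_names)
```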
Most of the papers reviewed in this work utilized some form of computer vision, mainly
in areas such as obstacle detection for autonomous vehicles (such as speed bumps) [11] or
safety and security measures (such as violent assault identification) [12]. However, several
also presented embedded machine learning methods for medical applications (such as
patient heart monitoring) [13] or automating more aspects of city management (such as
managing the direction and flow of vehicular traffic) [14].
Essentially, in this review, we emphasized specific applications, embedded hardware
platforms, and sensors, then compared them based on the nature of those networks and
applications, whereas other embedded machine learning review papers have a greater
focus on the performance of specific lines of hardware [15], or the network architecture
implemented on the hardware [16]. The paper is structured in the following format: 1. Ab-
stract; 2. Introduction; 3. Hardware System Considerations; 4. Specific Hardware Systems
Covered In The Review; 5. Sensing Systems; 6. Network Applications; 7. Comprehensive
Comparisons; 8. Conclusions. This layout is also displayed in Figure 1. If the readers are
interested in machine learning algorithms, models, and databases, please refer to other
review and benchmark papers such as the ones used as sources in this work [15–17]. Works
such as [18–21] and [15,17] provide a comprehensive performance analysis and benchmark
of the embedded systems used in their specified applications, while works such as [22,23]
conduct a more in-depth study on improvement methods for both system hardware and
model architecture for their specific applications.
Figure 1. Paper Layout Showing the Distribution of Subjects Covered in the Review.
3. Hardware
Embedded systems are computer hardware systems designed to perform dedicated
functions in combination with a larger system. They are found in many everyday items,
from mobile phones to household appliances. Embedded computer devices are a subset of
embedded systems used for computational tasks in more dedicated or remote operations,
such as running machine learning algorithms in real time on small unmanned aerial vehicles,
connecting devices to the internet of things, and even
security monitoring. While the variety of the embedded computer devices produced and
used is quite wide, most academic research conducted on embedded machine learning is
focused on using Raspberry Pi and NVIDIA Jetson devices. Some other devices used in-
clude the ASUS Tinker board series, Google’s Coral TPU dev series, ODROID-XU4 Boards,
and the Banana Pi board series.
Jetson, Coral Edge, and ASUS Tinker Board devices, as well as others such as ODROID-XU4
boards, do not have their own integrated storage devices but instead provide a flash storage
interface. Raspberry Pi boards have interfaces for both SD cards and flash drives.
4. Specific Systems
4.1. Nvidia Jetson
Jetson is the name of a series of machine learning embedded systems by NVIDIA
used for autonomous devices and various embedded applications. While Jetson Developer
kits vary in capability and performance, they are generally very reliable for implementing
machine learning tasks—this is especially true for more graphically intensive applications.
The downside to this is that NVIDIA Jetson boards also tend to be more costly than market
alternatives. Most of the sources shown in this review either used Jetson boards exclusively
or used them in combination with other devices. These specific developer kits were
the NVIDIA Jetson Nano, NVIDIA Jetson TX1, NVIDIA Jetson TX2, NVIDIA Jetson AGX
Xavier, and NVIDIA Jetson Xavier NX.
NVIDIA Jetson Nano is one of the smaller Jetson kits specialized for machine learning
tasks like image classification, object detection, segmentation, and speech processing.
It has a 128-core Maxwell GPU, a quad-core ARM Cortex-A57 1.4 GHz CPU, 4 GB 64-bit
LPDDR4 25.6 GB/s memory, 2x MIPI CSI-2 DPHY lanes
camera, Ethernet, HDMI, and USB connection ports. Unlike most other NVIDIA kits, Nano
does not have an integrated storage unit and has to rely on SD cards for that purpose. It
has a power consumption of 5–10 Watts and with a price range of USD 300–USD 500, it is
the more affordable option out of all of the NVIDIA development kits [24].
The Jetson TX1 and TX2 series are a discontinued line of embedded system develop-
ment kits with flexible capabilities that include great performance for machine learning
tasks. As the discontinuation of this line of kits is especially recent for the TX2 series,
research publications that utilize the TX2 board are not uncommon, with the TX1 being
much rarer. The TX1 has a 256-core Maxwell GPU, a Quad-core ARM® Cortex®-A57 CPU,
a 4 GB LPDDR4 memory, a 16 GB eMMC 5.1 Flash Storage, a 5 MP Fixed Focus MIPI CSI
Camera, Ethernet, HDMI, and USB Type A and Micro AB connection ports. The TX2 has
an NVIDIA Pascal™ architecture GPU, a dual-core 64-bit CPU paired with a quad-core
Cortex®-A57 complex, an 8 GB 128-bit LPDDR4 memory, a 32 GB eMMC 5.1 flash storage,
a 5 MP fixed-focus MIPI CSI camera, and Ethernet, HDMI, and USB Type A and Micro AB
connection ports. The power consumption of the TX1 is around 15 Watts and
that of the TX2 is about 25 Watts [25,26].
The Jetson AGX Xavier is one of the most powerful developer kits produced by
NVIDIA. It is mainly used for creating and deploying end-to-end AI robotics applications
for manufacturing, delivery, retail, and agriculture, but it could also be applied for less
intensive machine learning applications. It has a 512-core Volta GPU with Tensor Cores, an
8-core ARM v8.2 64-bit CPU, a 32 GB 256-Bit LPDDR4x memory, a 32 GB eMMC 5.1 Flash
storage, as well as two USB C ports, and an HDMI and camera connector. It has a price of
about USD 4000 and has a power consumption of 30 Watts, making it much more costly in
both price and electricity than the other Jetson kits [27].
The Jetson Xavier NX kit is another NVIDIA developer kit, designed as
the successor to the TX series. It is power-efficient and compact, making it suitable for
machine learning application development. It has an NVIDIA Volta architecture GPU with
384 NVIDIA CUDA® cores and 48 Tensor cores, a six-core NVIDIA Carmel ARM®v8.2
64-bit CPU, an 8 GB 128-bit LPDDR4x memory, two MIPI CSI-2 DPHY lanes cameras, and
Ethernet, HDMI, and USB type A and Micro AB connection ports. It has an integrated
storage component of its own, instead of relying on a micro SD storage interface. It has a
power consumption of 10 Watts and a price range of around USD 2000. Its well-rounded
quality makes it a very good, if somewhat expensive, choice for machine learning
implementation on embedded systems [28].
and HDMI 2.0 ports. The overall board has a low power cost of 6–10 Watts and at USD 130,
the price for the board is relatively low [29].
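Before deploying a model to one of the Jetson kits described above (or a similar GPU-equipped board), a common first step is to verify that the board's GPU is visible to the chosen deep learning framework. The sketch below does this with PyTorch, assuming a Jetson-compatible PyTorch build is installed; it is a generic sanity check, not a procedure taken from the reviewed papers.

```python
# Hypothetical sanity check that a board's CUDA GPU is usable before deployment.
# Assumes a GPU-enabled PyTorch build is installed; not specific to any reviewed paper.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU:", torch.cuda.get_device_name(0))  # e.g., reports the integrated GPU
else:
    device = torch.device("cpu")
    print("No CUDA GPU visible; falling back to CPU inference.")

# Run a tiny tensor operation on the selected device as a smoke test.
x = torch.randn(1, 3, 224, 224, device=device)
print("Smoke test output shape:", (x * 2).shape)
```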
4.3. Raspberry Pi
Raspberry Pi is a series of extremely popular embedded computers developed by the
Raspberry Pi Foundation in the United Kingdom. The uses for these systems are extremely
wide, including machine learning. Like the Jetson series, Raspberry Pi products are very
commonly used in embedded machine-learning implementation projects. For this review,
the three systems of Raspberry Pi that were commonly utilized were the Raspberry Pi 3
Model B, the Raspberry Pi 3 Model B+, and the Raspberry Pi 4 Model B.
The Raspberry Pi 3 Model B is the first iteration of the third-generation Raspberry
Pi computers. It has a Quad Core 1.2 GHz Broadcom BCM2837 64bit CPU, a 400 MHz
VideoCore IV video processor, a 1 GB LPDDR2 memory, a microSD port for storage, a
100 Base Ethernet, 4 USB 2.0, and full-size HDMI ports. It has an extremely low power
consumption of 1.5 Watts and a monetary cost of about USD 40 [30].
The Raspberry Pi 3 Model B+ is the final iteration of the third-generation Raspberry
Pi Computers. It has a Quad Core 1.4 GHz Broadcom BCM2837B0, Cortex-A53 (ARMv8)
64-bit SoC CPU, a 400 MHz VideoCore IV video processor, a 1 GB LPDDR2 memory, a
microSD port for storage, a 1000 Base Ethernet, 4 USB 2.0, and full-size HDMI ports. Its
main advantage over the Model 3B is its processor's higher clock speed and its PoE (power over
Ethernet) support. At 2 Watts, its power consumption is still low but higher than that of
the Model 3B. Its monetary cost is very similar, at around USD 40.
The Raspberry Pi 4 Model B is the first iteration of the fourth-generation Raspberry
Pi Computers. It has a Quad Core 1.5 GHz Broadcom BCM2711, Cortex-A72 (ARMv8)
64-bit SoC CPU, a VideoCore VI video processor, a choice between 1 GB, 2 GB,
4 GB, and 8 GB of LPDDR4 memory, a microSD port for storage, Gigabit Ethernet, two USB
2.0 and two USB 3.0 ports, and micro HDMI ports. Its newer processor and memory options
make it a superior choice compared to the previous iteration of the Raspberry Pi.
It has a relatively low power consumption of 4 Watts and a monetary cost of about USD
40–USD 80 depending on the memory size [31].
4.5. Banana Pi
Banana Pi is an open-source hardware platform by Shenzhen SINOVOIP Co., based in
Shenzhen, China. Like other embedded systems, it has a wide range of applications,
amongst them embedded machine learning implementation. It has a quad-core Cortex-A7
H3 SoC with H.265/HEVC 4K video support, a Mali400MP2 GPU, 1 GB of DDR3 memory,
an 8 GB eMMC onboard storage, two USB 2.0 ports, an HDMI port, and an Ethernet interface. Its overall
power consumption is about 5 Watts and it has a price range of USD 50–USD 75 [33].
5. Sensors
Electrical sensors are components responsible for gathering input from a given physical
environment. The specific input that a sensor responds to varies from sensor to sensor and
could be temperature, ultrasound waves, light waves, pressure [39,40], or motion. Sensors
do this by acting as switches in a circuit, controlling the flow of electric charge through
their overall systems. Sensors can be split into two overarching categories: active sensors
and passive sensors. Active sensors emit their own radiation, such as ultrasound waves or
laser light, from an internal power source; this radiation is reflected from objects in the
environment, and the sensor then detects these reflections as inputs. Radars are an example
of active sensors. Passive sensors simply detect the radiation or signature emitted from
their targets, such as body heat [41].
The most important characteristics of sensor performance are transfer function, sensi-
tivity, span, uncertainty, hysteresis, noise, resolution, and bandwidth. The transfer function
shows the functional relationship between the physical input signal and the electrical
output signal. The sensitivity is defined in terms of the relationship between the input
physical signal and the output electrical signal. The span is the range of input physical
signals that may be converted to electrical signals by the sensor. Uncertainty is generally
defined as the largest expected error between actual and ideal output signals. Hysteresis is
the width of the expected error in terms of the measured quantity for sensors that do not
return to the same output value when the input stimulus is cycled up or down. Output
noise is generated by all sensors in addition to the output signal, and since there is an
inverse relationship between the bandwidth and measurement time, it can be said that the
noise decreases with the square root of the measurement time. The resolution is defined as
the minimum detectable signal fluctuation. The bandwidth is the frequency range between
the upper and lower cutoff frequencies, which respectively correspond to the reciprocal of
the response and decay times [42].
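As a minimal illustration of how a few of these characteristics relate to one another, an idealized linear sensor with a voltage output can be written as follows (our own notation, not drawn from [42]):

```latex
% Idealized linear sensor (illustrative notation only):
% transfer function, sensitivity, span, and noise-limited resolution.
V(x) = V_0 + S\,x \qquad \text{(linear transfer function)}
S = \frac{dV}{dx} \qquad \text{(sensitivity: slope of the transfer function)}
x_{\min} \le x \le x_{\max} \qquad \text{(span: range of convertible inputs)}
\Delta x_{\text{res}} \approx \frac{v_n}{S} \qquad \text{(resolution limited by output noise } v_n\text{)}
```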
Once sensors acquire input and convert it into an electrical signal, they can communicate
their data to the rest of an overarching system through a variety of means, the main
methods being to transfer data over a wired interface or to transfer data wirelessly [43,44].
Since the embedded systems studied in this research all made use of wired communication
for their sensing systems, we focus only on wired communication. Standard wired
interfaces between sensors and computing devices use serial ports, which transfer data
between the data terminal equipment (DTE) and data circuit-terminating equipment (DCE).
For successful data communication, the DTE and DCE must agree on a communication
standard, the transmission speed, the number of bits per character, and whether stop and
parity framing bits are used. Most modern-day computing devices and embedded systems
use USB standards for their communication, connection, and power peripherals, which
includes any additional sensor systems. USB has gone through several iterations since its
inception: USB 1.x (up to 12 Mbps), USB 2.0 (up to 480 Mbps), USB 3.0 (up to 5 Gbps), and
USB4 (up to 40 Gbps). Most devices have ports for the USB 2.0 and USB 3.0 types, with
USB4 being mostly suited for mobile smartphone
devices. One of the main advantages of USB devices, including sensor systems, is that
they can have multiple functionalities through a single connection port, for example, a USB
camera can record both video and audio. These devices are referred to as composite devices
and each of their functionalities is assigned to a specific address. USB devices can draw
5 V and a maximum of 500 mA from a USB host, allowing both a data interface for sensor
systems and the powering of the sensor component [45].
The collected sensor data are typically saved to internal or external storage components
connected to the overall system. These data are then used
for whatever purpose the overall system that employed the sensor has been designed for.
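For sensors attached over such a wired serial link (for example, a USB-to-serial adapter that appears as a device file), a minimal reading loop might look like the sketch below. The device path, baud rate, and one-number-per-line message format are assumptions made for illustration, and the pyserial library is our choice rather than one named by the reviewed systems.

```python
# Hypothetical sketch: reading line-oriented measurements from a serial-attached
# sensor. Device path, baud rate, and message format are illustrative assumptions.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as port:
    for _ in range(10):                      # read ten samples then stop
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                         # timeout with no data
        try:
            value = float(line)              # assumes the sensor prints one number per line
        except ValueError:
            continue
        print("Sensor reading:", value)
```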
As the focus of these research projects is evaluating the capability of different embedded
systems for running machine learning models, all of the sensor data are transferred
to a previously trained machine learning algorithm or used to train a new algorithm based
on existing architecture. In cases of trained model deployment, depending on the exact
application of the model as well as its architecture, the stored data collected by the sensor
systems is transferred to the model to perform predictions. For example, image identifica-
tion and object recognition models will compare image files to the dataset images they
have been trained with to either identify the specific objects of interest or the entire image,
while forest biomass estimation models would compare the results gathered from lidar
sensors to their trained dataset to estimate the concentration of vegetation in certain areas
of forests [46].
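As a concrete sketch of this deployment step, the snippet below runs a previously trained image classifier on a stored camera frame using the TensorFlow Lite runtime, which is commonly used on Raspberry Pi class boards. The model file name, input scaling, and frame path are placeholders rather than details from any reviewed paper.

```python
# Hypothetical sketch: running a trained image classifier on an embedded board
# with the TensorFlow Lite runtime. Model path, labels, and float32 input are
# illustrative assumptions, not taken from the reviewed works.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

interpreter = Interpreter(model_path="classifier.tflite")  # assumed model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the captured frame to the model's expected input shape (assumes a float32 model).
height, width = inp["shape"][1], inp["shape"][2]
frame = Image.open("frame.jpg").convert("RGB").resize((width, height))
x = np.expand_dims(np.asarray(frame, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()                       # one forward pass (inference)
scores = interpreter.get_tensor(out["index"])[0]
print("Predicted class index:", int(np.argmax(scores)))
```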
5.2.5. Radar
RADAR, short for Radio Detection and Ranging, is a radio transmission-based sensor
system designed for detecting objects. RADAR systems operate by emitting short pulses of
electromagnetic waves, which are reflected back to the sensor by objects in their path.
Essentially, “When these pulses intercept precipitation, part of
the energy is scattered back to the RADAR” [55]. RADAR systems can rely on 14 different
frequency bands depending on the application. RADAR systems have a wide variety of
applications, from meteorology to military surveillance and astronomical studies. Among
the sources used for this review, RADAR systems were scarcely used; in these cases, the
main usage was for deep learning-based car-following systems in hybrid electric vehicles
as well as for multi-target classification for security monitoring.
5.2.6. LiDar
Lidar (light detection and ranging) sensors are sensor systems that emit millions
of laser waveforms and then collect their reflection to precisely measure the shape and
distance of physical objects in a 3D environment. Essentially, they are laser-based radar
systems. This process is repeated millions of times per second to create a precise real-
time three-dimensional map of an area called a point cloud, which can then be used for
navigation systems [56]. While the technology itself is decades old, with improvements in
Lidar performance in terms of range detection, accuracy, power consumption, as well as
physical features such as dimension and weight, its popularity has been rising in recent
years, especially in the fields of robotics, navigation, remote sensing, and advanced driving
assistance [57]. Lidars’ main usage among our sources was for locating people in danger
in search and rescue operations, such as those following an earthquake, and for optimizing
trajectory tracking for small multi-rotor aerial drones.
5.2.7. Microphones
Microphones are sound sensors that act as transducers, converting sound waves into
electrical current audio signals carrying the sound data. When sound waves interact with
the microphone diaphragm, the vibrations created are converted into a corresponding output
audio signal via electromagnetic or electrostatic principles [58]. This audio
signal can then be stored as digital data and replayed or used in other applications such as
training sound recognition machine learning models. The sources presented in this review
mainly used microphones for real-time speech source localization.
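As a small illustration of how such an audio signal might be captured on an embedded board before being passed to a sound recognition or source localization model, the sketch below records a short clip with the sounddevice library; the library choice, sample rate, and duration are our assumptions, not taken from the reviewed papers.

```python
# Hypothetical sketch: capturing a short microphone clip as a NumPy array,
# e.g., as input to a sound recognition model. Sample rate and duration are
# illustrative assumptions.
import numpy as np
import sounddevice as sd  # pip install sounddevice

fs = 16000                 # sampling rate in Hz
duration = 2.0             # seconds of audio to capture
audio = sd.rec(int(duration * fs), samplerate=fs, channels=1, dtype="float32")
sd.wait()                  # block until the recording is finished
print("Captured", audio.shape[0], "samples, peak amplitude", float(np.abs(audio).max()))
```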
5.2.9. Electrocardiograms
Electrocardiograms are heart monitoring sensors used for quick analysis of a patient’s
heart [61–63]. Heart contractions generate natural electrical impulses that are measurable
by nonintrusive devices, such as lead wires placed on a patient’s skin. The measured pulses
are then converted into an electric signal that can be used to measure irregularities in the
patient’s heart rate [64]. Naturally, electrocardiograms are mainly used in medical facilities
or by caregivers and nurses to monitor heart health [65,66]; however, the sources used for
this review have also utilized them for identifying epileptic seizures.
5.2.10. Electroencephalograms
Electroencephalograms are brain monitoring sensors used for analyzing a patient’s
brain activity. The brain’s processes are the result of electrical current traveling through
its neurons at varying levels depending on the current state of a patient, what they are
doing, or how they are feeling. Electroencephalograms record these currents across the
various brain regions using painless electrodes placed around a patient’s scalp. These
recorded fluctuations are then saved as either a paper or digital graph [67]. Much like
electrocardiograms, electroencephalograms are mainly used in medical facilities or by
caregivers and nurses, in this case to monitor brain health; however, sources used for this review have
also utilized them for anesthesia patient monitoring.
6. Applications
Embedded machine learning applications are all either remote in nature or require
more mobile systems for their implementation. The applications covered in this review
are divided into the following categories: autonomous driving, security, personal health
and safety, unmanned aerial vehicle navigation, and agriculture.
individuals and ensure authorized access to secure locations and information. They do this
through facial recognition and biometric identification using embedded system-operated
camera systems, to name a few avenues. Ensuring personnel safety in hazardous work
environments also involves constant monitoring by camera systems to see if any of the
employees are showing visible signs of illness or injury. Accuracy and computational speed
are both of very high importance in these applications.
6.3. Healthcare
Monitoring the health of hospital and nursing home patients is one of the fields in
which machine learning has been found to be increasingly useful. The AI models trained
for these purposes are varied depending on the exact nature of the task they are created to
accomplish [69,70]. Applications involving the monitoring of the status of specific organs
of patients can rely on various different medical equipment as well as visual and thermal
cameras, such as monitoring a patient’s heart rate or brain activity, which are achieved with
electrocardiograms and electroencephalograms. Fast performance of the machine learning
models is of even greater importance in these scenarios, as they can quite literally be a matter
of life and death. Other health monitoring applications include posture recognition and
monitoring systems that rely on motion sensors and cameras to identify the posture of a
given patient and inform their caretakers in case of any danger.
6.4. Drones
Aerial drones, or unmanned aerial vehicles, have a long history of military use, but
have become increasingly utilized in everyday life over the past decade, be it for package
delivery, remote video recording, wildlife research, or simply for recreational purposes.
Many of these drones are of the quadcopter variety [71]. While most drones require remote
piloting, there has been an increasing element of automation to their navigation [72,73],
odometry, landing, and trajectory systems. AI models trained for these purposes use
pathways, object images, and balance data models. While performance speed is an impor-
tant factor for these models, accuracy takes far greater precedence as even the slightest
misclassification can result in damage to or the destruction of the drone.
6.5. Agriculture
Different agricultural sectors have also started making use of machine learning. Object
detection and facial recognition models are customized for recognizing individual animals
during feeding and drinking to measure their overall consumption as well as monitor
animal behavior and health. Object detection machine learning models are also used in
farming crops for identifying weeds within the field, damaged crops, and crops ready
for harvest, as well as any damage to the field and its fences. In both instances, the
detection accuracy and energy consumption of the models are far more important than the
performance speed.
Figure 2. Average inference time in agricultural computer vision for devices used in this application.
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[76] | ASUS Tinker Board S | Crop identification via aerial drone | Logitech C925e webcam | 89.44% | 8 Watts for both sensor and system | 0.7 s
[77] | Google Edge TPU, NVIDIA Jetson TX2 | Vineyard landmark extraction for robot navigation in steep slope vineyard environment through vine trunk identification | Raspberry Pi infrared camera, Mako G-125C infrablue camera | 52.98% | 15 Watts for both sensor and system | 54.20 ms
Figure 3. Average Inference time in facial recognition for devices used in this application.
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[86] | Banana Pi | Emotion and personality recognition | Thermal camera (vanadium oxide microbolometer with chalcogenide lens and a field of view of 36°) | 87.87% | 4 Watts for both sensor and system | 3.851 s
[89] | Nvidia Jetson Nano, Nvidia Jetson TX2, Nvidia Jetson Xavier NX, Nvidia Jetson Xavier AGX | Facial recognition inference comparison between edge and cloud devices | None | 99.63% | 5 Watts (Nano), 7.5 Watts (TX2), 10 Watts (Xavier NX & AGX) | 0.37 s (Nano), 0.4 s (TX2), 0.18 s (Xavier NX), 0.28 s (AGX)
[2] | NVIDIA Jetson Nano | Analyze face structure from video feed and detect drowsiness from facial features | Webcam camera | 83.31% | 15 Watts for both sensor and system | 2 s
[90] | NVIDIA Jetson Nano | Face mask detection system | TGCAM-2000STAR camera | 99.02% | 17 Watts for both sensor and system | 30.18 ms
[87] | Raspberry Pi 3 Model B | Facial biometric scan | Pi camera | 97.1% | 2.8 Watts for both sensor and system | 2.283 min
[91] | Raspberry Pi 4 | High-accuracy facial recognition | Webcam | 75.26% | 14 Watts for both sensor and system | 74.15 ms
[92] | Raspberry Pi 4 | Facial recognition and facial expression recognition | Logitech C270 camera | 98% | 14 Watts for both sensor and system | 71.14 ms
[93] | NVIDIA Jetson Nano, NVIDIA Jetson TX2 | Facial ID for security | Camera | 94% | 5 Watts (Nano), 7.5 Watts (TX2) | 0.1 s (Nano), 33.33 ms (TX2)
[94] | NVIDIA Jetson TX2 | Lightweight facial recognition for embedded systems | Camera | 58.7% | 1.4 Watts | 29 ms
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[88] | NVIDIA Jetson TX1 | Monocular depth estimation (MDE) (estimating depth from a single image or video frame) | Camera | 78.3% | 5 Watts | 32.26 ms
[95] | ODROID XU4, NVIDIA Jetson TX2 | Collision checking for small aerial vehicle navigation | FLIR thermal imaging camera | 35.3% | 1.5 Watts (ODROID), 7.5 Watts (TX2) | 30 ms (ODROID)
[75] | ODROID XU4 | Computationally inexpensive misclassification minimization for aerial vehicles | D435i depth camera | 45.8% | 1.5 Watts, 4.9 Watts for system and sensor | 36.46 ms
[96] | NVIDIA Jetson Xavier NX | Depth estimation | Monocular camera | 87.8% | 10 Watts | 0.03 s
[97] | NVIDIA Jetson TX2 | Personal fall detection system | Image depth camera, RGB camera | 98% | 7.5 Watts | 66.67 ms
Figure 4. Avg. inference time in depth estimation for devices used in this application.
Figure 5. Average inference time in autonomous vehicle obstacle recognition in devices used in this
application.
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[98] | ODROID XU4, NVIDIA Jetson Xavier | Nighttime pedestrian detection systems for cars | FLIR A325sc thermal camera | 75.7% | 1.5 Watts (ODROID), 10 Watts (Xavier) | 103 ms (ODROID), 43.3 ms (Xavier)
[5] | NVIDIA Jetson TX1, NVIDIA Jetson TX2 | Lightweight real-time traffic light detection for autonomous vehicles | AVT camera (only used for data collection) | 99.3% | 5 Watts (TX1), 7.5 Watts (TX2) | 83.3 ms (TX1), 71.4 ms (TX2)
[1] | NVIDIA Jetson TX2 | Road marking detection for autonomous vehicles | Camera | 96.9% | 7.5 Watts | 47 ms
[100] | NVIDIA Jetson TX2 | Lightweight road object detection for autonomous vehicles | Camera | 80.39% | 7.5 Watts | 31 ms
[101] | NVIDIA Jetson Xavier | Lightweight multitask object detection and semantic segmentation for autonomous vehicles | N/A | 98.31% | 10 Watts | 17.36 ms
[102] | NVIDIA Jetson Xavier NX | Path planning for self-driving vehicles and robotic systems | Camera | 93% | 10 Watts | 48.57 ms
[103] | NVIDIA Jetson Nano | Thermal object detection for assisted driving | LWIR prototype thermal camera | 86.6% | 5 Watts | 333.33 ms
[104] | NVIDIA Jetson Xavier NX | Road obstacle detection for vehicles | 20 Hz stereo camera | 98.1% | 10 Watts | 28.23 ms
[99] | NVIDIA Jetson TX1 | Traffic sign identification for smart vehicles | USB webcam | 96% | 5 Watts | 670 ms
[105] | NVIDIA Jetson AGX Xavier | Object detection and recognition and energy management for autonomous vehicles | N/A (can theoretically use onboard camera or radar) | 99.63% | 10 Watts | 260 ms
[106] | Raspberry Pi 3 Model B+ | Scalable and computationally cheap networks for autonomous driving | Raspberry Pi camera | 97.75% | 2.1 Watts | 3 ms
[11] | Raspberry Pi 3 Model B+ | Speed bump detection for autonomous vehicles | Raspberry Pi camera | 97.89% | 2.1 Watts | 104 ms
[107] | NVIDIA Jetson Nano | Algorithm review for self-driving car navigation | Mini camera IMX-219 | 80.5% | 5 Watts | Not Stated
[9] | NVIDIA Jetson TX1 | Real-time pedestrian detection for autonomous vehicles | ZED stereo camera | 88.44% | 5 Watts | 33.3 ms
[108] | NVIDIA Jetson TX2 | Real-time vehicle detection on embedded systems | N/A | 85.6% | 7.5 Watts | 59.52 ms
[109] | NVIDIA Jetson AGX Xavier | Uncertainty-based real-time object detection for autonomous vehicles | Camera | 68.7% | 10 Watts | 14.35 ms
Figure 6. Average inference time in medicine and disability assistance in devices used in these
applications.
Figure 7. Average inference time in safety and security in devices used in these applications.
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[112] | NVIDIA Jetson TX2 | Visual aid system for the blind via real-time object detection | Webcam | 99.82% | 7.5 Watts | Not Stated
[114] | NVIDIA Jetson TX2 | Localize veins from color skin images | 2-CCD multi-spectral prism camera (JAI AD-080-CL) | 78.27% | 7.5 Watts | 530 ms
[115] | Raspberry Pi 4, NVIDIA Jetson Xavier | COVID identification through chest CT scans | CT scanner | 98.8% | 4 Watts (Pi 4), 10 Watts (Xavier) | 23.3 s (Pi 4), 2.9 s (Xavier)
[116] | NVIDIA Jetson Nano | Posture recognition system for medical surveillance | RGB camera | 83% | 5 Watts | 476 ms
[117] | NVIDIA Jetson TX2 | Diabetes diagnosis | Jetson TX2 onboard camera | 91.8% | 7.5 Watts | 48 ms
[118] | Raspberry Pi 3 Model B+ | Reading assistance for blind people | Raspberry Pi camera module V2 | 100% | 2.1 Watts | 1 s
[110] | Raspberry Pi 3 Model B+ | Early skin cancer detection | IR camera | 98% | 2.1 Watts | 62 ms
[119] | Raspberry Pi | Cervical cancer prevention | PiCamera | 90% | Not Stated | 5.2 s
[120] | Raspberry Pi 4 Model B | Dog health monitoring through posture analysis | Smart camera network | 100% | 4 Watts | 69.24 s
[111] | NVIDIA Jetson Nano | Diabetic ulcer detection | Thermal camera | 97.9% | 5 Watts | Unspecified
[121] | NVIDIA Jetson Xavier NX | Colonoscopy | Colonoscopy camera | 100% | 10 Watts | Unspecified
[122] | NVIDIA Jetson Nano | Travel assistance for the visually impaired | Optical RGB camera | 94.87% | 5 Watts | 22.22 ms
[123] | Raspberry Pi 3 Model B+ | Activity recognition for medical monitoring and rehab | Wearable sensor | 96.63% | 2.1 Watts | 167.773 ms
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[124] | Raspberry Pi | Sign language recognition | Thermal camera | 99.52% | Not Stated | 30 ms
[125] | NVIDIA Jetson Xavier NX | Proposal of a fast and accurate method of power line edge intelligent inspection | UAV camera | 55.6% | 10 Watts | 3.5 ms
[3] | NVIDIA Jetson TX1 | Production safety oversight in coal mines | Video surveillance camera | 76.7% | 5 Watts | 27.25 ms
[126] | NVIDIA Jetson Nano | Passenger safety monitoring | 360° view camera | 85% | 5 Watts | Not Stated
[127] | NVIDIA Jetson TX2, NVIDIA Jetson Nano | Hard hat detection on construction site | Surveillance camera | 97.14% | 7.5 Watts (TX2), 5 Watts (Nano) | 68.03 ms (TX2), 111 ms (Nano)
[128] | NVIDIA Jetson TX2 | Detecting and tracking sinkholes via video streaming | Video camera | 90.61% | 7.5 Watts | 17 ms
[129] | NVIDIA Jetson TX2 | Concrete damage detection on the surface of buildings | Logitech camera | 94.24% | 7.5 Watts | 33 ms
[130] | NVIDIA Jetson AGX Xavier | Railway defect detection | Camera | 93.5% | 10 Watts | 29.94 ms
[131] | Raspberry Pi 4 Model B | Biometric scan for entry control | Raspberry Pi NoIR camera | 97.2% | 4 Watts | Not Stated
[132] | Raspberry Pi 4 | Real-time fire detection | Camera | 97.5% | 4 Watts | 100 ms
[12] | Raspberry Pi 4 | Violent assault recognition | Surveillance camera (no actual live testing) | 92.05% | 4 Watts | 250 ms
[133] | Raspberry Pi 3 Model B+, Intel Neural Compute Stick 2 | Security surveillance | Surveillance camera | 94% | 2.1 Watts | 5.5 ms
[134] | NVIDIA Jetson Nano | Security surveillance for abnormal activity detection | Logitech C270 camera | 89% | 5 Watts | 250 ms
[135] | NVIDIA Jetson Nano | Security surveillance for unusual behavior | HD camera | 97.5% | 5 Watts | Not Stated
[136] | NVIDIA Jetson Xavier NX | Fire and smoke detection | Camera | 100% | 10 Watts | 100 ms
[137] | NVIDIA Jetson TX2 | Monitoring vehicle driver tiredness in real time | Infrared camera | 94% | 7.5 Watts | 45.45 ms
[138] | NVIDIA Jetson TX2 | Real-time security surveillance for acts of violence | RaspiCam camera, panoramic spherical camera | Not Stated | 7.5 Watts | 185 ms
[139] | NVIDIA Jetson Nano, Raspberry Pi 3 Model B+ | Rescue operation robot computer vision | No IR filter camera, LiDAR, Raspi Cam NOIR V2.1 | 78.6% | 7.5 Watts (Nano), 2.1 Watts (Pi 3) | 50 ms (Nano), 500 ms (Pi 3)
[140] | Raspberry Pi | CPU heat tracking | Infrared thermal sensor | 90.72% | Not Stated | 12.3 ms
[141] | NVIDIA Jetson Xavier NX | Real-time image processing for fusion diagnostics | Thermal image camera | Not Stated | 10 Watts | 48.97 ms
[142] | NVIDIA Jetson Nano | Automobile fog lamp intelligent control | IMX219 camera | 97.5% | 5 Watts | Not Stated
[113] | NVIDIA Jetson TX2 | Rescue of natural disaster survivors through drone object detection | Zenmuse XT2 gimbal camera | 61.97% | 7.5 Watts | 37.6 ms
[143] | NVIDIA Jetson Nano | Power system cyber security | N/A | 99.96% | 5 Watts | Not Stated
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[14] | NVIDIA Jetson TX2 | Traffic flow detection and management | Canon EOS 550D camera | 92% | 7.5 Watts | 26.39 ms
[147] | NVIDIA Jetson Nano | Real-time metro passenger volume enumeration | HD video recording camera | 97.1% | 5 Watts | 128.2 ms
[148] | Raspberry Pi 4 Model B | Smart urban waste management | Pi camera | 91.76% | 4 Watts | 358.9598 ms
[149] | Raspberry Pi 4 Model B | Garbage identification for recycling | Camera | 92.62% | 4 Watts | 630 ms
[144] | Raspberry Pi 3 Model B | Pedestrian profile recognition | FLIR Lepton thermal camera | 74.63% | 1.4 Watts | 111 ms
[150] | NVIDIA Jetson Nano | Car counting for traffic management | Logitech C922 webcam | Not Stated | 5 Watts | Not Stated
[151] | NVIDIA Jetson Nano | Smart city traffic management | Camera | 90% | 5 Watts | 25 ms
[152] | NVIDIA Jetson Nano | Visual garbage detection | N/A (most likely a video camera) | 94.56% | 5 Watts | 40 ms
[153] | NVIDIA Jetson Nano | AI traffic light control | Raspberry Pi camera | 90% | 5 Watts | Not Stated
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[146] | NVIDIA Jetson AGX Xavier | Person detection using top clothing | N/A | 92.57% | 10 Watts | 41.67 ms
[154] | NVIDIA Jetson TX1 | Detecting, tracking, and geolocating based on a monocular camera of an aerial drone | Monocular camera | 97.6% | 5 Watts | 75.76 ms
[155] | NVIDIA Jetson TX2 | Drone detection | Spherical camera (Ricoh Theta S) | 88.9% | 5 Watts | 33.33 ms
[156] | NVIDIA Jetson TX2 | Resource-constrained object tracking | N/A | 55% | 7.5 Watts | 72.89 ms
[157] | NVIDIA Jetson TX2 | Object detection and object tracking on drones with limited power and computational resources | Logitech BRIO camera | 90% | 7.5 Watts | 243.9 ms
[145] | NVIDIA Jetson Nano | Identifying and detecting suitable grasping points on objects for robotic limbs | Basler acA2500-14uc industrial RGB camera with Computar M3514-MP lens | Not Stated | 5 Watts | 48 ms
[158] | NVIDIA Jetson TX2 | Navigation for indoor autonomous drones | Fisheye lens on the PointGrey Firefly camera | 75.5% | 7.5 Watts | 34.54 ms
[159] | NVIDIA Jetson TX2, NVIDIA Jetson Nano | Object detection via template tracking | N/A | Not Stated | 7.5 Watts (TX2), 5 Watts (Nano) | Not Stated
[160] | NVIDIA Jetson TX2 | Target tracking amongst static and dynamic obstacles | Drone camera | Not Stated | 7.5 Watts | Not Stated
[161] | NVIDIA Jetson TX2 | Underwater object gripping point detection | ZED binocular camera | Not Stated | 7.5 Watts | 90.09 ms
[162] | NVIDIA Jetson TX2 | Intelligent weapon targeting system | N/A | 68.9% | 7.5 Watts | 60 ms
[163] | NVIDIA Jetson AGX Xavier | Object recognition for unmanned surface vehicles | High-definition photoelectric vision sensor | 81.74% | 10 Watts | 37.36 ms
[164] | Raspberry Pi 3 Model B+ | Drone landing automation | Raspberry Pi v1.3 camera with a fisheye lens | Not Stated | 2.1 Watts | 37.36 ms
[10] | Raspberry Pi 3 Model B | Image recognition for sea life | Pi Camera v2.1 | 89.81% | 1.4 Watts | 33.33 ms
[165] | Raspberry Pi 3 Model B+ | Image classification | N/A | 83.7% | 2.1 Watts | 180 ms
[166] | Raspberry Pi | Counting individuals within a given video feed | Camera | 90% | 1.4 Watts | Not Stated
[167] | Raspberry Pi | Fish recognition for underwater drones | 360-degree panoramic camera | 87% | 1.4 Watts | 6 s
[168] | NVIDIA Jetson Nano | Identifying different plant species | Photo camera | 97.5% | 5 Watts | Not Stated
[169] | Nvidia Jetson Nano, Nvidia Jetson TX1, Raspberry Pi 4 | Artistic photography aesthetic score prediction | N/A | 91.02% | 5 Watts (Nano and TX1), 4 Watts (Pi 4) | 37 ms (Nano), 17.9 ms (TX1), 1.14 s (Pi 4)
[170] | NVIDIA Jetson Nano | Underwater object detection | N/A (visual camera in case of field testing) | 74.77% | 5 Watts | 125 ms
Figure 10. Average inference time in devices used for testing model optimization methods.
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[13] | NVIDIA Jetson Nano, Raspberry Pi 3 | Early cardiovascular disease prevention through ultrasound | Ultrasound | 90.7% | 5 Watts (Nano), 1.4 Watts (Pi 3) | 2.78 ms (Nano), 6.95 ms (Pi 3)
[174] | Raspberry Pi 3 | Patient anesthesia monitoring | Electroencephalogram | 95% | 1.4 Watts | 20 ms
[175] | Raspberry Pi 3 | Human posture detection | Wireless body sensors (motion sensors, inertial sensors) | 98.28% | 1.4 Watts | 20 ms
[176] | NVIDIA Jetson Nano | Epileptic seizure detection | Electrocardiogram | 91.58% | 5 Watts | Not Stated
[177] | NVIDIA Jetson TX2 | Low-power multimodal data classification | Stand-alone dual-mode Tongue Drive System | 98% | 7.5 Watts | 1.6 ms
[178] | Raspberry Pi Model 3 | Driver behavior monitoring | IMU sensor, Shimmer Version 3 wearable body sensors | 73.02% | 1.4 Watts | 4.357 s
[179] | Raspberry Pi 3 Model B+ | Smart urban waste management | Ultrasonic sensor | 88.43% | 2.1 Watts | 960 ms
[180] | Raspberry Pi 3 Model B | Fault detection in AC electrical systems | Photoelectric sensor | 99.37% | 1.4 Watts | 31 ms
[181] | Raspberry Pi 3 Model B+ | Target classification at road gates with radar SVM | Radar | Not Stated | 2.1 Watts | Not Stated
[182] | Raspberry Pi 3 Model B+ | Human activity recognition | Wearable multimodal sensors | 99.21% | 2.1 Watts | 153 ms
[183] | Raspberry Pi 3B+ | Speech recognition | Audio sensor | 96.82% | 2.1 Watts | 270 ms
[4] | Raspberry Pi 3B, NVIDIA Jetson TX1, NVIDIA Jetson TX2 | Psychological stress monitoring | Heart rate and accelerometer sensors | 96.7% | 1.4 Watts (Pi 3), 5 Watts (TX1), 7.5 Watts (TX2) | 189 ms (Pi 3), 2.8 ms (TX1), 4.7 ms (TX2)
[184] | Raspberry Pi 3 Model B | Motor fault diagnosis | Hall effect sensor | 97.05% | 1.4 Watts | 3.4 s
[185] | Raspberry Pi 4 Model B | Machine state monitoring | Vibration sensor, accelerometers | 98% | 4 Watts | 1.002 s
[186] | Raspberry Pi | Asthma risk prediction | SDS011 air quality sensor | 99% | 1.4 Watts | Not Stated
[8] | Raspberry Pi 3 Model B | Speech source identification | SSL sensors, microphones | 89.68% | 4 Watts | 21 ms
[187] | NVIDIA Jetson Nano | Battery charge management | GY169 current converter sensor module | RMSE of 1.976 | 5 Watts | Not Stated
[188] | NVIDIA Jetson TX2 | Food quality analysis | Nuclear magnetic resonance spectrometer, infrared spectrometer | 95% | 7.5 Watts | 4 ms
[189] | NVIDIA Jetson Nano | Potted plant species identification and watering needs monitoring | Capacitive soil moisture sensor, water level sensor | Not Stated | 5 Watts | Not Stated
[190] | NVIDIA Jetson Nano | Radio frequency ID recognition | Universal software radio peripheral | 89.27% | 5 Watts | 18 min
[171] | NVIDIA Jetson Xavier NX | Trajectory tracking for small drones | Velodyne Lite 16 Lidar sensor | 83% | 10 Watts | 100 ms
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[172] | NVIDIA Jetson TX2 | Improve the effectiveness of image captioning | N/A | 65.7% | 7.5 Watts | 230 ms
[191] | NVIDIA Jetson TX2, NVIDIA Jetson Nano | Latency estimation on embedded systems | N/A | 96.39% (Nano), 95.82% (TX2) | 5 Watts (Nano), 7.5 Watts (TX2) | 13.74 ms (Nano), 6.7 ms (TX2)
[192] | NVIDIA Jetson Nano | Real-time video analysis for edge computing | Video camera | 85% | 5 Watts | 11.21 ms
[193] | NVIDIA Jetson TX2 | Low-power and real-time deep learning-based multiple object visual tracking | 5 MP CSI camera | N/A | 7.5 Watts | 100 ms
[173] | NVIDIA Jetson TX2 | Filter pruning DNNs | N/A | 93.51% | 7.5 Watts | 8.01 ms
[194] | NVIDIA Jetson AGX Xavier | Energy-efficient acceleration of deep neural networks | N/A | N/A | 10 Watts | Not Stated
[195] | NVIDIA Jetson TX1 | Semantic segmentation for autonomous vehicles | N/A | 87.3% | 5 Watts | 24 ms
[196] | NVIDIA Jetson TX2 | Improve semantic segmentation performance in contexts of various sizes and types in diverse environments | N/A | 92.74% | 7.5 Watts | 92.46 ms
[197] | NVIDIA Jetson TX2, Edge tensor processing unit, Neural Compute Stick, and Neural Compute Stick 2 | Fusion pruning DNNs | N/A | 90.66% | 7.5 Watts | 4.7 ms
[198] | NVIDIA Jetson TX2 | Reduce computational complexity and memory consumption of CNN architectures on low-power devices | N/A | 93% | 7.5 Watts | 66.14 ms
[199] | NVIDIA Jetson TX2 | Reduce computational complexity and memory consumption of CNN architectures on low-power devices | N/A | 99.3% | 7.5 Watts | 894.85 ms
[200] | NVIDIA Jetson AGX Xavier | Improve embedded system performance in autonomous vehicles | N/A | 98.3% | 10 Watts | 690 ms
[201] | NVIDIA Jetson TX1 | Provide a less resource-costly object detection model for embedded systems | N/A | 65.7% | 5 Watts | 135.2 ms
[202] | NVIDIA Jetson Nano | Efficient video understanding | Video camera | 74.1% | 5 Watts | 13.51 ms
[106] | Raspberry Pi 3 Model B+ | Scalable and computationally cheap networks for autonomous driving | Raspberry Pi camera | 75.78% | 5 Watts | 284 ms
Paper | Hardware | Application | Sensor | Accuracy | Power Consumption | Inference Time
[23] | NVIDIA Jetson Nano, Coral Edge TPU, custom convolutional neural network accelerator | Enhance learning rate for ML models with smaller training datasets | N/A (benchmark paper) | 49.6% (Nano), 49.8% (TPU) | 5 Watts (Nano), 2 Watts (TPU) | 0.3294 s (Nano), 19.8 ms (TPU)
[20] | NVIDIA Jetson Nano, NVIDIA Jetson AGX Xavier | Benchmark analysis of 3D object detection | USB-attached video camera (benchmark paper) | 70% | 5 Watts (Nano), 10 Watts (AGX) | 0.56 s (Nano), 47.61 ms (AGX)
[18] | NVIDIA Jetson Nano, NVIDIA Jetson TX2, Raspberry Pi 4 | Performance analysis of different hardware for object detection CNNs | N/A (benchmark paper) | 93.8% (Nano), 93.9% (TX2), 91.6% (Pi) | 5 Watts (Nano), 7.5 Watts (TX2), 4 Watts (Pi) | 58 s (Nano), 32 s (TX2), 372 s (Pi)
[19] | NVIDIA Jetson TX1 | Analysis of DNN architectures in image recognition | N/A (benchmark paper) | 69.52% | 5 Watts | 10.55 ms
[15] | Asus Tinker Edge R, Raspberry Pi 4, Google Coral Dev Board, NVIDIA Jetson Nano | Presentation and comparison of the performance of the presented systems in terms of inference time and power consumption | N/A (benchmark paper) | 92.5% | 4.75 Watts (Tinker), 2.75 Watts (Coral), 2.1 Watts (Pi), 0.9 Watts (Nano) | 0.33 s (Tinker), 0.28 s (Coral), 0.21 s (Pi), 0.137 s (Nano)
[22] | Raspberry Pi 4 | Space exploration landing site selection | N/A (dataset acquired from images taken by the Mars HiRISE camera) | 95% | 4 Watts | 89 ms
[21] | NVIDIA Jetson Nano, NVIDIA Jetson TX1, NVIDIA Jetson AGX Xavier | Benchmarking paper | N/A | Accuracy rates not stated | 5 Watts (Nano & TX1), 10 Watts (AGX) | 94 ms (Nano), 84 ms (TX1), 46 ms (AGX)
[17] | NVIDIA Jetson TX2, NVIDIA Jetson Xavier NX, and NVIDIA Jetson AGX Xavier | Benchmarking NVIDIA Jetson systems for visual odometry of flying drones | N/A | Accuracy rates not stated | 7.5 Watts (TX2), 10 Watts (NX & AGX) | Speed rates not stated
Figure 11. Average inference time in devices covered in referenced benchmark papers.
8. Conclusions
Rapid advances have been made in the field of machine learning, causing an explosion
of model variety, application, and performance. While many of these models are imple-
mented on powerful stationary computer devices, there are many applications that are
faced with cost, power, and size limitations for the specific usage of their models. For this
reason, the field of embedded machine learning, which is the implementation of machine
learning on embedded computing systems, has also attracted a great deal of attention recently.
The main challenges faced in embedded machine learning are caused by the severe limi-
tations of embedded system devices in terms of computational performance and power,
with different devices having different performances, power requirements, and purchasing
costs. In this review, a large collection of research work and implementation of embedded
machine learning on Raspberry Pi, NVIDIA Jetson, and a few other series of devices is
presented alongside the overall power consumption, inference time, and accuracy of these
implementations. In addition, unlike many other reviews of this topic, this paper also
includes a presentation of the overall sensing scheme present in many of the works. It was
believed that this was a major dimension of embedded machine learning study overlooked
by most other reviews on the subject matter. The hope of this review is to familiarize
interested researchers with the field of embedded machine learning by giving them a general
introduction to it.
Overall, this review covered several generations of embedded systems,
specifically, the Nvidia Jetson and Raspberry Pi systems, showing that much like dedicated
computing systems, embedded devices have been experiencing steady improvements in
the fields of performance and power consumption. More recent Jetson boards such as
the TX2 have a far higher performance rate compared to the TX1 while having the same
power consumption levels. As these advances continue, it stands to reason that embedded
machine learning will see even greater attention and become even more widespread. All of
the systems discussed in this work have their own distinct advantages and disadvantages
that users would need to consider when choosing a system for their embedded machine
learning application. More robust systems with high performance and relatively efficient
power usage such as the Jetson Board and Coral Dev Board line tend to be more monetarily
expensive, while more affordable options such as the Raspberry and Banana Pi boards
tend to have far lower performance. More remote applications such as agricultural object
detection systems might need a greater number of low-power systems while not having
much emphasis on performance, while autonomous vehicle applications would have a far
greater emphasis on performance and accuracy than on cost and power usage. A general
table of all sources’ hardware, applications, ML architectures, and sensors is provided in Table 13
for interested readers.
Author Contributions: Conceptualization was performed by W.T. and A.B. Validation of the research
was performed by W.T. Investigation of sources for the review was completed by A.B. Resources
were identified by W.T. and A.B. Writing of the original draft of the paper was done by A.B. Final
review and editing were completed by W.T. Supervision over the research was provided by W.T. All
authors have read and agreed to the published version of the manuscript.
Funding: This research is funded by National Science Foundation Grants ECCS-1652944 and ECCS-2015573.
Informed Consent Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
References
1. Hoang, T.M.; Nam, S.H.; Park, K.R. Enhanced Detection and Recognition of Road Markings Based on Adaptive Region of Interest
and Deep Learning. IEEE Access 2019, 7, 109817–109832. [CrossRef]
2. Inthanon, P.; Mungsing, S. Detection of Drowsiness from Facial Images in Real-Time Video Media using Nvidia Jetson Nano. In
Proceedings of the 2020 17th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and
Information Technology (ECTI-CON), Phuket, Thailand, 24–27 June 2020; pp. 246–249. [CrossRef]
3. Xu, Z.; Li, J.; Zhang, M. A Surveillance Video Real-Time Analysis System Based on Edge-Cloud and FL-YOLO Cooperation in
Coal Mine. IEEE Access 2021, 9, 68482–68497. [CrossRef]
4. Attaran, N.; Puranik, A.; Brooks, J.; Mohsenin, T. Embedded Low-Power Processor for Personalized Stress Detection. IEEE Trans.
Circuits Syst. II Express Briefs 2018, 65, 2032–2036. [CrossRef]
5. Ouyang, Z.; Niu, J.; Liu, Y.; Guizani, M. Deep CNN-Based Real-Time Traffic Light Detector for Self-Driving Vehicles. IEEE Trans.
Mob. Comput. 2020, 19, 300–313. [CrossRef]
6. Dean, J. The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design. arXiv 2019,
arXiv:1911.05289.
7. Breland, D.S.; Dayal, A.; Jha, A.; Yalavarthy, P.K.; Pandey, O.J.; Cenkeramaddi, L.R. Robust Hand Gestures Recognition Using a
Deep CNN and Thermal Images. IEEE Sens. J. 2021, 21, 26602–26614. [CrossRef]
8. Hao, Y.; Küçük, A.; Ganguly, A.; Panahi, I.M.S. Spectral Flux-Based Convolutional Neural Network Architecture for Speech
Source Localization and its Real-Time Implementation. IEEE Access 2020, 8, 197047–197058. [CrossRef] [PubMed]
9. Harisankar, V.; Karthika, R. Real Time Pedestrian Detection Using Modified YOLO V2. In Proceedings of the 2020 5th International
Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; pp. 855–859. [CrossRef]
10. Demir, H.S.; Christen, J.B.; Ozev, S. Energy-Efficient Image Recognition System for Marine Life. IEEE Trans. Comput. Aided Des.
Integr. Circuits Syst. 2020, 39, 3458–3466. [CrossRef]
11. Dewangan, D.K.; Sahu, S.P. Deep Learning-Based Speed Bump Detection Model for Intelligent Vehicle System Using Raspberry
Pi. IEEE Sens. J. 2021, 21, 3570–3578. [CrossRef]
12. Vieira, J.C.; Sartori, A.; Stefenon, S.F.; Perez, F.L.; de Jesus, G.S.; Leithardt, V.R.Q. Low-Cost CNN for Automatic Violence
Recognition on Embedded System. IEEE Access 2022, 10, 25190–25202. [CrossRef]
13. Sahani, A.K.; Srivastava, D.; Sivaprakasam, M.; Joseph, J. A Machine Learning Pipeline for Measurement of Arterial Stiffness in
A-Mode Ultrasound. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2022, 69, 106–113. [CrossRef] [PubMed]
14. Chen, C.; Liu, B.; Wan, S.; Qiao, P.; Pei, Q. An Edge Traffic Flow Detection Scheme Based on Deep Learning in an Intelligent
Transportation System. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1840–1852. [CrossRef]
15. Baller, S.P.; Jindal, A.; Chadha, M.; Gerndt, M. DeepEdgeBench: Benchmarking Deep Neural Networks on Edge Devices. In
Proceedings of the 2021 IEEE International Conference on Cloud Engineering (IC2E), Timisoara, Romania, 27–30 October 2021;
pp. 20–30. [CrossRef]
16. Ajani, T.S.; Imoize, A.L.; Atayero, A.A. An Overview of Machine Learning within Embedded and Mobile Devices–Optimizations
and Applications. Sensors 2021, 21, 4412. [CrossRef] [PubMed]
17. Jeon, J.; Jung, S.; Lee, E.; Choi, D.; Myung, H. Run Your Visual-Inertial Odometry on NVIDIA Jetson: Benchmark Tests on a Micro
Aerial Vehicle. IEEE Robot. Autom. Lett. 2021, 6, 5332–5339. [CrossRef]
18. Süzen, A.A.; Duman, B.; Şen, B. Benchmark Analysis of Jetson TX2, Jetson Nano and Raspberry PI using Deep-CNN. In
Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications
(HORA), Ankara, Turkey, 26–28 June 2020; pp. 1–5. [CrossRef]
19. Bianco, S.; Cadene, R.; Celona, L.; Napoletano, P. Benchmark Analysis of Representative Deep Neural Network Architectures.
IEEE Access 2018, 6, 64270–64277. [CrossRef]
20. Choe, M.; Lee, S.; Sung, N.M.; Jung, S.; Choe, C. Benchmark Analysis of Deep Learning-based 3D Object Detectors on
NVIDIA Jetson Platforms. In Proceedings of the 2021 International Conference on Information and Communication Technology
Convergence (ICTC), Jeju Island, Republic of Korea, 20–22 October 2021; pp. 10–12. [CrossRef]
21. Ullah, S.; Kim, D.H. Benchmarking Jetson Platform for 3D Point-Cloud and Hyper-Spectral Image Classification. In Proceedings
of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Republic of Korea, 19–22
February 2020; pp. 477–482. [CrossRef]
22. Claudet, T.; Tomita, K.; Ho, K. Benchmark Analysis of Semantic Segmentation Algorithms for Safe Planetary Landing Site
Selection. IEEE Access 2022, 10, 41766–41775. [CrossRef]
23. Lungu, I.A.; Aimar, A.; Hu, Y.; Delbruck, T.; Liu, S.C. Siamese Networks for Few-Shot Learning on Edge Embedded Devices.
IEEE J. Emerg. Sel. Top. Circuits Syst. 2020, 10, 488–497. [CrossRef]
24. Nvidia Corporation. Jetson Nano Developer Kit; Nvidia Corporation: Santa Clara, CA, USA, 2019.
25. Nvidia Corporation. Jetson TX1 Developer Kit; Nvidia Corporation: Santa Clara, CA, USA, 2016.
26. Nvidia Corporation. Jetson TX2 Developer Kit; Nvidia Corporation: Santa Clara, CA, USA, 2019.
27. Nvidia Corporation. Jetson AGX Xavier Developer Kit; Nvidia Corporation: Santa Clara, CA, USA, 2019.
28. Nvidia Corporation. Jetson Xavier NX Developer Kit; Nvidia Corporation: Santa Clara, CA, USA, 2020.
29. Coral.ai. Get started with the Dev Board. Available online: https://coral.ai/docs/dev-board/get-started (accessed on 29 May
2022).
30. Raspberry Pi Foundation. Raspberry Pi 3 Model B; Raspberry Pi Foundation: Cambridge, UK, 2016.
31. Raspberry Pi Foundation. Raspberry Pi 4 Model B; Raspberry Pi Foundation: Cambridge, UK, 2019.
32. Hardkernel Co. ODROID XU4; Hardkernel Co.: Anyang, Gyeonggi-do, Republic of Korea, 2015.
33. SinoVoip Co., Ltd. Banana PI M2; SinoVoip Co., Ltd.: Shenzhen, China.
34. ASUSTek Computer Inc. Tinker Board S; ASUSTek Computer Inc.: Taipei, Taiwan, 2017.
35. Bigelow, S.J. TechTarget, Operating System (OS). Available online: https://www.techtarget.com/whatis/definition/operating-
system-OS (accessed on 11 July 2022).
36. Gillis, A.S. TechTarget, Device Driver. Available online: https://www.techtarget.com/searchenterprisedesktop/definition/
device-driver (accessed on 4 July 2022).
37. Chakraborty, K. Firmware. techopedia. Available online: https://www.techopedia.com/definition/2137/firmware (accessed on
27 June 2022).
38. ASUSTek Computer Inc. Tinker Edge R; ASUSTek Computer Inc.: Taipei, Taiwan, 2020.
39. Hu, Q.; Tang, X.; Tang, W. A Real-Time Patient-Specific Sleeping Posture Recognition System Using Pressure Sensitive Conductive
Sheet and Transfer Learning. IEEE Sens. J. 2021, 21, 6869–6879. [CrossRef]
40. Hu, Q.; Tang, X.; Tang, W. A Smart Chair Sitting Posture Recognition System Using Flex Sensors and FPGA Implemented
Artificial Neural Network. IEEE Sens. J. 2020, 20, 8007–8016. [CrossRef]
41. Science Learning Hub, Electricity and Sensors. Available online: https://www.sciencelearn.org.nz/resources/1602-electricity-
and-sensors (accessed on 12 July 2022).
42. Wilson, J.S. Sensor Technology Handbook; Newnes: Oxford, UK, 2004.
43. Hu, Q.; Yi, C.; Kliewer, J.; Tang, W. Asynchronous communication for wireless sensors using ultra wideband impulse radio. In
Proceedings of the 2015 IEEE 58th International Midwest Symposium on Circuits and Systems (MWSCAS), Fort Collins, CO,
USA, 2–5 August 2015; pp. 1–4. [CrossRef]
44. Hu, Q.; Tang, X.; Tang, W. Integrated Asynchronous Ultra-Wideband Impulse Radio with Intrinsic Clock and Data Recovery.
IEEE Microw. Wirel. Compon. Lett. 2017, 27, 416–418. [CrossRef]
45. McGrath, M.J.; Ní Scanaill, C. Key Sensor Technology Components: Hardware and Software Overview; Apress: Berkeley, CA, USA,
2014; pp. 51–77.
46. Gleason, C.J.; Im, J. Forest biomass estimation from airborne LiDAR data using machine learning approaches. Remote Sens.
Environ. 2012, 125, 80–91. [CrossRef]
47. Infiniti Electro-Optics, Visible Imaging Sensor (RGB Color Camera). Available online: https://www.infinitioptics.com/glossary/
visible-imaging-sensor-400700nm-colour-cameras (accessed on 11 July 2022).
48. Tang, W.; Biglari, A.; Ebarb, R.; Pickett, T.; Smallidge, S.; Ward, M. A Smart Sensing System of Water Quality and Intake
Monitoring for Livestock and Wild Animals. Sensors 2021, 21, 2885. [CrossRef] [PubMed]
49. Biglari, A.; Tang, W. A Vision-Based Cattle Recognition System Using TensorFlow for Livestock Water Intake Monitoring. IEEE
Sens. Lett. 2022, 6, 1–4. [CrossRef]
50. Ibarra, V.; Araya-Salas, M.; Tang, Y.; Park, C.; Hyde, A.; Wright, T.F.; Tang, W. An RFID Based Smart Feeder for Hummingbirds.
Sensors 2015, 15, 29886. [CrossRef]
78. Adami, D.; Ojo, M.O.; Giordano, S. Design, Development and Evaluation of an Intelligent Animal Repelling System for Crop
Protection Based on Embedded Edge-AI. IEEE Access 2021, 9, 132125–132139. [CrossRef]
79. Beegam, K.S.; Shenoy, M.V.; Chaturvedi, N. Hybrid Consensus and Recovery Block-Based Detection of Ripe Coffee Cherry
Bunches Using RGB-D Sensor. IEEE Sens. J. 2022, 22, 732–740. [CrossRef]
80. Li, N.; Zhang, X.; Zhang, C.; Guo, H.; Sun, Z.; Wu, X. Real-Time Crop Recognition in Transplanted Fields With Prominent Weed
Growth: A Visual-Attention-Based Approach. IEEE Access 2019, 7, 185310–185321. [CrossRef]
81. Sa, I.; Chen, Z.; Popović, M.; Khanna, R.; Liebisch, F.; Nieto, J.; Siegwart, R. weedNet: Dense Semantic Weed Classification Using
Multispectral Images and MAV for Smart Farming. IEEE Robot. Autom. Lett. 2018, 3, 588–595. [CrossRef]
82. Tufail, M.; Iqbal, J.; Tiwana, M.I.; Alam, M.S.; Khan, Z.A.; Khan, M.T. Identification of Tobacco Crop Based on Machine Learning
for a Precision Agricultural Sprayer. IEEE Access 2021, 9, 23814–23825. [CrossRef]
83. Xiang, A.J.; Huddin, A.B.; Ibrahim, M.F.; Hashim, F.H. An Oil Palm Loose Fruits Image Detection System using Faster R-CNN
and Jetson TX2. In Proceedings of the 2021 International Conference on Electrical Engineering and Informatics (ICEEI), Kuala
Terengganu, Malaysia, 12–13 October 2021; pp. 1–6. [CrossRef]
84. Chen, C.J.; Huang, Y.Y.; Li, Y.S.; Chen, Y.C.; Chang, C.Y.; Huang, Y.M. Identification of Fruit Tree Pests With Deep Learning on
Embedded Drone to Achieve Accurate Pesticide Spraying. IEEE Access 2021, 9, 21986–21997. [CrossRef]
85. Jarraya, I.; Ouarda, W.; Alimi, A.M. A Preliminary Investigation on Horses Recognition Using Facial Texture Features. In
Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015;
pp. 2803–2808. [CrossRef]
86. Basu, A.; Dasgupta, A.; Thyagharajan, A.; Routray, A.; Guha, R.; Mitra, P. A Portable Personality Recognizer Based on Affective
State Classification Using Spectral Fusion of Features. IEEE Trans. Affect. Comput. 2018, 9, 330–342. [CrossRef]
87. Chakraborty, S.; Singh, S.K.; Kumar, K. Facial Biometric System for Recognition Using Extended LGHP Algorithm on Raspberry
Pi. IEEE Sens. J. 2020, 20, 8117–8127. [CrossRef]
88. Papa, L.; Alati, E.; Russo, P.; Amerini, I. SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth
Estimation on Low-Resource Settings. IEEE Access 2022, 10, 44881–44890. [CrossRef]
89. Koubaa, A.; Ammar, A.; Kanhouch, A.; AlHabashi, Y. Cloud Versus Edge Deployment Strategies of Real-Time Face Recognition
Inference. IEEE Trans. Netw. Sci. Eng. 2022, 9, 143–160. [CrossRef]
90. Nguyen, D.L.; Putro, M.D.; Jo, K.H. Facemask Wearing Alert System Based on Simple Architecture with Low-Computing Devices.
IEEE Access 2022, 10, 29972–29981. [CrossRef]
91. Ab Wahab, M.N.; Nazir, A.; Zhen Ren, A.T.; Mohd Noor, M.H.; Akbar, M.F.; Mohamed, A.S.A. Efficientnet-Lite and Hybrid
CNN-KNN Implementation for Facial Expression Recognition on Raspberry Pi. IEEE Access 2021, 9, 134065–134080. [CrossRef]
92. Zarif, N.E.; Montazeri, L.; Leduc-Primeau, F.; Sawan, M. Mobile-Optimized Facial Expression Recognition Techniques. IEEE
Access 2021, 9, 101172–101185. [CrossRef]
93. Gaikwad, B.; Prakash, P.; Karmakar, A. Edge-based real-time face logging system for security applications. In Proceedings of the
2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kharagpur, India,
6–8 July 2021; pp. 1–6. [CrossRef]
94. Yang, J.; Qian, T.; Zhang, F.; Khan, S.U. Real-Time Facial Expression Recognition Based on Edge Computing. IEEE Access 2021,
9, 76178–76190. [CrossRef]
95. Bucki, N.; Lee, J.; Mueller, M.W. Rectangular Pyramid Partitioning Using Integrated Depth Sensors (RAPPIDS): A Fast Planner
for Multicopter Navigation. IEEE Robot. Autom. Lett. 2020, 5, 4626–4633. [CrossRef]
96. Dao, T.T.; Pham, Q.V.; Hwang, W.J. FastMDE: A Fast CNN Architecture for Monocular Depth Estimation at High Resolution.
IEEE Access 2022, 10, 16111–16122. [CrossRef]
97. Tsai, T.H.; Hsu, C.W. Implementation of Fall Detection System Based on 3D Skeleton for Deep Learning Technique. IEEE Access
2019, 7, 153049–153059. [CrossRef]
98. Nowosielski, A.; Małecki, K.; Forczmański, P.; Smoliński, A.; Krzywicki, K. Embedded Night-Vision System for Pedestrian
Detection. IEEE Sens. J. 2020, 20, 9293–9304. [CrossRef]
99. Han, Y.; Oruklu, E. Traffic sign recognition based on the NVIDIA Jetson TX1 embedded system using convolutional neural
networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston,
MA, USA, 6–9 August 2017; pp. 184–187. [CrossRef]
100. Liu, Y.; Cao, S.; Lasang, P.; Shen, S. Modular Lightweight Network for Road Object Detection Using a Feature Fusion Approach.
IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 4716–4728. [CrossRef]
101. Lai, C.Y.; Wu, B.X.; Shivanna, V.M.; Guo, J.I. MTSAN: Multi-Task Semantic Attention Network for ADAS Applications. IEEE
Access 2021, 9, 50700–50714. [CrossRef]
102. Li, Z.; Zhou, A.; Pu, J.; Yu, J. Multi-Modal Neural Feature Fusion for Automatic Driving Through Perception-Aware Path Planning.
IEEE Access 2021, 9, 142782–142794. [CrossRef]
103. Farooq, M.A.; Corcoran, P.; Rotariu, C.; Shariff, W. Object Detection in Thermal Spectrum for Advanced Driver-Assistance
Systems (ADAS). IEEE Access 2021, 9, 156465–156481. [CrossRef]
104. Sun, T.; Pan, W.; Wang, Y.; Liu, Y. Region of Interest Constrained Negative Obstacle Detection and Tracking With a Stereo Camera.
IEEE Sens. J. 2022, 22, 3616–3625. [CrossRef]
105. Tang, X.; Chen, J.; Yang, K.; Toyoda, M.; Liu, T.; Hu, X. Visual Detection and Deep Reinforcement Learning-Based Car Following
and Energy Management for Hybrid Electric Vehicles. IEEE Trans. Transp. Electrif. 2022, 8, 2501–2515. [CrossRef]
106. Sajjad, M.; Irfan, M.; Muhammad, K.; Ser, J.D.; Sanchez-Medina, J.; Andreev, S.; Ding, W.; Lee, J.W. An Efficient and Scalable
Simulation Model for Autonomous Vehicles With Economical Hardware. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1718–1732.
[CrossRef]
107. Vijitkunsawat, W.; Chantngarm, P. Comparison of Machine Learning Algorithm’s on Self-Driving Car Navigation using Nvidia
Jetson Nano. In Proceedings of the 2020 17th International Conference on Electrical Engineering/Electronics, Computer,
Telecommunications and Information Technology (ECTI-CON), Phuket, Thailand, 24–27 June 2020; pp. 201–204. [CrossRef]
108. Nguyen, H.H.; Tran, D.N.N.; Jeon, J.W. Towards Real-Time Vehicle Detection on Edge Devices with Nvidia Jetson TX2. In
Proceedings of the 2020 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Seoul, Republic of Korea, 1–3
November 2020; pp. 1–4. [CrossRef]
109. Choi, J.; Chun, D.; Lee, H.J.; Kim, H. Uncertainty-Based Object Detector for Autonomous Driving Embedded Platforms. In
Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Phuket,
Thailand, 24–27 June 2020; pp. 16–20. [CrossRef]
110. Díaz, S.; Krohmer, T.; Moreira, Á.; Godoy, S.E.; Figueroa, M. An Instrument for Accurate and Non-Invasive Screening of Skin
Cancer Based on Multimodal Imaging. IEEE Access 2019, 7, 176646–176657. [CrossRef]
111. Prabhu, M.S.; Verma, S. A Deep Learning framework and its Implementation for Diabetic Foot Ulcer Classification. In Proceedings
of the 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions)
(ICRITO), Noida, India, 3–4 September 2021; pp. 1–5. [CrossRef]
112. Chang, C.Y.; Liou, S.H. A Blind Aid System based on Jetson TX2 Embedded System and Deep Learning Technique. In Proceedings
of the 2019 8th International Conference on Innovation, Communication and Engineering (ICICE), Zhengzhou, China, 25–30
October 2019; pp. 25–29. [CrossRef]
113. Dong, J.; Ota, K.; Dong, M. UAV-Based Real-Time Survivor Detection System in Post-Disaster Search and Rescue Operations.
IEEE J. Miniaturization Air Space Syst. 2021, 2, 209–219. [CrossRef]
114. Tang, C.; Xia, S.; Qian, M.; Wang, B. Deep Learning-Based Vein Localization on Embedded System. IEEE Access 2021, 9, 27916–
27927. [CrossRef]
115. Paluru, N.; Dayal, A.; Jenssen, H.B.; Sakinis, T.; Cenkeramaddi, L.R.; Prakash, J.; Yalavarthy, P.K. Anam-Net: Anamorphic Depth
Embedding-Based Lightweight CNN for Segmentation of Anomalies in COVID-19 Chest CT Images. IEEE Trans. Neural Netw.
Learn. Syst. 2021, 32, 932–946. [CrossRef] [PubMed]
116. Nguyen Huu, P.; Nguyen Thi, N.; Ngoc, T.P. Proposing Posture Recognition System Combining MobilenetV2 and LSTM for
Medical Surveillance. IEEE Access 2022, 10, 1839–1849. [CrossRef]
117. Goyal, M.; Reeves, N.D.; Rajbhandari, S.; Yap, M.H. Robust Methods for Real-Time Diabetic Foot Ulcer Detection and Localization
on Mobile Devices. IEEE J. Biomed. Health Inform. 2019, 23, 1730–1741. [CrossRef] [PubMed]
118. Khan, M.A.; Paul, P.; Rashid, M.; Hossain, M.; Ahad, M.A.R. An AI-Based Visual Aid With Integrated Reading Assistant for the
Completely Blind. IEEE Trans. Hum.-Mach. Syst. 2020, 50, 507–517. [CrossRef]
119. Parra, S.; Carranza, E.; Coole, J.; Hunt, B.; Smith, C.; Keahey, P.; Maza, M.; Schmeler, K.; Richards-Kortum, R. Development of
Low-Cost Point-of-Care Technologies for Cervical Cancer Prevention Based on a Single-Board Computer. IEEE J. Transl. Eng.
Health Med. 2020, 8, 1–10. [CrossRef] [PubMed]
120. Tsai, M.F.; Huang, J.Y. Predicting Canine Posture with Smart Camera Networks Powered by the Artificial Intelligence of Things.
IEEE Access 2020, 8, 220848–220857. [CrossRef]
121. Ciobanu, A.; Luca, M.; Barbu, T.; Drug, V.; Olteanu, A.; Vulpoi, R. Experimental Deep Learning Object Detection in Real-time
Colonoscopies. In Proceedings of the 2021 International Conference on e-Health and Bioengineering (EHB), Iasi, Romania, 18–19
November 2021; pp. 1–4. [CrossRef]
122. Joshi, R.; Tripathi, M.; Kumar, A.; Gaur, M.S. Object Recognition and Classification System for Visually Impaired. In Proceedings
of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020;
pp. 1568–1572. [CrossRef]
123. Wang, X.; Zhang, L.; Huang, W.; Wang, S.; Wu, H.; He, J.; Song, A. Deep Convolutional Networks with Tunable Speed–Accuracy
Tradeoff for Human Activity Recognition Using Wearables. IEEE Trans. Instrum. Meas. 2022, 71, 1–12. [CrossRef]
124. Breland, D.S.; Skriubakken, S.B.; Dayal, A.; Jha, A.; Yalavarthy, P.K.; Cenkeramaddi, L.R. Deep Learning-Based Sign Language
Digits Recognition From Thermal Images With Edge Computing System. IEEE Sens. J. 2021, 21, 10445–10453. [CrossRef]
125. Liu, M.; Li, Z.; Li, Y.; Liu, Y. A Fast and Accurate Method of Power Line Intelligent Inspection Based on Edge Computing. IEEE
Trans. Instrum. Meas. 2022, 71, 1–12. [CrossRef]
126. Saeed, K.; Adamski, M.; Klimowicz, A.; Lupinska-Dubicka, A.; Omieljanowicz, M.; Rubin, G.; Rybnik, M.; Szymkowski, M.;
Tabedzki, M.; Zienkiewicz, L. A Novel Extension for e-Safety Initiative Based on Developed Fusion of Biometric Traits. IEEE
Access 2020, 8, 149887–149898. [CrossRef]
127. Kamal, R.; Chemmanam, A.J.; Jose, B.A.; Mathews, S.; Varghese, E. Construction Safety Surveillance Using Machine Learning.
In Proceedings of the 2020 International Symposium on Networks, Computers and Communications (ISNCC), Montreal, QC,
Canada, 20–22 October 2020; pp. 1–6. [CrossRef]
128. Vu, H.N.; Pham, C.; Dung, N.M.; Ro, S. Detecting and Tracking Sinkholes Using Multi-Level Convolutional Neural Networks
and Data Association. IEEE Access 2020, 8, 132625–132641. [CrossRef]
129. Kumar, P.; Batchu, S.; Swamy S., N.; Kota, S.R. Real-Time Concrete Damage Detection Using Deep Learning for High Rise
Structures. IEEE Access 2021, 9, 112312–112331. [CrossRef]
130. Tu, Z.; Wu, S.; Kang, G.; Lin, J. Real-Time Defect Detection of Track Components: Considering Class Imbalance and Subtle
Difference Between Classes. IEEE Trans. Instrum. Meas. 2021, 70, 1–12. [CrossRef]
131. Bhattacharya, S.; Ranjan, A.; Reza, M. A Portable Biometrics System Based on Forehead Subcutaneous Vein Pattern and Periocular
Biometric Pattern. IEEE Sens. J. 2022, 22, 7022–7033. [CrossRef]
132. Altowaijri, A.H.; Alfaifi, M.S.; Alshawi, T.A.; Ibrahim, A.B.; Alshebeili, S.A. A Privacy-Preserving Iot-Based Fire Detector. IEEE
Access 2021, 9, 51393–51402. [CrossRef]
133. Ahmed, A.A.; Echi, M. Hawk-Eye: An AI-Powered Threat Detector for Intelligent Surveillance Cameras. IEEE Access 2021,
9, 63283–63293. [CrossRef]
134. Huu, N.N.T.; Mai, L.; Minh, T.V. Detecting Abnormal and Dangerous Activities Using Artificial Intelligence on The Edge for
Smart City Application. In Proceedings of the 2021 15th International Conference on Advanced Computing and Applications
(ACOMP), Ho Chi Minh City, Vietnam, 24–26 November 2021; pp. 85–92. [CrossRef]
135. Adam, M.; Ramachandran, P.; Alex, Z.C. Human Irregularity Detection Based on Posture and Behavioral Analysis. In Proceedings
of the 2021 Innovations in Power and Advanced Computing Technologies (i-PACT), Chennai, India, 28–30 July 2021; pp. 1–6.
[CrossRef]
136. Chen, Y.C.; Fathoni, H.; Yang, C.T. Implementation of Fire and Smoke Detection using DeepStream and Edge Computing
Approachs. In Proceedings of the 2020 International Conference on Pervasive Artificial Intelligence (ICPAI), Taipei, Taiwan, 3–5
December 2020; pp. 272–275. [CrossRef]
137. Zhou, C.; Li, J. A Real-time Driver Fatigue Monitoring System Based on Lightweight Convolutional Neural Network. In
Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; pp. 1548–1553.
[CrossRef]
138. Benito-Picazo, J.; Domínguez, E.; Palomo, E.J.; Ramos-Jiménez, G.; López-Rubio, E. Deep learning-based anomalous object
detection system for panoramic cameras managed by a Jetson TX2 board. In Proceedings of the 2021 International Joint
Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–7. [CrossRef]
139. Rawat, P.; Misra, T.; Mitra, S.; Sinha, A. Designing of an Amphibian Hexapod with Computer Vision for Rescue Operations. In
Proceedings of the 2020 6th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 20–23 April
2020; pp. 662–668. [CrossRef]
140. Wang, N.; Li, J.Y. Efficient Multi-Channel Thermal Monitoring and Temperature Prediction Based on Improved Linear Regression.
IEEE Trans. Instrum. Meas. 2022, 71, 1–9. [CrossRef]
141. Jabłoński, B.; Makowski, D.; Perek, P. Evaluation of NVIDIA Xavier NX Platform for Real-Time Image Processing for Fusion
Diagnostics. In Proceedings of the 2021 28th International Conference on Mixed Design of Integrated Circuits and System, Lodz,
Poland, 24–26 June 2021; pp. 63–68. [CrossRef]
142. Yang, R.; Yu, S.; Yu, X.; Huang, J. The Realization of Automobile Fog Lamp Intelligent Control System Based on Jetson Nano.
In Proceedings of the 2021 5th International Conference on Automation, Control and Robots (ICACR), Nanning, China, 25–27
September 2021; pp. 108–114. [CrossRef]
143. Hong, W.C.; Huang, D.R.; Chen, C.L.; Lee, J.S. Towards Accurate and Efficient Classification of Power System Contingencies and
Cyber-Attacks Using Recurrent Neural Networks. IEEE Access 2020, 8, 123297–123309. [CrossRef]
144. Baghezza, R.; Bouchard, K.; Bouzouane, A.; Gouin-Vallerand, C. Profile Recognition for Accessibility and Inclusivity in Smart
Cities Using a Thermal Imaging Sensor in an Embedded System. IEEE Internet Things J. 2022, 9, 7491–7509. [CrossRef]
145. Dolezel, P.; Stursa, D.; Kopecky, D.; Jecha, J. Memory Efficient Grasping Point Detection of Nontrivial Objects. IEEE Access 2021,
9, 82130–82145. [CrossRef]
146. Lee, J.; Jang, J.; Lee, J.; Chun, D.; Kim, H. CNN-Based Mask-Pose Fusion for Detecting Specific Persons on Heterogeneous
Embedded Systems. IEEE Access 2021, 9, 120358–120366. [CrossRef]
147. Zheng, Z.; Liu, W.; Wang, H.; Fan, G.; Dai, Y. Real-Time Enumeration of Metro Passenger Volume Using Anchor-Free Object
Detection Network on Edge Devices. IEEE Access 2021, 9, 21593–21603. [CrossRef]
148. Sallang, N.C.A.; Islam, M.T.; Islam, M.S.; Arshad, H. A CNN-Based Smart Waste Management System Using TensorFlow Lite
and LoRa-GPS Shield in Internet of Things Environment. IEEE Access 2021, 9, 153560–153574. [CrossRef]
149. Fu, B.; Li, S.; Wei, J.; Li, Q.; Wang, Q.; Tu, J. A Novel Intelligent Garbage Classification System Based on Deep Learning and an
Embedded Linux System. IEEE Access 2021, 9, 131134–131146. [CrossRef]
150. Othman, N.A.; Saleh, Z.Z.; Ibrahim, B.R. A Low-Cost Embedded Car Counter System by using Jetson Nano Based on Computer
Vision and Internet of Things. In Proceedings of the 2022 International Conference on Decision Aid Sciences and Applications
(DASA), Chiangrai, Thailand, 23–25 March 2022; pp. 698–701. [CrossRef]
151. Minh, H.T.; Mai, L.; Minh, T.V. Performance Evaluation of Deep Learning Models on Embedded Platform for Edge AI-Based
Real time Traffic Tracking and Detecting Applications. In Proceedings of the 2021 15th International Conference on Advanced
Computing and Applications (ACOMP), Ho Chi Minh City, Vietnam, 24–26 November 2021; pp. 128–135. [CrossRef]
152. Han, W. A YOLOV3 System for Garbage Detection Based on MobileNetV3Lite as Backbone. In Proceedings of the 2021
International Conference on Electronics, Circuits and Information Engineering (ECIE), Zhengzhou, China, 22–24 January 2021;
pp. 254–258. [CrossRef]
153. Uddin, M.I.; Alamgir, M.S.; Rahman, M.M.; Bhuiyan, M.S.; Moral, M.A. AI Traffic Control System Based on Deepstream and
IoT Using NVIDIA Jetson Nano. In Proceedings of the 2021 2nd International Conference on Robotics, Electrical and Signal
Processing Techniques (ICREST), Dhaka, Bangladesh, 5–7 January 2021; pp. 115–119. [CrossRef]
154. Zhao, X.; Pu, F.; Wang, Z.; Chen, H.; Xu, Z. Detection, Tracking, and Geolocation of Moving Vehicle From UAV Using Monocular
Camera. IEEE Access 2019, 7, 101160–101170. [CrossRef]
155. Wei Xun, D.T.; Lim, Y.L.; Srigrarom, S. Drone detection using YOLOv3 with transfer learning on NVIDIA Jetson TX2. In
Proceedings of the 2021 Second International Symposium on Instrumentation, Control, Artificial Intelligence, and Robotics
(ICA-SYMP), Bangkok, Thailand, 20–22 January 2021; pp. 1–6. [CrossRef]
156. Mao, Y.; He, Z.; Ma, Z.; Tang, X.; Wang, Z. Efficient Convolution Neural Networks for Object Tracking Using Separable
Convolution and Filter Pruning. IEEE Access 2019, 7, 106466–106474. [CrossRef]
157. Rabah, M.; Rohan, A.; Haghbayan, M.H.; Plosila, J.; Kim, S.H. Heterogeneous Parallelization for Object Detection and Tracking in
UAVs. IEEE Access 2020, 8, 42784–42793. [CrossRef]
158. Jung, S.; Hwang, S.; Shin, H.; Shim, D.H. Perception, Guidance, and Navigation for Indoor Autonomous Drone Racing Using
Deep Learning. IEEE Robot. Autom. Lett. 2018, 3, 2539–2544. [CrossRef]
159. Basulto-Lantsova, A.; Padilla-Medina, J.A.; Perez-Pinal, F.J.; Barranco-Gutierrez, A.I. Performance comparative of OpenCV
Template Matching method on Jetson TX2 and Jetson Nano developer kits. In Proceedings of the 2020 10th Annual Computing
and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2020; pp. 0812–0816. [CrossRef]
160. Masnavi, H.; Adajania, V.K.; Kruusamäe, K.; Singh, A.K. Real-Time Multi-Convex Model Predictive Control for Occlusion-Free
Target Tracking with Quadrotors. IEEE Access 2022, 10, 29009–29031. [CrossRef]
161. Wang, Y.; Tang, C.; Cai, M.; Yin, J.; Wang, S.; Cheng, L.; Wang, R.; Tan, M. Real-Time Underwater Onboard Vision Sensing System
for Robotic Gripping. IEEE Trans. Instrum. Meas. 2021, 70, 1–11. [CrossRef]
162. Zhang, F.; Fan, H.; Wang, K.; Zhao, Y.; Zhang, X.; Ma, Y. Research on Intelligent Target Recognition Integrated With Knowledge.
IEEE Access 2021, 9, 137107–137115. [CrossRef]
163. Cheng, L.; Deng, B.; Yang, Y.; Lyu, J.; Zhao, J.; Zhou, K.; Yang, C.; Wang, L.; Yang, S.; He, Y. Water Target Recognition Method and
Application for Unmanned Surface Vessels. IEEE Access 2022, 10, 421–434. [CrossRef]
164. Demirhan, M.; Premachandra, C. Development of an Automated Camera-Based Drone Landing System. IEEE Access 2020,
8, 202111–202121. [CrossRef]
165. Kumar, A.; Sharma, A.; Bharti, V.; Singh, A.K.; Singh, S.K.; Saxena, S. MobiHisNet: A Lightweight CNN in Mobile Edge
Computing for Histopathological Image Classification. IEEE Internet Things J. 2021, 8, 17778–17789. [CrossRef]
166. Parthornratt, T.; Burapanonte, N.; Gunjarueg, W. People identification and counting system using raspberry Pi (AU-PiCC: Raspberry Pi customer counter). In Proceedings of the 2016 International Conference on Electronics, Information, and Communications
(ICEIC), Danang, Vietnam, 27–30 January 2016; pp. 1–5. [CrossRef]
167. Meng, L.; Hirayama, T.; Oyanagi, S. Underwater-Drone With Panoramic Camera for Automatic Fish Recognition Based on Deep
Learning. IEEE Access 2018, 6, 17880–17886. [CrossRef]
168. Chavan, S.; Ford, J.; Yu, X.; Saniie, J. Plant Species Image Recognition using Artificial Intelligence on Jetson Nano Computational
Platform. In Proceedings of the 2021 IEEE International Conference on Electro Information Technology (EIT), Mt. Pleasant, MI,
USA, 14–15 May 2021; pp. 350–354. [CrossRef]
169. Venkataswamy, P.; Ahmad, M.O.; Swamy, M. Real-time Image Aesthetic Score Prediction for Portable Devices. In Proceedings of
the 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 9–12 August
2020; pp. 570–573. [CrossRef]
170. Wang, L.; Ye, X.; Xing, H.; Wang, Z.; Li, P. YOLO Nano Underwater: A Fast and Compact Object Detector for Embedded Device.
In Proceedings of the Global Oceans 2020: Singapore–U.S. Gulf Coast, Biloxi, MS, USA, 9–12 August 2020; pp. 1–4. [CrossRef]
171. Kulathunga, G.; Hamed, H.; Devitt, D.; Klimchik, A. Optimization-Based Trajectory Tracking Approach for Multi-Rotor Aerial
Vehicles in Unknown Environments. IEEE Robot. Autom. Lett. 2022, 7, 4598–4605. [CrossRef]
172. Zhou, Z.; Xu, L.; Wang, C.; Xie, W.; Wang, S.; Ge, S.; Zhang, Y. An Image Captioning Model Based on Bidirectional Depth
Residuals and its Application. IEEE Access 2021, 9, 25360–25370. [CrossRef]
173. Yu, F.; Cui, L.; Wang, P.; Han, C.; Huang, R.; Huang, X. EasiEdge: A Novel Global Deep Neural Networks Pruning Method for
Efficient Edge Computing. IEEE Internet Things J. 2021, 8, 1259–1271. [CrossRef]
174. Park, Y.; Han, S.H.; Byun, W.; Kim, J.H.; Lee, H.C.; Kim, S.J. A Real-Time Depth of Anesthesia Monitoring System Based on
Deep Neural Network With Large EDO Tolerant EEG Analog Front-End. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 825–837.
[CrossRef] [PubMed]
175. Mascret, Q.; Gagnon-Turcotte, G.; Bielmann, M.; Fall, C.L.; Bouyer, L.J.; Gosselin, B. A Wearable Sensor Network With Embedded
Machine Learning for Real-Time Motion Analysis and Complex Posture Detection. IEEE Sens. J. 2022, 22, 7868–7876. [CrossRef]
176. Baghersalimi, S.; Teijeiro, T.; Atienza, D.; Aminifar, A. Personalized Real-Time Federated Learning for Epileptic Seizure Detection.
IEEE J. Biomed. Health Inform. 2022, 26, 898–909. [CrossRef]
177. Jafari, A.; Ganesan, A.; Thalisetty, C.S.K.; Sivasubramanian, V.; Oates, T.; Mohsenin, T. SensorNet: A Scalable and Low-Power
Deep Convolutional Neural Network for Multimodal Data Classification. IEEE Trans. Circuits Syst. I Regul. Pap. 2019, 66, 274–287.
[CrossRef]
178. Alamri, A.; Gumaei, A.; Al-Rakhami, M.; Hassan, M.M.; Alhussein, M.; Fortino, G. An Effective Bio-Signal-Based Driver Behavior
Monitoring System Using a Generalized Deep Learning Approach. IEEE Access 2020, 8, 135037–135049. [CrossRef]
179. Sheng, T.J.; Islam, M.S.; Misran, N.; Baharuddin, M.H.; Arshad, H.; Islam, M.R.; Chowdhury, M.E.H.; Rmili, H.; Islam, M.T. An
Internet of Things Based Smart Waste Management System Using LoRa and Tensorflow Deep Learning Model. IEEE Access 2020,
8, 148793–148811. [CrossRef]
180. Wang, Y.; Hou, L.; Paul, K.C.; Ban, Y.; Chen, C.; Zhao, T. ArcNet: Series AC Arc Fault Detection Based on Raw Current and
Convolutional Neural Network. IEEE Trans. Ind. Inform. 2022, 18, 77–86. [CrossRef]
181. Rizik, A.; Tavanti, E.; Chible, H.; Caviglia, D.D.; Randazzo, A. Cost-Efficient FMCW Radar for Multi-Target Classification in
Security Gate Monitoring. IEEE Sens. J. 2021, 21, 20447–20461. [CrossRef]
182. Xu, S.; Zhang, L.; Huang, W.; Wu, H.; Song, A. Deformable Convolutional Networks for Multimodal Human Activity Recognition
Using Wearable Sensors. IEEE Trans. Instrum. Meas. 2022, 71, 1–14. [CrossRef]
183. Yang, S.; Gong, Z.; Ye, K.; Wei, Y.; Huang, Z.; Huang, Z. EdgeRNN: A Compact Speech Recognition Network With Spatio-Temporal
Features for Edge Computing. IEEE Access 2020, 8, 81468–81478. [CrossRef]
184. Lu, S.; Qian, G.; He, Q.; Liu, F.; Liu, Y.; Wang, Q. In Situ Motor Fault Diagnosis Using Enhanced Convolutional Neural Network
in an Embedded System. IEEE Sens. J. 2020, 20, 8287–8296. [CrossRef]
185. Mukherjee, I.; Tallur, S. Light-Weight CNN Enabled Edge-Based Framework for Machine Health Diagnosis. IEEE Access 2021,
9, 84375–84386. [CrossRef]
186. Bhat, G.S.; Shankar, N.; Kim, D.; Song, D.J.; Seo, S.; Panahi, I.M.S.; Tamil, L. Machine Learning-Based Asthma Risk Prediction
Using IoT and Smartphone Applications. IEEE Access 2021, 9, 118708–118715. [CrossRef]
187. Hantono, B.S.; Cahyadi, A.I.; Putu Pratama, G.N. LSTM for State of Charge Estimation of Lithium Polymer Battery on Jetson
Nano. In Proceedings of the 2021 13th International Conference on Information Technology and Electrical Engineering (ICITEE),
Chiang Mai, Thailand, 14–15 October 2021; pp. 80–85. [CrossRef]
188. Buzura, L.; Budileanu, M.L.; Potarniche, A.; Galatus, R. Python based portable system for fast characterisation of foods based
on spectral analysis. In Proceedings of the 2021 IEEE 27th International Symposium for Design and Technology in Electronic
Packaging (SIITME), Timisoara, Romania, 27–30 October 2021; pp. 275–280. [CrossRef]
189. Vadlamani, R.; Kramer, V.; Schmidt, K. Automatic watering of plants in a pot using plant recognition with CNN. In Proceedings
of the 2021 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India,
2–4 December 2021; pp. 911–919. [CrossRef]
190. Zheng, Y.; Zhao, C.; Lei, Y.; Chen, L. Embedded Radio Frequency Fingerprint Recognition Based on A Lightweight Network. In
Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14
December 2020; pp. 1386–1392. [CrossRef]
191. Lechner, M.; Jantsch, A. Blackthorn: Latency Estimation Framework for CNNs on Embedded Nvidia Platforms. IEEE Access 2021,
9, 110074–110084. [CrossRef]
192. Kim, J.H.; Kim, N.; Won, C.S. Deep Edge Computing for Videos. IEEE Access 2021, 9, 123348–123357. [CrossRef]
193. Blanco-Filgueira, B.; García-Lesta, D.; Fernández-Sanjurjo, M.; Brea, V.M.; López, P. Deep Learning-Based Multiple Object Visual
Tracking on Embedded System for IoT and Mobile Edge Computing Applications. IEEE Internet Things J. 2019, 6, 5423–5431.
[CrossRef]
194. Kim, B.; Lee, S.; Trivedi, A.R.; Song, W.J. Energy-Efficient Acceleration of Deep Neural Networks on Realtime-Constrained
Embedded Edge Devices. IEEE Access 2020, 8, 216259–216270. [CrossRef]
195. Romera, E.; Álvarez, J.M.; Bergasa, L.M.; Arroyo, R. ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic
Segmentation. IEEE Trans. Intell. Transp. Syst. 2018, 19, 263–272. [CrossRef]
196. Kim, D.S.; Arsalan, M.; Owais, M.; Park, K.R. ESSN: Enhanced Semantic Segmentation Network by Residual Concatenation of
Feature Maps. IEEE Access 2020, 8, 21363–21379. [CrossRef]
197. Li, G.; Ma, X.; Wang, X.; Liu, L.; Xue, J.; Feng, X. Fusion-Catalyzed Pruning for Optimizing Deep Learning on Intelligent Edge
Devices. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2020, 39, 3614–3626. [CrossRef]
198. Ma, X.; Ji, K.; Xiong, B.; Zhang, L.; Feng, S.; Kuang, G. Light-YOLOv4: An Edge-Device Oriented Target Detection Method for
Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10808–10820. [CrossRef]
199. Haut, J.M.; Bernabé, S.; Paoletti, M.E.; Fernandez-Beltran, R.; Plaza, A.; Plaza, J. Low–High-Power Consumption Architectures
for Deep-Learning Models Applied to Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 776–780.
[CrossRef]
200. Lim, C.; Kim, M. ODMDEF: On-Device Multi-DNN Execution Framework Utilizing Adaptive Layer-Allocation on General
Purpose Cores and Accelerators. IEEE Access 2021, 9, 85403–85417. [CrossRef]
201. Fang, W.; Wang, L.; Ren, P. Tinier-YOLO: A Real-Time Object Detection Method for Constrained Environments. IEEE Access 2020,
8, 1935–1944. [CrossRef]
202. Lin, J.; Gan, C.; Wang, K.; Han, S. TSM: Temporal Shift Module for Efficient and Scalable Video Understanding on Edge Devices.
IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 2760–2774. [CrossRef]
203. Borrego-Carazo, J.; Castells-Rufas, D.; Biempica, E.; Carrabina, J. Resource-Constrained Machine Learning for ADAS: A Systematic
Review. IEEE Access 2020, 8, 40573–40598. [CrossRef]
204. Matsubara, Y.; Callegaro, D.; Baidya, S.; Levorato, M.; Singh, S. Head Network Distillation: Splitting Distilled Deep Neural
Networks for Resource-Constrained Edge Computing Systems. IEEE Access 2020, 8, 212177–212193. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.