Article
TinyML Algorithms for Big Data Management in Large-Scale
IoT Systems
Aristeidis Karras 1, * , Anastasios Giannaros 1 , Christos Karras 1, * , Leonidas Theodorakopoulos 2 ,
Constantinos S. Mammassis 3 , George A. Krimpas 1 and Spyros Sioutas 1
1 Computer Engineering and Informatics Department, University of Patras, 26504 Patras, Greece;
[email protected] (A.G.); [email protected] (G.A.K.); [email protected] (S.S.)
2 Department of Management Science and Technology, University of Patras, 26334 Patras, Greece;
[email protected]
3 Department of Industrial Management and Technology, University of Piraeus, 18534 Piraeus, Greece;
[email protected]
* Correspondence: [email protected] (A.K.); [email protected] (C.K.)
Abstract: In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big
Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive
data produced by numerous connected devices. Our study introduces a set of TinyML algorithms de-
signed and developed to improve Big Data management in large-scale IoT systems. These algorithms,
named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ,
operate together to enhance data processing, storage, and quality control in IoT networks, utilizing
the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based
data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-
organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive
data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHy-
bridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental
evaluation of the proposed techniques includes executing all the algorithms in various numbers
of Raspberry Pi devices ranging from one to ten. The experimental results are promising as we
outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the
proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI.
Keywords: TinyML; Edge AI; IoT; IoT data engineering; IoT Big Data management; IoT systems
The proposed algorithms operate directly on IoT devices, reducing the reliance on central systems and minimizing data transmission needs. By incorporating advanced techniques such as federated learning, anomaly
detection, adaptive data compression, strategic caching, and detailed data quality assess-
ment, these algorithms jointly enhance the overall efficiency, security, and reliability of
data management within IoT networks. The comparative analysis provided in this study
underscores the distinct functionalities and advantages of each algorithm, highlighting the
necessity and versatility of TinyML in handling data in the increasingly complex landscape
of IoT systems.
The remainder of this study is organized as follows: Section 2 provides comprehensive
background on TinyML, highlighting its emergence as a pivotal tool in managing Big
Data within IoT environments and discussing its application in large-scale IoT systems
and embedded devices. Section 3 outlines our methodology, covering our approach’s
advantages, framework design, hardware setup, and dataset configuration for TinyML
evaluation. Section 3.7 details the proposed algorithms we developed to utilize TinyML in
IoT contexts. The experimental results of the proposed algorithms are thoroughly presented
in Section 4, demonstrating the practical implications and effectiveness of our approach.
Finally, the study concludes in Section 5, summarizing the key findings and discussing
future research directions, emphasizing TinyML’s impact on IoT data management.
These challenges, which highlight the wider complexities that IoT introduces to Big
Data management, are outlined in Table 1.
Table 1. Challenges that IoT introduces to Big Data management.
Data Volume: The increase in interconnected devices leads to unprecedented data generation, surpassing the capacity of conventional storage and processing systems.
Data Velocity: Continuous data generation in IoT necessitates real-time analysis and response, stressing the need for prompt processing solutions.
Data Variety: Diverse data sources in IoT range from structured to unstructured formats, posing integration and analytical complexities.
Data Veracity: The accuracy, authenticity, and reliability of data from varied devices present significant challenges in data verification.
Data Integration: Consolidating data from heterogeneous sources while preserving integrity and context remains complex.
Security: Increased interconnectivity broadens the risk of cyber attacks, necessitating robust security protocols.
Privacy: Balancing the protection of sensitive data within extensive datasets, while maintaining utility, is crucial.
Latency: Processing or transmission delays can affect the timeliness and relevance of insights, impacting decision-making.
2.2. TinyML
Tiny Machine Learning (TinyML) has emerged as a growing field in machine learning,
characterized by its application in highly constrained Internet of Things (IoT) devices
such as microcontrollers (MCUs) [51]. This technology facilitates the use of deep learning
models across a multitude of IoT devices, thereby broadening the range of potential
applications and enabling ubiquitous computational intelligence. The implementation of
TinyML is challenging, primarily due to the limited memory resources of these devices and
the necessity for simultaneous algorithm and system stack design. TinyML has attracted substantial interest in both research and development, and numerous studies have examined its challenges, applications, and advantages [52,53].
An essential goal of TinyML is to bring machine learning capabilities to battery-
powered intelligent devices, allowing them to locally process data without necessitating
cloud connectivity. This ability to operate independently from cloud services not only
enhances functionality but also provides a more cost-effective solution for IoT applica-
tions [3,54–56]. The academic community has thoroughly examined TinyML, with sys-
tematic reviews, surveys, and research papers delving into aspects such as its hardware
requirements, frameworks, datasets, use cases, algorithms/models, and broader applica-
tions. Notably, the development of specialized TinyML frameworks and libraries, coupled
with its integration with networking technologies, has been explored to facilitate its deploy-
ment in various sectors, including healthcare, smart agriculture, environmental monitoring,
and anomaly detection. One practical application of TinyML is in the development of soft
sensors for economical vehicular emission monitoring, showcasing its real-world applica-
bility [57]. In essence, TinyML marks a significant progression in the domain of machine
learning, enabling the execution of machine learning tasks on resource-constrained IoT
devices and microcontrollers, thus laying the groundwork for an expansive ecosystem
surrounding this technology.
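To make the resource constraints above concrete, the sketch below shows how a trained Keras model could be shrunk into a fully int8-quantized TensorFlow Lite model of the kind typically deployed on MCU-class devices. This is a generic illustration rather than the toolchain used in this study; the helper name and the representative-batch input are our own assumptions.

```python
# Illustrative sketch (not from the paper): shrinking a Keras model for
# MCU-class deployment with TensorFlow Lite post-training quantization.
import tensorflow as tf

def to_tinyml_model(keras_model, representative_batches):
    """Convert a trained Keras model into a fully int8-quantized TFLite model."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        # A few real input batches calibrate the int8 quantization ranges.
        for batch in representative_batches:
            yield [tf.cast(batch, tf.float32)]

    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()  # bytes, ready to store or load on the device
```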
Distributed Topology: Numerous devices scattered across different locations lead to data decentralization and increased latency. TinyML facilitates edge computation, reducing latency and ensuring real-time insights.
Voluminous Data Streams: Continuous data generation can overwhelm storage and transmission channels. On-device TinyML prioritizes, compresses, and filters data, managing storage and reducing transmission needs.
Diverse Device Landscape: Variety in device types introduces inconsistency in data formats and communication protocols. TinyML standardizes data processing at the source, ensuring unified data representation across devices.
Power and Resource Constraints: Devices, especially battery-operated ones, have limited computational resources. TinyML models maximize computational efficiency, conserving device resources.
Real-Time Processing Needs: Delays in data processing can hinder time-sensitive applications. TinyML ensures rapid on-device processing for immediate responses to data changes.
The varied landscape of IoT devices, each with different data formats and commu-
nication protocols, is harmonized by TinyML, which standardizes data processing and
extraction at the source, ensuring consistent data representation across diverse device types.
Power and resource constraints, especially in battery-operated devices, pose significant chal-
lenges in IoT systems. TinyML models are designed for optimal computational efficiency,
performing tasks effectively without draining device resources. Finally, in applications
that require real-time processing, such as health monitoring or predictive maintenance,
delays in processing can be critical. TinyML enables rapid on-device processing, allowing
immediate responses to changing data patterns, thus enhancing the overall functionality
and effectiveness of large-scale IoT systems.
Voice and face recognition technologies in embedded devices benefit from TinyML through faster
localized processing, enhancing reliability and privacy. TinyML also plays a crucial role
in energy management within smart grids and home automation, optimizing energy use
for cost and environmental benefits. In urban development, it contributes to traffic flow
optimization by analyzing real-time vehicle and pedestrian movements, improving urban
mobility. These examples showcase TinyML’s significant impact in enhancing operational
efficiency, user experience, and sustainable practices across various industries.
Predictive Maintenance: Real-time analysis of sensor data for early fault detection in machinery, reducing downtime and maintenance costs.
Health Monitoring: Continuous health monitoring with wearables for vital signs and anomaly detection, enhancing preventative healthcare.
Smart Agriculture: Adaptive agriculture practices based on sensor data, optimizing resource use for better crop yield.
Voice Recognition: Local processing of voice commands for quicker, privacy-focused responses.
Face Recognition: Low-latency facial recognition for secure access control and personalization.
Anomaly Detection: Immediate detection of irregular patterns in industrial and environmental data for proactive response.
Gesture Control: Touch-free device control via gesture recognition, improving user interaction and accessibility.
Energy Management: Intelligent energy use in smart grids and homes based on usage patterns and predictive analytics.
Traffic Flow Optimization: Real-time traffic analysis for dynamic routing and light sequencing, enhancing urban traffic management.
Environmental Monitoring: Continuous monitoring of environmental conditions, with real-time adjustments and alerts.
Smart Retail: Analysis of customer behavior for tailored retail experiences and store management.

Concrete Materials Damage Classification [63]: Lightweight CNN on an MCU for damage recognition in concrete materials, showing TinyML's potential in structural health.
Predictive Maintenance [64]: TinyML for predictive maintenance in hydraulic systems, improving service quality, performance, and sustainability.
Keyword Spotting [65]: TinyML for efficient keyword detection in voice-enabled devices, reducing processing costs and enhancing privacy.
Time-Series Analysis [66]: ML hardware accelerators for real-time analysis of time-series data in IoT, optimizing neural networks for on-device processing.
Asset Activity Monitoring [67]: TinyML for continuous monitoring of tool usage, identifying usage patterns and potential misuses.
Environmental Monitoring [68]: TinyML for monitoring environmental factors like air quality, contributing to smart systems for sustainability.
Deep Neural Network Optimization [75]: Reduced precision optimization for DNN on-device learning on MCUs.
Unsupervised Online Learning [76]: Adaptive TinyML algorithm for driver behavior analysis in automotive IoT.
Anomaly Detection [77]: TinyML algorithm for anomaly detection in Industry 4.0 using extreme values theory.
Low Precision Quantization [78]: Empirical study on quantization techniques for TinyML efficiency.
Sparse TinyML Accelerator [79]: Development of RAMAN, a re-configurable and sparse TinyML accelerator for edge inference.
Federated Meta-Learning [80]: TinyReptile, a federated meta-learning algorithm for TinyML on MCUs.
Power management also plays a crucial role in IoT systems, particularly those reliant on compact
devices and smart networks. This often includes the adoption of low-power communica-
tion protocols and the integration of autonomous power systems, which are frequently
powered by renewable energy sources [87]. Furthermore, AI-based analytics, processed in
the cloud, are increasingly being utilized for healthcare-related data management, such as
systems designed for managing diabetic patient data [88].
Table 8 illustrates how TinyML is revolutionizing data management techniques in IoT
systems, bringing efficiency and accuracy to various processes. Techniques like predictive
imputation and adaptive data quantization exemplify this transformation. Predictive
imputation, using TinyML, maintains data integrity by filling in missing values based
on historical and neighboring data, thereby ensuring dataset completeness. Adaptive
data quantization, on the other hand, optimizes data storage and transmission. TinyML’s
role here is to analyze current data trends and dynamically adjust quantization levels for
optimal data representation.
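As a rough illustration of the two techniques just described, the sketch below fills missing readings from recent neighboring values and chooses a quantization resolution from the signal's recent variability. The window size, thresholds, and level counts are illustrative assumptions, not parameters taken from Table 8.

```python
# Illustrative sketch of predictive imputation and adaptive quantization;
# the windowing and thresholds are assumptions, not the paper's parameters.
import numpy as np

def predictive_impute(series: np.ndarray, window: int = 5) -> np.ndarray:
    """Fill missing readings (NaN) with the mean of recent valid neighbors."""
    filled = series.astype(float).copy()
    for i in np.flatnonzero(np.isnan(filled)):
        history = filled[max(0, i - window):i]
        history = history[~np.isnan(history)]
        filled[i] = history.mean() if history.size else 0.0
    return filled

def adaptive_quantize(series: np.ndarray, low_levels=16, high_levels=256) -> np.ndarray:
    """Use more quantization levels when the recent signal is volatile."""
    volatile = np.std(series) > 0.1 * np.mean(np.abs(series))
    levels = high_levels if volatile else low_levels
    lo, hi = series.min(), series.max()
    step = (hi - lo) / max(levels - 1, 1) or 1.0  # guard constant signals
    return np.round((series - lo) / step) * step + lo
```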
3. Methodology
In this study, we adopt a structured methodology to investigate the application of
TinyML in handling Big Data challenges within extensive IoT systems. Initially, our
approach involves integrating IoT devices with Raspberry Pi units, which are crucial
for managing the complexities of Big Data characterized by high volume, rapid velocity,
and diverse variety while ensuring accuracy and value extraction.
Subsequently, we concentrate on the technical deployment of TinyML on Raspberry
Pis, focusing on essential tasks such as data cleaning, anomaly detection, and feature
extraction. The effectiveness of these processes is comprehensively evaluated through a
series of tests, ensuring that our approach aligns with the desired outcomes. Moreover, we
introduce a feedback mechanism linked to the central Big Data system, enabling continuous
updates and enhancements to the TinyML models on Raspberry Pis. This methodology is
designed to create an efficient and adaptable system capable of addressing the dynamic
needs of Big Data management in large-scale IoT applications and systems.
Our approach involves deploying these algorithms on Raspberry Pi units, utilizing
their strengths and capabilities in federated learning, anomaly detection, data compression,
caching strategies, and data quality assessment. We systematically evaluate each algo-
rithm’s performance in real-time IoT scenarios, focusing on their efficiency in processing
and managing data. This includes assessing the scalability, responsiveness, and accuracy of
each algorithm in handling the unique data streams generated by IoT devices. By incorpo-
rating these algorithms into our methodology, we aim to provide a comprehensive solution
for Big Data challenges in IoT systems, ensuring robust and efficient data management.
At the Raspberry Pi and TinyML layer, several key tasks are undertaken: data cleaning to remove inconsistencies, anomaly detection to identify
unusual patterns, and feature extraction to select relevant data attributes. Once processed,
the refined data are transmitted to the centralized Big Data system via the communication
layer. Notably, the volume of data being transmitted is reduced due to the preliminary
processing at the Raspberry Pi level. At the top layer, the centralized system performs
further storage, analytics, and processing tasks. A feedback mechanism is incorporated,
allowing the centralized system to send updates to the Raspberry Pis, ensuring continuous
optimization. Overall, this architecture presents a structured methodology for efficient data
processing and management in large-scale IoT settings. The illustration of this architecture
is represented in Figure 1.
Figure 1. Layered architecture of the proposed approach: the IoT layer feeds the Raspberry Pi and TinyML layer, where data cleaning, anomaly detection, and feature extraction are performed; the communication layer transmits the reduced data to the centralized Big Data system, which returns model updates or configuration changes.
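A minimal sketch of this data flow is given below, assuming a generic edge-side model object with `is_anomaly` and `extract_features` methods; the class and method names are placeholders rather than the system's actual interfaces.

```python
# Minimal sketch of the Figure 1 data flow; names and the update mechanism
# are illustrative placeholders, not the implementation used in the study.
from dataclasses import dataclass

@dataclass
class EdgePayload:
    features: list   # extracted attributes, much smaller than the raw data
    anomalies: int   # count of flagged readings in this batch

class EdgeNode:
    def __init__(self, model):
        self.model = model  # TinyML model running on the Raspberry Pi

    def process_batch(self, raw_readings):
        cleaned = [r for r in raw_readings if r is not None]                # data cleaning
        anomalies = sum(1 for r in cleaned if self.model.is_anomaly(r))     # anomaly detection
        features = self.model.extract_features(cleaned)                     # feature extraction
        return EdgePayload(features=features, anomalies=anomalies)          # reduced transmission

    def apply_update(self, new_model):
        # Feedback loop: the central Big Data system pushes refreshed models.
        self.model = new_model
```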
This hardware ensemble, consisting of Raspberry Pi devices and a diverse set of sen-
sors, constitutes the core of our IoT network. It is adeptly designed to handle a wide spec-
trum of data collection and processing operations. The Raspberry Pi 4 models, with their
advanced capabilities, are integral for more demanding computational tasks. In contrast,
the Raspberry Pi Zero W units offer a compact, energy-efficient solution for simpler activities. The assortment of sensors captures a broad range of environmental and physical
parameters, which are vital for the thorough deployment and effectiveness of the TinyML
algorithms central to our research.
• Data Volume:
– The dataset encompasses over 1 terabyte of collected raw sensor data, providing
a substantial foundation for algorithmic testing and optimization.
• Data Collection Frequency:
– Sensor readings are captured at varying intervals, ranging from high-frequency
real-time data streams to periodic updates. This variability simulates different
real-world operational scenarios, ensuring robust algorithm testing.
• Data Preparation:
– Prior to analysis, the data were subjected to essential preprocessing steps, in-
cluding cleaning and normalization, to ensure consistency and reliability for
subsequent TinyML processing.
This dataset, with its rich variety and significant volume, plays a crucial role in the
assessment of our TinyML algorithms. It not only provides a realistic environment for
testing but also ensures that the algorithms are evaluated across a range of conditions
reflective of real-world IoT systems. The frequency of data collection, in particular, allows
us to examine the algorithms’ performance under various data flow scenarios, which is
critical for their application in diverse IoT settings.
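The preprocessing mentioned above can be summarized by the following sketch, which drops incomplete samples and applies per-channel z-score normalization; the exact cleaning rules and scaling used for our dataset are assumptions here.

```python
# A minimal sketch of cleaning plus per-channel z-score normalization.
import numpy as np

def preprocess(readings: np.ndarray) -> np.ndarray:
    """readings: array of shape (samples, channels) of raw sensor values."""
    # Cleaning: drop samples that contain missing (NaN) values.
    cleaned = readings[~np.isnan(readings).any(axis=1)]
    # Normalization: zero mean, unit variance per sensor channel.
    mean = cleaned.mean(axis=0)
    std = cleaned.std(axis=0)
    std[std == 0] = 1.0  # guard constant channels against division by zero
    return (cleaned - mean) / std
```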
Algorithm 1 TinyCleanEDF: Federated Learning for Data Cleaning and Anomaly Detection with Autoencoder-Based Feature Extraction
1: procedure TinyCleanEDF(dataStream)
2:   Step 1: Initialize federated learning models
3:   Partition dataStream into subsets {D_1, D_2, ..., D_n} for distributed processing
4:   Deploy federated learning models {M_1, M_2, ..., M_n} on edge devices
5:   Train each model M_i with its subset D_i
6:   Models periodically execute M_i → Sync(M_i) with the central server
7:   Central server performs Aggregate({M_1, M_2, ..., M_n})
8:   Step 2: Federated model for data cleaning and anomaly detection
9:   Apply f_anomaly(x; M_i) to detect and clean anomalies locally
10:  Anomalies identified as x ∉ ExpectedPattern(M_i)
11:  Cleaned data {C_1, C_2, ..., C_n} sent to the central server
12:  Step 3: Deploy autoencoder for feature extraction
13:  Implement autoencoder AE_i at each node i
14:  Train AE_i to reconstruct input x from compressed representation z
15:  Feature extraction: f_features(x; AE_i) = HiddenLayer(AE_i(x))
16:  Step 4: Continuous adaptation and feature extraction
17:  for each dataPoint in dataStream do
18:    cleanDataPoint ← f_clean(dataPoint; M_i)
19:    anomaly ← f_anomaly(cleanDataPoint; M_i)
20:    features ← f_features(cleanDataPoint; AE_i)
21:    Update M_i and AE_i with dataPoint for continuous learning
22:    Store (cleanDataPoint, anomaly, features)
23:  end for
24: end procedure
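To make Steps 2 through 4 of Algorithm 1 more tangible, the sketch below implements the per-node portion with a small Keras autoencoder: reconstruction error above a threshold flags an anomaly, and the bottleneck layer provides f_features(x; AE_i). The layer sizes and threshold are our assumptions, not the deployed configuration.

```python
# Sketch of the per-node logic of Algorithm 1 (Steps 2-4) with a small Keras
# autoencoder; layer sizes and the anomaly threshold are assumptions.
import numpy as np
import tensorflow as tf

def build_autoencoder(n_inputs: int, code_size: int = 4) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(n_inputs,))
    code = tf.keras.layers.Dense(code_size, activation="relu", name="code")(inputs)
    outputs = tf.keras.layers.Dense(n_inputs)(code)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def process_point(ae: tf.keras.Model, x: np.ndarray, threshold: float):
    """Return (is_anomaly, features) for one cleaned data point."""
    x = x.reshape(1, -1)
    recon_error = float(np.mean((ae.predict(x, verbose=0) - x) ** 2))
    is_anomaly = recon_error > threshold            # x outside the expected pattern
    encoder = tf.keras.Model(ae.input, ae.get_layer("code").output)
    features = encoder.predict(x, verbose=0)[0]     # f_features(x; AE_i)
    return is_anomaly, features
```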
The comparative overview of the proposed algorithms aids in understanding the unique capabilities and functionalities that each algorithm brings to IoT Big Data management.
4. Experimental Results
4.1. Overview
In this section, we assess the performance of our five proposed algorithms—TinyCleanEDF,
EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ—across mul-
tiple key performance metrics. The evaluation is conducted by varying the number of
Raspberry Pi devices used in the deployment, ranging from one to ten. Each algorithm’s
performance is measured across elements such as accuracy, compression efficiency, data
processing time (ms), training time (ms), overall efficiency, and scalability. The metrics
utilized in this work are provided in detail in Section 4.2 below.
1. Data Processing Time (ms): To measure the data processing time in a distributed system such as the proposed one, with multiple Raspberry Pi devices, we consider both the maximum time taken by any single device and the average
time across all devices. The equation is provided in Equation (1).
\[
\text{Data Processing Time}_{\text{total}} = \max(T_1, T_2, \ldots, T_n), \qquad
\text{Data Processing Time}_{\text{avg}} = \frac{\sum_{i=1}^{n} T_i}{n} \tag{1}
\]
2. Model Training Time (ms): For the model training time, we want to measure both
the total cumulative time and the longest individual training time across all devices.
The calculation is provided in Equation (2).
\[
\text{Model Training Time}_{\text{total}} = \sum_{i=1}^{n} T_i, \qquad
\text{Model Training Time}_{\max} = \max(T_1, T_2, \ldots, T_n) \tag{2}
\]
3. Anomaly Detection Accuracy: For a distributed system, we want to consider not only
the overall accuracy but also the consistency of anomaly detection across different
nodes. A weighted approach is used where the accuracy of each node is weighted by
the number of instances it processes. This is provided in Equation (3).
\[
\text{Scalability} = \frac{\text{Throughput at } n \text{ nodes}}{\text{Throughput at a single node}}, \qquad
\text{Response Time Ratio} = \frac{\text{Response Time at } n \text{ nodes}}{\text{Response Time at a single node}}
\]
\[
\text{Load Balancing Efficiency} = \frac{\sum_{i=1}^{n} \text{Load on Node } i}{\text{Ideal Load per Node} \times n} \tag{5}
\]
\[
\text{System Capacity Utilization} = \frac{\text{Total Processed Load}}{\text{Total System Capacity}}, \qquad
\text{Cost-Effectiveness Ratio} = \frac{\text{Total System Cost at } n \text{ nodes}}{\text{Performance Improvement Factor}}
\]
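For reference, the helpers below compute Equations (1), (2), and (5) together with the scalability ratio from per-node measurements; how those measurements are gathered from the Raspberry Pi devices is assumed to happen elsewhere.

```python
# Helpers mirroring Equations (1), (2), and (5); per-node measurements are
# assumed to be collected from the Raspberry Pi devices beforehand.
def processing_time(times_ms):
    """Equation (1): worst-case and average processing time across nodes."""
    return max(times_ms), sum(times_ms) / len(times_ms)

def training_time(times_ms):
    """Equation (2): cumulative and longest individual training time."""
    return sum(times_ms), max(times_ms)

def load_balancing_efficiency(loads, ideal_load_per_node):
    """Equation (5): how evenly work is spread over the n nodes."""
    return sum(loads) / (ideal_load_per_node * len(loads))

def scalability(throughput_n_nodes, throughput_single_node):
    """Throughput ratio relative to a single-node deployment."""
    return throughput_n_nodes / throughput_single_node
```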
Starting with the evaluation of Algorithm 1 (TinyCleanEDF), the results are shown in Figure 2.
Figure 2. Evaluation of TinyCleanEDF across 1 to 10 nodes (log-scaled values) for anomaly detection accuracy, data processing time (ms), model training time (ms), communication efficiency, and scalability.
Incorporating the TinyCleanEDF algorithm into an IoT data management system has
demonstrated quantifiable improvements in several key performance metrics. As shown
in Figure 2, the deployment of this algorithm across an increasing number of nodes, from 1 to 10, has yielded substantial benefits. Specifically, the anomaly detection accuracy improved with more nodes. For instance, accuracy was 10% higher with ten nodes than with a single node. This improvement highlights the algorithm's
enhanced capability to identify and respond to data anomalies as the collaborative network
of nodes expands.
Moreover, the data processing and model training times, both critical for the efficient
operation of IoT systems, show a decreasing trend as more nodes are engaged. Log-
scaled values indicate that processing time decreased fourfold when the number of nodes
increased from 1 to 10, which suggests a notable enhancement in the speed of data handling.
Communication efficiency also saw a rise, which is particularly relevant in scenarios where
network bandwidth is a limiting factor. This increase indicates a more optimal use of
available resources, allowing for smoother data transfer between nodes and the central
server. Lastly, scalability, which is also a significant metric, reflects the algorithm’s ability
to maintain performance despite the growing scale of the network. The consistent upward
trend across nodes validates that TinyCleanEDF is well-suited for environments where
expansion is anticipated, ensuring that the system not only sustains its performance but
actually improves as it scales.
These results underscore the effectiveness of TinyCleanEDF in enhancing data quality
and system robustness, making it a compelling choice for federated learning applications
in distributed networks. Moving on to Algorithm 2, the results are presented in Figure 3.
The integration of the EdgeClusterML algorithm within an edge computing framework
such as FL has yielded remarkable improvements in critical performance metrics. Notably,
the algorithm achieved an impressive accuracy rate of approximately 90% when applied to
real-world data streams. This represents a significant enhancement in the precision of data
clustering, making it well-suited for applications like anomaly detection and data-driven decision-making. The observed increase in accuracy is particularly noteworthy as it directly impacts the algorithm's ability to effectively group data points.
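A minimal sketch of the self-organizing-map update at the core of such clustering is shown below; the reinforcement-learning component that EdgeClusterML uses to adapt the clustering is not modeled here, and the learning rate and neighborhood width are illustrative inputs.

```python
# Minimal self-organizing-map (SOM) update step, sketching the clustering core
# that EdgeClusterML builds on; the RL-driven adaptation of lr/sigma is omitted.
import numpy as np

def som_step(weights: np.ndarray, x: np.ndarray, lr: float, sigma: float) -> int:
    """weights: (rows, cols, dim) SOM grid; x: one data point; returns a cluster label."""
    rows, cols, _ = weights.shape
    # Best matching unit (BMU): grid cell whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # Pull the BMU and its neighbours toward x, weighted by distance on the grid.
    for r in range(rows):
        for c in range(cols):
            grid_dist2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))
            weights[r, c] += lr * h * (x - weights[r, c])
    return int(np.ravel_multi_index(bmu, (rows, cols)))  # cluster label of x
```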
Figure 3. Evaluation of EdgeClusterML across 1 to 10 nodes (log-scaled values) for clustering accuracy (%), clustering speed (ms), resource utilization (%), and adaptability score (%).
Furthermore, our analysis reveals significant reductions in clustering time. Specifically, the algorithm exhibited a reduction of approximately 10% in clustering speed when transitioning from one to ten nodes. These reductions are crucial in edge
computing scenarios, ensuring real-time responsiveness and rapid adaptation to changing
data patterns. The improvement in resource utilization and the adaptability score, each rising by roughly 15% as nodes scaled, signifies more efficient use of resources and data transfer, particularly valuable in resource-constrained edge environments.
In conclusion, EdgeClusterML emerges as a robust solution for edge computing en-
vironments, offering concrete benefits in terms of accuracy, clustering speed, resource
utilization, and adaptability. Its reinforcement-learning-driven dynamic clustering ap-
proach positions it as a valuable asset for real-time data analysis and decision-making in
dynamic edge scenarios. In the next steps, we evaluate Algorithm 3 in Figure 4.
Figure 4. Evaluation of CompressEdgeML across 1 to 10 devices (log-scaled values) for compression efficiency (%), compression speed (ms), data integrity post-compression (%), and resource utilization (%).
The compression speed decreased logarithmically to around 300 ms for 10 devices. This reduction showcases
the algorithm’s capability to handle larger data streams more efficiently, a critical attribute
in real-time edge computing scenarios.
Data integrity post-compression was maintained above 90% across all device configu-
rations, peaking at 98% in a single-device setup. This metric underscores the algorithm’s
reliability in preserving essential data characteristics during the compression process. Re-
source utilization also showed a positive trend, with efficiency increasing from 70% in a
single-device scenario to 85% in a 10-device configuration. This improvement indicates
the algorithm’s scalability and its efficient use of computational resources, which is vital in
resource-constrained edge environments.
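The following sketch conveys the adaptive-compression idea in simplified form: it estimates how redundant a payload is and spends more compression effort only when that is likely to pay off. CompressEdgeML is described as using a neural network for this decision; the byte-entropy heuristic and the zlib levels below are stand-ins for illustration.

```python
# Sketch of adaptive compression in the spirit of CompressEdgeML; the paper's
# neural-network decision is replaced here by a simple entropy-style heuristic.
import zlib
import numpy as np

def estimate_compressibility(payload: bytes) -> float:
    """Rough byte-frequency entropy in [0, 8] bits; lower means more redundant."""
    if not payload:
        return 0.0
    counts = np.bincount(np.frombuffer(payload, dtype=np.uint8), minlength=256)
    probs = counts[counts > 0] / len(payload)
    return float(-(probs * np.log2(probs)).sum())

def adaptive_compress(payload: bytes) -> bytes:
    """Spend more CPU (higher zlib level) only when the data looks redundant."""
    entropy = estimate_compressibility(payload)
    level = 9 if entropy < 5.0 else 3 if entropy < 7.0 else 1
    return zlib.compress(payload, level)
```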
In summary, CompressEdgeML demonstrates robust performance in adaptive data
compression, marked by high compression efficiency, accelerated processing speeds, reli-
able data integrity, and efficient resource utilization. Its adaptability and scalability make it
well-suited for diverse edge computing applications. The next algorithm is Algorithm 4,
which is evaluated in Figure 5.
Figure 5. Evaluation of CacheEdgeML across 1 to 10 devices (log-scaled values) for cache hit rate (%), cache update speed (ms), cloud synchronization latency (ms), and data retrieval efficiency (%).
In our assessment of the CacheEdgeML algorithm, tailored for predictive and tiered
data caching in edge computing settings, we observed substantial enhancements in pivotal
performance metrics. The algorithm exhibited a cache hit rate of 85% in a single-device
environment, which progressively increased to 92% with the addition of more devices.
This upward trend signifies the algorithm’s enhanced accuracy in predicting data requests,
a crucial factor in reducing redundant data retrieval operations.
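A simplified view of predictive caching is sketched below: request popularity guides eviction so that frequently requested items tend to stay cached. The frequency counter stands in for CacheEdgeML's predictive-analytics model, and the capacity and candidate-pool size are arbitrary illustrative choices.

```python
# Sketch of predictive caching in the spirit of CacheEdgeML; the request
# predictor is a simple frequency counter standing in for the paper's model.
from collections import Counter, OrderedDict

class PredictiveCache:
    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self.store = OrderedDict()       # key -> cached value, in recency order
        self.request_counts = Counter()  # crude popularity model

    def get(self, key):
        self.request_counts[key] += 1
        if key in self.store:
            self.store.move_to_end(key)  # cache hit: refresh recency
            return self.store[key]
        return None                      # cache miss: caller fetches from source

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            # Among the least recently used entries, evict the least requested.
            candidates = [k for k in list(self.store)[:8] if k != key]
            victim = min(candidates, key=self.request_counts.__getitem__)
            del self.store[victim]
```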
The cache update speed, a critical measure in dynamic environments, improved
logarithmically from 400 ms for one device to 250 ms for ten devices. This acceleration high-
lights the algorithm’s efficiency in adapting to changing data patterns, thereby optimizing
caching strategies in real time. Cloud synchronization latency, through the HPC server, is
essential for maintaining data consistency between edge and cloud storage, and it was also
optimized, remaining between 95 ms and 99 ms as the number of devices increased, demon-
strating the algorithm’s effectiveness in synchronizing large volumes of data swiftly across
distributed networks. Data retrieval efficiency, indicative of the algorithm’s performance in
providing timely access to cached data, showed, however, a negative trajectory, decreasing
from 200% in single-device setups to 140% in scenarios involving ten devices. This decrease
shows that the algorithm requires more time to streamline data access, particularly in
multi-device edge computing networks where data are distributed.
In summary, CacheEdgeML emerges as a robust and adaptive solution for data caching
in edge computing environments. Its strengths lie in its high cache hit rate, improved cache
update speed, and low cloud synchronization latency. These attributes collectively make it well-suited for distributed, multi-device edge deployments.
Figure 6. Evaluation of TinyHybridSenseQ across 1 to 10 devices (log-scaled values) for data quality score (%), anomaly detection speed (ms), storage efficiency (%), and data transfer latency (ms).
Figure 7. Cache hit rate (%) of the proposed CacheEdgeML method compared with the PPO, MCMC, LFU, LRU, and DRL caching methods for 1 to 10 devices.
As can be seen from Figure 7, the proposed CacheEdgeML method outperforms the other five methods in terms of cache hit rate, reaching above 80% across all assessments. Additionally, we compare our proposed CompressEdgeML method with a similar method, TAC, presented in [72]. The results for compression efficiency are provided in Figure 8, while the compression speed is provided in Figure 9.
Figure 8. Compression efficiency (%) of CompressEdgeML versus TAC for 1 to 10 devices.
Figure 9. Compression speed (log-scaled) of CompressEdgeML versus TAC for 1 to 10 devices.
As can be seen from Figure 8, the compression efficiency of our method is lower than that of TAC at first; however, it matches TAC when utilized on five devices and finally overtakes it on ten devices. Note that TAC was implemented only once in the original work, and its performance remains stable when replicated across our devices. For the compression speed assessment, we replicated the TAC method on our devices and measured compression while increasing the data size for both methods. As can be seen from Figure 9, the compression speed of CompressEdgeML is lower, meaning faster compression, while maintaining good performance across all devices.
Future Work
For future research, several key areas have been identified to further enhance the
capabilities of these algorithms:
1. Anomaly Detection: There is space for incorporating more advanced machine learning
models to enhance the accuracy and speed of anomaly detection, especially in envi-
ronments with complex or noisy data. This will allow for more precise identification
of irregularities, enhancing the overall data integrity.
2. Energy Efficiency: Optimizing the energy consumption of these algorithms is crucial,
particularly in environments where energy resources are limited. Research should
focus on developing energy-efficient methods that reduce the overall energy demand
of the system without sacrificing performance.
3. Cloud–Edge Integration: Enhancing the interaction between edge and cloud platforms
is essential for improved data synchronization and storage efficiency. This involves
developing methods for more seamless data processing and management in hybrid
cloud–edge environments.
Author Contributions: A.K., A.G., C.K., L.T., C.S.M., G.A.K. and S.S., conceived the idea, designed
and performed the experiments, analyzed the results, drafted the initial manuscript, and revised the
final manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: Data are contained within the article.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Mashayekhy, Y.; Babaei, A.; Yuan, X.M.; Xue, A. Impact of Internet of Things (IoT) on Inventory Management: A Literature
Survey. Logistics 2022, 6, 33. [CrossRef]
2. Vonitsanos, G.; Panagiotakopoulos, T.; Kanavos, A. Issues and challenges of using blockchain for IoT data management in smart
healthcare. Biomed. J. Sci. Tech. Res. 2021, 40, 32052–32057.
3. Zaidi, S.A.R.; Hayajneh, A.M.; Hafeez, M.; Ahmed, Q.Z. Unlocking Edge Intelligence Through Tiny Machine Learning (TinyML).
IEEE Access 2022, 10, 100867–100877. [CrossRef]
4. Ersoy, M.; Şansal, U. Analyze Performance of Embedded Systems with Machine Learning Algorithms. In Proceedings of the
Trends in Data Engineering Methods for Intelligent Systems: Proceedings of the International Conference on Artificial Intelligence
and Applied Mathematics in Engineering (ICAIAME 2020), Antalya, Turkey, 18–20 April 2020; Springer: Berlin/Heidelberg,
Germany, 2021; pp. 231–236.
5. Khobragade, P.; Ghutke, P.; Kalbande, V.P.; Purohit, N. Advancement in Internet of Things (IoT) Based Solar Collector for Thermal
Energy Storage System Devices: A Review. In Proceedings of the 2022 2nd International Conference on Power Electronics & IoT
Applications in Renewable Energy and its Control (PARC), Mathura, India, 21–22 January 2022; pp. 1–5. [CrossRef]
6. Ayub Khan, A.; Laghari, A.A.; Shaikh, Z.A.; Dacko-Pikiewicz, Z.; Kot, S. Internet of Things (IoT) Security with Blockchain
Technology: A State-of-the-Art Review. IEEE Access 2022, 10, 122679–122695. [CrossRef]
7. Chauhan, C.; Ramaiya, M.K. Advanced Model for Improving IoT Security Using Blockchain Technology. In Proceedings of the
2022 4th International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 January 2022;
pp. 83–89. [CrossRef]
8. Mohanta, B.K.; Jena, D.; Satapathy, U.; Patnaik, S. Survey on IoT security: Challenges and solution using machine learning,
artificial intelligence and blockchain technology. Internet Things 2020, 11, 100227. [CrossRef]
9. Jiang, Y.; Wang, C.; Wang, Y.; Gao, L. A Cross-Chain Solution to Integrating Multiple Blockchains for IoT Data Management.
Sensors 2019, 19, 2042. [CrossRef]
10. Ren, H.; Anicic, D.; Runkler, T. How to Manage Tiny Machine Learning at Scale: An Industrial Perspective. arXiv 2022,
arXiv:2202.09113.
11. Keserwani, P.K.; Govil, M.C.; Pilli, E.S.; Govil, P. A smart anomaly-based intrusion detection system for the Internet of Things
(IoT) network using GWO–PSO–RF model. J. Reliab. Intell. Environ. 2021, 7, 3–21. [CrossRef]
12. Gibbs, M.; Woodward, K.; Kanjo, E. Combining Multiple tinyML Models for Multimodal Context-Aware Stress Recognition on
Constrained Microcontrollers. IEEE Micro 2023, 1–9. [CrossRef]
13. Chen, Z.; Gao, Y.; Liang, J. LOPdM: A Low-power On-device Predictive Maintenance System Based on Self-powered Sensing and
TinyML. IEEE Trans. Instrum. Meas. 2023, 72, 2525213. [CrossRef]
14. Savanna, R.L.; Hanyurwimfura, D.; Nsenga, J.; Rwigema, J. A Wearable Device for Respiratory Diseases Monitoring in Crowded
Spaces. Case Study of COVID-19. In Proceedings of the International Congress on Information and Communication Technology,
London, UK, 20–23 February 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 515–528.
15. Nguyen, H.T.; Mai, N.D.; Lee, B.G.; Chung, W.Y. Behind-the-Ear EEG-Based Wearable Driver Drowsiness Detection System Using
Embedded Tiny Neural Networks. IEEE Sens. J. 2023, 23, 23875–23892. [CrossRef]
16. Hussein, D.; Bhat, G. SensorGAN: A Novel Data Recovery Approach for Wearable Human Activity Recognition. ACM Trans.
Embed. Comput. Syst. 2023. [CrossRef]
17. Zacharia, A.; Zacharia, D.; Karras, A.; Karras, C.; Giannoukou, I.; Giotopoulos, K.C.; Sioutas, S. An Intelligent Microprocessor
Integrating TinyML in Smart Hotels for Rapid Accident Prevention. In Proceedings of the 2022 7th South-East Europe Design
Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Ioannina, Greece,
23–25 September 2022; pp. 1–7. [CrossRef]
18. Atanane, O.; Mourhir, A.; Benamar, N.; Zennaro, M. Smart Buildings: Water Leakage Detection Using TinyML. Sensors 2023, 23,
9210. [CrossRef]
19. Malche, T.; Maheshwary, P.; Tiwari, P.K.; Alkhayyat, A.H.; Bansal, A.; Kumar, R. Efficient solid waste inspection through
drone-based aerial imagery and TinyML vision model. Trans. Emerg. Telecommun. Technol. 2023, e4878. [CrossRef]
20. Hammad, S.S.; Iskandaryan, D.; Trilles, S. An unsupervised TinyML approach applied to the detection of urban noise anomalies
under the smart cities environment. Internet Things 2023, 23, 100848. [CrossRef]
21. Priya, S.K.; Balaganesh, N.; Karthika, K.P. Integration of AI, Blockchain, and IoT Technologies for Sustainable and Secured Indian
Public Distribution System. In AI Models for Blockchain-Based Intelligent Networks in IoT Systems: Concepts, Methodologies, Tools, and
Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 347–371.
22. Flores, T.; Silva, M.; Azevedo, M.; Medeiros, T.; Medeiros, M.; Silva, I.; Dias Santos, M.M.; Costa, D.G. TinyML for Safe Driving:
The Use of Embedded Machine Learning for Detecting Driver Distraction. In Proceedings of the 2023 IEEE International
Workshop on Metrology for Automotive (MetroAutomotive), Modena, Italy, 28–30 June 2023; pp. 62–66. [CrossRef]
23. Nkuba, C.K.; Woo, S.; Lee, H.; Dietrich, S. ZMAD: Lightweight Model-Based Anomaly Detection for the Structured Z-Wave
Protocol. IEEE Access 2023, 11, 60562–60577. [CrossRef]
24. Shabir, M.Y.; Torta, G.; Basso, A.; Damiani, F. Toward Secure TinyML on a Standardized AI Architecture. In Device-Edge-Cloud
Continuum: Paradigms, Architectures and Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 121–139.
25. Tsoukas, V.; Gkogkidis, A.; Boumpa, E.; Papafotikas, S.; Kakarountas, A. A Gas Leakage Detection Device Based on the Technology
of TinyML. Technologies 2023, 11, 45. [CrossRef]
26. Hayajneh, A.M.; Aldalahmeh, S.A.; Alasali, F.; Al-Obiedollah, H.; Zaidi, S.A.; McLernon, D. Tiny machine learning on the edge: A
framework for transfer learning empowered unmanned aerial vehicle assisted smart farming. IET Smart Cities 2023. [CrossRef]
27. Adeola, J.O.; Degila, J.; Zennaro, M. Recent Advances in Plant Diseases Detection With Machine Learning: Solution for Developing
Countries. In Proceedings of the 2022 IEEE International Conference on Smart Computing (SMARTCOMP), Helsinki, Finland,
20–24 June 2022; pp. 374–380. [CrossRef]
28. Tsoukas, V.; Gkogkidis, A.; Kakarountas, A. A TinyML-Based System for Smart Agriculture. In Proceedings of the 26th
Pan-Hellenic Conference on Informatics, New York, NY, USA, 25–27 November 2023; pp. 207–212. [CrossRef]
29. Nicolas, C.; Naila, B.; Amar, R.C. TinyML Smart Sensor for Energy Saving in Internet of Things Precision Agriculture platform.
In Proceedings of the 2022 Thirteenth International Conference on Ubiquitous and Future Networks (ICUFN), Barcelona, Spain,
5–8 July 2022; pp. 256–259. [CrossRef]
30. Nicolas, C.; Naila, B.; Amar, R.C. Energy efficient Firmware over the Air Update for TinyML models in LoRaWAN agricultural
networks. In Proceedings of the 2022 32nd International Telecommunication Networks and Applications Conference (ITNAC),
Wellington, New Zealand, 30 November–2 December 2022; pp. 21–27. [CrossRef]
31. Viswanatha, V.; Ramachandra, A.C.; Hegde, P.T.; Raghunatha Reddy, M.V.; Hegde, V.; Sabhahit, V. Implementation of Smart
Security System in Agriculture fields Using Embedded Machine Learning. In Proceedings of the 2023 International Conference
on Applied Intelligence and Sustainable Computing (ICAISC), Zakopane, Poland, 18–22 June 2023; pp. 1–6. [CrossRef]
32. Botero-Valencia, J.; Barrantes-Toro, C.; Marquez-Viloria, D.; Pearce, J.M. Low-cost air, noise, and light pollution measuring station
with wireless communication and tinyML. HardwareX 2023, 16, e00477. [CrossRef]
33. Li, T.; Luo, J.; Liang, K.; Yi, C.; Ma, L. Synergy of Patent and Open-Source-Driven Sustainable Climate Governance under Green
AI: A Case Study of TinyML. Sustainability 2023, 15, 13779. [CrossRef]
34. Ihoume, I.; Tadili, R.; Arbaoui, N.; Benchrifa, M.; Idrissi, A.; Daoudi, M. Developing a TinyML-Oriented Deep Learning Model
for an Intelligent Greenhouse Microclimate Control from Multivariate Sensed Data. In Intelligent Sustainable Systems: Selected
Papers of WorldS4 2022; Springer: Berlin/Heidelberg, Germany, 2023; Volume 2, pp. 283–291.
35. Prakash, S.; Stewart, M.; Banbury, C.; Mazumder, M.; Warden, P.; Plancher, B.; Reddi, V.J. Is TinyML Sustainable? Assessing the
Environmental Impacts of Machine Learning on Microcontrollers. arXiv 2023, arXiv:2301.11899.
36. Soni, S.; Khurshid, A.; Minase, A.M.; Bonkinpelliwar, A. A TinyML Approach for Quantification of BOD and COD in Water.
In Proceedings of the 2023 2nd International Conference on Paradigm Shifts in Communications Embedded Systems, Machine
Learning and Signal Processing (PCEMS), Nagpur, India, 5–6 April 2023; pp. 1–6. [CrossRef]
37. Arratia, B.; Prades, J.; Peña-Haro, S.; Cecilia, J.M.; Manzoni, P. BODOQUE: An Energy-Efficient Flow Monitoring System for
Ephemeral Streams. In Proceedings of the Twenty-fourth International Symposium on Theory, Algorithmic Foundations, and
Protocol Design for Mobile Networks and Mobile Computing, Washington, DC, USA, 23–26 October 2023; pp. 358–363.
38. Wardana, I.N.K.; Fahmy, S.A.; Gardner, J.W. TinyML Models for a Low-Cost Air Quality Monitoring Device. IEEE Sens. Lett.
2023, 7, 1–4. [CrossRef]
39. Sanchez-Iborra, R. LPWAN and Embedded Machine Learning as Enablers for the Next Generation of Wearable Devices. Sensors
2021, 21, 5218. [CrossRef]
40. Hussein, M.; Mohammed, Y.S.; Galal, A.I.; Abd-Elrahman, E.; Zorkany, M. Smart Cognitive IoT Devices Using Multi-Layer
Perception Neural Network on Limited Microcontroller. Sensors 2022, 22, 5106. [CrossRef]
41. Prakash, S.; Callahan, T.; Bushagour, J.; Banbury, C.; Green, A.V.; Warden, P.; Ansell, T.; Reddi, V.J. CFU Playground: Full-Stack
Open-Source Framework for Tiny Machine Learning (TinyML) Acceleration on FPGAs. In Proceedings of the 2023 IEEE
International Symposium on Performance Analysis of Systems and Software (ISPASS), Raleigh, NC, USA, 23–25 April 2023;
pp. 157–167. [CrossRef]
42. Gibbs, M.; Kanjo, E. Realising the Power of Edge Intelligence: Addressing the Challenges in AI and tinyML Applications for
Edge Computing. In Proceedings of the 2023 IEEE International Conference on Edge Computing and Communications (EDGE),
Chicago, IL, USA, 2–8 July 2023; pp. 337–343. [CrossRef]
43. Shafique, M.; Theocharides, T.; Reddy, V.J.; Murmann, B. TinyML: Current Progress, Research Challenges, and Future Roadmap.
In Proceedings of the 2021 58th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 5–9 December 2021;
pp. 1303–1306. [CrossRef]
44. Banbury, C.R.; Reddi, V.J.; Lam, M.; Fu, W.; Fazel, A.; Holleman, J.; Huang, X.; Hurtado, R.; Kanter, D.; Lokhmotov, A.; et al.
Benchmarking tinyml systems: Challenges and direction. arXiv 2020, arXiv:2003.04821.
45. Ooko, S.O.; Muyonga Ogore, M.; Nsenga, J.; Zennaro, M. TinyML in Africa: Opportunities and Challenges. In Proceedings of the
2021 IEEE Globecom Workshops (GC Wkshps), Madrid, Spain, 7–11 December 2021; pp. 1–6. [CrossRef]
46. Sanchez-Iborra, R.; Skarmeta, A.F. TinyML-Enabled Frugal Smart Objects: Challenges and Opportunities. IEEE Circuits Syst.
Mag. 2020, 20, 4–18. [CrossRef]
47. Mishra, N.; Lin, C.C.; Chang, H.T. A Cognitive Oriented Framework for IoT Big-data Management Prospective. In Proceedings
of the 2014 IEEE International Conference on Communiction Problem-Solving, Beijing, China, 5–7 December 2014; pp. 124–127.
[CrossRef]
48. Mishra, N.; Lin, C.C.; Chang, H.T. A cognitive adopted framework for IoT big-data management and knowledge discovery
prospective. Int. J. Distrib. Sens. Netw. 2015, 11, 718390. [CrossRef]
49. Huang, X.; Fan, J.; Deng, Z.; Yan, J.; Li, J.; Wang, L. Efficient IoT data management for geological disasters based on big
data-turbocharged data lake architecture. ISPRS Int. J. Geo-Inf. 2021, 10, 743. [CrossRef]
50. Oktian, Y.E.; Lee, S.G.; Lee, B.G. Blockchain-based continued integrity service for IoT big data management: A comprehensive
design. Electronics 2020, 9, 1434. [CrossRef]
51. Lê, M.T.; Arbel, J. TinyMLOps for real-time ultra-low power MCUs applied to frame-based event classification. In Proceedings of
the 3rd Workshop on Machine Learning and Systems, Rome, Italy, 8 May 2023; pp. 148–153.
52. Doyu, H.; Morabito, R.; Brachmann, M. A TinyMLaaS Ecosystem for Machine Learning in IoT: Overview and Research
Challenges. In Proceedings of the 2021 International Symposium on VLSI Design, Automation and Test (VLSI-DAT), Hsinchu,
Taiwan, 19–22 April 2021; pp. 1–5. [CrossRef]
53. Lin, J.; Zhu, L.; Chen, W.M.; Wang, W.C.; Han, S. Tiny Machine Learning: Progress and Futures [Feature]. IEEE Circuits Syst. Mag.
2023, 23, 8–34. [CrossRef]
54. Schizas, N.; Karras, A.; Karras, C.; Sioutas, S. TinyML for Ultra-Low Power AI and Large Scale IoT Deployments: A Systematic
Review. Future Internet 2022, 14, 363. [CrossRef]
55. Alajlan, N.N.; Ibrahim, D.M. TinyML: Enabling of Inference Deep Learning Models on Ultra-Low-Power IoT Edge Devices for AI
Applications. Micromachines 2022, 13, 851. [CrossRef]
56. Han, H.; Siebert, J. TinyML: A Systematic Review and Synthesis of Existing Research. In Proceedings of the 2022 International
Conference on Artificial Intelligence in Information and Communication (ICAIIC), Jeju Island, Republic of Korea, 21–24 February
2022; pp. 269–274. [CrossRef]
57. Andrade, P.; Silva, I.; Silva, M.; Flores, T.; Cassiano, J.; Costa, D.G. A TinyML Soft-Sensor Approach for Low-Cost Detection and
Monitoring of Vehicular Emissions. Sensors 2022, 22, 3838. [CrossRef]
58. Wongthongtham, P.; Kaur, J.; Potdar, V.; Das, A. Big data challenges for the Internet of Things (IoT) paradigm. In Connected
Environments for the Internet of Things: Challenges and Solutions; Springer: Berlin/Heidelberg, Germany, 2017; pp. 41–62.
59. Shu, L.; Mukherjee, M.; Pecht, M.; Crespi, N.; Han, S.N. Challenges and Research Issues of Data Management in IoT for
Large-Scale Petrochemical Plants. IEEE Syst. J. 2018, 12, 2509–2523. [CrossRef]
60. Gore, R.; Valsan, S.P. Big Data challenges in smart Grid IoT (WAMS) deployment. In Proceedings of the 2016 8th International
Conference on Communication Systems and Networks (COMSNETS), Bangalore, India, 5–10 January 2016; pp. 1–6. [CrossRef]
61. Touqeer, H.; Zaman, S.; Amin, R.; Hussain, M.; Al-Turjman, F.; Bilal, M. Smart home security: Challenges, issues and solutions at
different IoT layers. J. Supercomput. 2021, 77, 14053–14089. [CrossRef]
62. Kumari, K.; Mrunalini, M. A Framework for Analysis of Incompleteness and Security Challenges in IoT Big Data. Int. J. Inf. Secur.
Priv. (IJISP) 2022, 16, 1–13. [CrossRef]
63. Zhang, Y.; Adin, V.; Bader, S.; Oelmann, B. Leveraging Acoustic Emission and Machine Learning for Concrete Materials Damage
Classification on Embedded Devices. IEEE Trans. Instrum. Meas. 2023, 72, 2525108. [CrossRef]
64. Moin, A.; Challenger, M.; Badii, A.; Günnemann, S. Supporting AI Engineering on the IoT Edge through Model-Driven TinyML.
In Proceedings of the 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), Los Alamitos, CA,
USA, 27 June–1 July 2022; pp. 884–893. [CrossRef]
65. David, R.; Duke, J.; Jain, A.; Janapa Reddi, V.; Jeffries, N.; Li, J.; Kreeger, N.; Nappier, I.; Natraj, M.; Wang, T.; et al. Tensorflow lite
micro: Embedded machine learning for tinyml systems. Proc. Mach. Learn. Syst. 2021, 3, 800–811.
66. Qian, C.; Einhaus, L.; Schiele, G. ElasticAI-Creator: Optimizing Neural Networks for Time-Series-Analysis for on-Device Machine
Learning in IoT Systems. In Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems; Association for
Computing Machinery: New York, NY, USA, 2023; pp. 941–946. [CrossRef]
67. Giordano, M.; Baumann, N.; Crabolu, M.; Fischer, R.; Bellusci, G.; Magno, M. Design and Performance Evaluation of an
Ultralow-Power Smart IoT Device with Embedded TinyML for Asset Activity Monitoring. IEEE Trans. Instrum. Meas. 2022,
71, 2510711. [CrossRef]
68. Bamoumen, H.; Temouden, A.; Benamar, N.; Chtouki, Y. How TinyML Can be Leveraged to Solve Environmental Problems:
A Survey. In Proceedings of the 2022 International Conference on Innovation and Intelligence for Informatics, Computing, and
Technologies (3ICT), Sakheer, Bahrain, 20–21 November 2022; pp. 338–343. [CrossRef]
69. Athanasakis, G.; Filios, G.; Katsidimas, I.; Nikoletseas, S.; Panagiotou, S.H. TinyML-based approach for Remaining Useful Life
Prediction of Turbofan Engines. In Proceedings of the 2022 IEEE 27th International Conference on Emerging Technologies and
Factory Automation (ETFA), Stuttgart, Germany, 6–9 September 2022; pp. 1–8. [CrossRef]
70. Silva, M.; Signoretti, G.; Flores, T.; Andrade, P.; Silva, J.; Silva, I.; Sisinni, E.; Ferrari, P. A data-stream TinyML compression
algorithm for vehicular applications: A case study. In Proceedings of the 2022 IEEE International Workshop on Metrology for
Industry 4.0 & IoT (MetroInd4.0&IoT), Trento, Italy, 7–9 June 2022; pp. 408–413. [CrossRef]
71. Ostrovan, E. TinyML On-Device Neural Network Training. Master’s Thesis, Politecnico di Milano, Milan, Italy, 2022.
72. Signoretti, G.; Silva, M.; Andrade, P.; Silva, I.; Sisinni, E.; Ferrari, P. An Evolving TinyML Compression Algorithm for IoT
Environments Based on Data Eccentricity. Sensors 2021, 21, 4153. [CrossRef]
73. Sharif, U.; Mueller-Gritschneder, D.; Stahl, R.; Schlichtmann, U. Efficient Software-Implemented HW Fault Tolerance for TinyML
Inference in Safety-critical Applications. In Proceedings of the 2023 Design, Automation & Test in Europe Conference & Exhibition
(DATE), Antwerp, Belgium, 17–19 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6.
74. Fedorov, I.; Matas, R.; Tann, H.; Zhou, C.; Mattina, M.; Whatmough, P. UDC: Unified DNAS for compressible TinyML models.
arXiv 2022, arXiv:2201.05842.
75. Nadalini, D.; Rusci, M.; Benini, L.; Conti, F. Reduced Precision Floating-Point Optimization for Deep Neural Network On-Device
Learning on MicroControllers. arXiv 2023, arXiv:2305.19167.
76. Silva, M.; Medeiros, T.; Azevedo, M.; Medeiros, M.; Themoteo, M.; Gois, T.; Silva, I.; Costa, D.G. An Adaptive TinyML
Unsupervised Online Learning Algorithm for Driver Behavior Analysis. In Proceedings of the 2023 IEEE International Workshop
on Metrology for Automotive (MetroAutomotive), Modena, Italy, 28–30 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 199–204.
77. Pereira, E.S.; Marcondes, L.S.; Silva, J.M. On-Device Tiny Machine Learning for Anomaly Detection Based on the Extreme Values
Theory. IEEE Micro 2023, 43, 58–65. [CrossRef]
78. Zhuo, S.; Chen, H.; Ramakrishnan, R.K.; Chen, T.; Feng, C.; Lin, Y.; Zhang, P.; Shen, L. An empirical study of low precision
quantization for tinyml. arXiv 2022, arXiv:2203.05492.
79. Krishna, A.; Nudurupati, S.R.; Dwivedi, P.; van Schaik, A.; Mehendale, M.; Thakur, C.S. RAMAN: A Re-configurable and Sparse
tinyML Accelerator for Inference on Edge. arXiv 2023, arXiv:2306.06493.
80. Ren, H.; Anicic, D.; Runkler, T.A. TinyReptile: TinyML with Federated Meta-Learning. arXiv 2023, arXiv:2304.05201.
81. Ren, H.; Anicic, D.; Runkler, T.A. Towards Semantic Management of On-Device Applications in Industrial IoT. ACM Trans.
Internet Technol. 2022, 22, 1–30. [CrossRef]
82. Chen, J.I.Z.; Huang, P.F.; Pi, C.S. The implementation and performance evaluation for a smart robot with edge computing
algorithms. Ind. Robot. Int. J. Robot. Res. Appl. 2023, 50, 581–594. [CrossRef]
83. Mohammed, M.; Srinivasagan, R.; Alzahrani, A.; Alqahtani, N.K. Machine-Learning-Based Spectroscopic Technique for Non-
Destructive Estimation of Shelf Life and Quality of Fresh Fruits Packaged under Modified Atmospheres. Sustainability 2023, 15,
12871. [CrossRef]
84. Koufos, K.; EI Haloui, K.; Dianati, M.; Higgins, M.; Elmirghani, J.; Imran, M.A.; Tafazolli, R. Trends in Intelligent Communication
Systems: Review of Standards, Major Research Projects, and Identification of Research Gaps. J. Sens. Actuator Netw. 2021, 10, 60.
[CrossRef]
85. Ahad, M.A.; Tripathi, G.; Zafar, S.; Doja, F. IoT Data Management—Security Aspects of Information Linkage in IoT Systems.
In Principles of Internet of Things (IoT) Ecosystem: Insight Paradigm; Peng, S.L., Pal, S., Huang, L., Eds.; Springer International
Publishing: Cham, Switzerland, 2020; pp. 439–464. [CrossRef]
86. Liu, R.W.; Nie, J.; Garg, S.; Xiong, Z.; Zhang, Y.; Hossain, M.S. Data-Driven Trajectory Quality Improvement for Promoting
Intelligent Vessel Traffic Services in 6G-Enabled Maritime IoT Systems. IEEE Internet Things J. 2021, 8, 5374–5385. [CrossRef]
87. Hnatiuc, B.; Paun, M.; Sintea, S.; Hnatiuc, M. Power management for supply of IoT Systems. In Proceedings of the 2022
26th International Conference on Circuits, Systems, Communications and Computers (CSCC), Crete, Greece, 19–22 July 2022;
pp. 216–221. [CrossRef]
88. Rajeswari, S.; Ponnusamy, V. AI-Based IoT analytics on the cloud for diabetic data management system. In Integrating AI in IoT
Analytics on the Cloud for Healthcare Applications; IGI Global: Hershey, PA, USA, 2022; pp. 143–161.
89. Karras, A.; Karras, C.; Giotopoulos, K.C.; Tsolis, D.; Oikonomou, K.; Sioutas, S. Peer to peer federated learning: Towards
decentralized machine learning on edge devices. In Proceedings of the 2022 7th South-East Europe Design Automation, Computer
Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Ioannina, Greece, 23–25 September 2022;
IEEE: Piscataway, NJ, USA, 2022; pp. 1–9.
90. Karras, A.; Karras, C.; Giotopoulos, K.C.; Tsolis, D.; Oikonomou, K.; Sioutas, S. Federated Edge Intelligence and Edge Caching
Mechanisms. Information 2023, 14, 414. [CrossRef]
91. Karras, A.; Karras, C.; Karydis, I.; Avlonitis, M.; Sioutas, S. An Adaptive, Energy-Efficient DRL-Based and MCMC-Based Caching
Strategy for IoT Systems. In Algorithmic Aspects of Cloud Computing; Chatzigiannakis, I., Karydis, I., Eds.; Springer: Cham,
Switzerland, 2024; pp. 66–85.
92. Meddeb, M.; Dhraief, A.; Belghith, A.; Monteil, T.; Drira, K. How to cache in ICN-based IoT environments? In Proceedings
of the 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia,
30 October–3 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1117–1124.
93. Wang, S.; Zhang, X.; Zhang, Y.; Wang, L.; Yang, J.; Wang, W. A survey on mobile edge networks: Convergence of computing,
caching and communications. IEEE Access 2017, 5, 6757–6779. [CrossRef]
94. Zhong, C.; Gursoy, M.C.; Velipasalar, S. A deep reinforcement learning-based framework for content caching. In Proceedings
of the 2018 52nd Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 21–23 March 2018;
IEEE: Piscataway, NJ, USA, 2018; pp. 1–6.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.