
Tiny ML-based Compressive Sensing

İdil Bensu Çılbır (Student)


August 2022

1 Introduction
Connecting "things" to the Internet gave rise to the term "Internet of Things"
(IoT), which focuses on linking disparate devices together in a network. The
widespread expansion of Internet of Things applications has been made possible
by recent advancements in computer vision technologies. IoT makes it possible
to continuously monitor equipment and environments with small, inexpensive
sensors. From this perspective, advancements in sensors, microcontrollers, and
communication protocols have sped up the development of new IoT platforms
and their arrival on the market. The rise in global deployments has made IoT a
sought-after market, with numerous advantages for a variety of application fields.
Although some computer vision tasks can be completed locally, many tasks
require the enormous computation and storage capacity that the cloud offers,
since these applications frequently involve intensive calculation and data
processing. Transferring data efficiently is therefore essential for many IoT
devices. Due to low transmit power, high path loss, and collisions, current
solutions rely on Low Power Wide Area Networks (LPWANs), such as NB-IoT, which
often have very little bandwidth and a high packet loss rate. It is possible to
increase the network's dependability; however, this adds complexity and data
retransmission, which sharply reduces the overall network capacity.
So far, prior work has examined how data can be received from sources with the
least information loss and how the data can be reconstructed after being
compressed at an optimal capacity, computed with a simple learning technique.
Simulating this case showed that some sources were not successfully
reconstructed, possibly because optimizing the capacity requires high
computation. That raises the question of which techniques allow low-power but
efficient compression. Tiny Machine Learning (Tiny-ML) is the answer proposed
here. The details of how Tiny-ML can be applied to compression will be
explained later. First, Tiny-ML will be defined and its related work discussed.

2 Background
This section defines Tiny-ML, explains how its mechanism works, and compares
it with traditional ML.

2.1 Tiny-ML
Tiny Machine Learning, as the name suggests, gives extremely resource-constrained
IoT devices like microcontroller units cognitive capabilities. For machine learn-
ing (ML) applications that use miniature models for incredibly energy-efficient
inference, the term ”Tiny-ML” was developed. Additionally, the absence of a
resource-rich Operating Systems (OS) on MCUs broadens the definition to in-
clude constrained-IoT devices without ML. The development of EdgeML, which
lessens the infrastructure requirements for persistent data transport and recep-
tion in traditional IoT, is specifically where Tiny-ML’s roots can be found.
Similar to how Tiny-ML is not seen as a replacement for the Fog or Cloud
paradigms but as an additional tool that complements them.
Training, optimizing, and deploying are the three main parts of the Tiny-ML
process. The architecture can be decomposed into conventional ML and TinyML
components: the first includes the de facto standard phases of classic ML
techniques, such as data collection, algorithm selection, model training, and
optimization; the latter, which is specific to TinyML, comprises the model
porting and deployment phases [6].
TinyML can thus be used to investigate methods for implementing machine
learning models on low-power devices. The next section reviews previous work
on Tiny-ML applications and Tiny-ML based compression approaches.
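The optimization phase mentioned above typically shrinks a trained model so it fits MCU memory. One common step is post-training weight quantization to int8. The following is a minimal, framework-free sketch of that idea (the tensor shape and symmetric-scale scheme are illustrative assumptions, not a specific toolchain's method):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a float32 weight tensor to int8."""
    scale = np.max(np.abs(w)) / 127.0            # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal((128, 64)).astype(np.float32)   # a dense layer's weights
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q.nbytes, w.nbytes)                        # the int8 copy is 4x smaller
print(np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6) # rounding error bounded by scale/2
```

The 4x size reduction is exactly the kind of saving that makes models fit in the sub-1 MB memory budgets discussed in the next section.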

3 Related Works
Tiny-ML's inception has opened up new IoT-driven ML research directions, and
the relevant research has grown in importance and promise. This section gives
a brief overview of such research. In deep learning-based compression systems,
neural networks are used to model the image transform. Recent work
specifically uses autoencoders, composed of an encoder and a decoder, to map a
signal to a quantized form; the decoder then recreates the original signal. To
permit end-to-end training despite the non-differentiable character of
quantization, continuous relaxation techniques have been suggested, while
other works simply replace the derivative with a smooth approximation. This
section also describes specific machine learning based compression applications.
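The derivative-replacement workaround mentioned above is often implemented as the straight-through estimator: the forward pass rounds, while the backward pass pretends rounding is the identity. A minimal hand-rolled sketch (illustrative only, with the forward/backward split written out by hand rather than via an autodiff framework):

```python
import numpy as np

def quantize_forward(x):
    # Forward pass: hard rounding, whose true gradient is zero almost everywhere.
    return np.round(x)

def quantize_backward(grad_out):
    # Straight-through estimator: pass the upstream gradient through unchanged,
    # as if the rounding step were the identity function.
    return grad_out

x = np.array([0.2, 1.7, -0.6])
y = quantize_forward(x)
print(y)                                  # [ 0.  2. -1.]
g = quantize_backward(np.array([0.1, 0.1, 0.1]))
print(g)                                  # gradients survive the quantizer
```

Without this trick, the zero gradient of rounding would stop any learning signal from reaching the encoder.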
In light of these difficulties, the article [2] offers TinyFedTL, the first
open-sourced implementation of federated learning (FL) at the most
resource-constrained level of the IoT, which includes microcontroller-unit
(MCU) and small-CPU based devices. Using transfer learning as a representative
task, it shows that privacy-centric FL on devices with a modest memory
footprint (less than 1 MB) is not only feasible but also efficient. Although
tempting, machine learning on tiny MCU-based IoT devices is difficult, because
microcontrollers have 2-3 orders of magnitude less memory than even mobile
phones. To enable ImageNet-scale inference on microcontrollers, the article
[4] presents MCUNet, a framework that jointly designs an effective neural
architecture and a lightweight inference engine.
Beyond training and optimizing models for MCUs, Tiny-ML is also preferred for
image compression, and this line of work inspires the present proposal. The
article [1] suggests Starfish, an architecture with improved compression
ratios that handles packet loss gracefully. Its deep neural network
architecture is meticulously planned, and Tiny-ML models that run on extremely
low-power, low-cost AI accelerators for IoT are found using an AutoML
technique. A similar problem is studied in [3], which proposes the first
compact generative image compression technique intended exclusively for
picture compression on microcontrollers. The decoder side is kept fixed, and
the encoder is based on the well-known MobileNetV2 network design. The impact
of various training procedures is examined to deal with the resulting
unbalanced design of the compression pipeline.
The purpose of the article [7] is to assess the effects of a compression
algorithm (Tiny Anomaly Compress, TAC) on the performance of a microcontroller
in real vehicular applications. It was found that the embedded algorithm does
not significantly impact the microcontroller's processing time. In another
paper [5], a Tiny-ML approach for hardware-efficient channel estimation and
signal detection is presented. Its key innovation is that each large dense
layer of the DNN is replaced with three small cascading sub-layers, so the
computation and storage of a large matrix are replaced by those of small ones.
A novel rank-restricted back-propagation algorithm is also designed to enable
lightweight training of this Tiny-ML.
As can be seen, the range of Tiny-ML applications is widening. The focus now
turns to how Tiny-ML can be used to compress the signals travelling from
sources (base stations) to the centre (Software Defined Network) efficiently,
at low cost and with little computation.

4 Proposed Approach
In the previous work, the information in signals coming from the sources is
reconstructed. The priority of each reconstructed signal is decided by
calculating the entropy of the information it carries. According to the
numerical results of that work, some signals are reconstructed perfectly while
others are reconstructed poorly; in other words, some reconstructed signals
have lost information. This possibly happened because optimizing the maximum
capacity of compressive sensing requires high computation.
Reconstructing a signal takes input from all bytes of the representation
rather than relying on specific bytes. Ideally, the signal transmission
between the source and the central server happens without any loss. When
lossless transmission cannot be assumed, a loss-resilient, unstructured
compressed representation that degrades gracefully should be generated
instead. In the application space, the ability to tolerate packet loss has a
considerable positive impact on network bandwidth and energy use.
If the Tiny-ML model is made aware of the content and objectives of the
compressed signals being sent, then it is also aware of the compression
technique. Many works have focused on applying machine learning to compressive
sensing, for example by designing a sensing matrix via maximization. The plan
here is to embed a Tiny-ML method into the compressive sensing process.
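Before any learning is added, the classical compressive sensing pipeline can be sketched as follows: measure a sparse signal with a random sensing matrix and recover it greedily. The sizes and the use of Orthogonal Matching Pursuit here are illustrative assumptions for a baseline, not the previous work's exact method:

```python
import numpy as np

def omp(M, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = M @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(M.shape[1])
    for _ in range(k):
        # Pick the column of M most correlated with the current residual.
        support.append(int(np.argmax(np.abs(M.T @ residual))))
        # Least-squares fit of y on the columns selected so far.
        coef, *_ = np.linalg.lstsq(M[:, support], y, rcond=None)
        residual = y - M[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                            # signal length, measurements, sparsity
M = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = M @ x                                      # m compressed measurements, m << n
x_rec = omp(M, y, k)
print(np.linalg.norm(x - x_rec))               # near zero when recovery succeeds
```

The proposal below replaces the random choice of M with a learned one, precisely because a random matrix gives no guarantee of success for every signal.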
Examining the compressive sensing equation shows that generating the sensing
matrix is the most important step for reconstructing all signals perfectly.
The previous work used a generative model technique for this purpose, but
since this required higher computation, not all sparse signals were
reconstructed perfectly. Thus, the method proposed here for a sensing matrix
that reconstructs the signal flawlessly is Tiny-ML with a DNN architecture. In
other words, the objective is to reconstruct the communication signal
perfectly by optimizing a sensing matrix that is aware of the information loss.
To describe this mathematically, training minimizes L(x_t, x′_t), where L(., .)
is a loss function, x_t is the original signal at time t, and x′_t = R(M, x_t)
is the reconstructed signal, with R a linear reconstruction operator that
depends on the sensing matrix M. The loss can be expressed as the distance
between the original and reconstructed signals at time t. The aim of Tiny-ML
is then to solve the optimization problem for the sensing matrix shown in
equation (1):

M = arg min_M (1/T) Σ_{t=1}^{T} L(R(M, x_t), x_t)    (1)
The loss function may be computed with the well-known squared L2 norm. If this
optimization problem is minimized with a Tiny-ML method, then all signals may
be reconstructed perfectly at low cost and with little computation, yet with
more efficient performance.
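As a rough numerical sketch of equation (1), a toy version with a linear reconstruction operator R(M, x) = D M x can be trained by plain gradient descent on the squared L2 loss. The signal model, matrix sizes, and learning rate below are illustrative assumptions, not the report's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 32, 12, 500
# Toy training signals living in a 6-dimensional subspace (i.e., compressible).
basis = rng.standard_normal((n, 6)) / np.sqrt(n)
X = basis @ rng.standard_normal((6, T))        # column t is the signal x_t

M = 0.1 * rng.standard_normal((m, n))          # sensing matrix (trainable)
D = 0.1 * rng.standard_normal((n, m))          # linear reconstructor: R(M, x) = D M x
lr = 0.01

def loss(M, D):
    E = X - D @ (M @ X)                        # per-signal reconstruction errors
    return float(np.mean(np.sum(E**2, axis=0)))

loss_start = loss(M, D)
for _ in range(5000):
    E = X - D @ (M @ X)
    # Gradients of (1/T) * sum_t ||x_t - D M x_t||^2 w.r.t. D and M.
    gD = (-2.0 / T) * E @ (M @ X).T
    gM = (-2.0 / T) * D.T @ E @ X.T
    D -= lr * gD
    M -= lr * gM
print(loss_start > loss(M, D))                 # training reduces the average loss
```

In the actual proposal R would be a DNN small enough for Tiny-ML deployment rather than a single matrix D, but the training objective is the same average-loss minimization over the sensing matrix.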

5 Conclusion
To sum up, the aim of this report is to propose an approach that maximizes the
efficiency of compressive sensing. A recent machine learning technique called
Tiny-ML is applied to compressive sensing. Related papers on Tiny-ML were
reviewed, and the way Tiny-ML can be applied to compressive sensing was
described both theoretically and mathematically, together with a corresponding
objective. If this method is approved and found applicable, the steps of the
process can then be expressed with a full mathematical formulation, and a
simple simulation can be run to obtain numerical results for analysis.

References
[1] Pan Hu et al. “Starfish: resilient image compression for AIoT cameras”. In:
Proceedings of the 18th Conference on Embedded Networked Sensor Systems
(SenSys ’20). Nov. 2020, pp. 395–408. doi: 10.1145/3384419.3430769.
[2] Kavya Kopparapu et al. “TinyFedTL: Federated Transfer Learning on
Ubiquitous Tiny IoT Devices”. In: 2022 IEEE International Conference
on Pervasive Computing and Communications Workshops and other Af-
filiated Events (PerCom Workshops). 2022, pp. 79–81. doi:
10.1109/PerComWorkshops53856.2022.9767250.
[3] Nikolai Körber et al. “Tiny Generative Image Compression for Bandwidth-
Constrained Sensor Applications”. In: 2021 20th IEEE International Con-
ference on Machine Learning and Applications (ICMLA). 2021, pp. 564–
569. doi: 10.1109/ICMLA52953.2021.00094.
[4] Ji Lin et al. “MCUNet: Tiny Deep Learning on IoT Devices”. In: Advances in
Neural Information Processing Systems 33 (2020), pp. 11711–11722.
[5] Hongfu Liu et al. “Tiny Machine Learning (Tiny-ML) for Efficient Channel
Estimation and Signal Detection”. In: IEEE Transactions on Vehicular
Technology 71.6 (2022), pp. 6795–6800. doi: 10.1109/TVT.2022.3163786.
[6] Visal Rajapakse, Ishan Karunanayake, and Nadeem Ahmed. “Intelligence
at the Extreme Edge: A Survey on Reformable TinyML”. In: arXiv preprint
arXiv:2204.00827 (2022).
[7] Marianne Silva et al. “A data-stream TinyML compression algorithm for
vehicular applications: a case study”. In: 2022 IEEE International Work-
shop on Metrology for Industry 4.0 IoT (MetroInd4.0IoT). 2022, pp. 408–
413. doi: 10.1109/MetroInd4.0IoT54413.2022.9831606.
