


MACHINE LEARNING AT
RESOURCE CONSTRAINT EDGE
DEVICE USING BONSAI ALGORITHM

Soumyalatha Naveen
Research Scholar
School of Computing & IT
REVA University, Bangalore, Karnataka, India
[email protected]

Manjunath R Kounte
Associate Professor
School of Electronics and Communication Engineering
REVA University, Bangalore, Karnataka, India
[email protected]

Abstract- Worldwide, billions of devices are connected to each other and interact with the surrounding environment to collect data based on context. Using machine learning algorithms, intelligence can be incorporated into these Internet of Things (IoT) devices to draw valuable insights from the data for accurate predictions. The machine learning model is deployed onto the devices so that decisions are made locally. This enables fast, accurate prediction within a few milliseconds by avoiding data transmission to the cloud, making it well suited to real-time applications. In this paper, an experiment is conducted on a publicly available dataset with the Bonsai algorithm. The algorithm is implemented in a Linux environment on a Core i5 processor in Python 2.7 and achieves 92% accuracy with a model size of 6.25 KB, which can easily be deployed on resource-constrained IoT devices.

Keywords-Machine Learning, Resource constraint, Edge Computing, IoT, Bonsai

I. INTRODUCTION

Internet of Things applications are used in a plethora of domains such as healthcare, smart cities, smart precision agriculture, etc. These IoT devices collect data based on context and transmit it to the cloud for decision making. Because decision making requires time to process the data and produce insights, this approach is not suitable for real-time applications in which data must be processed and results produced within a few milliseconds.

For real-time applications, an alternative approach to reduce prediction time is to process the data between the end device and the cloud services, which leads to Fog computing, or to process it on the edge device itself, which leads to the Edge computing paradigm. In a distributed computing environment, the device with more computational power acts as a Fog server; carrying out computation on a substantial amount of data there is what is referred to as Fog computing. In Edge computing, computation is enabled within the device, or very close to the end user, for storing and processing data rather than transmitting it to the massive computational services provided by the Cloud.

A. Motivation

Machine learning applications are successful in various domains, and Edge computing is a promising technology that pushes computation to the end user. Though IoT devices have limited computational power, energy, and memory resources for deploying machine learning, a lot of research is going on to embed intelligence into IoT devices.

As shown in Fig. 1, a literature survey conducted on IEEE publications with the keyword "Edge computing and Machine Learning" clearly depicts a dramatic increase over the last few years in the research community's interest in deploying machine learning on resource-constrained edge devices, which is the motivation for writing this paper.

Fig 1. Edge Computing with Machine learning interest (IEEE)

The outline of the article is as follows. Section II introduces machine learning at IoT edge devices and various architectural designs. Following that, we discuss the Bonsai algorithm, followed by its applications. Finally, we present the results, followed by the conclusion in the last section.

II. EDGE MACHINE LEARNING
The enormous growth of IoT devices such as smart devices and smartphones generates huge amounts and varieties of data. These smart devices deployed at the network edge are called edge devices. Edge devices with more computational capability are commonly called edge servers [1]. Edge computing [2,3] refers to the computing capabilities of edge devices and edge servers near end users [4]. Intelligent processing and data analytics happen through various machine learning techniques [5,6] at the IoT edge devices, closer to the data source. Machine learning at the edge is a combination of two paradigms, as shown in Fig. 2.

Fig. 2. Machine learning with Edge Computing

A. Edge Devices

The machine learning with edge computing approach is used in various applications, which use a variety of devices as edge devices or edge servers. A few of these edge devices [7] are listed in Table I; such devices usually have limited memory and computational power, yet produce large volumes of data.

TABLE I: Commonly used Edge devices
B. Architectural design for Fast Inference

Machine learning models [8,9] are designed to be embedded within IoT devices to provide fast, accurate results with reduced latency on resource-constrained devices. In this context, a few architectural designs [10] and strategies developed for time-stringent applications are discussed below.
1) Device centric computation (On-device Computation)

Machine learning models are trained on a local system or in the cloud; training takes place offline. This is referred to as on-device computation [11] or edge computing [12]. As shown in Fig. 3, the trained model is later deployed onto the edge device, where predictions are made quickly through the model's mathematical operations.

Fig. 3. Edge Computing


2) Edge-server based computation (Fog Computing)

Data generated by IoT edge devices is sent from the edge devices to an edge server for processing, as shown in Fig. 4. An edge device with more computational power acts as the edge server. Because the major processing happens at the edge server, latency is again reduced compared to the response time taken by cloud services. This kind of architecture is termed fog computing [13,14], and it also makes the approach suitable for time-stringent applications.

Fig. 4. Fog Computing

3) Distributed Computing (Joint Computing)

Instead of the aforementioned centralized architectures, we can also have a distributed architecture in which multiple end devices communicate with an edge server. Though data processing happens on the edge server, a huge amount of data is also processed in the cloud, as shown in Fig. 5.

Fig. 5. Distributed Computing

4) Model complexity reduction Techniques

Machine learning models are moving closer to end devices, which have constraints on computing capability, memory, and energy, so several algorithms aim to provide resource-efficient techniques for resource-constrained devices. Pruning, quantization, regularization, hyperparameter tuning, and lightweight frameworks [15] are some of the techniques that increase the throughput of a trained neural network embedded on a resource-constrained device.
a) Pruning

Pruning is one of the optimization techniques used in machine learning to reduce the size of a model and improve its performance. The process is shown in Fig. 6.

Fig. 6. Pruning in Machine learning

The main objective of this technique is to reduce the size of the network by removing weight connections that have little influence on the output, as shown in Fig. 7. Though pruning is an old concept, it has gained a lot of attention with the on-device and edge computing paradigms, because deploying smaller machine learning models on mobile and IoT devices increases speed and decreases model size while retaining high accuracy.

A pre-trained network, along with its weights, is the input to pruning. Pruning removes the connections and parameters of the model that do not affect the final output, and the pruned model then needs to be recompiled.

Fig. 7. Weight Pruning
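As an illustration, a minimal magnitude-based weight-pruning sketch in NumPy follows; the percentile threshold, layer shape, and sparsity level are our own assumptions rather than anything prescribed by the paper.

# Minimal sketch: zero out the smallest-magnitude weight connections.
import numpy as np

def prune_weights(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.percentile(np.abs(weights), sparsity * 100)
    mask = np.abs(weights) >= threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))              # a pre-trained layer's weight matrix
w_pruned = prune_weights(w, sparsity=0.8)  # remove 80% of connections
print("nonzero before:", np.count_nonzero(w), "after:", np.count_nonzero(w_pruned))

In practice the pruned model is then fine-tuned (recompiled) so the remaining weights compensate for the removed connections, as noted above.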
b) Compression

Machine learning models are computation and memory intensive, which makes them difficult to deploy on embedded systems. Compression algorithms can be used to reduce the storage requirement without loss of accuracy.

c) Quantization

Quantization techniques are applied during training and reduce the number of bits used for the model's weights and activations, which helps to improve inference speed.
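The idea can be sketched with a simple post-training 8-bit quantizer: float32 weights are mapped to int8 with a per-tensor scale, shrinking storage by roughly 4x. (The paper applies quantization during training; this simpler post-training variant is our own illustrative assumption.)

# Minimal sketch of 8-bit weight quantization with a per-tensor scale.
import numpy as np

def quantize_int8(w):
    scale = np.max(np.abs(w)) / 127.0               # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(256,)).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, scale))))
print("bytes:", w.nbytes, "->", q.nbytes)           # 1024 -> 256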
d) Hyperparameter Tuning

Hyperparameters such as the number of epochs, the learning rate, and the activation function are tuned to maximise performance.
5) Performance Metrics

For most AI-driven edge device applications, designing a machine learning capability or inference model for high accuracy on edge devices is subject to many additional constraints imposed by the device's resources, which affect the performance parameters [16] shown in Table II.

Table II: Performance Metrics

Parameter         | Description                                                    | Objective
Latency           | Time taken for a single input and its response                 | Reduce the inference latency
Energy            | Edge devices such as smartphones are battery powered           | Minimize energy consumption or prolong battery lifetime
Network Bandwidth | Amount of data transmitted over the internet                   | Minimal usage of network bandwidth
Accuracy          | Correctly predicted observations out of the total observations | High
Memory            | Storage required to deploy the model and store its parameters  | Optimum (minimal)
Privacy           | Maintaining the privacy of the data                            | High
Throughput        | Rate at which input data is processed                          | Maximum
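As a small illustration of how the latency and throughput entries in Table II can be measured in practice, the timing sketch below works for any model exposing a predict function; dummy_predict and the input shape are placeholders, not artifacts of the paper.

# Measure average per-input latency and overall throughput of a predictor.
import time
import numpy as np

def measure(model_predict, x, runs=100):
    start = time.perf_counter()
    for _ in range(runs):
        model_predict(x)
    elapsed = time.perf_counter() - start
    latency_ms = 1000.0 * elapsed / runs   # time per single input (Table II: Latency)
    throughput = runs / elapsed            # inputs processed per second (Table II: Throughput)
    return latency_ms, throughput

dummy_predict = lambda x: x.sum()          # stand-in for a real model's predict()
lat, thr = measure(dummy_predict, np.zeros(256))
print("latency: %.3f ms, throughput: %.0f inferences/s" % (lat, thr))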

III. IMPLEMENTATION OF BONSAI

Due to the exponential growth in the development of efficient machine learning algorithms, the reduced cost of IoT devices, and effective programming technologies, the use of microcontrollers and other low-power devices such as the Arduino and Raspberry Pi has increased exponentially.

Machine learning models can be used with IoT devices for accurate interpretation, classification, and decision making. A machine learning model has a major footprint in terms of storage, prediction latency, and energy. Real-time predictions can be made locally on IoT devices without connecting to the cloud. This kind of infrastructure can be used when applications face a lack of connectivity or have low-latency, low-energy, and data-privacy requirements.

Generally, tree algorithms can be used for classification and regression problems. Though trees are suitable for IoT applications, pruning a tree so that it fits on a resource-constrained IoT device leads to poor accuracy.

Bonsai [18] is a machine learning model designed for deployment on resource-constrained edge devices, considering the storage and latency requirements of real-time applications, with a minimum model size that makes it fit on small IoT devices.

Bonsai is a strong, shallow, non-linear tree-based classifier model for supervised learning tasks such as binary and multi-class classification, regression, and ranking. The Bonsai algorithm [19] is designed to fit in a few KB of memory on small IoT devices. Bonsai can be trained in the cloud or on a local system but makes predictions locally on the resource-constrained edge device itself, taking microseconds to milliseconds to provide the result, which makes it suitable for low-latency applications.

Training must be done on a laptop or in the cloud, and the model is then deployed onto the IoT device. Predictions/inference are made locally on-device in milliseconds, with minimal memory and low battery consumption. This best suits resource-constrained IoT applications.

A. Bonsai Algorithm

Bonsai is a machine learning model that aims to push intelligence to resource-constrained IoT devices for making predictions that are fast, privacy-preserving, and energy-efficient, suitable for real-time applications; for further details, the paper [18] can be referred to.

The Bonsai algorithm can be trained on a laptop or in the cloud and then deployed onto severely resource-constrained IoT devices.

B. Flowchart

To reduce the model size, Bonsai learns a single shallow tree. On its own this leads to a decrease in accuracy, so to retain accuracy the algorithm allows each node to predict a score, represented as a vector; the final predicted vector is the summation of all the individual predicted vectors, as shown in Fig. 8. The input vector is projected into a low-dimensional space using a learnt sparse projection matrix. In the flowchart, Ik(x) represents the indicator function, Wk and Vk represent the sparse predictors learnt at node k, Z represents the sparse projection matrix, and σ represents a tunable hyperparameter.

Fig. 8. Flowchart of Bonsai Algorithm
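To make the prediction rule concrete, the NumPy sketch below implements it under our reading of [18]: every node k that the input reaches contributes Wk(Zx) * tanh(σ Vk(Zx)) to the score vector, and branching follows the sign of a per-node direction θk applied to the projected input. The random parameters merely stand in for learned ones.

# Sketch of Bonsai's prediction rule (our reading of [18]); parameters are random stand-ins.
import numpy as np

def bonsai_predict(x, Z, W, V, theta, sigma, depth):
    z = Z @ x                          # sparse low-dimensional projection Zx
    scores = 0.0
    k = 0                              # start at the root node
    for _ in range(depth + 1):         # visit every node on the root-to-leaf path
        scores = scores + (W[k] @ z) * np.tanh(sigma * (V[k] @ z))
        # branch: left child if theta_k^T z > 0, else right child
        k = 2 * k + 1 if (theta[k] @ z) > 0 else 2 * k + 2
    return scores                      # class with the highest score wins

D, d, L, depth = 256, 10, 10, 2        # input dim, projected dim, labels, tree depth
n_nodes = 2 ** (depth + 1) - 1
rng = np.random.default_rng(0)
Z = rng.normal(size=(d, D))
W = rng.normal(size=(n_nodes, L, d))
V = rng.normal(size=(n_nodes, L, d))
theta = rng.normal(size=(n_nodes, d))
y = bonsai_predict(rng.normal(size=D), Z, W, V, theta, sigma=1.0, depth=depth)
print("predicted class:", int(np.argmax(y)))

Because every node contributes and Z, W, V are sparse and low-dimensional, the whole model fits in a few KB while remaining non-linear, which is what makes the deployment described above feasible.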

IV. RESULTS AND PERFORMANCE EVALUATION

Experiments are conducted on the publicly available USPS [20] dataset using a workstation equipped with an Intel Core i5 CPU. The program is executed in a Linux environment using Python 2.7, over many iterations, varying parameters such as the depth of the Bonsai tree, the projection dimension, the regularizers, the sparsity, and the number of epochs. The details of the dataset are described in Table III below.
Table III. Dataset description

Dataset | Description                | Training | Testing | Image Resolution
USPS    | Handwritten digits dataset | 7291     | 2007    | 16*16 Grey scale
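For reference, the dataset can be loaded as follows, assuming the HDF5 layout of the Kaggle copy cited in [20] (train/test groups containing data and target entries); the key names are an assumption and should be verified against the actual download.

# Load the USPS dataset of Table III from the Kaggle HDF5 file (assumed layout).
import h5py

with h5py.File("usps.h5", "r") as f:
    X_train, y_train = f["train/data"][:], f["train/target"][:]
    X_test, y_test = f["test/data"][:], f["test/target"][:]

print(X_train.shape)   # expected (7291, 256): 7291 samples of 16*16 grey pixels
print(X_test.shape)    # expected (2007, 256)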
With the depth of the Bonsai tree set to 2, the projection dimension set to 10, the total number of epochs set to 80, and different values for the regularizer and sparsity parameters, we achieved an accuracy of 92% with a model size of 6.25 KB, as shown in Fig. 9 below.

Fig. 9. Experiment result Screenshot

This clearly depicts that Bonsai is an efficient machine learning algorithm, especially designed for severely resource-constrained applications that require a small model to fit into a small memory.
different values for regularize and Sparsity Technologies and challenges in IoT Edge
parameters we achieved accuracy of 92% with Computing”, Third International conference on I-
SMAC (IoT in Social, Mobile, Analytics and Cloud)
6.25KB of Model Size. As shown in below Fig. 9. (I-SMAC), IEEE, 2019

REFERENCES

1) Congfeng Jiang, Tiantian Fan, Honghao Gao, Weisong Shi, Liangkai Liu, Christophe Cerin, Jian Wan, "Energy aware edge computing: A survey", Elsevier, Jan 2020
2) Wazir Zada Khan, Ejaz Ahmed, Saqib Hakak, Ibrar Yaqoob, Arif Ahmed, "Edge Computing: A survey", Elsevier, February 2019
3) Huansheng Ning, Yunfei Li, Feifei Shi, Laurence T. Yang, "Heterogeneous edge computing open platforms and tools for internet of things", Future Generation Computer Systems, Elsevier, vol. 106, pp. 67-76, 2020
4) Soumyalatha Naveen, Manjunath R Kounte, "Key Technologies and challenges in IoT Edge Computing", Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), IEEE, 2019
5) Manjunath R Kounte, Pratyush Kumar Tripathy, Pramod P, Harshit Bajpai, "Analysis of Intelligent Machine using Deep Learning and Natural Language Processing", 4th International Conference on Trends in Electronics and Informatics (ICOEI), IEEE, 2020
6) Shridevi J Kamble, Manjunath R Kounte, "On Road Intelligent Vehicle Path Prediction and Clustering using Machine Learning Approach", Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), IEEE, 2019
7) Nandan Jha, Khaled Alwasel, Areeb Alshoshan, Xianghua Huang, Ranesh Kumar Naha, Sudheer Kumar Battula, Saurabh Garg, Deepak Puthal, Philip James, Albert Zomaya, Schahram Dustdar, Rajiv Ranjan, "IoTSim-Edge: A Simulation Framework for Modeling the Behaviour of IoT and Edge Computing Environments", Journal of Software: Practice and Experience, pp. 844-867, Jan 2020
8) M. G. Sarwar Murshed, Christopher Murphy, Daqing Hou, Nazar Khan, Ganesh Ananthanarayanan, Faraz Hussain, "Machine Learning at the Network Edge: A Survey", 2020
9) Tomasz Szydlo, Joanna Sendorek, Robert Brzoza-Woch, "Enabling machine learning on resource constrained devices by source code generation of the learned models", International Conference on Computational Science, Lecture Notes in Computer Science, vol. 10861, pp. 682-694, 2018
10) Massimo Merenda, Carlo Porcaro, Demetrio Iero, "Edge Machine Learning for AI-Enabled IoT Devices: A Review", Sensors (Basel), Open Access Journal, April 2020
11) Sauptik Dhar, Junyao Guo, Jiayi (Jason) Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah, "On-Device Machine Learning: An Algorithms and Learning Theory Perspective", vol. 1, no. 1, ACM, 2020
12) Sergej Svorobej, Patricia Takako Endo, Malika Bendechache, Christos Filelis-Papadopoulos, Konstantinos M. Giannoutakis, George A. Gravvanis, Dimitrios Tzovaras, James Byrne, Theo Lynn, "Simulating Fog and Edge Computing Scenarios: An Overview and Research Challenges", Special Issue Cloud Computing and Internet of Things, Future Internet, vol. 11, no. 3, 2019
13) Hai Lin, Sherali Zeadally, Zhihong Chen, Houda Labiod, Lusheng Wang, "A Survey on Computation Offloading Modeling for Edge Computing", Journal of Network and Computer Applications, Elsevier, vol. 169, 2020
14) Soumyalatha Naveen, Manjunath R Kounte, "In Search of the Future Technologies: Fusion of Machine Learning, Fog and Edge Computing in the Internet of Things", Lecture Notes on Data Engineering and Communications Technologies, Springer, vol. 31, 2019
15) Mohit Kumar, Xingzhou Zhang, Liangkai Liu, Yifan Wang, Weisong Shi, "Energy-Efficient Machine Learning on the Edges", IEEE International Parallel and Distributed Processing Symposium Workshops, 2020
16) Jiasi Chen, Xukan Ran, "Deep Learning With Edge Computing: A Review", Proceedings of the IEEE, vol. 107, no. 8, August 2019
17) Fangxin Wang, Miao Zhang, Xiangxiang Wang, Xiaoqiang Ma, Jiangchuan Liu, "Deep Learning for Edge Computing Applications: A State of the Art Survey", IEEE Access, vol. 8, pp. 58322-58336, March 2020
18) Ashish Kumar, Saurabh Goyal, Manik Varma, "Resource-efficient Machine Learning in 2 KB RAM for the Internet of Things", Proceedings of the 34th International Conference on Machine Learning, 2017
19) https://github.com/Microsoft/EdgeML/wiki/Bonsai
20) https://www.kaggle.com/bistaumanga/usps-dataset
