
8004 IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 72, NO. 6, JUNE 2023

An Efficient Machine Learning-Based Resource Allocation Scheme for SDN-Enabled Fog Computing Environment

Jagdeep Singh, Parminder Singh, Mustapha Hedabou, and Neeraj Kumar, Senior Member, IEEE

Abstract—Fog computing is an emerging technology which enables computing resource accessibility close to the end-users. It overcomes the drawbacks of limited network bandwidth and the delay in accessing computing resources observed in the cloud computing environment. Resource allocation plays an important role in resource management in a fog computing environment. However, the existing traditional resource allocation techniques in fog computing do not guarantee the short execution time, reduced energy consumption, and low latency that are a pre-requisite for most modern fog computing-based applications. The complex fog computing environment requires a robust resource allocation technique to ensure quality and optimal resource usage. Motivated by the aforementioned challenges and constraints, in this article we propose a resource allocation technique for SDN-enabled fog computing with Collaborative Machine Learning (CML). The proposed CML model is integrated with the resource allocation technique for the SDN-enabled fog computing environment. FogBus and iFogSim are deployed to test the proposed technique using various performance evaluation metrics such as bandwidth usage, power consumption, latency, delay, and execution time. The results obtained are compared with other existing state-of-the-art techniques using the aforementioned performance evaluation metrics. The results show that the proposed scheme reduces processing time by 19.35%, response time by 18.14%, and time delay by 25.29%. Moreover, compared to the existing techniques, it reduces execution time by 21%, network usage by 9%, and energy consumption by 7%.

Index Terms—Collaborative machine learning (CML), fog computing, software defined network (SDN), resource allocation.

Manuscript received 10 May 2022; revised 9 September 2022; accepted 27 January 2023. Date of publication 9 February 2023; date of current version 20 June 2023. The review of this article was coordinated by Dr. Tomaso De Cola. (Corresponding author: Neeraj Kumar.)

Jagdeep Singh is with the School of Computer Science and Engineering, Lovely Professional University, Phagwara 144411, India, and also with the Department of Information Technology, Guru Nanak Dev Engineering College, Ludhiana, Punjab 141006, India (e-mail: [email protected]).

Parminder Singh is with the School of Computer Science, University Mohammed VI Polytechnic, Benguerir 43512, Morocco, and also with the School of Computer Science and Engineering, Lovely Professional University, Phagwara 144411, India (e-mail: [email protected]).

Mustapha Hedabou is with the School of Computer Science, University Mohammed VI Polytechnic, Benguerir 43512, Morocco (e-mail: [email protected]).

Neeraj Kumar is with the Department of Computer Science and Engineering, Thapar Institute of Engineering & Technology, Patiala 147004, India. He is also associated with Lebanese American University, Beirut 1102-2801, Lebanon, also with the School of Computer Science, University of Petroleum and Energy Studies, Dehradun, Uttarakhand 222001, India, also with King Abdul Aziz University, Jeddah 21589, Saudi Arabia, and also with Chandigarh University, Gharuan, Mohali, India (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TVT.2023.3242585

I. INTRODUCTION

IN RECENT years, industries and scientific applications have produced a large amount of data, as a result of which local systems require the extension of resources outside the premises for storing and processing this huge amount of data. A solution to this issue is cloud computing, which provides infinite resources from different cloud service providers. However, latency-aware applications such as healthcare require a real-time response, and a slight delay in the response from cloud computing can raise a serious problem. According to a survey [1], real-time responses are required for intervention, surgery, and long-term tracking of chronic disease, as well as for remote diagnosis and e-health care using smart robots. This involves technologies such as robotics, Artificial Intelligence (AI), monitoring systems, and data analysis, so as to achieve quick response and low latency. In IoT, most applications are delay-sensitive, like intelligent transportation systems, healthcare, mission-critical applications, etc. Therefore, performance metrics like latency, response time, execution time, time delay, network bandwidth, and energy consumption become the most critical parameters, which need to be improved in fog computing compared to cloud computing. Fog computing creates a computational environment near the data origin. It helps to implement latency-aware and high-throughput applications easily in a fog environment. Many research articles have discussed resource allocation techniques, but most of them neglect the dynamic behavior of users, which determines the resources available for allocation in a fog computing environment. To minimize the challenges related to resource allocation, an optimal resource allocation scheme is proposed in this article for fog computing using the concept of SDN with a CML model [2].

In recent years, various studies formulated CML to create an analytical model by enabling end-users to train the local model on the dataset at fog nodes and IoT devices [3], [4]. The end-users train the local model with their private data and share the trained local model with the cloud. The update request is sent back to the central server when the local model changes. The privacy of the local data is preserved between the IoT devices and fog nodes. The central server is responsible for the aggregation that updates the global model, which is then sent back to the local models for training. This shared training concept is an iterative process that repeats until the accuracy of the global model improves. It is known as Federated Learning (FL), a machine learning approach

0018-9545 © 2023 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.

Authorized licensed use limited to: PSG COLLEGE OF TECHNOLOGY. Downloaded on July 06,2024 at 14:29:55 UTC from IEEE Xplore. Restrictions apply.

that utilizes several distributed edge-fog nodes and servers to hold local datasets and train an algorithm without transferring the data samples. The main advantages of using CML are that it consumes less power, preserves privacy, and does not interfere with device performance.

Resource allocation is used to control data centers, which reduces carbon emissions and enhances resource utilization. Resource allocation is of two types: optimization-based and auction-based. Auction-based resource allocation is related to the market pricing technique; it matches the demand and quantity of fog nodes with the support of bid requests [5], [6]. In optimization-based approaches, resource allocation is treated as a double-matching concern because fog nodes and cloud servers are connected to end-users [7].

In this article, an optimal resource allocation technique is proposed using CML in an SDN-enabled fog computing environment. The unification of both helps to improve results with respect to network bandwidth usage, energy consumption, and time-based attributes. The fog servers are programmed to train the model locally, and to utilize the cloud service to train the model if the local server is overwhelmed. The fundamental idea is to create a global model shared by all nodes by training local model(s) on local datasets and periodically transferring information and attributes between these local nodes. In this direction, SDN also plays an important role in networking, automatically re-routing around an outage and keeping the connections up and running as per the requirements. It also enhances the effectiveness and performance of the network. In addition, it is more flexible, scalable, and efficient than conventional networking. It allows the fog environment to save time and energy and to improve the performance of the overall system.

A. Motivation

In the fog environment, the devices are heterogeneous in nature, using different types of network connections to connect different devices. This makes the resource allocation process more complex in comparison to the cloud computing environment. The resource allocation problem is challenging in fog computing due to the dynamic behavior and unstable characteristics of the fog environment [8]. For IoT applications, the resource requirement for computing nodes has different demands in terms of storage capacity, bandwidth, and computing power. Due to the dynamic resource demand at the IoT layer, it is essential to estimate the resource allocation of the network. Various authors have proposed different techniques for resource allocation and management in fog computing. However, most of the proposals evaluated different parameters related to fog computing while implementing resource allocation techniques, but did not consider the integration of SDN with CML. Various network layers can be considered using SDN to enable the network to be fully programmable and adaptable while reducing network costs and congestion [9], [10]. CML is used to cooperate on model training within fog nodes even when the cloud is not available. This integration of both improves the results in terms of bandwidth usage, power consumption, and time-based attributes. To fill these research gaps, this article proposes an optimal technique to achieve the goal of resource allocation in a fog computing environment. Following are the major contributions of this article.

• We propose a resource allocation scheme with SDN in the scalable fog computing environment.
• We implement the model for optimal resource allocation with CML for IoT-fog-cloud computing applications.
• We compare and analyze the results with other existing techniques in the literature and find that the proposed scheme outperforms the existing techniques on various performance evaluation metrics.

The rest of the article is organized as follows. Section II discusses the related work based on the allocation of resources in a fog computing environment. The system model is discussed in Section III. Section IV elaborates the proposed scheme in detail. In Section V, the implementation and evaluation of the proposed scheme are discussed in different network scenarios, along with the dataset used in the experiments performed, the experimental setup, and performance evaluation using various parameters. Lastly, Section VI concludes the article.

II. RELATED WORK

Agarwal et al. [11] discussed an elastic resource allocation architecture in fog computing. The main focus of the proposed technique was to minimize response time, maximize throughput, and optimize resource usage to reduce the limitations of cloud computing. Ni et al. [5] proposed a dynamic resource allocation method for fog computing based on Priced Timed Petri Nets to enhance fog resources and users' Quality of Service (QoS) requirements. The proposed technique achieved more efficiency than static resource allocation planning in price and job completion time.

Xu et al. [7] proposed a Dynamic Resource Allocation Method (DRAM) to balance the load of user tasks in fog computing. The framework was designed for dynamic migration of services and static resource allocation to manage load balance for fog systems. Birhanie et al. [12] recommended a vehicular fog computing architecture for vehicle parking to serve users' demands with the available resources. The Markov Decision Process (MDP) and dynamic programming are applied to resolve the issue of resource allocation in the fog computing environment. Wu et al. [13] also proposed a VFC architecture to decrease the computing time of different vehicular applications.

Jia et al. [14] examined the issue of resource allocation in three-tiered fog networks. The authors suggested a double matching technique related to cost efficiency for resource allocation in a fog network. Zhang et al. [15] presented the security-related hurdles in current resource allocation procedures and suggested robust privacy-preserving resource allocation schemes for the fog framework. Jiang et al. [16] also proposed a secure computing framework for resource allocation capable of treating computing requests and allocating resources at the fog server. The method could withstand security intrusion of particular nodes, fog servers, and other computing devices.
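The shared-training cycle that CML relies on, as described in the introduction (local training on private data, weight upload, size-weighted aggregation, redistribution of the global model), can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the dataset sizes, learning rate, and least-squares loss are assumptions chosen for the example.

```python
import numpy as np

def local_train(w_global, X, y, lr=0.1, epochs=5):
    """One fog node's local update: gradient descent on a least-squares loss."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def collaborative_round(w_global, clients):
    """Train locally at every node, then aggregate weighted by dataset size."""
    locals_, sizes = [], []
    for X, y in clients:
        locals_.append(local_train(w_global, X, y))
        sizes.append(len(y))
    total = sum(sizes)
    return sum((s / total) * w for s, w in zip(sizes, locals_))

# Three fog-area networks holding different data volumes (illustrative).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w) for X in
           (rng.normal(size=(n, 2)) for n in (40, 60, 100))]

w = np.zeros(2)
for _ in range(30):  # iterate: train locally, upload, aggregate, redistribute
    w = collaborative_round(w, clients)
```

Because each node trains on data that never leaves its premises, only the weight vectors cross the network; the test of convergence is that the aggregated global weights approach the generating vector after repeated rounds.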


Bashir et al. [17] suggested a framework that ranks fog nodes using the "Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)" to detect the most suitable fog nodes for a new request. The authors also used logistic regression to recognize the suitable fog node for the incoming demand and measure the load of each fog node. The authors of [17], [18] proposed resource allocation techniques based on multiple criteria in fog computing. Li et al. [19] proposed the "Energy-Efficient Computation Offloading and Resource Allocation (ECORA)" techniques to reduce the overall cost of the system. The authors of [20], [21], [22] proposed suitable resource allocation techniques for residential buildings, consumers' power requests, and time-sensitive IoT-fog applications in a fog computing environment, respectively.

Chen et al. [23] worked on the joint user communication and computation resource allocation problem for mixed user-based tasks in fog computing. The authors applied the "User-Weighted Energy Efficiency (UWEE)" technique for resource allocation and also implemented the "User Concerned Mechanism (UCM)." Naha et al. [24] proposed a resource allocation algorithm employing the concept of provisioning and ranking the resources in hierarchical and hybrid modes. The authors extended the functionality of the CloudSim toolkit to obtain experimental results in a fog environment. Mani et al. [25] mainly focused on QoS using suitable resource allocation techniques for allotting virtual resources. The authors proposed a resource allocation algorithm and ran it in the CloudSim tool to obtain experimental results.

Gu et al. [26] proposed resource allocation techniques based on a reputation scheme for a community-based fog environment. The user service request enabled reliable resources, found through reputation, to make fog computing more authentic. Tong et al. [27] proposed a low-complexity two-step optimal interactive algorithm called the "User-Centric User Association and Resource Allocation (UCAA)" algorithm to enhance user-end occurrence and overall performance of the system within the fog computing environment. Huang et al. [28] worked on the energy-efficiency problem based on the allocation of resources for a computing network. The authors proposed a fog node-based energy-efficient resource allocation algorithm to reduce energy consumption.

Chang et al. [29] developed an algorithm based on Lyapunov optimization using various Mobile Devices (MDs) that can dynamically coordinate and allocate computational resources per user demand for fog computing. Cao et al. [30] proposed a 5G Internet of Vehicles (IoV) architecture using the concepts of fog computing and a Software-Defined Network (SDN). The authors focused on the issue of guaranteeing QoS using heterogeneous computing resources. H. Rafique [31] proposed a "Novel Bio-Inspired Hybrid Algorithm (NBIHA)" using a combination of "Modified Particle Swarm Optimization (MPSO)" and "Modified Cat Swarm Optimization (MCSO)." Both MPSO and MCSO are used in the proposed method to monitor and allocate resources at the fog layer and schedule jobs among fog nodes. They tested the technique using iFogSim, comparing the results with benchmark scheduling algorithms like Shortest Job First (SJF), First-Come-First-Serve (FCFS), and Particle Swarm Optimization (PSO).

Fig. 1. Application scenario of collaborative machine learning in fog computing.

Zaw et al. [4] proposed energy-aware resource allocation for "Multi-access Edge Computing (MEC)"-enabled Federated Learning (FL) that jointly reduced the total time consumption. The authors work on the local and global training models and analyze the influence of computing resource allocation and dataset offloading on the loss, power consumption, and time. Gao et al. [32] proposed a framework called "Federated Sensing" that enabled edge-fog-cloud computing and collaborative learning from de-centralized data sensors. In addition, the authors introduced the n-soft sync algorithm, which significantly decreased training time by aggregating all the local models with a global model. The authors perform experimentation on real datasets from Beijing and Los Angeles, and the results are compared with existing FL methods [33], [34].

The research mentioned above emphasizes minimizing response time, maximizing throughput, and maximizing the usage of resources like CPU, storage, and networking in the fog computing environment. As per the above discussion, none of the existing proposals focused on task and resource classification, or on using SDN to connect different fog regions to reduce the task migration rate to the cloud with CML [35]. Hence, the proposed work performs optimal resource allocation for each task by considering parameters like latency, energy consumption, and others within a scalable fog environment.

III. SYSTEM MODEL

A. Network Model and Problem Definition

The network and system model used in the proposed scheme are shown in Figs. 1 and 2. The data collected from various devices or smart sensors is stored in the fog servers deployed


near the IoT devices or sensors. The proposed model ensures the reduction in offloading for optimal usage of resources. CML helps to train the local model at the fog servers, and the collective model is aggregated on the cloud server. The end-users are represented with E = (e1, e2, ..., en). These are distributed over a geographical fog area and are generally covered by fog nodes. Also, the set of fog nodes is represented with F = (f1, f2, ..., fn). Each fog node serves a subset of E. Fog nodes consist of various devices D. The number of devices varies across fog nodes and is represented with D = (d1, d2, ..., dn). Each device's resources are denoted with RA, where RA = (r1, r2, ..., rn). The Resources Available (RA) at a fog node can be represented in terms of processing/computation (C1, C2, ..., Cn), network (N1, N2, ..., Nn), and memory (S1, S2, ..., Sn) for the user task. Let (T1, T2, ..., Tn) be the end-user tasks that want to use resources for completion. The resources are represented vertically, and the tasks are shown horizontally, in the matrix M as in (1).

\[
M = \begin{bmatrix} C_1 & N_1 & S_1 \\ C_2 & N_2 & S_2 \\ \vdots & \vdots & \vdots \\ C_n & N_n & S_n \end{bmatrix}
\begin{bmatrix} T_1 \\ T_2 \\ \vdots \\ T_n \end{bmatrix} \tag{1}
\]

Fig. 2. Proposed model for resource allocation.

1) Network Bandwidth: Orthogonal Frequency Division Multiple Access (OFDMA) is used for the weight transmission and dataset offloading with Bluetooth, WiFi interfaces, and low-power cellular networks. The data size depends upon the IoT devices and sensors in the fog nodes. For an optimal resource allocation, the locally trained model's weight upload and dataset offloading are performed in an iterative process. Every end-user has a defined network bandwidth, and it is not easy to do both things in parallel. The bandwidth required for dataset offloading is represented in (2), and the bandwidth needed for weight upload is described in (3).

\[
DR_i^{offload} = \phi\,\tilde{\phi}_i \log_2\left(\frac{cg_i\,pw_i}{noise} + 1\right) \tag{2}
\]

\[
DR_i^{upload} = \phi\,\hat{\phi}_i \log_2\left(\frac{cg_i\,pw_i}{noise} + 1\right) \tag{3}
\]

The available bandwidth for transmission in the uplink is represented with φ. The end-user's transmission power is denoted with pw_i. The noise symbol represents the Gaussian noise.

2) Collaborative Machine Learning: It is an advanced machine learning model based on distributed computing and parallelism. The data owned by the users or clients is trained in the user domain rather than distributing the centralized data to computing nodes [36]. CML is beneficial where the data is sensitive and the users do not want to share the data with a third party. The training takes place near the data origin in the user's environment. For example, various smart systems are reluctant to share energy consumption data to preserve the privacy of their clients. Therefore, CML-based frameworks play an important role in training the models in a privacy-preserving manner. Consider the energy consumption data {EC1, EC2, ..., ECN} distributed over various Fog Area Networks (FANs) {F1, F2, ..., FN}. In the CML process, the dataset never leaves the fog area network. In this work, we employed the servers in a FAN to train the CML model. This helps to preserve the privacy of energy consumption data. The distributed trained local models at the various FANs are connected to a central global model that may be placed in the cloud. The global representation of the model is generated using the CML-based parameters.

3) Software Defined Networking (SDN): "SDN is the physical division of the network control plane from the forwarding plane, where the control plane plays a role in controlling different IoT, edge, and fog devices." A typical SDN architecture has three components: controller, network devices, and applications. The "controller" decides the route of the data packet, "network devices" receive messages from the controller to move the data, and "applications" communicate resource requests related to the whole network. In SDN, programmers control the traffic flow using a programming or software-based controller instead of a particular hardware device. The administration of networking is more flexible when connecting different network devices than in the traditional networking approach. The network administrator can easily allocate and configure virtual resources to switch network infrastructure from one central location. SDN provides speed and flexibility to fog-IoT computing technologies, which need to transport data easily and quickly between different remote locations. In a fog environment, SDN allows for optimization of data flow and prioritization of network resources per the requirements of applications and the availability of other resources. It also provides robust security for the entire network by creating separate zones for different devices that communicate with each other without any interference that might infect the whole network. Suppose FZ is one such network zone that can only provide communication with regions {R1, R2, ..., Rn}. Those fog regions are connected with cloud C1 through different FZ network zones.
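The data rates in (2) and (3) above share one Shannon-style form: the allocated bandwidth fraction times log2(SNR + 1). A small sketch of how either rate might be evaluated; the numeric values are illustrative assumptions, not parameters from the paper.

```python
import math

def data_rate(bandwidth_hz, fraction, channel_gain, tx_power_w, noise_w):
    """Rate in bit/s following the form of (2)/(3):
    phi * phi_i * log2(cg_i * pw_i / noise + 1)."""
    snr = channel_gain * tx_power_w / noise_w
    return bandwidth_hz * fraction * math.log2(snr + 1)

# Example: 10 MHz uplink, 40% of it allocated to dataset offloading, SNR = 15.
rate = data_rate(10e6, 0.4, channel_gain=1.5, tx_power_w=0.1, noise_w=0.01)
```

Because offloading and weight upload share the same uplink, the fractions for the two activities (φ̃_i and φ̂_i in the text) must sum to at most one, which is what makes performing both in parallel difficult.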


4) Local and Fog Training Model: The end users generally have devices with low computation and storage. End-user i has a local dataset and wants to offload a part d_i of the whole dataset D_i to the fog server. The dataset left with the end-user is processed on the end-user device, and ω_i represents the weight vector. The goal of user i is to minimize the training loss l, described in (4), in order to optimize the weight vector ω_i:

\[
\min_{\omega_i \in \mathbb{R}^n} \sum_{k \in \hat{D}_i} l(\omega_i, x_k, y_k) \tag{4}
\]

where x_k ∈ R^n and y_k ∈ R are the feature vector and label of dataset sample k ∈ D_i, and n represents the dimension of the feature vector. The local model is trained with the help of dataset D̂_i on the end-user's device. The weight vector ω_i of the trained model is uploaded to the fog server and describes the local model. Thus, user i performs two separate tasks: doing local training and sending weights to the fog server. The amount of time required by user i to complete the two steps is specified in (5):

\[
\mathrm{Time}_i^{local} = \frac{f(|D_i|)(1-d_i)\,\eta}{\Gamma_i\,\gamma_i} + \frac{f(|\omega_i|)}{\hat{\gamma}\,DR_i} \tag{5}
\]

f(|D_i|) is a linear function of |D_i| that specifies the current size of user i's local dataset, η is the total number of CPU cycles needed to train a single byte of a model, γ_i is the percentage of CPU resources utilized for training the model, Γ_i is the total amount of CPU resources at user i's disposal, and f(|ω_i|) is a linear function of the weight vector ω_i that specifies the size of the weight vector.

Similarly, for the fog training model, user i is allowed to offload part of its dataset D_i to the fog servers. Here the model is trained on the offloaded dataset at the fog layer to recover the training loss l per (6):

\[
\min_{\omega_F \in \mathbb{R}^n} \sum_{k \in \hat{D}_F} l(\omega_F, x_k, y_k) \tag{6}
\]

where D_F is the offloaded dataset at the fog layer for user i. The dataset is provided to the fog server, where the fog takes random samples from the dataset. The fog server plays a dual role: first, it trains on the local dataset offloaded from the end-user; second, it performs weight optimization. Therefore, the time taken by the fog server is as per (7), where Γ_F represents the fog server's allocated CPU resources:

\[
\mathrm{Time}^{Fog} = \max_{i \in \tau} \frac{f(|D_i|)\,d_i}{w_i R_i} + \frac{\left(\sum_{i \in \tau} f(|D_i|)\,d_i\right)\eta}{\Gamma_F} \tag{7}
\]

IV. CML-BASED OPTIMAL RESOURCE ALLOCATION: THE PROPOSED SCHEME

The main objective is to allocate the available resources R(i, j) to each task T_i. Algorithm 1 is proposed for CML-based optimal resource allocation to achieve this objective. The CML model is used at each fog node and server manager to allocate resources to each task. The list of notations used is shown in Table I.

TABLE I
LIST OF NOTATIONS

Fig. 3 shows the flowchart of the proposed model for CML-based optimal resource allocation. Firstly, the user requests resources for the various tasks at the fog level. At the fog layer, heterogeneous resources, like memory, computing, storage, and network, are available to handle user requests for task execution. The fog server manager is vital in resource allocation per user task demand. For this purpose, the CML-based trained local model is available at the fog layer to check the resource requirement for each task. Classification techniques are used to identify each task's resource requirement and can also help train the local model at the fog layer. The fog server maintains all the records of available free resources within the fog region. Each fog region has heterogeneous resources connected with a fog-enabled software-defined network. SDN controls the whole communication process of the fog regions and is also associated with cloud services. The CML-based global model is available at the cloud layer and is updated and trained by the local models, and vice versa. The global and local models are uploaded and downloaded as updates are received from either end. The training and updating of the local and global models is an iterative process. Finally, aggregation of the local models' updates is applied at the global model end, which is then propagated to all local models.
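The time models in (5) and (7) earlier in this section combine a training term (bytes × cycles-per-byte ÷ allocated CPU cycles) with a transmission term. The following sketch evaluates both under the simplifying assumptions that f is the identity and that the denominator γ̂·DR_i of the upload term is folded into a single effective upload rate; all numeric values are illustrative.

```python
def local_time(data_bytes, offload_frac, eta, cpu_total, cpu_frac,
               weight_bytes, upload_rate):
    """Eq. (5): train the retained (1 - d_i) share of the data locally,
    then upload the weight vector to the fog server."""
    train = data_bytes * (1 - offload_frac) * eta / (cpu_total * cpu_frac)
    upload = weight_bytes / upload_rate
    return train + upload

def fog_time(jobs, eta, cpu_fog):
    """Eq. (7): slowest offload transfer among users, plus training all
    offloaded data on the fog server's allocated CPU.
    jobs is a list of (data_bytes, offload_frac, link_rate) tuples."""
    transfer = max(d * frac / rate for d, frac, rate in jobs)
    train = sum(d * frac for d, frac, _ in jobs) * eta / cpu_fog
    return transfer + train

# 1 MB dataset, half offloaded; 100 cycles/byte; user keeps 50% of a 1 GHz CPU.
t_local = local_time(1e6, 0.5, eta=100, cpu_total=1e9, cpu_frac=0.5,
                     weight_bytes=1e4, upload_rate=1e6)
t_fog = fog_time([(1e6, 0.5, 1e6), (2e6, 0.25, 1e6)], eta=100, cpu_fog=1e9)
```

The max over transfers reflects that fog training cannot start until the slowest offload completes, while the sum in the training term reflects that the fog server's allocated CPU Γ_F is shared by all offloaded datasets.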


Fig. 3. Flowchart for optimal resource allocation and CML model.

The complete process of Algorithm 1 is explained as follows. Firstly, the Fog Servers (FS) are deployed in each fog region to maintain the record of available resources in a scalable fog environment. Each FS has a Fog Server Manager (FSM) to keep a record of the fog resources and their capabilities, and to check the availability of resources. The FSM checks the availability of resources within the fog region and takes care of the complete matching of tasks to the available resources. The different fog regions and devices are connected using the SDN and share resources per requirement.

Secondly, classification and ML techniques are used to classify the user tasks and prepare the local model using CML. The next step identifies how many resources (like memory, storage, computing, bandwidth, etc.) are required to complete each job, and each resource's execution time. The records of classified jobs and resource consumption are maintained at the FSM. The mapping of required resources to each job is done as per the need. Jobs are subdivided as per the available resources because a fog node may have too little capacity to handle one complete job. The priority list of each sub-task is prepared as per Algorithm 2.

The FSM allocates the resources to tasks or jobs under the following conditions: a) if the resources are accessible as per the job, then the FS allocates the resources, processes the job, and sends an ACK to the user; b) if only some resources are available, the whole job is divided into sub-tasks per resource and a priority is assigned to each sub-task: high-priority tasks are processed first, and the other tasks can wait for some time within the fog region; c) if all sub-divided tasks have high priority, then some tasks are migrated to the nearest neighbor region for processing with the help of SDN services; the neighbor region processes the sub-tasks and sends an ACK to the SDN after process completion, and the SDN can ACK to the FSM to complete the job; d) if resources are unavailable at the fog layer, then the FSM migrates the job to the Cloud Service Provider (CSP). This may increase the

response time and latency. The CSP provides the resources directly to the job, processes the request with an increased response time, and sends an ACK to the user. The SDN maintains the task activities until the job cycle completes. Also, the networking of the fog devices can be handled through the SDN. The FSM takes care of the complete matching of tasks and resources.

Algorithm 1: Optimal Resource Allocation.
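The four-way FSM decision described above (a: full fog allocation, b: sub-task splitting with priorities, c: migration to a neighbor region via SDN, d: fall-back to the cloud) can be sketched as a dispatch routine. This is a simplified illustration of the conditions in the text, not the authors' Algorithm 1; resource demands are reduced to single scalar units.

```python
def allocate(job, fog_free, neighbor_free, high_priority_only=False):
    """Return (location, amount) pairs for a job's resource demand,
    mirroring conditions (a)-(d) in the text."""
    if fog_free >= job:                       # (a) the fog region serves the whole job
        return [("fog", job)]
    if fog_free > 0 and not high_priority_only:
        # (b) split into sub-tasks: serve what fits now, queue the remainder
        return [("fog", fog_free), ("queued", job - fog_free)]
    if neighbor_free >= job - fog_free:       # (c) migrate the overflow via SDN
        return [("fog", fog_free), ("neighbor", job - fog_free)]
    return [("cloud", job)]                   # (d) fall back to the cloud provider

# A job needing 10 units with 4 free locally and 8 free in a neighbor region,
# where every sub-task is high priority (case c):
plan = allocate(10, fog_free=4, neighbor_free=8, high_priority_only=True)
```

Ordering the checks this way keeps work in the fog whenever possible, so the cloud path (with its higher response time and latency) is only taken as a last resort.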
CML consists of three main components a) Global Model
b) Aggregation c) Local Model for model design. These are
described as follows.
1) Global Model: The local models uploaded from the fog nodes are assembled in the cloud. The aggregation process is performed to design the global model from the properties of the local models. The model parameters are $w_k$ and $W_G$. Here, we consider that the cloud server, IoT nodes, and fog structure share the same global model. The fog nodes and IoT devices collectively play an important role in developing the global model in the cloud through the aggregation of the local models $(W_{1...n})$. Moreover, the data volume is a key factor in deciding the share of a particular fog or IoT node in the aggregation process for the global model $W_G$. The $n$-th dataset is denoted by $S_n$, and $|S|$ denotes the total dataset size. The resulting global model is given in (8).

$$W_G \leftarrow \sum_{n=1}^{N} \frac{|S_n|}{|S|}\, w_k \qquad (8)$$
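A minimal sketch of the volume-weighted aggregation in (8), with NumPy arrays standing in for the local model weights; the helper name `aggregate_global` is ours, not the paper's.

```python
import numpy as np

def aggregate_global(local_weights, sample_counts):
    """Volume-weighted aggregation following (8):
    W_G <- sum_n (|S_n| / |S|) * w_n, where |S_n| is node n's data
    volume and |S| is the total volume across all fog/IoT nodes."""
    total = float(sum(sample_counts))          # |S|
    return sum((c / total) * w                 # (|S_n| / |S|) * w_n
               for w, c in zip(local_weights, sample_counts))

# Two fog nodes with unequal data volumes: the larger node dominates.
w1, w2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
wg = aggregate_global([w1, w2], sample_counts=[30, 10])
print(wg)  # [0.75 0.25]
```

The node holding 30 of the 40 samples contributes 3/4 of the global weights, matching the $|S_n|/|S|$ factor in (8).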

2) Local Model: The weights of global model WG are highly


dependent on the amount of data produced at each node. In addi-
tion, the local data and node rules are responsible for designing
the updated regulations and characteristics of the local model.
The local model is capable enough to dynamically adjust the
global model influence. The local and global models follow the
iterative process of training and updating to develop the final
model.
The $n$-th layer weights are denoted by $W_G^n$ and $w_k^n$, where $w_k$ denotes the local model and $W_G$ the global model. Furthermore, the optimal $w_k$ is determined by minimizing the loss function $F(w_k; (a_i, b_i))$, where $(a_i, b_i)$ denotes a random training sample and "x" denotes the mini-batch.
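A toy local update step over one mini-batch can be sketched as below; a squared-error loss is assumed as a stand-in, since the paper leaves $F(w_k; (a_i, b_i))$ unspecified.

```python
import numpy as np

def local_update(w_k, batch, lr=0.1):
    """One mini-batch gradient step on the local model w_k, using a
    squared-error loss as a stand-in for the paper's loss function
    F(w_k; (a_i, b_i)); (a_i, b_i) are feature/label pairs and the
    list `batch` plays the role of the mini-batch x."""
    a = np.array([ai for ai, _ in batch])   # stack mini-batch features
    b = np.array([bi for _, bi in batch])   # stack mini-batch labels
    grad = 2 * a.T @ (a @ w_k - b) / len(batch)
    return w_k - lr * grad

w = np.zeros(2)
batch = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 2.0)]
for _ in range(200):                        # iterative local training
    w = local_update(w, batch)
print(np.round(w, 2))  # approaches [1. 2.]
```

Repeated updates drive the local weights toward the values that fit the node's own data, which is exactly what the global model then aggregates.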
3) Aggregation: To develop the Global model, the local
nodes send their updates to the cloud server, where the aggrega-
tion is iteratively taken place till the optimal solution is reached.
The local updates are aggregated with the global weights in the
looping process. Algorithm 4 shows the aggregation process of
the local parameters with the global model aggregation process.
Here, $E_n$ denotes training round $n$, and $B_n$ denotes the batch size used in training round $n$.
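The iterative train-then-aggregate loop described above can be outlined as follows; this is a schematic reading of the aggregation description, not the paper's Algorithm 4, and the toy `step` update is purely illustrative.

```python
import numpy as np

def federated_rounds(global_w, node_data, rounds, local_update):
    """Schematic train-then-aggregate loop: in every round each
    fog/IoT node refines the shared global model on its own data,
    and the cloud re-aggregates the local updates, weighted by
    each node's data volume."""
    for _ in range(rounds):                        # E_n training rounds
        local_ws, counts = [], []
        for data in node_data:                     # per-node local training
            local_ws.append(local_update(global_w, data))
            counts.append(len(data))
        total = float(sum(counts))
        global_w = sum((c / total) * w for w, c in zip(local_ws, counts))
    return global_w

# Toy local update: move halfway toward the node's data mean.
node_data = [np.array([1.0, 1.0, 1.0]), np.array([3.0])]
step = lambda w, d: w + 0.5 * (d.mean() - w)
wg = federated_rounds(0.0, node_data, rounds=20, local_update=step)
print(round(wg, 2))  # converges near 1.5
```

The loop converges to the data-weighted consensus of the nodes, illustrating how the local updates and the global aggregation reach an optimal solution iteratively.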
An algorithm’s complexity calculates the volume of space
and time needed by an algorithm for a given input size and is
linked with the number iteration process. The complexity of the
proposed algorithms is expressed as a function of the input size
using Big-O notation. The input size is denoted by n, while O
denotes the worst-case scenarios rate of growth function. The
complexity is calculated for Algorithm 1 = O(n2 ), Algorithm
2 = O(n), Algorithms 3 and 4 = O(n3 ).

SINGH et al.: EFFICIENT MACHINE LEARNING-BASED RESOURCE ALLOCATION SCHEME 8011

Algorithm 2: Prepare Priority of Tasks Lists Algorithm.

Algorithm 3: Training of Local Model and Update Algorithm.

Algorithm 4: Process of Global-Model Aggregation With Local Nodes.
V. IMPLEMENTATION AND EVALUATION

For the implementation of the proposed scheme, we created two scenarios: a) a real testbed setup and b) a simulation environment to evaluate its performance.

A. Scenario I

In the first scenario, we created the real testbed setup to extend the functionality of the FogBus framework with SDN [37]. For the fog computing environment setup, heterogeneous hardware devices are integrated with software components. The hardware devices included IoT devices, fog gateway nodes, broker nodes, general computing nodes, repository nodes, and fog servers. The main software components included the broker services, resource & data manager, SDN controller, and resource monitor.

To test the setup of the proposed model, the liver disease data was used to predict liver disease. In the initial phase, the "Liver Patient Dataset (ILPD)" from the UCI Repository was used to predict disease. The machine learning model was trained and tested with the ILPD data set using different algorithms such as Logistic Regression, Naive Bayes, SMO, IBk, J48, and Random Forest, as per our previous work in [38]. The best result was achieved using Logistic Regression with feature selection techniques.

In the fog environment, the liver disease data is input from IoT devices, and the results are sent back, indicating whether the patient has liver disease or not. A web or Android interface was used to capture the liver disease data, which is sent to the worker/broker node through the gateway. The application scenario of CML in the Fog-IoT computing environment is shown in Fig. 1. CML allows fog and IoT nodes to collectively receive a shared prediction model while retaining the training data on the fog node and IoT device. The heavy data storage and processing for the machine learning model was done in the cloud. The cloud server shared the global model with the fog nodes and built the local model for predicting diseases on devices by pushing model training to the node.

Dataset: For the experimental results within the fog computing environment, the liver disease data was used to detect the presence of disease [39]. This data set has 11 important variables, including the class attribute used to find liver disease:
1) Age: patient age
2) Gender: having two enum values 'M' or 'F'
3) DB "Direct Bilirubin": having a range of 0.1 to 19.7
4) TB "Total Bilirubin": having a range of 0.4 to 75


TABLE II
SAMPLE DATA

Fig. 4. Numerical parameters of liver disease dataset.
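The 70/30 train/test evaluation used in Scenario I can be sketched with a from-scratch logistic regression on synthetic stand-in data; the actual experiments used the ILPD records and the classifiers of [38], so everything below (data, dimensions, learning rate) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the ILPD records: 2 features, binary class.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# 70% of the records train the model, the remaining 30% test it.
split = int(0.7 * len(X))
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

# Plain gradient-descent logistic regression.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_tr @ w + b)))      # sigmoid predictions
    w -= 0.1 * X_tr.T @ (p - y_tr) / len(X_tr)  # gradient of log-loss
    b -= 0.1 * np.mean(p - y_tr)

acc = np.mean(((X_te @ w + b) > 0) == (y_te == 1))
print(f"test accuracy: {acc:.2f}")
```

Because the stand-in data is linearly separable, the held-out accuracy ends up high; on the real ILPD data the paper reports that logistic regression with feature selection performed best.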

5) Sgpt "Alamine Aminotransferase": having range 10 to 2000
6) Alkphos "Alkaline Phosphatase": having range 63 to 2110
7) Sgot "Aspartate Aminotransferase": having range 10 to 4929
8) TP "Total Proteins": having a range of 2.7 to 9.6
9) A/G Ratio "Albumin & Globulin Ratio": having a range of 0.3 to 2.8
10) ALB "Albumin": having range 0.9 to 5.5
11) "class" (selector): represented as the disease with value "1" meaning liver disease present, and "2" signifying liver disease not present.

Table II illustrates the details of the sample data for 10 patients, and the entire dataset plot is depicted in Fig. 4. The graphical representation of the dataset values shows how attributes like TB, DB, Sgpt, Alkphos, Sgot, TP, ALB, and A/G Ratio vary over different ranges. The values of Total Bilirubin vary between 0.4 and 75, Direct Bilirubin between 0.1 and 19.7, Alamine Aminotransferase between 10 and 2000, Aspartate Aminotransferase between 10 and 4929, Total Proteins between 2.7 and 9.6, Albumin & Globulin Ratio between 0.3 and 2.8, and Albumin between 0.9 and 5.5.

Experimental Setup: A real testbed setup was used for the implementation of the proposed method. The following hardware devices were used in the experiments.
- IoT Device: an IoT device used to record the health data as input.
- Gateway: SDN-configured controller with WiFi and LAN connectivity, smart mobiles, router, etc.
- Fog Server Manager / Master Node: Lenovo ThinkPad T440p, Intel Core i5 4th Gen 4300M CPU @ 2.6 GHz, 4 GB DDR3 RAM, Intel Integrated HD Graphics 4600, connectivity options including Wi-Fi 802.11 b/g; 64-bit Ubuntu 20.04, Apache Server 2.4.41, MySQL 8.0.23, Java SE Runtime Environment.
- Cloud: setup of a cloud server for heavy computing.
- Worker / Fog Nodes: Raspberry Pi 3B, Quad-Core 1.2 GHz Broadcom BCM2837 64-bit CPU, 1 GB LPDDR2 SDRAM, wireless LAN, Bluetooth Low Energy; Raspberry Pi OS with desktop (kernel version 5.10), Apache Server 2.4.41, JRE, and MySQL 8.0.23.

To test the performance of the proposed scheme at the fog node with CML, the dataset was divided into two parts. The first 70% of the dataset was used to train the model, and the remaining 30% was used to test the model parameters. The numerical parameters used are shown in Fig. 4. During the experiments, the performance of different characteristics was evaluated using the liver disease data within the proposed fog computing environment, such as time-based attributes, power consumption, and network bandwidth usage.

The time-based attributes, i.e., the time delay, execution time, and latency, were compared for different fog settings: with


Master Node only, one Fog Region, more than one Fog Region, and only cloud computing infrastructure. Fig. 5 shows the comparison of time delay in milliseconds for different fog computing scenarios. Fig. 6 shows the comparison of the execution time (ms) of different fog computing environment setups. Similarly, Fig. 7 shows the comparison of the latency (ms). The effect of network bandwidth usage is explored similarly to the time-based attributes in the fog environment setups, i.e., only master node, one fog region, more than one fog region, and cloud computing infrastructures. Fig. 8 shows the comparison of the network bandwidth consumption (kbps) of various fog computing setups. The results are analyzed to identify the bandwidth usage with different configurations of the proposed model. Power consumption is an important factor in the decision to adopt the fog computing approach with the cloud. We also performed experiments on the power consumption of the proposed model in different fog environments, i.e., a master node, one fog region, more than one fog region, and cloud computing infrastructures. Fig. 9 shows the comparison of the power consumption in watts (W) of different fog computing setups.

Fig. 5. Time Delay (ms) in different fog environment setups.

Fig. 6. Execution Time (ms) in different fog environment setups.

Fig. 7. Latency (ms) in different fog environment setups.

Fig. 8. Network bandwidth consumption (kbps) in different fog environment setups.

Fig. 9. Power consumption in different fog environment setups.


TABLE III
CONFIGURATION OF DIFFERENT DEVICES IN IFOGSIM

are customized based on the proposed model. iFogSim collaborates with CloudSim, a popular library for simulating cloud-based environments, to manage the resources. iFogSim is also used at the CloudSim layer to manage occurrences among fog computing modules. It allows the simulation of a wide range of applications, including software-defined networks and their interoperability with fog and cloud computing. It has an SDN support library to support the SDN-integrated fog-cloud computing environment. It requires various classes such as Sensor, Fog device, Tuple, Actuator, Monitoring edge, Resource management service, and Application to simulate the fog network. It also supports the evaluation of various parameters like energy, cost, network, and time.

We created the topology of the network as a software application to evaluate the proposed model using the simulation. Table III presents the experimental configuration settings of the simulation setup. We used an 11th Gen Intel Core i5-1135G7 @ 2.40 GHz × 8 with 16 GB RAM to run the simulation. Table III displays the MIPS (million instructions per second), upBw (upload bandwidth), downBw (download bandwidth), and RAM in GB (gigabyte) as the default configuration for fog and IoT devices. The managed and controlled nature of the simulated environment allows for the experimentation process. Moreover, simulation makes it easier to repeat experiments in various settings. The experiments gradually increased the total number of applications and user requests. Within a simulation, we measured delay, processing time, and energy consumption to assess the performance of the proposed algorithm. The most important parameters were delay and processing time because the fog computing environment is designed explicitly for time-sensitive applications.

Energy Consumption: All hosts are capable of determining the energy of the fog nodes within the allotted execution time. Equation (9) shows the current energy usage as $E_c$ and the energy consumed by the fog device as $E_{FN}$. According to resource consumption, $E_t$ represents the time of the most recent resource use, $E_{tc}$ represents the time currently needed to finish the job, and $P_h$ represents the most recent host power usage. The last utilization (U) is measured as $U = \min(H/A)$, where 'H' represents the host-allocated MIPS and 'A' represents the total MIPS of all devices.

$$E_{FN} = E_c + (E_t - E_{tc})\, P_h \qquad (9)$$

Fig. 10. Compare the results of energy consumption with different techniques.

Fig. 10 shows the energy usage of fog nodes. The x-axis displays the number of fog nodes used in the simulation, and the y-axis shows the amount of energy consumed. Fig. 10 also illustrates the energy used by NBIHA, SJF, FCFS, MPSO, TRAM, and the proposed model [31]. The comparison is made between techniques proposed by different authors and the proposed model. If the number of nodes is 10, the proposed model uses more energy than NBIHA and almost the same energy as the MPSO technique. The proposed model, in comparison to SJF, FCFS, and TRAM, has low energy consumption. When the number of fog nodes increases to 20, 30, or more, the energy consumption is less than that of the other techniques.

Network Usage: The effect of network bandwidth usage is explored similarly to the time-based attributes within the fog environment. For network bandwidth utilization, the $N_u$ parameter is used for evaluation. As the number of devices and the network usage rise, the network becomes congested. The proposed technique reduces network congestion by distributing the load on other fog nodes. Network usage $N_u$ is calculated using the network size $m$ and latency $La$ as shown in (10). It is computed for the $i$-th data item from the total data item set $T_d$; i.e., Latency $(La) = \sum_{i=1}^{m}$ (Expected execution time $-$ Actual execution time).

$$N_u = \sum_{i=1}^{T_d} La_i \times m \qquad (10)$$

Fig. 11. Compare the results of network usage with different techniques.

Fig. 11 represents the comparison of the network usage of the proposed model with other existing approaches. The x-axis shows the number of fog nodes used in the simulation, and the y-axis shows the amount of network bandwidth used. Fig. 11


shows the network usage by the SJF, TRAM, FCFS, MPSO, and proposed models. If the number of nodes is 10, then the proposed model uses more network bandwidth than SJF and almost the same as the other techniques. When the number of fog nodes increases to 20, 30, or more, the network usage is less than that of TRAM, MPSO, SJF, and FCFS.

Fig. 12. Compare the results of execution time with different techniques.

Fig. 13. Fraction of computing resources allocation concerning power consumption, time, and loss.

Fig. 14. Fraction of offloading data concerning power consumption, time, and loss.

Fig. 15. Time consumption for local and global model training concerning the fraction of offloading dataset.

Fig. 16. Compare results based on the response time.

Fig. 17. Compare results based on the time delay.
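Equations (9) and (10) translate directly into code; the parameter names below are illustrative, and the sample values are made up for the demonstration.

```python
def energy_consumed(E_c, E_t, E_tc, P_h):
    """E_FN = E_c + (E_t - E_tc) * P_h, per (9): prior energy usage
    plus the power drawn over the remaining execution window."""
    return E_c + (E_t - E_tc) * P_h

def network_usage(latencies, m):
    """N_u = sum_i La_i * m over the data items, per (10), where each
    La_i is (expected - actual) execution time and m is the network size."""
    return sum(la * m for la in latencies)

print(energy_consumed(E_c=10.0, E_t=5.0, E_tc=2.0, P_h=3.0))  # 19.0
print(network_usage([0.2, 0.5, 0.3], m=4))
```

Both metrics are additive, so in a simulation they can be accumulated per fog node and summed over nodes at the end of a run.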
Fig. 18. Compare results based on the processing time.

Execution Time: The amount of time a process spends carrying out a specific task, including time spent on run-time or device services on its behalf, is referred to as the task's execution time. It is the difference between a task's start and end times to complete the whole process. Fig. 12 represents


the comparison of the execution time of the proposed model with other existing approaches. The x-axis displays the number of fog nodes used in the simulation, and the y-axis shows the amount of execution time (in milliseconds). Fig. 12 shows the comparison of the execution time of SJF, TRAM, FCFS, and MPSO with the proposed model. If the number of nodes is 10, then the execution time is almost the same as with MPSO and less than that of TRAM, FCFS, and SJF. When the number of fog nodes increases to 20, 30, or more, the execution time of our model remains less than the others.

C. Result Analysis and Discussions

In this section, we discuss the results of the proposed model. The experimental testing was performed based on two scenarios. In the first scenario, the testing was done on the real testbed setup using the FogBus framework. The liver disease dataset was used to train the CML base model for allocating resources within the fog environment. The performance was evaluated based on different characteristics: time delay, execution time, latency, network bandwidth consumption, and power consumption. Each characteristic's performance was compared under different fog settings: with a master node only, one fog region, more than one fog region, and only cloud computing infrastructure. We compared the results with other authors' techniques used for resource allocation in fog computing environments. The results of different techniques like TACRM [40], Logistic Regression, and Multi-criteria [18] are compared on response time (ms) with the proposed model and shown in Fig. 16. The results are also compared based on the time delay (ms) with 10 application requests against the authors' proposed techniques like Resource ranking and Provisioning (ReRaP) algorithms [24], Resource-aware algorithms [41], Latency-Aware [42], Multi-criteria (MC) based, and Quality of Experience (QoE)-aware [43]. Fig. 17 shows the comparison concerning time delay with the proposed model. The processing time of the proposed model with 10 application requests is reduced as compared to the results of the ReRaP algorithms [24], Resource-Aware algorithms [41], and Latency-Aware [42] techniques used for resource allocation in fog computing. Fig. 18 shows the comparison of processing time (ms) with other authors' techniques.

In the second scenario, iFogSim was used to implement and evaluate the performance of the proposed model. The experiment was conducted on the simulation to measure processing time, delay, and energy consumption. The results were computed using different numbers of fog nodes in the simulation environments. The processing time, delay, and energy consumption results were compared with the other authors' techniques like NBIHA, SJF, FCFS, MPSO, and TRAM. Fig. 13 shows the CML model fraction for computing resource allocation concerning power consumption, time, and loss. The time consumption and loss are minimized when more resources are utilized for data training. On the other hand, power consumption increases with the amount of computing resources allotted. Fig. 14 shows that the power consumption and loss reduced as the amount of offloaded data rose, because the power consumption for the model training is more than the transference rate. However, the total time is raised concerning offloading data. Fig. 15 shows the time taken for the global and local training to train the CML model at the fog and cloud layers with respect to the fraction of the offloading dataset. Local model training is done at the fog & IoT level, and global model training is done at the cloud layer. The time consumption for local model training is less compared to global model training.

VI. CONCLUSION

This article proposes an optimal resource allocation technique using collaborative machine learning in an SDN-enabled fog computing environment. The fog servers are enabled to train the model locally first and, if overloaded, to use the cloud server to train the local model. It helps the fog environment to save time and energy for better performance. The time, energy consumption, and extra network usage are recovered using the proposed optimal resource allocation technique. The proposed system model has considered the uplinks and dataset offloading in the fog-cloud ecosystem. The implementation of the proposed work is performed using the FogBus framework and the iFogSim simulation for the fog computing framework to evaluate and validate the performance. The results are evaluated based on the network bandwidth usage, energy consumption, and time-based attributes like latency, delay, and execution time. Finally, the results are also compared with the other existing techniques. The experimental results show that the overall system's performance is superior in terms of various evaluation metrics. To overcome the limitations of the proposed work, testing can be done on a large-scale fog environment by adding more fog regions and devices at the fog and IoT layers. In future work, the proposed techniques will be tested on different datasets by adding more sensors and devices at the fog and IoT layers in a real testbed environment.

REFERENCES

[1] E. Liu, E. Effiok, and J. Hitchcock, "Survey on health care applications in 5G networks," IET Commun., vol. 14, no. 7, pp. 1073–1080, 2020.
[2] N. Kumar, S. Misra, and M. S. Obaidat, "Collaborative learning automata-based routing for rescue operations in dense urban regions using vehicular sensor networks," IEEE Syst. J., vol. 9, no. 3, pp. 1081–1090, Sep. 2015.
[3] R. Yu and P. Li, "Toward resource-efficient federated learning in mobile edge computing," IEEE Netw., vol. 35, no. 1, pp. 148–155, Jan./Feb. 2021.
[4] C. W. Zaw, S. R. Pandey, K. Kim, and C. S. Hong, "Energy-aware resource management for federated learning in multi-access edge computing systems," IEEE Access, vol. 9, pp. 34938–34950, 2021.
[5] L. Ni, J. Zhang, C. Jiang, C. Yan, and K. Yu, "Resource allocation strategy in fog computing based on priced timed petri nets," IEEE Internet Things J., vol. 4, no. 5, pp. 1216–1228, Oct. 2017.
[6] N. Kumar, J. J. P. C. Rodrigues, and N. Chilamkurti, "Bayesian coalition game as-a-service for content distribution in internet of vehicles," IEEE Internet Things J., vol. 1, no. 6, pp. 544–555, Dec. 2014.
[7] X. Xu et al., "Dynamic resource allocation for load balancing in fog environment," Wireless Commun. Mobile Comput., vol. 2018, pp. 1–15, 2018.
[8] C. Mouradian, D. Naboulsi, S. Yangui, R. H. Glitho, M. J. Morrow, and P. A. Polakos, "A comprehensive survey on fog computing: State-of-the-art and research challenges," IEEE Commun. Surv. Tut., vol. 20, no. 1, pp. 416–464, Jan.–Mar. 2018.
[9] J. Gao et al., "A blockchain-SDN-enabled internet of vehicles environment for fog computing and 5G networks," IEEE Internet Things J., vol. 7, no. 5, pp. 4278–4291, May 2020.


[10] R. Chaudhary, N. Kumar, and S. Zeadally, "Network service chaining in fog and cloud computing for the 5G environment: Data management and security challenges," IEEE Commun. Mag., vol. 55, no. 11, pp. 114–122, Nov. 2017.
[11] S. Agarwal, S. Yadav, and A. K. Yadav, "An architecture for elastic resource allocation in fog computing," Int. J. Comput. Sci. Commun., vol. 6, no. 2, pp. 201–207, 2015. [Online]. Available: http://csjournals.com/IJCSC/PDF6-2/31.Swati.pdf
[12] H. M. Birhanie, M. A. Messous, S. M. Senouci, E. H. Aglzim, and A. M. Ahmed, "MDP-Based resource allocation scheme towards a vehicular fog computing with energy constraints," in Proc. IEEE Glob. Commun. Conf., 2018, pp. 1–6.
[13] X. Wu, S. Zhao, R. Zhang, and L. Yang, "Mobility prediction-based joint task assignment and resource allocation in vehicular fog computing," in Proc. IEEE Conf. Wireless Commun. Netw., 2020, pp. 1–6.
[14] B. Jia, H. Hu, Y. Zeng, T. Xu, and Y. Yang, "Double-matching resource allocation strategy in fog computing networks based on cost efficiency," J. Commun. Netw., vol. 20, no. 3, pp. 237–246, 2018.
[15] L. Zhang and J. Li, "Enabling robust and privacy-preserving resource allocation in fog computing," IEEE Access, vol. 6, pp. 50384–50393, 2018.
[16] J. Jiang, L. Tang, K. Gu, and W. Jia, "Secure computing resource allocation framework for open fog computing," Comput. J., vol. 63, no. 4, pp. 567–592, 2020.
[17] H. Bashir, S. Lee, and K. H. Kim, "Resource allocation through logistic regression and multicriteria decision making method in IoT fog computing," Trans. Emerg. Telecommun. Technol., vol. 33, no. 2, Art. no. e3824, 2019.
[18] R. K. Naha and S. Garg, "Multi-criteria-based dynamic user behaviour aware resource allocation in fog computing," vol. 1, no. 1, pp. 1–31, 2019. [Online]. Available: http://arxiv.org/abs/1912.08319
[19] Q. Li, J. Zhao, Y. Gong, and Q. Zhang, "Energy-efficient computation offloading and resource allocation in fog computing for internet of everything," China Commun., vol. 16, no. 3, pp. 32–41, 2019.
[20] R. Bukhsh, N. Javaid, S. Javaid, M. Ilahi, and I. Fatima, "Efficient resource allocation for consumers' power requests in cloud-fog-based system," Int. J. Web Grid Serv., vol. 15, no. 2, pp. 159–190, 2019.
[21] L. H. Kazem, "Efficient resource allocation for time-sensitive IoT applications in cloud and fog environments," Int. J. Recent Technol. Eng., vol. 8, no. 3, pp. 2356–2363, 2019.
[22] A. Fatima, N. Javaid, M. Waheed, T. Nazar, S. Shabbir, and T. Sultana, "Efficient resource allocation model for residential buildings in smart grid using fog and cloud computing," Adv. Intell. Syst. Comput., vol. 773, pp. 289–298, 2019.
[23] X. Chen, Y. Zhou, L. Yang, and L. Lv, "User satisfaction oriented resource allocation for fog computing: A mixed-task paradigm," IEEE Trans. Commun., vol. 68, no. 10, pp. 6470–6482, Oct. 2020.
[24] R. K. Naha, S. Garg, A. Chan, and S. K. Battula, "Deadline-based dynamic resource allocation and provisioning algorithms in fog-cloud environment," Future Gener. Comput. Syst., vol. 104, pp. 131–141, 2020. [Online]. Available: https://doi.org/10.1016/j.future.2019.10.018
[25] S. K. Mani and I. Meenakshisundaram, "Improving quality-of-service in fog computing through efficient resource allocation," Comput. Intell., vol. 36, no. 4, pp. 1527–1547, 2020.
[26] K. Gu, L. Tang, J. Jiang, and W. Jia, "Resource allocation scheme for community-based fog computing based on reputation mechanism," IEEE Trans. Computat. Social Syst., vol. 7, no. 5, pp. 1246–1263, Oct. 2020.
[27] S. Tong, Y. Liu, M. Cheriet, M. Kadoch, and B. Shen, "UCAA: User-centric user association and resource allocation in fog computing networks," IEEE Access, vol. 8, pp. 10671–10685, 2020.
[28] X. Huang, W. Fan, Q. Chen, and J. Zhang, "Energy-efficient resource allocation in fog computing networks with the candidate mechanism," IEEE Internet Things J., vol. 7, no. 9, pp. 8502–8512, Sep. 2020.
[29] Z. Chang, L. Liu, X. Guo, and Q. Sheng, "Dynamic resource allocation and computation offloading for IoT fog computing system," IEEE Trans. Ind. Inform., vol. 17, no. 5, pp. 3348–3357, May 2021.
[30] B. Cao, Z. Sun, J. Zhang, and Y. Gu, "Resource allocation in 5G IoV architecture based on SDN and fog-cloud computing," IEEE Trans. Intell. Transp. Syst., vol. 22, no. 6, pp. 3832–3840, Jun. 2021.
[31] H. Rafique, M. A. Shah, S. U. Islam, T. Maqsood, S. Khan, and C. Maple, "A novel bio-inspired hybrid algorithm (NBIHA) for efficient resource management in fog computing," IEEE Access, vol. 7, pp. 115760–115773, 2019.
[32] Y. Gao, L. Liu, X. Zheng, C. Zhang, and H. Ma, "Federated sensing: Edge-cloud elastic collaborative learning for intelligent sensing," IEEE Internet Things J., vol. 8, no. 14, pp. 11100–11111, Jul. 2021.
[33] R. S. Bali and N. Kumar, "Secure clustering for efficient data dissemination in vehicular cyber–physical systems," Future Gener. Comput. Syst., vol. 56, pp. 476–492, 2016.
[34] N. Kumar, R. Iqbal, S. Misra, and J. J. Rodrigues, "Bayesian coalition game for contention-aware reliable data forwarding in vehicular mobile cloud," Future Gener. Comput. Syst., vol. 48, pp. 60–72, 2015.
[35] E. Ahvar, S. Ahvar, S. M. Raza, J. Manuel Sanchez Vilchez, and G. M. Lee, "Next generation of SDN in cloud-fog for 5G and beyond-enabled applications: Opportunities and challenges," Network, vol. 1, no. 1, pp. 28–49, 2021. [Online]. Available: https://www.mdpi.com/2673-8732/1/1/4
[36] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proc. Artif. Intell. Statist., 2017, pp. 1273–1282.
[37] S. Tuli, R. Mahmud, S. Tuli, and R. Buyya, "FogBus: A blockchain-based lightweight framework for edge and fog computing," J. Syst. Softw., vol. 154, pp. 22–36, 2019.
[38] J. Singh, S. Bagga, and R. Kaur, "Software-based prediction of liver disease with feature selection and classification techniques," Procedia Comput. Sci., vol. 167, pp. 1970–1980, 2020. [Online]. Available: https://doi.org/10.1016/j.procs.2020.03.226
[39] D. Dua and C. Graff, "UCI machine learning repository," 2019. [Online]. Available: http://archive.ics.uci.edu/ml
[40] W. B. Daoud, M. S. Obaidat, A. Meddeb-Makhlouf, F. Zarai, and K. F. Hsiao, "TACRM: Trust access control and resource management mechanism in fog computing," Hum.-centric Comput. Inf. Sci., vol. 9, no. 1, pp. 1–18, 2019. [Online]. Available: https://doi.org/10.1186/s13673-019-0188-3
[41] M. Taneja and A. Davy, "Resource aware placement of IoT application modules in fog-cloud computing paradigm," in Proc. IFIP/IEEE Symp. Integr. Netw. Serv. Manage., 2017, pp. 1222–1228.
[42] A. R. Benamer, H. Teyeb, and N. B. Hadj-Alouane, "Latency-aware placement heuristic in fog computing environment," in Proc. Move Meaningful Internet Syst. OTM Confederated Int. Conf., 2018, pp. 241–257.
[43] R. Mahmud, S. N. Srirama, K. Ramamohanarao, and R. Buyya, "Quality of experience (QOE)-aware placement of applications in Fog computing environments," J. Parallel Distrib. Comput., vol. 132, pp. 190–203, 2019.
