
A Collaborative Computation and Offloading for Compute-Intensive and Latency-Sensitive Dependency-Aware Tasks in Dew-Enabled Vehicular Fog Computing: A Federated Deep Q-Learning Approach

Kaushik Mishra, Member, IEEE, Goluguri N. V. Rajareddy, Umashankar Ghugar, Member, IEEE, Gurpreet Singh Chhabra, and Amir H. Gandomi, Senior Member, IEEE

Abstract—The demand for vehicular networks is growing rapidly, as they support ever-advancing capabilities and qualities of vehicle services. However, a vehicular network cannot solely carry out latency-sensitive and compute-intensive tasks, as the slightest delay may cause a catastrophe. Therefore, Fog computing can be a viable solution to address the aforementioned challenges. Moreover, it complements Cloud computing, as it reduces the incurred latency and ingress traffic by shifting the computing resources to the edge of the network. This work investigates task offloading methods in Vehicular Fog Computing (VFC) networks and proposes a Federated learning-supported Deep Q-Learning-based (FedDQL) technique for optimal offloading of tasks in a collaborative computing paradigm. The proposed offloading method in the VFC network performs computations, communications, offloading, and resource utilization considering the latency and energy consumption. The trade-offs between latency and computing and communication constraints were considered in this scenario. The FedDQL scheme was validated on dependent task sets to analyze its efficacy. Finally, the results of extensive simulations provide evidence that the proposed method outperforms others with average improvements of 49%, 34.3%, 29.2%, 16.2% and 8.21%, respectively.

Index Terms—Deep reinforcement learning, federated learning, mobile fog computing, Q-learning, vehicular fog computing, task dependency, task offloading.

Manuscript received 27 February 2023; revised 5 May 2023; accepted 25 May 2023. Date of publication 5 June 2023; date of current version 12 December 2023. Open access funding provided by Óbuda University. The associate editor coordinating the review of this article and approving it for publication was N. Kumar. (Corresponding author: Amir H. Gandomi.)

Kaushik Mishra, Goluguri N. V. Rajareddy, Umashankar Ghugar, and Gurpreet Singh Chhabra are with the Department of Computer Science and Engineering, GITAM (Deemed to be University), Visakhapatnam 530045, India (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).

Amir H. Gandomi is with the Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia, and also with the University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary (e-mail: [email protected]).

Digital Object Identifier 10.1109/TNSM.2023.3282795

© 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

I. INTRODUCTION

VEHICULAR networks play a consequential part in smart transportation systems due to the rapid evolvement of the Internet of Things (IoT). These vehicular networks facilitate numerous advanced yet complex applications, such as automatic driving, crash detection, AR&VR-enabled intelligent applications, and other interactive modules for passengers. These applications need intensive computations on resources and interactive communications, which are critical challenges for vehicular networks requiring rich, complex services. Besides, the ingress traffic on the road makes the computations/communications difficult with the limited capabilities of vehicles. The applications on vehicles are required to handle latency-sensitive data without delay, for which the network connectivity must be stable and accelerated to handle such tasks within a deterministic span of time. However, the slightest delay in communication between vehicles may cause a catastrophe. Therefore, the vehicular network cannot solely be responsible for these latency-sensitive and compute-intensive tasks; thus, a high-end computing paradigm-based vehicular network is required.

Cloud computing has been viewed as a viable solution to address these issues for vehicular networks [1], [2]. In this Cloud-based vehicular network, tasks are offloaded to the Cloud for computation and storage due to the high computing capabilities of Cloud VMs. However, the physical gap between the vehicles and the Cloud server results in a significant latency gap, which reduces the performance and efficiency of task offloading. Thus, a decentralized architecture and full-fledged paradigm is required that reduces the incurred latency and copes with the compute- and time-intensive and latency-sensitive tasks with complex requirements, complementary to Cloud computing.

Fog-based vehicular computing (VFC) is a cutting-edge computing paradigm to address the innate loopholes of the Cloud-based vehicular network [3], [4]. To tackle the requirements of complex tasks, VFC has been envisioned as a potential solution, providing both on-demand computation and communication resources. Integrating Fog computing with Cloud computing considerably reduces the latency and traffic density as well as improves the users' response time.

The computation and communication at/to the Fog computing layer work in parallel with the services provided at the edge of the network through the Mobile Edge Computing (MEC) or Mobile Fog Computing (MFC) servers equipped in each Road Side Unit (RSU) and Access Point (AP) [5]. However, these RSUs have much less capacity in terms of power, storage and computation, and the computation of high-intensive tasks causes high energy consumption at RSUs. In the VFC network, the Fog servers are either coupled with RSUs or deployed as a separate entity. In this work, we consider the decoupled mode of Fog servers due to the flexibility it offers in computing tasks with disparate requirements. The Fog layer encompasses both homogeneous and heterogeneous Fog nodes to cope with compute-intensive and latency-sensitive tasks. The computational capabilities of Fog nodes change depending upon the vehicular specifications. In this architecture, each RSU keeps in touch with all the Fog servers and their corresponding Fog nodes. At a given time, one Fog server may be connected to several RSUs, but each RSU is only ever connected to one Fog server. An RSU either computes a request locally or offloads it to the Fog server for processing when it receives one.

A. Problem Definition

The VFC imposes several challenges. The first and most challenging issue is unstable network connectivity. Internet connectivity is an indispensable part of vehicular networks, needed for transmitting any request, and unstable network connectivity may lead to a catastrophic situation. Therefore, a Dew-enabled Internet of Things is facilitated in this architecture.

The second most prominent challenge is the high consumption of energy by the RSUs [6]. A considerable amount of energy is consumed by the RSUs during the offloading of requests and the processing of some requests locally. The incurred consumption depends on the quality of the medium connected to the Fog servers and the arrival rate of the requests at the RSUs. Although an RSU processes some requests locally, it also consumes energy due to the disparate specifications and requirements of requests. Hence, there is a need for an optimal association among RSUs and Fog servers to reduce energy consumption.

A third challenge is dispersing the loads uniformly across Fog nodes to improve the QoS. Khattak et al. [7] and Kai et al. [8] proposed integrating the principles of Fog computing with the vehicular ad-hoc network. However, load balancing among Fog nodes in terms of QoS was not considered. Though the loads are of different configurations, they need to be offloaded uniformly across Fog nodes to prevent overloaded or underloaded conditions. The computation of loads depends on the arrival rate and requirements of each request at the corresponding RSU. The overloaded/underloaded condition arises due to the different arrival rates at the RSUs. In order to increase QoS (resource usage), collaboration between RSUs and Fog servers must be established to significantly reduce the load imbalance.

The system costs pose another challenge in these vehicular networks. The communication and computation costs together make up the system costs. Earlier, RSUs were powered by renewable energy for the offloading, and the MFC servers were generating profits from them. However, since the existing resources have become scarcer due to the rapid ingress of traffic along with other challenges nowadays, the vehicles can lease their resources to facilitate task offloading.

B. Motivation

Deep learning (DL), a subset of the Machine Learning (ML) domain, is a promising solution to address complex problems. AI is an enabler for DL to mimic the learning process. Deep Reinforcement Learning (DRL) combines DL and RL, making use of DL's noncognitive behaviour and RL's ability to make decisions [9]. DRL interacts with the vehicles directly and obtains the optimal scheduling/offloading strategy mapping. Based on the existing literature, several works have used DRL to find the optimal offloading decision. For instance, Yao et al. [9] proposed a hybrid resource allocation strategy for VFC using reinforcement learning with heuristic information. Qu et al. [10] proposed a DMRO algorithm integrating DNN with Q-learning and meta-learning approaches to identify the optimal offloading decisions. Ning et al. [11] devised a resource management algorithm using DRL for VEC. To meliorate the effectiveness of vehicular networks (VNs), Maan and Chaba [12] devised a strategy using a Deep Q-network for offloading. To optimize the total utility of VEC, Liu et al. [13] implemented a combined offloading strategy utilizing Q-learning and DRL together. He et al. [14] proposed an offloading method using DRL to improve the Quality of Experience (QoE) for the Internet of Vehicles. However, load balancing was not considered by the aforementioned strategies while allocating the tasks from vehicles, leading to insignificant resource utilization and latency overhead. Traditional ML methods utilize a central server to collect and process the data. However, the central server gets overloaded with computation and communication overheads. Because data privacy is also an indispensable part of data acquisition, the efficiency and accuracy of ML techniques largely depend on the data's size and the central server's capacity, which exhibit challenges in achieving optimal and accurate results with data confidentiality. In order to address the aforementioned issues of traditional ML and DRL techniques, this work proposes a Federated learning-supported Deep Q-learning offloading strategy that facilitates the uploading of data collaboratively into a global model without sharing the raw data [15].

Our work aimed to analyze the task-offloading strategy in two scenarios: (1) considering cooperative and non-cooperative MFC servers, and (2) considering task dependency. In the first case, when a vehicle passes through an RSU, the tasks related to the respective vehicle are independently computed by the MFC server associated with that RSU. On the contrary, when many vehicles offload their tasks to the RSU, the RSU chooses whether to compute such tasks locally or transfer them to the next RSU (located in the direction of the moving vehicle) deployed at Fog servers for computation. Hence, the computation is performed cooperatively. In the second case, we consider dependent tasks for computation.

In vehicular networks, vehicles generate a huge number of tasks with heterogeneous requirements that need parallel execution and, hence, are divided into many sub-tasks. Each sub-task is then processed by different computing nodes based on the offloading strategy to reduce both the computation and response time. Figure 1(a) represents the inter-sub-task dependency of a task through a Directed Acyclic Graph (DAG), where the task is divided into seven sub-tasks. Figure 1(b) shows the fine-grained task scheduling in the Vehicle-VFC-Cloud collaborative framework.

Fig. 1. (a) Representation of a task through a DAG; (b) Fine-grained task offloading.

C. Contribution

As mentioned, this paper proposes a Dew-enabled Federated learning-supported Deep Q-learning-based (FedDQL) offloading method for compute-intensive and latency-sensitive dependent tasks in a collaborative DIoT-Edge-Fog-Cloud framework. This strategy collaboratively optimizes the latency and computations for its intended tasks by reducing the energy consumption of MFC servers. Moreover, the computation loads are uniformly distributed across Fog nodes to prevent load imbalance and, thus, maximize resource utilization. The Fog nodes in the Fog layer are grouped into clusters, each of which contains a homogeneous Fog server, a heterogeneous Fog server, Fog nodes, and numerous RSUs. In addition, each cluster has a DQL agent that is trained to determine the offloading decisions. A collaborative framework is proposed for task offloading in Vehicular Fog Computing. Intermediate nodes, such as routers, gateways, or switches, are deployed on the edge of the network along with a classifier to classify the incoming requests and offload them to the target layers for computation. The Fuzzy classifier determines the target layer for offloading based on the tasks' requirements, including network bandwidth, size in MI, CPU, resource utilization, etc.

The contributions of this paper are delineated as follows:
• Modelling the framework with the principles of Dew computing in order to enable the Internet of Vehicles for uninterrupted task offloading;
• Proposing a collaborative Dew-enabled IoT-Edge-Fog-Cloud computing framework for efficient offloading of tasks, where a DAG depicts the interdependencies among sub-tasks;
• Proposing a Federated learning-supported Deep Q-learning algorithm for dependency-aware task offloading;
• Considering load-balancing, resource utilization, energy consumption, response time (latency), and deadline as QoS metrics to appraise the efficacy of the implemented method; and
• Conducting an empirical assessment of the proposed offloading strategy against other existing methods.

The rest of the paper is organized as follows. Section II reviews the related research. Section III describes the computational models and problem formulation for compute-intensive and latency-sensitive dependency-aware task offloading. The proposed Federated learning-supported Deep Q-learning offloading strategy is elucidated in Section IV. Section V provides an empirical assessment and analysis of the considered QoS parameters for the proposed strategy. Finally, Section VI concludes the paper and highlights future research directions.

II. RELATED RESEARCH

The VANET model has been extended to Vehicular Fog Computing (VFC) by enabling the core principles of Fog computing for the effective offloading of latency-sensitive tasks to compute-intensive vehicles. It facilitates reducing energy consumption, response time, load imbalance, and latency while improving resource utilization and, thus, performance. The related research boils down to two primary aspects: (1) Fog computing in vehicular networks, and (2) deep learning for task offloading. The existing works for both of these aspects are briefly summarized as follows.

A. Fog-Assisted Vehicular Networks

A great deal of work has addressed the issues related to vehicular networks using the appealing characteristics of Fog computing in collaboration with the Cloud [16], [17], [18], [19]. For instance, to reduce the total response time of delay-sensitive tasks, the authors of [16] proposed a heuristic-based greedy scheduling algorithm for a three-layered VFC architecture for offloading tasks. Next, a Fog-assisted vehicular computing framework is presented in [17], which is utilized as a testbed for controlling the ingress traffic on the roads. The authors stated that the computations are performed by the RSUs deployed along the road with the assistance of mobile vehicles to meet the demand of the ever-increasing growth of compute-intensive tasks by smart vehicles. To identify congestion among vehicles, [18] proposed a Fog-enabled framework to process the sensed data locally by optimizing the communication. To preserve the privacy of VFC, the authors of [19] introduced a real-time framework called GALAXY, supported by Federated learning without the aid of the Cloud.

B. Deep Learning for Task Offloading

Aiming to reduce the transmission power and latency, the authors of [20] devised a collaborative optimization technique to allocate tasks to Fog nodes in an IoT-Fog-Cloud architecture. For efficient resource management, [21] proposed a two-fold technique for allocating tasks and managing vehicular resources optimally in a VFC. In the latter work, contract theory and two-sided matching games were formulated for resource management and task allocation, respectively.

Using an actor-critic-based DRL algorithm, the authors of [22] devised an offloading algorithm considering priority in VFC by predicting the dynamic pricing of vehicles. In [4], the authors developed a resource allocation algorithm for parked and slow-moving vehicles in VFC, aiming to reduce parked-vehicle service latency. The authors used a recurrent neural network (RNN) combined with DRL and, later, introduced a heuristic strategy to expedite the convergence rate of RL algorithms. To lessen VM migration and reduce the power consumption during migration, the authors of [23] devised a mechanism called EnTruVe (ENergy and TRUst-aware VM allocation in VEhicular Fog Computing). In addition, an analytical hierarchy process was used to determine an optimal VM based on trust and latency, providing a large pool of vehicular fogs to satisfy the requirements of VMs. In order to minimize the computation-to-communication cost and latency, the authors of [24] proposed a Fuzzy RL-based offloading algorithm that elevates the learning process and maximizes the long-term reward over the Q-learning algorithm. The authors of [25] introduced a dynamic clustering-enabled load-balancing approach. The technique considers the speed, direction, and position of moving vehicles to form groups that make up a pool of computing resources. In addition, it also employs a capacity-based load-balancing approach for distributing loads between vehicles and among the clusters of vehicles. To decide the target layers for offloading, the authors of [26] proposed a Federated learning-supported DRL mechanism for VFC, which learns through the learning model shared among Fog nodes and vehicles, resulting in fast convergence. Network overhead is also significantly reduced due to the implementation of Federated learning, and thus, the privacy of the users' data is also preserved.

Fig. 2. Dew-enabled task offloading model.

III. SYSTEM MODELS AND PROBLEM FORMULATION

This section first introduces a Dew-enabled VFC network model for task offloading. Next, different system models, including the vehicular task, communication, and computation models, are elaborated. Furthermore, the problem formulation with key objectives is discussed.

A. Dew-Enabled VFC Network Model

The proposed Dew-enabled Vehicular Fog Computing model, a collaborative task offloading network/communication framework, is illustrated in Fig. 2. The model consists of five layers: the bottom IoT layer, Dew layer, Edge layer, and Fog layer, followed by the Cloud layer. The roles and significance of each layer in task offloading in a vehicular network are described as follows.

At the ground layer, the IoT layer consists of numerous Internet-enabled physical devices (vehicles), which generate requests of disparate specifications. Vehicles also contain local processing units. All communications take place through a stable Internet connection. However, a stable connection around the clock seems impractical, and a catastrophe may arise due to unstable connectivity. Therefore, the core principles of Dew computing are embedded for uninterrupted communication during unstable connectivity. Dew servers consist of local host machines that contain a Dew DBMS, a Dew Client program, and Dew interactive sites. Dew servers keep backups in their storage and provide services when unstable connectivity occurs. Meanwhile, if the user updates any data, the Dew server keeps it in the Dew DBMS and reflects it on the user program when the connectivity becomes stable. Therefore, these Dew-enabled IoT devices are deployed in the geographical region for uninterrupted services for vehicular networks.
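The store-and-forward behaviour of the Dew server described above can be summarized in a short sketch. The following Python fragment is illustrative only and is not taken from the paper; the class and method names (DewServer, sync, and so on) are hypothetical, and it assumes only that buffered updates can be replayed upstream in FIFO order once connectivity returns.

```python
from collections import deque

class DewServer:
    """Minimal sketch of the Dew-layer behaviour described above:
    buffer updates locally while connectivity is down, then replay
    them to the upstream (Fog/Cloud) layer once it returns."""

    def __init__(self):
        self.dew_dbms = {}       # local backup store (the Dew DBMS)
        self.pending = deque()   # updates awaiting synchronization
        self.online = False

    def write(self, key, value):
        # Every user update is kept in the Dew DBMS first ...
        self.dew_dbms[key] = value
        # ... and queued so it can be reflected upstream later.
        self.pending.append((key, value))
        if self.online:
            self.sync()

    def read(self, key):
        # Served from the local backup, so reads survive outages.
        return self.dew_dbms.get(key)

    def set_connectivity(self, online):
        self.online = online
        if online:
            self.sync()

    def sync(self):
        # Replay buffered updates in FIFO order while the link is stable.
        while self.pending:
            key, value = self.pending.popleft()
            print(f"synced {key}={value} to upstream layer")

# Usage: offline writes are buffered, then flushed on reconnection.
dew = DewServer()
dew.write("vehicle_42/position", (17.7, 83.3))
dew.set_connectivity(True)   # triggers synchronization
```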

Vehicles with Internet connectivity transmit requests to the Cloud or Fog layer to be processed via the Edge layer. The Edge layer is made up of plenty of intermediary nodes, including routers, switches, or gateways, that are used to direct the requested packets to the following layer immediately. Additionally, it uses a Fuzzy logic classifier to categorize the tasks in order to identify the target offloading layers based on the requirements of each task. However, this layer exhibits some challenges in terms of security, cost, connectivity, scalability and complexity [8], [9], [11]. This concept is theoretical and close to impossible to implement fully; the layer is a replica of how the computations can be done at the edge of the network via intermediate nodes. The requests are then sent to the Cloud or Fog layer, depending on which one will handle them.

The Fog layer is an amalgamation of different Fog nodes with homogeneous and heterogeneous resources. These Fog nodes are computing nodes in the form of virtual machines (VMs) that process requests either collaboratively or locally in this layer. Furthermore, these computing nodes are clustered together based on computing capacity into two groups: Collaborative Fog Nodes (CFNs), a collection of both homogeneous and heterogeneous resources able to handle tasks (or requests) with disparate specifications; and Local Fog Nodes (LFNs), a combination of only homogeneous resources capable of handling tasks with minimum completion time or requirements. Moreover, this layer also deploys RSUs and MFC servers to communicate among vehicles. Each RSU is embedded with an MFC server and gathers service requests from vehicles within its range, while each MFC server processes tasks and subtasks through offloading. Here, the nodes are geographically distributed, facilitating the execution of the tasks in a distributed way, in contrast to the centralized architecture of the Cloud datacentre.

Tasks with high requirements, or tasks taking longer to execute in the Fog layer, are offloaded to the Cloud for processing and storage. The Cloud datacentre encompasses the physical machines (or hosts) and the VMs carved out of hosts through virtualization technology. The VMs are the computing entities enabling the scheduling and execution of high-end tasks. The centralized architecture of the Cloud datacentre results in high response time and latency overhead; the Fog layer is introduced in the proposed framework to minimize these issues.

We assume that the tasks are dependent in nature and, hence, partitioned into subtasks. Furthermore, we assume that N compute-intensive and latency-sensitive tasks are connected with each vehicle and will be scheduled and computed within a deterministic completion time. The set of RSUs is represented as $R = \{1, 2, 3, \ldots, r, \ldots, R\}$, and the group of vehicles is referred to as $V = \{1, 2, 3, \ldots, v, \ldots, V\}$. The variable $T_n^v$ refers to the $n$th task of the vehicle $v$.

B. Vehicular Task Model

This section introduces dependency-aware task offloading for the vehicular task model. Interdependency among tasks is considered, as the tasks are not atomic. In other words, when a task is broken down into subtasks, the processing of each subtask is dependent upon its previous subtask's execution. The processing of each subtask is based on the requirements of that subtask, which is offloaded to the respective target layer for processing (locally (vehicle) or via Fog or Cloud). The subtasks of a task are represented through a Directed Acyclic Graph (DAG) modelled as $G = (\tau, \varepsilon)$, where $\tau$ is the array of subtasks and $\varepsilon$ is the set of directed edges depicting interdependencies between two subtasks. Let $I = |\tau|$ represent the total set of subtasks of a task $T_n^v$. In the DAG, a vertex $T_{n,i}^v$ denotes the $i$th subtask of the $n$th task of the vehicle $v$, and a directed edge $(T_{n,i}^v, T_{n,j}^v)$ indicates the interdependency between the $i$th subtask and the $j$th subtask of $T_n^v$. The $j$th subtask can be executed only if the $i$th subtask has been completed, where $i, j \in I$.

As shown in Fig. 1, the task $T_2^3$ $(n = 1, 2, \ldots, N;\ v = 1, 2, \ldots, V)$ is broken down into seven subtasks, $T_{2,1}^3, T_{2,2}^3, T_{2,3}^3, T_{2,4}^3, T_{2,5}^3, T_{2,6}^3$, and $T_{2,7}^3$, of the form $T_{2,i}^3;\ i = 1, 2, \ldots, 7$. The subtask $T_{2,1}^3$ is the entry subtask and executes first, and $T_{2,7}^3$ is the exit subtask and executes at the end, once all its predecessor subtasks have been executed. Each subtask $T_{n,i}^v$ is defined with four attributes, i.e., $T_{n,i}^v = \langle L_{n,i}^v, D_{n,i}^v, d_{n,i}^v, C_{n,i}^v \rangle$, where $L_{n,i}^v$ denotes the length of the subtask in MI, $D_{n,i}^v$ is the latency rate of the subtask, $d_{n,i}^v$ is the deadline (hard, soft, or firm) of the subtask, and $C_{n,i}^v$ is the number of resources required to execute $T_{n,i}^v$.

In our approach, a task can be categorized into three classes, namely crucial, high-end, and low-end tasks. We assume that the crucial tasks are those which contain some crucial information about the vehicle and, hence, can be computed in the vehicle itself. High-end tasks are associated with some priority and deadline. If a high-end task does not meet its deadline, the outcome must not be considered, or the task is considered as failed. Low-priority tasks also have some priority and deadlines. Unlike high-priority tasks, if a low-priority task does not meet its deadline, the outcome is still usable, but the outcome will not be considered if the execution time increases further. If the execution of a task takes longer than usual, it is forwarded to the upper layer for processing. However, the vehicles must ensure the execution of high-priority tasks over low-priority tasks. A utility in terms of a reward or penalty is associated with both high-priority and low-priority tasks upon meeting or missing deadlines. For the high-priority task, a log function is used to define the utility of a subtask, given as:

$$\mu_n^H = \begin{cases} \log(1 + \tau_n - t_{C_n}), & t_{C_n} \le \tau_n \\ -\Gamma^H, & t_{C_n} > \tau_n \end{cases} \tag{1}$$

where $t_{C_n}$ denotes the completion time of a subtask $T_{n,i}^v$; and $-\Gamma^H$ implies a penalty for not being able to meet the deadline.

For the low-priority task, if the completion time of the executing subtask is less than the defined deadline, then the utility is a reward. Otherwise, the utility is a penalty, expressed as:

$$\mu_n^L = \begin{cases} \Gamma^L, & t_{C_n} \le \tau_n \\ \Gamma^L e^{-c(t_{C_n} - \tau_n)}, & t_{C_n} > \tau_n \end{cases} \tag{2}$$

where $\Gamma^L$ defines a positive reward (constant); and $c$ is a constant greater than zero.

In order to offload a subtask to any other vehicle (call it a service vehicle), the task vehicle has to pay a unit price denoted as $\rho_n$. Hence, the computation size of the subtask, $C_n$, can be estimated as $C_n = f_n t_n$, where $f_n$ is the frequency assigned to the service vehicle, and $t_n$ denotes the computing time of the subtask. Finally, the utility of the task vehicle can be expressed as:

$$Util_n = \mathbb{1}(T_n = T_H)\,\mu_n^H + \mathbb{1}(T_n = T_L)\,\mu_n^L - \rho_n C_n \tag{3}$$

where $\mathbb{1}(\cdot)$ is an indicator function [28]; and $T_H$ and $T_L$ denote the high-priority and low-priority subtasks, respectively.
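To make the task model concrete, the following minimal Python sketch encodes the seven-subtask DAG of Fig. 1(a) and the utilities of Eqs. (1)-(3). It is our own illustration, not the authors' implementation; the constants gamma_h, gamma_l and c, and all numeric inputs, are placeholder values.

```python
import math

# Sketch of the DAG of Fig. 1(a): an edge i -> j means subtask j can
# run only after subtask i completes (indices follow T^3_{2,1..7}).
dag = {1: [2, 3, 4], 2: [5], 3: [5, 6], 4: [6], 5: [7], 6: [7], 7: []}

def topological_order(dag):
    """Return an execution order that respects the dependencies."""
    indeg = {v: 0 for v in dag}
    for v in dag:
        for w in dag[v]:
            indeg[w] += 1
    ready = [v for v in dag if indeg[v] == 0]   # entry subtask(s)
    order = []
    while ready:
        v = ready.pop()
        order.append(v)
        for w in dag[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return order

# Utilities of Eqs. (1)-(3); gamma_h, gamma_l, c are placeholders.
def utility_high(completion, deadline, gamma_h=5.0):
    # Eq. (1): log reward if the deadline is met, fixed penalty otherwise.
    if completion <= deadline:
        return math.log(1 + deadline - completion)
    return -gamma_h

def utility_low(completion, deadline, gamma_l=2.0, c=0.5):
    # Eq. (2): constant reward, exponentially decaying value when late.
    if completion <= deadline:
        return gamma_l
    return gamma_l * math.exp(-c * (completion - deadline))

def task_vehicle_utility(is_high, completion, deadline, price, cycles):
    # Eq. (3): indicator-selected utility minus the payment rho_n * C_n.
    mu = (utility_high(completion, deadline) if is_high
          else utility_low(completion, deadline))
    return mu - price * cycles

print(topological_order(dag))   # e.g. [1, 4, 3, 6, 2, 5, 7]
print(task_vehicle_utility(True, 8.0, 10.0, 0.01, 50.0))
```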

C. Classification of Offloading Layers Using Fuzzy Logic

Fuzzy logic is used to determine the target offloading layers for the tasks generated by vehicles. It is implemented to minimize the latency incurred when executing high-end and low-end tasks at one layer. Specifically, Fuzzy logic offloads different groups of tasks requiring disparate computational capabilities of resources to different target layers, which prevents starvation and aging. By this method, if the execution of a task is taking longer than usual, the respective task is forwarded to the next adjacent layer for execution. It is manifested through three modules: Fuzzy inputs, Fuzzification, and Defuzzification. Fuzzy inputs are the essential parameters of the Fuzzy logic model that draw the inferences, which include Task size (MI), Network Bandwidth (Mbps), Latency-sensitivity, and Deadline-constrained. These inputs are characterized by the High (h), Medium (m), and Low (l) lexical parameters representing the heterogeneity and dynamicity of tasks. Fuzzification is the process of drawing inferences based on Fuzzy inputs with the aid of an inference engine. The Fuzzy Knowledge Base is a repository consisting of inferences in terms of Fuzzy rules. Fuzzy inputs are processed based on the defined membership functions. Defuzzification is the process of transforming the Fuzzy inferences into consolidated values based on the membership functions. Table I delineates the required output (target offloading layers) based on the Fuzzy inputs.

TABLE I
TARGET OFFLOADING LAYERS AS INFERENCES USING FUZZY LOGIC
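Since the paper specifies the Fuzzy inputs and their lexical terms but the full rule base resides in the Fuzzy Knowledge Base of Table I, the sketch below approximates the classifier with crisp thresholds and a handful of hand-written rules. The thresholds and rules are assumptions made purely for illustration, not the paper's actual membership functions.

```python
def fuzzify(value, low, high):
    """Map a crisp value to the lexical terms used in the text:
    'l' (low), 'm' (medium) or 'h' (high). Thresholds are assumed."""
    if value < low:
        return "l"
    if value > high:
        return "h"
    return "m"

def target_layer(size_mi, bandwidth_mbps, latency_sensitive, deadline_hard):
    """Toy stand-in for the Fuzzification + Defuzzification steps: it
    maps task requirements to an offloading layer. The real rule base
    lives in the paper's Fuzzy Knowledge Base (Table I)."""
    size = fuzzify(size_mi, 3000, 10000)      # task length in MI
    bw = fuzzify(bandwidth_mbps, 10, 100)     # available bandwidth

    if latency_sensitive and deadline_hard:
        # Latency-critical work stays close to the vehicle.
        return "vehicle" if size == "l" else "fog"
    if size == "h" and bw != "l":
        # Long tasks with a usable uplink go to the Cloud.
        return "cloud"
    return "fog"

print(target_layer(1500, 50, True, True))     # -> vehicle
print(target_layer(12000, 80, False, False))  # -> cloud
```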
D. Vehicular Communication Model

This section describes the communications and interactions among different entities for smooth offloading. The interactions refer to the communication from vehicles to RSUs, from RSUs to the Cloud server, and from vehicles to the Cloud. The communications among these entities are illustrated as follows.

Vehicles to RSUs: The Internet-enabled vehicles (IoV) communicate with RSUs through wireless communication based on Frequency-Division Multiple Access (FDMA). To transmit a byte of a request, a vehicle v requires a data transmission rate $Tr_v^r$ to an RSU r, expressed in Eq. (4) [27]:

$$Tr_v^r = h_v^r \times bw \times \log_2\!\left(1 + Tp_v \times \beta_v \times \rho^{-2}\right) \tag{4}$$

where $h_v^r$ is the number of sub-channels allocated to the vehicle $v$; $bw$ is the network bandwidth allocated to each sub-channel; $Tp_v$ is the power to transmit each byte from the $v$th vehicle; $\beta_v$ is the gain of the channel; and $\rho^{-2}$ is the noise in the surrounding environment.

RSUs to Cloud Server: An optical-fibre-based wired connection is used to transmit tasks from RSUs to the Cloud server. If the MFC server fails, tasks or subtasks are migrated to the Cloud server for computation. Some transmission latency occurs while offloading to the Cloud. The transmission latency and acknowledgement latency are equivalent between the MFC server and the Cloud. While the transmission latency is independent of the lengths of the tasks, it results from the physical gap between the MFC and the Cloud. Nonetheless, the incurred latency in this case is less than that of directly offloading tasks from vehicles to the Cloud server. Therefore, we denote the round-trip time $\tau_r^{Cloud}$ for transmitting data from the RSU to the Cloud and getting the acknowledgement back in Eq. (5):

$$\tau_r^{Cloud} = 2 \times D_{off(r)}^{Cloud} = D_{off(r)}^{Cloud} + D_{ack(Cloud)}^{r} \tag{5}$$

where $D_{off(r)}^{Cloud}$ denotes the transmission latency incurred while offloading a request from the RSU r to the Cloud, and $D_{ack(Cloud)}^{r}$ is the latency incurred in sending an acknowledgement from the Cloud to the RSU r.

Vehicles to Cloud Server: The high-end tasks, requiring high-end specifications, are directly offloaded to the Cloud servers for computation. Therefore, the communication involves the transmission latency of sending a request from a vehicle v to the Cloud server and is expressed in Eq. (6):

$$Tr_v^{Cloud} = h_v^{Cloud} \times bw \times \log_2\!\left(1 + Tp_v \times \beta_v \times \rho^{-2}\right) \times L_i + ack \tag{6}$$

where $L_i$ denotes the length of the $i$th request; and $h_v^{Cloud}$ is the number of sub-channels allocated to the vehicle $v$.
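A minimal sketch of the communication model follows, computing the vehicle-to-RSU rate of Eq. (4) and the RSU-Cloud round trip of Eq. (5). The function names are ours and all numeric values are placeholders; the noise argument corresponds to the denominator of the SNR term (the text writes the factor as ρ⁻²).

```python
import math

def vehicle_to_rsu_rate(subchannels, bw_hz, tx_power, channel_gain, noise):
    """Eq. (4): Shannon-style rate of vehicle v towards RSU r over the
    allocated FDMA sub-channels."""
    return subchannels * bw_hz * math.log2(1 + tx_power * channel_gain / noise)

def rsu_cloud_round_trip(offload_latency):
    """Eq. (5): the transmission and acknowledgement latencies between
    the RSU and the Cloud are taken as equal, so the round trip is
    simply twice the one-way offloading latency."""
    return 2 * offload_latency

# Placeholder numbers purely for illustration.
rate = vehicle_to_rsu_rate(subchannels=2, bw_hz=1e6, tx_power=0.2,
                           channel_gain=1e-6, noise=1e-9)
print(f"uplink rate: {rate / 1e6:.2f} Mbit/s")
print(f"round trip to Cloud: {rsu_cloud_round_trip(0.040):.3f} s")
```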

E. Computation Model

This section explains the different computation models in this vehicular Fog computing architecture, which include the energy consumption model, the latency-aware computation model, and the pricing model. Each model is elucidated below.

1) Energy Consumption Model: The total energy consumed at time t when a vehicle transmits a request either to the Fog layer or the Cloud layer via an RSU is expressed in Eq. (7):

$$Ec_i(t) = Ec_i^r(t) + Ec_i^{Fog/Cloud}(t) \tag{7}$$

The energy consumed at time t at the RSU r, $Ec_i^r(t)$, while propagating the request from the vehicles is estimated in Eq. (8):

$$Ec_i^r(t) = L_i \times Ec_{nw}^r(i,t) + L_i \times h_i \times \left(\frac{P_{idle}^r}{Cap_{idle}^r} + \frac{P_{max}^r}{Cap_{max}^r \times Util_r}\right) \tag{8}$$

where $L_i$ is the length of the $i$th request; $Ec_{nw}^r(i,t)$ denotes the energy consumed to transmit each byte of the request from a vehicle v to the RSU r via the network; $h_i$ is the transmission gain of the channel; $P_{idle}^r$ and $P_{max}^r$ are the energies consumed during the idle and active states of the RSU r, respectively; $Cap_{idle}^r$ and $Cap_{max}^r$ are the capacities of the RSU r during the idle and active states, respectively; and $Util_r$ is the utilization of the RSU r.

Next, the task is offloaded to either the Fog or the Cloud for computation. Here, energy is consumed while processing and storing the requests as well as while sending the respective acknowledgement. Hence, the energy is mathematically expressed as follows:

$$Ec_i^{Fog/Cloud}(t) = Ec_i^{P\&S} + \beta \times Ec_i^{Cloud} + Ec_{ack} \tag{9}$$

The energy consumed while processing and storing the request either at a Fog node or a Cloud VM, $Ec_i^{P\&S}$, is expressed in Eq. (10):

$$Ec_i^{P\&S} = L_i \times Ec_i^{P} + L_i \times Ec_i^{S} \tag{10}$$

$$Ec_i^{P} = \left(\delta \times P_j^{active} + (1-\delta) \times P_j^{idle}\right) \times \frac{St(j,t)}{nT(j,t)} \tag{11}$$

where $\delta$ is the ratio $\left(\frac{T_{active}}{T_{total}}\right)$ of the active time to the total time of the $j$th Fog node or Cloud VM; $(1-\delta)$ denotes the elapsed idle time of the $j$th Fog node or Cloud VM; $P_j^{active}$ and $P_j^{idle}$ denote the power consumption of the $j$th Fog node or Cloud VM during the active and idle states, respectively; $St(j,t)$ is the service time of the $j$th Fog node or Cloud VM; $nT(j,t)$ is the number of tasks associated with the $j$th Fog node or Cloud VM; and $\beta \left(= \frac{T_{req}^{sent}}{T_{req}}\right)$ is the ratio of the total requests sent to the total requests offloaded to the Cloud from Fog nodes upon failure.

$Ec_i^{Cloud}$ is the energy consumed by the $j$th Cloud VM when any Fog node fails to execute, as computed in Eq. (12):

$$Ec_i^{Cloud} = Ec_i^{P\&S} + Ec_{ack} \tag{12}$$

$Ec_{ack}$ is the energy consumed while sending the acknowledgement back to the RSU or the corresponding Fog node and is expressed in Eq. (13):

$$Ec_{ack} = L_i \times Ec_{nw}^{r/Fog}(i,t) \tag{13}$$
MISHRA et al.: COLLABORATIVE COMPUTATION AND OFFLOADING 4607
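The energy model of Eqs. (7)-(13) can be exercised with a short sketch. This is our own illustration under assumed values: the per-byte energies below are invented placeholders, and Eq. (11)'s δ-weighted power term is folded into a single e_proc constant for brevity; only the structure of the equations is intended to be faithful.

```python
def rsu_energy(length, e_nw_per_byte, gain, p_idle, p_max,
               cap_idle, cap_max, util):
    """Eq. (8): energy spent at RSU r to propagate a request of
    `length` bytes, covering the network term plus the idle- and
    active-state terms."""
    network = length * e_nw_per_byte
    states = length * gain * (p_idle / cap_idle + p_max / (cap_max * util))
    return network + states

def processing_energy(length, e_proc, e_store):
    """Eq. (10): processing-and-storage energy at a Fog node or Cloud
    VM (e_proc stands in for Eq. (11)'s delta-weighted power term)."""
    return length * e_proc + length * e_store

def total_energy(length, rsu_kwargs, e_proc, e_store, e_ack, beta):
    """Eq. (7) with Eq. (9): the RSU share plus the Fog/Cloud share,
    where beta weighs the requests escalated to the Cloud on Fog
    failure (Eq. (12) supplies the Cloud term)."""
    p_and_s = processing_energy(length, e_proc, e_store)
    fog_cloud = p_and_s + beta * (p_and_s + e_ack) + e_ack
    return rsu_energy(length, **rsu_kwargs) + fog_cloud

rsu = dict(e_nw_per_byte=2e-6, gain=1.2, p_idle=50.0, p_max=120.0,
           cap_idle=800.0, cap_max=1500.0, util=0.7)
print(f"total energy: {total_energy(1000, rsu, 3e-6, 1e-6, 0.02, beta=0.1):.4f}")
```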

2) Latency-Aware Model: A task's subtasks can be carried out locally (on the vehicle itself) or offloaded to a Cloud server or an MFC server at the Fog layer for processing. At various phases of processing and offloading, variable rates of latency might occur. In this instance, the processing times of the subtasks at the vehicles, the MFC server, and the Cloud server can be used to determine the latency.

Computing on vehicles: When the tasks are classified and offloaded to the target layers for computation, tasks with their subtasks arrive at the queue of the vehicle for computation. We assume that all the tasks in the queue are in FIFO order and that the vehicle processes one subtask at a time; while a subtask is in execution, the other subtasks must wait in the queue. This processing involves both processing time and waiting time in the queue. Hence, the total latency incurred while computing a subtask $T_{n,i}^v$ on a vehicle is given by Eq. (14):

$$D_{local(n,i)}^{v} = \frac{C_{n,i}^{v}}{c_v} + D_{wait(n,i)}^{v} = \frac{C_{n,i}^{v}}{c_v} + T_{start(n,i)}^{local(v)} - T_{request(n,i)}^{local(v)} \tag{14}$$

where $\frac{C_{n,i}^{v}}{c_v}$ denotes the local processing latency; $D_{wait(n,i)}^{v}$ is the waiting time of a subtask $T_{n,i}^v$; $C_{n,i}^{v}$ is the number of computing resources needed to execute the subtask $T_{n,i}^v$; $c_v$ is the computation capability of the vehicle $v$; and $T_{start(n,i)}^{local(v)}$ and $T_{request(n,i)}^{local(v)}$ are the starting and request times of a subtask $T_{n,i}^v$, respectively.

Computing on the MFC server: The latency at the MFC server is the sum of the transmission delay from the vehicle to the MFC server via the RSU, the processing delay, the waiting delay of the subtasks on the MFC server to be scheduled, the acknowledgement delay back to the vehicle, and the offloading latency to the Cloud upon failure. These factors are formulated as follows.

First, the transmission delay in propagating a request i to the respective Fog node is estimated as:

$$D_{(n,i)}^{fog} = L_i \times TL^{nw} \tag{15}$$

where $TL^{nw}$ denotes the transmission delay caused by propagating each byte of the request i from the vehicle v to an MFC server $fn_j$; $j = 1, 2, \ldots, m$.

Second, the latency caused by processing the request i on an MFC server is denoted as:

$$D_P^{fog} = L_i \times \frac{St(j,t)}{nT(j,t)} \tag{16}$$

Third, transmitting an acknowledgement to the respective vehicle v causes some latency, denoted as:

$$D_{ack(n,i)}^{v} = L_i \times TL^{nw} \tag{17}$$

Next, the waiting latency at the queue of the MFC server is similar to the local waiting delay and is expressed as follows:

$$D_{wait(n,i)}^{fog} = T_{start(n,i)}^{fog} - T_{request(n,i)}^{fog} \tag{18}$$

Finally, the transmission delay caused by offloading a request i to one of the Cloud servers upon failure is expressed as follows:

$$D_{(n,i)}^{Cloud} = 2\left(L_i \times TL^{nw}\right) + L_i \times \frac{St(j,t)}{nT(j,t)} \tag{19}$$

Here, the transmission delay is doubled because, when a request is sent from an IoT device to a Cloud server, the time it takes for the request to reach the server is taken as equivalent to the time it takes for the acknowledgement (response) to travel back to the IoT device; the total transmission delay therefore covers both directions.

By combining Eqs. (15)-(19), we get the total latency caused at the MFC server in Eq. (20):

$$D^{fog} = D_{(n,i)}^{fog} + D_P^{fog} + D_{wait(n,i)}^{fog} + \beta D_{(n,i)}^{Cloud} + D_{ack(n,i)}^{v} \tag{20}$$

Computing on the Cloud server: The subtask $T_{n,i}^v$ is migrated to the Cloud server after being offloaded from the vehicle v to the RSU r for processing. The total latency includes the transmission delay, the processing delay at the Cloud server, and the acknowledgement delay, which are analogous to those of the MFC server. Hence, the total latency caused by the Cloud server is defined as follows:

$$D^{Cloud} = D_{(n,i)}^{Cloud} + D_P^{Cloud} + D_{ack(n,i)}^{v} \tag{21}$$

3) Pricing Model: In VFC, the low-end tasks are computed on the vehicle itself, while the high-end and medium-end tasks get scheduled on the Cloud server and the Fog server due to the lack of computational capabilities of the vehicles. Hence, a fair allocation of resources should adhere to the computation of high-end tasks to meet their deadlines on a priority basis.

Let us consider $T_n$ high-end tasks generated from the vehicles $V_v$. The deadline and computation size of a task/subtask are referred to as $d_i$ and $L_i$, respectively. The minimum frequency required for the local subtasks is denoted as $f_v^{min} = \sum_{i=1}^{T} \frac{L_i}{d_i}$. Upon the availability of computing resources, a vehicle might deny offloading the requests. Then, the frequency is computed as $f_v = \sum_{i=1}^{T} \frac{L_i}{\vartheta_v d_i}$, where $\vartheta_v = f_v^{min}/f_v$ and the reserved frequency lies in $[f_v^{min}, f_v]$. Here, $\vartheta$ is used to denote the ratio of the minimum and maximum frequencies to compute the subtasks locally, and the range of $\vartheta$ is $[\vartheta_v, 1]$. Therefore, we denote the total utilization of local subtasks as [22]:

$$Util_{local}^{v}(\vartheta_v) = \sum_{i=1}^{T_n} \log(1 + d_i - \vartheta d_i), \quad \vartheta \in [\vartheta_v, 1] \tag{22}$$

In one scenario, a vehicle migrates its subtasks to a passing vehicle for computation due to the lack of computational capabilities and allocates a frequency $f_n$ for the migrated subtask $T_{n,i}^v$. Therefore, the utilization of the migrated local subtasks on a new vehicle is denoted as $Util_{local}^{v}(\vartheta_v')$, and the cost $\rho_n$ paid for the subtask $T_{n,i}^v$ should meet the requirement $\rho_n = Util_{local}^{v}(\vartheta_v) - Util_{local}^{v}(\vartheta_v')$. This compensates for the loss of the utilization of the local subtask. The frequency devoted to the offloaded subtask is estimated as $f_n = (f_v - f_v^{min})/\vartheta_v$, where $\vartheta_v$ can be estimated from the above requirement condition [22]. Consequently, if there is a need for computing resources in order to compute the offloaded subtasks, then the cost will be higher for the respective vehicle v.

F. Problem Formulation

The objective functions can be formulated as:

$$\min \sum_{n,i} \beta \times TL_{n,i}^{v}(t) + \alpha \times Ec(t), \quad r \in R,\ v \in V,\ n \in N \tag{23}$$

$$\max \frac{1}{N} \sum_{n=1}^{N} \sum_{v=1}^{V} C_n^v \left[\mathbb{1}(T_n = T_H)\,U_n^H + \mathbb{1}(T_n = T_L)\,U_n^L - \rho_n C_n\right] \tag{24}$$

subject to the following constraints:

$$C1: \beta + \alpha = 1; \tag{25a}$$
$$C2: TL_{n,i}^{v} = D_{local(n,i)}^{v} + D^{fog} + D^{Cloud}; \tag{25b}$$
$$C3: X_{i,j}(t) \in \{0, 1\}, \quad \forall i \in R, \forall j \in V; \tag{25c}$$
$$C4: 0 \le \sum_{i \in R}\sum_{j \in V} X_{i,j}(t) \le 1; \tag{25d}$$
$$C5: Comp_{v,m} \le \max_{v \in V} comp_{cap}^{v}, \quad Comp_{v,m} \in r_{n,i}^{v}; \tag{25e}$$
$$C6: Comm_{v,m} \le \max_{v \in V} comm_{cap}^{v}, \quad Comm_{v,m} \in r_{n,i}^{v}; \tag{25f}$$
$$C7: 0 \le \rho_n^v \le \frac{Util_{local}^{v}(\vartheta_v)}{C_n}, \quad \forall n \in N, \forall v \in V; \tag{26a}$$
$$C8: \mathbb{1}(C_n^v = 1) > 0, \quad \forall n \in N, \forall v \in V; \tag{26b}$$

Eq. (23) indicates two key objectives: (1) reducing the overall computational latency across the computing layers, and (2) minimizing the average energy consumption across RSUs. Eq. (24) implies that the average utility of offloading subtasks should be maximized.

The constraint C1 in Eq. (25a) is the addition of the weights (β and α) of the two sub-objectives ($TL_{n,i}^{v}(t)$ and $Ec(t)$) stated in Eq. (23). Eq. (25b) represents the total computational latency incurred while transmitting requests from the RSU to the Fog, from the RSU to the Cloud, and then from the Fog to the Cloud. In Eq. (25c), $X_{i,j}(t)$ is a binary variable that states the assignment of the subtasks to the desired MFC servers for computation. Eq. (25d) states that the RSU j can be linked with at most one Fog server m, but a Fog server m can be linked with more than zero RSUs at a given instant. In Eqs. (25e) and (25f), $Comp_{v,m}$ and $Comm_{v,m}$ are the allocated computation and communication resources, respectively, and should not exceed the maximum computation and communication capabilities of the MFC servers. The constraint C7 guarantees a positive cost value within the range of the maximum value, where $\vartheta_v$ denotes the current value of $\vartheta$ in $V_v$ while offloading a task. The constraint C8 denotes the availability of the selected vehicle for the subtask offloading.
4608 IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, VOL. 20, NO. 4, DECEMBER 2023
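As a small illustration of the formulation, the sketch below evaluates the weighted objective of Eq. (23) and enforces constraint C1 of Eq. (25a). The latency and energy samples are made-up numbers, and the function name is our own.

```python
def weighted_objective(latencies, energies, beta, alpha):
    """Eq. (23): weighted sum of the total computational latency and
    the energy consumption; C1 in Eq. (25a) requires beta + alpha = 1."""
    assert abs(beta + alpha - 1.0) < 1e-9, "constraint C1 violated"
    return beta * sum(latencies) + alpha * sum(energies)

# One (latency, energy) sample per offloaded subtask; values invented.
lat = [0.8, 1.3, 0.5]   # TL^v_{n,i}(t) per subtask, in seconds
eng = [0.4, 0.9, 0.2]   # Ec(t) per RSU, in kJ
print(weighted_objective(lat, eng, beta=0.6, alpha=0.4))  # to be minimized
```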

This section formalized the task offloading problem in VFC. The optimization problem aims to reduce the total computational latency and the mean energy consumption while maximizing the total utility of offloading subtasks in terms of cost, subject to the constraints on communication and computing resources, the nature of the association between the RSUs and Fog servers, the binary mapping of subtasks to compatible Fog servers or Cloud servers, and the availability of the selected vehicle to offload a subtask when a vehicle fails to compute. In a dynamic context (unpredictable requests with disparate requirements) where the level of collaboration between RSUs and Fog/Cloud servers fluctuates, it is challenging to optimize the objective functions in Equations (23) and (24) using conventional optimization approaches like heuristics or metaheuristics. In other words, it is not easy to find the best solutions for the objectives given the constantly changing level of collaboration between the servers and the RSUs due to the unpredictable demands. Hence, the next section proposes a Federated learning-supported Deep Q-learning offloading (FedDQL) strategy to optimize the aforementioned objective functions.

IV. TASK OFFLOADING ALGORITHM: A FEDERATED LEARNING (FL)-SUPPORTED DEEP Q-LEARNING (FEDDQL) APPROACH

This section first discusses the conventional Deep Q-learning neural network for identifying a dynamic vehicular offloading in the Fog paradigm. Afterwards, a Federated Learning (FL)-supported Deep Q-learning strategy for global offloading is presented. In the proposed offloading strategy, conventional Q-learning is amalgamated with deep neural networks as the proposed DQN strategy. In this network, an agent acts by learning from the environment and optimizes the sum of rewards. This environment, in terms of the VFC environment, can be formulated as a Markov Decision Process expressed as $env: \{S(t), Ac(t), Re(t), P(t)\}$, where S(t) denotes the state space of the environment at time t, Ac(t) implies the action at time t performed by the DQN agent, Re(t) expresses the reward space an agent gets, and P(t) is the transition probability. Here, an agent indicates the vehicle associated with the number of RSUs and Fog/Cloud servers in the Vehicle-Fog-Cloud environment. Each term used in the environment is illustrated as follows.

State: The state space of the environment comprises the computational models in terms of the mean energy consumption of RSUs and the computational latency incurred while transmitting a request i from a vehicle v, and is defined as [11]:

$$S(t) = \{Ec(1,t), Ec(2,t), \ldots, Ec(|Z|,t),\ D(1,t), D(2,t), \ldots, D(|H|,t)\}, \quad i \in Z,\ j \in H \tag{27}$$

where Ec(i, t) is the energy consumption of the i th RSU at time t; and D(j, t) is the computational latency incurred by the j th Fog server at time t.

Action: The action space comprises an action in terms of offloading the requests by an agent. Hence, the action Ac(t) for a subtask i of the task $T_{n,i}^v$ at time t is given by [11]:

$$Ac(t) = \left\{X_{i,j}(t),\ i \in Z,\ j \in H;\ Alloc_{comp}(t),\ Alloc_{comm}(t)\right\} \tag{28}$$

where $X_{i,j}(t)$ is the mapping of the subtask i to a compatible j th Fog/Cloud server; and $Alloc_{comp}(t)$ and $Alloc_{comm}(t)$ are the allocated computation and communication resources for the subtask i of the task $T_{n,i}^v$, respectively.

Reward: Based on the execution of the action on the state defined in the formulated environment, a reward is granted. The reward Re(t) is the negation of the weighted total of the average energy consumption and the computational latency at time t, given as [11]:

$$Re(t) = -\left(\beta \times TL_{n,i}^{v}(t) + \alpha \times Ec(t)\right) \tag{29}$$

An agent must select the action Ac(t), i.e., an offloading of the subtask i at state S(t), to get the optimal reward while minimizing the total energy consumption across RSUs and the computational latency. In the next subsections, the conventional Q-learning approach is demonstrated, followed by the proposed FedDQL offloading strategy.

A. Standard Q-Learning Approach

The Q-learning approach is used to identify the optimal decision-making policy through a function $Q\{S(t) \rightarrow Ac(t)\}$. It is a guiding force for the agent in the environment to maximize the reward and is defined through the deterministic Bellman equation [29] as follows [13]:

$$Q(S(t), Ac(t)) = (1 - \eta)\,Q(S(t), Ac(t)) + \eta\left(Re(t+1) + \Upsilon \max_{Ac(t+1)} Q(S(t+1), Ac(t+1))\right) \tag{30}$$

where $\eta$ is the learning rate; and $\Upsilon$ denotes the discount rate.

The Q-learning approach first initializes the Q-value to zero for all periods. The agent obtains the state space S(t) information from the environment (RSUs) at time t. According to the state, the agent selects an action Ac(t) from $Q(S(t-1), Ac(t-1))$ using an $\epsilon$-greedy policy. Under this policy, the agent picks the best action with probability $(1 - \epsilon)$ and a random action with probability $\epsilon$. Further, the Q-value is updated in each iteration, and the mean reward is calculated afterwards.
MISHRA et al.: COLLABORATIVE COMPUTATION AND OFFLOADING 4609
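The tabular update of Eq. (30), together with the ε-greedy rule just described, can be written in a few lines. The state and action labels below are hypothetical placeholders; in the paper, the reward would come from Eq. (29).

```python
import random
from collections import defaultdict

# Tabular Q-learning following Eq. (30), with the epsilon-greedy rule
# described above. States and actions are abstract placeholders here.
Q = defaultdict(float)             # Q[(state, action)] -> value, init 0
ACTIONS = ["local", "fog", "cloud"]
ETA, GAMMA, EPS = 0.01, 0.9, 0.1   # learning rate, discount, exploration

def choose_action(state):
    # Best action with probability (1 - eps), random action otherwise.
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    # Eq. (30): Q <- (1 - eta) Q + eta (r + gamma * max_a' Q(s', a')).
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] = (1 - ETA) * Q[(state, action)] \
                         + ETA * (reward + GAMMA * best_next)

# One illustrative transition; the reward is a negated cost per Eq. (29).
s = "rsu3_load_high"
a = choose_action(s)
q_update(s, a, reward=-1.7, next_state="rsu3_load_medium")
print(a, Q[(s, a)])
```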

B. Deep Q-Learning-Based Neural Network (DQNN) Approach

The Q-learning method is primarily suited to small state and action spaces. Since the Vehicular Fog Computing environment is dynamic in nature, the plain Q-learning approach is not suitable for finding the optimal reward function. Therefore, a Deep Neural Network with a parameter set θ is incorporated into the Q-learning approach to deal with the dynamic nature of tasks for optimal results. In the DNN, θ is used to map the input state to an action. In order to reduce the loss, a target layer is added to the primary neural network for network stabilization. The loss function of this neural network is expressed in Eq. (31) [31]:

$$l(\theta) = \left(O_{DQNN} - Q(S(t), Ac(t))\right)^2 \tag{31}$$

where $O_{DQNN}$ denotes the output of the DQNN and is expressed in Eq. (32):

$$O_{DQNN} = Re(t+1) + \max_{Ac(t+1)} Q(S(t+1), Ac(t+1)) \tag{32}$$

For network stabilization, the transition table is updated with tuples of the form $\{S(t), Ac(t), Re(t+1), S(t+1)\}$. The DQNN works as follows: the parameters of the primary and target layers are initialized. The DQNN agent obtains the state space information from all the RSUs, then identifies an action space using an $\epsilon$-greedy policy. Next, the transition table is updated with the reward and state space data. Furthermore, the weights are updated, and the loss function specified in Eq. (31) is diminished using a gradient descent approach. The DQNN duplicates the parameters (θ) of the target layer from the primary network at the end of a specific phase.

C. FedDQL Approach

Federated learning (FL) is a cutting-edge concept that considers all the components to design a global model without needing to share the original data. Herein, the core features of FL were incorporated with the DQNN to develop the FedDQL global model, which has two primary advantages: (a) privacy preservation in terms of sharing confidential data, and (b) partial participation of the Fog servers that are energy-deficient. The Fog geographic region is scattered with clusters of Fog zones consisting of one or more RSUs and homogeneous and heterogeneous Fog servers. Each Fog zone has a DQNN agent to determine the offloading scheme with the DQNN approach. In the FL architecture, which is a distributed machine learning approach, there is a single model that is responsible for calculating and setting all the global metrics. This model is called the "unified model," and it acts as a central point of coordination for the distributed learning process. In the FedDQL model, the DQNN agent of each Fog zone downloads the global metrics from the unified model and trains its local model accordingly. After training, the metrics are updated and then forwarded to the unified model. Afterwards, the unified model aggregates all the local metrics and forms the global metrics. In this model, all the non-involving Fog zones merely download the unified model to stay updated.

This work considers a set M of involving Fog zones of size K, where $K = P|Z(f)|$, P denotes the involvement factor, and Z(f) is the cluster of Fog zones. The involved Fog zones are identified according to their uplink transmission cost [22], which is calculated as follows:

$$Tr_m^{Up} = TP_m \times \frac{l}{A} \tag{33}$$

where $TP_m$ denotes the power needed to transmit the local metrics of size l by the m th Fog zone; and A is the rate at which the m th Fog zone transmits the metrics of size l, estimated as:

$$A = Bw \log_2\!\left(1 + \frac{Tp_m \times h}{\sigma^2 + \sum_{P=1, P \ne m}^{|Z(f)|} TP_m \times h}\right) \tag{34}$$

where Bw is the channel's bandwidth; h is the channel gain; and $\sum_{P=1, P \ne m}^{|Z(f)|} TP_m \times h$ is the surrounding noise caused by the other Fog zones. Algorithm 1 shows the pseudocode for the FedDQL-based offloading strategy.

Algorithm 1 FedDQL-Based Offloading Strategy
Input: Cluster of Fog zones Z; participation rate P; set of vehicles V; set of requests T
Output: Optimal mapping of subtasks onto Fog servers with reduced energy consumption and computational latency
Start
1. Set M = ∅, K = P|Z(f)|;
2. For each Fog zone m ∈ Z(f), estimate the transmission cost (Tr_m^Up) using Eq. (33);
3. Arrange the Fog zones in Z(f) in ascending order of their transmission cost;
4. Discover the initial K Fog zones from Z(f) and add them into the set M;
5. for f : 0 to F do
   5.1 For each Fog zone m ∈ M, update its parameter set θ_m^f using the DQN approach;
   5.2 Transfer the updated parameters to the centralized entity;
   5.3 The centralized entity aggregates all the local parameters as θ_{f+1} = Σ_{m∈M} (D_m/D) · θ_m^f;
End

In step 1, the array of Fog zones M and the series of involving Fog zones are initialized. The cost of transmission by each Fog zone is estimated using Eq. (33). Afterwards, the Fog zones are sorted in ascending order based on their cost of transmission. Step 4 identifies the first K Fog zones for FL, which involves a set of F rounds. In each round, each Fog zone updates its local parameter list and broadcasts it to the unified model to form a global model in steps 5.1 and 5.2. In step 5.3, the unified model obtains a global model $(\theta_{f+1})$ by performing aggregation at each round f of the local parameters $(\theta_m^f)$, which is expressed as [15]:

$$\theta_{f+1} = \sum_{m \in M} \frac{D_m}{D}\, \theta_m^f \tag{35}$$

where $D_m$ denotes the size of the local buffer of the m th Fog zone; and D implies the total data size, estimated as $D = \sum_{m \in M} D_m$.
4610 IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, VOL. 20, NO. 4, DECEMBER 2023
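Putting the pieces of Section IV together, the sketch below imitates Algorithm 1 with a linear Q-approximator standing in for each Fog zone's DQNN, and Eq. (35)'s data-size-weighted averaging as the unified model's aggregation step. It is a simplified illustration under assumed dimensions and random data, not the authors' implementation; in particular, the target of Eq. (32) is computed from the current parameters rather than from a separately maintained target layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for each Fog zone's DQNN: a linear Q-approximator whose
# parameters theta map a state vector to one Q-value per action.
STATE_DIM, N_ACTIONS = 4, 3

def local_dqn_update(theta, batch, eta=0.01, gamma=0.9):
    """One gradient-descent pass on the loss of Eq. (31); the target of
    Eq. (32) is built from the same parameters for brevity."""
    for state, action, reward, next_state in batch:
        q_next = (theta @ next_state).max()
        target = reward + gamma * q_next            # Eq. (32)
        error = (theta @ state)[action] - target    # Eq. (31) residual
        theta[action] -= eta * error * state        # gradient step
    return theta

def fedavg(thetas, buffer_sizes):
    """Eq. (35): data-size-weighted aggregation of the local parameters."""
    total = sum(buffer_sizes)
    return sum(d / total * th for th, d in zip(thetas, buffer_sizes))

# Three involving Fog zones train locally, then the unified model
# aggregates; non-involving zones would just download the result.
thetas = [rng.normal(size=(N_ACTIONS, STATE_DIM)) for _ in range(3)]
sizes = [120, 80, 200]                    # local buffer sizes D_m
for round_f in range(5):                  # federated rounds (F)
    batch = [(rng.normal(size=STATE_DIM), int(rng.integers(N_ACTIONS)),
              float(rng.normal()), rng.normal(size=STATE_DIM))
             for _ in range(16)]
    thetas = [local_dqn_update(th, batch) for th in thetas]
    global_theta = fedavg(thetas, sizes)              # step 5.3
    thetas = [global_theta.copy() for _ in thetas]    # broadcast

print(global_theta.shape)   # (3, 4): one row of weights per action
```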

TABLE II
S IMULATION PARAMETERS

TABLE III
T ECHNICAL S PECIFICATIONS OF F OG N ODES /VMS
Fig. 3. Convergence analysis for different learning rates.

to a single Road Side Unit (RSU), which is a device used


in vehicular communication systems. Additionally, each of
these servers has many cores, which are individual processing
units within the server that can execute tasks simultane-
ously. The lengths of a series of tasks range from 0-15000
MIs with the size of each task ranging from a set of
{30, 35, 40, 45, 50, 60} MB. Each task requires the com-
putation resource requirements, which are arbitrarily allocated Fig. 4. Convergence analysis for different baselines.
from a set of {0.6, 0.8, 1.0, 1.2, 1.4} Gigacycle/s. Each task
is arbitrarily subdivided into 6-10 subtasks. The transmission
power is dependent on the hardware used in the system [35]. In method was analyzed with different learning rates, and then
the IoT-Edge-Fog-Cloud environment, the transmission power the convergence rate of other algorithms was analyzed.
of an IoT device can be dependent on various hardware com- The convergence of FedDQL was analyzed with disparate
ponents, such as the transceiver, amplifier, power supply, and learning rates ranging from 0.01 to 0.09, as depicted in
processor. The efficiency of all these components impacts the Figure 3. It is evident that the most suitable learning rate is
transmission power of IoT devices. Table II lists the other sim- 0.01 with a mean reward of 2025, while the worst learning
ulated parameters used in the experiment, and the technical rate is 0.08 with a mean reward of 2003. Therefore, 0.01 was
specifications of Fog nodes/VMs are presented in Table III. considered in the simulation.
A real-world dataset [34] was utilized to assess the performance. The dataset provides an expected time to compute (ETC) matrix containing a number of tasks and machines with the corresponding computation ratio. This dataset has twelve instances of the form u_x_tm, with respect to uniformity of the data (u), consistency (x) of the data, and heterogeneity of tasks (t) and machines (m). The consistency of the data is further classified into three groups: consistent (c), inconsistent (i), and semi-consistent (s). Likewise, the task and machine heterogeneity is categorized as high (h) or low (l). As a result, the twelve instances were formed by considering the aforementioned criteria. In addition, a variable number of tasks and machines was considered and formed into three groups: Group 1 (1000 × 96), Group 2 (2000 × 196), and Group 3 (3000 × 294), in the structure (t × m).
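For illustration, an instance label can be decoded programmatically. The sketch below assumes the benchmark's usual file naming (e.g., u_c_hilo, with two-letter codes hi/lo that the text above abbreviates as h/l); the example names and the function are ours.

```python
# Decode a Braun et al. [34] instance label such as "u_c_hihi" or "u_s_lohi":
# first field = uniform data (u), second = consistency (c/i/s),
# third = task heterogeneity followed by machine heterogeneity (hi/lo each).
CONSISTENCY = {"c": "consistent", "i": "inconsistent", "s": "semi-consistent"}
HETEROGENEITY = {"hi": "high", "lo": "low"}

def decode_instance(name):
    u, x, tm = name.split("_")
    return {
        "uniform_data": u == "u",
        "consistency": CONSISTENCY[x],
        "task_heterogeneity": HETEROGENEITY[tm[:2]],
        "machine_heterogeneity": HETEROGENEITY[tm[2:]],
    }

print(decode_instance("u_c_hilo"))
# {'uniform_data': True, 'consistency': 'consistent',
#  'task_heterogeneity': 'high', 'machine_heterogeneity': 'low'}
```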
To assess the performance of the devised FedDQL offloading algorithm, five baselines [23], [30], [31], [32], [33] were considered for analysis and comparison.

B. Convergence Analysis

To demonstrate the convergence behaviour of FedDQL against the baseline algorithms, the convergence of the proposed method was first analyzed under different learning rates, and then the convergence rate of the other algorithms was compared.

The convergence of FedDQL was analyzed with disparate learning rates ranging from 0.01 to 0.09, as depicted in Figure 3. It is evident that the most suitable learning rate is 0.01, with a mean reward of 2025, while the worst is 0.08, with a mean reward of 2003. Therefore, 0.01 was adopted in the simulations.

Fig. 3. Convergence analysis for different learning rates.

Figure 4 depicts the convergence rate of FedDQL and the other compared algorithms. It can be observed that FedDQL converges faster due to its consideration of the dynamic behaviour of tasks and the interdependencies among them. The optimal mapping of tasks onto Fog servers or Cloud VMs results in fine-grained offloading.

Fig. 4. Convergence analysis for different baselines.
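The role the learning rate plays in this sweep can be seen in a minimal deep Q-learning update step. The sketch below is an illustration under stated assumptions (the network sizes, the Adam optimizer, and all names are ours; the paper does not publish its training code); it only shows where the swept learning rate of Figure 3 enters the update.

```python
import torch
import torch.nn as nn

def make_agent(state_dim, n_actions, lr):
    """Small Q-network plus optimizer; the learning rate is the swept parameter."""
    net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                        nn.Linear(64, n_actions))
    return net, torch.optim.Adam(net.parameters(), lr=lr)

def dqn_step(net, opt, target_net, batch, gamma=0.9):
    """One TD update: regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    s, a, r, s2 = batch  # states, actions (long tensor), rewards, next states
    q = net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Sweep the learning rates examined in Fig. 3 (0.01 ... 0.09):
for lr in [round(0.01 * k, 2) for k in range(1, 10)]:
    net, opt = make_agent(state_dim=8, n_actions=4, lr=lr)
    # ... training episodes elided; record the mean episode reward per lr ...
```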
C. Performance Analysis of Scheduling Metrics

Here, the performance of FedDQL is analyzed and compared for different scheduling metrics, namely service time (ms), average utilization rate (%), mean energy consumption (kJ), and mean latency rate (ms). The proposed FedDQL is compared with other offloading strategies [23], [30], [31], [32], [33] to gauge its efficacy.

1) Performance Analysis of Service Rate: For latency-sensitive applications, the user's request must be serviced within a deterministic time; hence, the service rate plays an indispensable part in this computing paradigm. Figure 5 shows a comparative analysis of the obtained results for the service rate in three different sets (tasks × machines).

Fig. 5. QoS analysis of service rate (ms) for different sets: (a) 1000 × 96, (b) 2000 × 196, and (c) 3000 × 294.
The dew-enabled architecture and the Deep Q-learning approach helped minimise the proposed framework's service time. Comparatively, the other approaches do not consider the decomposition of tasks and the classification of the target layers for offloading. Identifying different offloading layers has expedited the execution process and thus reduced the service time for each user's request. FedDQL outperforms the other models [23], [30], [31], [32], [33] with an improvement of 49%, 23.2%, 22.6%, 14.7% and 3.23% on average.

2) Performance Analysis of Average Utilization: Resource utilization is a significant aspect of offloading requests for resources; this factor contributes as much as load balancing to improving the performance of any offloading algorithm. The use of the Deep Q-learning network helps to predict the load and assign the requests to each Fog server accordingly, considerably improving the utilization of all the underlying resources. Figure 6 depicts a comparative analysis of the obtained results for the average degree of utilization for the three different sets. The other approaches, by contrast, do not consider load balancing. FedDQL outperforms the other models [23], [30], [31], [32], [33] with an improvement of 51%, 36.8%, 32.4%, 27.7% and 9.33% on average for the three sets.

Fig. 6. QoS analysis of average utilization (%) for different sets: (a) 1000 × 96, (b) 2000 × 196, and (c) 3000 × 294.

3) Performance Analysis of Latency Rate: Latency, which is caused by high computational time or by delays in transmission and propagation, is a pivotal element for latency-sensitive applications in computing paradigms. Due to the placement of Fog servers between the Internet-enabled Vehicles and the Cloud layer, the latency rate is notably reduced.
Fig. 7. QoS analysis of latency rate (ms) for different sets: (a) 1000 × 96, (b) 2000 × 196, and (c) 3000 × 294.

Fig. 8. QoS analysis of average energy consumption (kJ) for different sets: (a) 1000 × 96, (b) 2000 × 196, and (c) 3000 × 294.
Moreover, classifying tasks and determining the offloading layers also minimise the latency across the computing layers. The latency rate gains significance as the number of tasks increases in a dynamic environment. Figure 7 presents a comparative analysis of the obtained results for the latency rate, where the suggested method shows a remarkable improvement over the other models for different specifications. FedDQL outperforms the other models [23], [30], [31], [32], [33] with an improvement of 15%, 9.4%, 9.9%, 14.7% and 2.33% on average for the three sets. As the number of requests increases, the proposed method shows notable gains over [32]; for a smaller set of requests, the two (the proposed method and [32]) perform approximately on par.

4) Performance Analysis of Average Energy Consumption: The degree of energy consumption is a challenging factor for any datacentre. Energy consumption depends on various factors, such as the specifications of the Fog servers/datacentre, computational capabilities, resource constraints, the size of the tasks, etc. In our approach, the energy consumption is evaluated for processing and storing each task on a resource and for transmitting and acknowledging the requests from/to the computing layers. Efficient utilization of resources can minimize the degree of energy consumption. Figure 8 presents the QoS analysis of the degree of energy consumption for an increasing number of tasks. It is apparent that the suggested method reduces energy consumption more efficiently than the other methods with different degrees of
computational specifications. FedDQL outperforms the other models [23], [30], [32] with an improvement of 39.9%, 34.7% and 21.6% on average for the three sets.

In summary, for service rate, average utilization, latency, and energy consumption, the effectiveness of the proposed FedDQL model has been compared with [23], [30], [31], [32], [33]: the improvement over [33] is 49%, 51% and 15%; over [30] it is 23.2%, 36.8%, 9.4% and 39.9%; over [23] it is 22.6%, 32.4%, 9.9% and 34.7%; over [31] it is 14.7%, 27.7% and 14.7%; and over [32] it is 3.23%, 9.33%, 2.33% and 21.6%, respectively.
strategy for 5G vehicular adhoc network,” Ad Hoc Netw., vol. 120,
Sep. 2021, Art. no. 102565. [Online]. Available: https://doi.org/10.1016/
j.adhoc.2021.102565
VI. C ONCLUSION AND F UTURE S TUDY [13] Y. Liu, H. Yu, S. Xie, and Y. Zhang, “Deep reinforcement learning
This paper proposes a noble offloading method for for offloading and resource allocation in vehicle edge computing and
networks,” IEEE Trans. Veh. Technol., vol. 68, no. 11, pp. 11158–11168,
compute-intensive and latency-sensitive dependency-aware Nov. 2019, doi: 10.1109/TVT.2019.2935450.
tasks in the Dew-enabled collaborative computing framework. [14] X. He, H. Lu, M. Du, Y. Mao, and K. Wang, “QoE-based task offloading
The dependency-aware tasks are depicted through a DAG, with deep reinforcement learning in edge-enabled Internet of Vehicles,”
IEEE Trans. Intell. Transp. Syst., vol. 22, no. 4, pp. 2252–2261,
where the interdependencies among tasks are also modelled. Apr. 2021.
Moreover, a Federated learning-supported Deep Q-learning [15] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. Y. Arcas,
(FedDQL)-based offloading strategy has been designed for “Communication-efficient learning of deep networks from decentralized
data,” in Proc. 20th Int. Conf. Artif. Intell. Stat., 2017, pp. 1273–1282.
the optimal assignment of tasks to machines. Next, Fuzzy [16] C. Tang, X. Wei, C. Zhu, Y. Wang, and W. Jia, “Mobile vehi-
logic is implemented to determine the target offloading lay- cles as fog nodes for latency optimization in smart cities,” IEEE
ers to prevent starvation and aging. The simulation results Trans. Veh. Technol., vol. 69, no. 9, pp. 9364–9375, Sep. 2020,
doi: 10.1109/TVT.2020.2970763.
showcase the efficacy of FedDQL over alternative offloading [17] Z. Ning, J. Huang, and X. Wang, “Vehicular fog computing: Enabling
algorithms based on several performance metrics with different real-time traffic management for smart cities,” IEEE Wireless Commun.,
specifications. Tasks and machine heterogeneity are taken into vol. 26, no. 1, pp. 87–93, Feb. 2019, doi: 10.1109/MWC.2019.1700441.
[18] A. Thakur and R. Malekian, “Fog computing for detecting vehicu-
account to appraise the effectiveness of the proposed method. lar congestion, an Internet of Vehicles based approach: A review,”
The proposed FedDQL method outperforms others [23], [30], IEEE Intell. Transp. Syst. Mag., vol. 11, no. 2, pp. 8–16, Mar. 2019,
[31], [32], [33] with an average improvement of 49%, 34.3%, doi: 10.1109/MITS.2019.2903551.
[19] Y. Li, H. Li, G. Xu, T. Xiang, and R. Lu, “Practical privacy-preserving
29.2%, 16.2% and 8.21%, respectively. federated learning in vehicular fog computing,” IEEE Trans. Veh.
A potential direction for future study are tasks offloading Technol., vol. 71, no. 5, pp. 4692–4705, May 2022.
and data sharing while a vehicle is in motion. Also, data migra- [20] O.-K. Shahryari, H. Pedram, V. Khajehvand, and M. D. TakhtFooladi,
“Energy-efficient and delay-guaranteed computation offloading for
tion between MFC servers, and migration across computing fog-based IoT networks,” Comput. Netw., vol. 182, Dec. 2020,
layers in reverse could be a future scope. Art. no. 107511, doi: 10.1016/j.comnet.2020.107511.
[21] Z. Zhou, P. Liu, J. Feng, Y. Zhang, S. Mumtaz, and J. Rodriguez,
“Computation resource allocation and task assignment optimization
in vehicular fog computing: A contract-matching approach,” IEEE
REFERENCES

[1] A. Boukerche and R. E. De Grande, "Vehicular cloud computing: Architectures, applications, and mobility," Comput. Netw., vol. 135, pp. 171–189, Apr. 2018.
[2] M. H. Eiza, Q. Ni, and Q. Shi, "Secure and privacy-aware cloud-assisted video reporting service in 5G-enabled vehicular networks," IEEE Trans. Veh. Technol., vol. 65, no. 10, pp. 7868–7881, Oct. 2016.
[3] A. M. A. Hamdi, F. K. Hussain, and O. K. Hussain, "Task offloading in vehicular fog computing: State-of-the-art and open issues," Future Gener. Comput. Syst., vol. 133, pp. 201–212, Aug. 2022, doi: 10.1016/j.future.2022.03.019.
[4] S.-S. Lee and S. Lee, "Resource allocation for vehicular fog computing using reinforcement learning combined with heuristic information," IEEE Internet Things J., vol. 7, no. 10, pp. 10450–10464, Oct. 2020, doi: 10.1109/JIOT.2020.2996213.
[5] H. Guo, J. Liu, J. Zhang, W. Sun, and N. Kato, "Mobile-edge computation offloading for ultradense IoT networks," IEEE Internet Things J., vol. 5, no. 6, pp. 4977–4988, Dec. 2018.
[6] W. S. Atoui, W. Ajib, and M. Boukadoum, "Offline and online scheduling algorithms for energy harvesting RSUs in VANETs," IEEE Trans. Veh. Technol., vol. 67, no. 7, pp. 6370–6382, Jul. 2018.
[7] H. A. Khattak, S. U. Islam, I. U. Din, and M. Guizani, "Integrating fog computing with VANETs: A consumer perspective," IEEE Commun. Standards Mag., vol. 3, no. 1, pp. 19–25, Mar. 2019.
[8] K. Kai, W. Cong, and L. Tao, "Fog computing for vehicular ad-hoc networks: Paradigms, scenarios, and issues," J. China Univ. Posts Telecommun., vol. 23, no. 2, pp. 56–96, 2016, doi: 10.1016/S1005-8885(16)60021-3.
[9] L. Yao, X. Xu, M. Bilal, and H. Wang, "Dynamic edge computation offloading for Internet of Vehicles with deep reinforcement learning," IEEE Trans. Intell. Transp. Syst., early access, Jun. 6, 2022, doi: 10.1109/TITS.2022.3178759.
[10] G. Qu, H. Wu, R. Li, and P. Jiao, "DMRO: A deep meta reinforcement learning-based task offloading framework for edge-cloud computing," IEEE Trans. Netw. Service Manag., vol. 18, no. 3, pp. 3448–3459, Sep. 2021.
[11] Z. Ning, P. Dong, X. Wang, J. J. P. C. Rodrigues, and F. Xia, "Deep reinforcement learning for vehicular edge computing: An intelligent offloading system," ACM Trans. Intell. Syst. Technol., vol. 10, no. 6, pp. 1–24, 2019, doi: 10.1145/3317572.
[12] U. Maan and Y. Chaba, "Deep Q-network based fog node offloading strategy for 5G vehicular adhoc network," Ad Hoc Netw., vol. 120, Sep. 2021, Art. no. 102565, doi: 10.1016/j.adhoc.2021.102565.
[13] Y. Liu, H. Yu, S. Xie, and Y. Zhang, "Deep reinforcement learning for offloading and resource allocation in vehicle edge computing and networks," IEEE Trans. Veh. Technol., vol. 68, no. 11, pp. 11158–11168, Nov. 2019, doi: 10.1109/TVT.2019.2935450.
[14] X. He, H. Lu, M. Du, Y. Mao, and K. Wang, "QoE-based task offloading with deep reinforcement learning in edge-enabled Internet of Vehicles," IEEE Trans. Intell. Transp. Syst., vol. 22, no. 4, pp. 2252–2261, Apr. 2021.
[15] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. Y. Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proc. 20th Int. Conf. Artif. Intell. Stat., 2017, pp. 1273–1282.
[16] C. Tang, X. Wei, C. Zhu, Y. Wang, and W. Jia, "Mobile vehicles as fog nodes for latency optimization in smart cities," IEEE Trans. Veh. Technol., vol. 69, no. 9, pp. 9364–9375, Sep. 2020, doi: 10.1109/TVT.2020.2970763.
[17] Z. Ning, J. Huang, and X. Wang, "Vehicular fog computing: Enabling real-time traffic management for smart cities," IEEE Wireless Commun., vol. 26, no. 1, pp. 87–93, Feb. 2019, doi: 10.1109/MWC.2019.1700441.
[18] A. Thakur and R. Malekian, "Fog computing for detecting vehicular congestion, an Internet of Vehicles based approach: A review," IEEE Intell. Transp. Syst. Mag., vol. 11, no. 2, pp. 8–16, Mar. 2019, doi: 10.1109/MITS.2019.2903551.
[19] Y. Li, H. Li, G. Xu, T. Xiang, and R. Lu, "Practical privacy-preserving federated learning in vehicular fog computing," IEEE Trans. Veh. Technol., vol. 71, no. 5, pp. 4692–4705, May 2022.
[20] O.-K. Shahryari, H. Pedram, V. Khajehvand, and M. D. TakhtFooladi, "Energy-efficient and delay-guaranteed computation offloading for fog-based IoT networks," Comput. Netw., vol. 182, Dec. 2020, Art. no. 107511, doi: 10.1016/j.comnet.2020.107511.
[21] Z. Zhou, P. Liu, J. Feng, Y. Zhang, S. Mumtaz, and J. Rodriguez, "Computation resource allocation and task assignment optimization in vehicular fog computing: A contract-matching approach," IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3113–3125, Apr. 2019, doi: 10.1109/TVT.2019.2894851.
[22] J. Shi, J. Du, J. Wang, J. Wang, and J. Yuan, "Priority-aware task offloading in vehicular fog computing based on deep reinforcement learning," IEEE Trans. Veh. Technol., vol. 69, no. 12, pp. 16067–16081, Dec. 2020, doi: 10.1109/TVT.2020.3041929.
[23] F. H. Rahman, S. H. S. Newaz, T.-W. Au, W. S. Suhaili, M. A. P. Mahmud, and G. M. Lee, "EnTruVe: ENergy and TRUst-aware virtual machine allocation in VEhicle fog computing for catering applications in 5G," Future Gener. Comput. Syst., vol. 126, pp. 196–210, Jan. 2022.
[24] S. Vemireddy and R. R. Rout, "Fuzzy reinforcement learning for energy efficient task offloading in vehicular fog computing," Comput. Netw., vol. 199, Nov. 2021, Art. no. 108463.
[25] A. R. Hameed, S. ul Islam, I. Ahmad, and K. Munir, "Energy- and performance-aware load-balancing in vehicular fog computing," Sustain. Comput. Inform. Syst., vol. 30, Jun. 2021, Art. no. 100454.
[26] B. Shabir, A. U. Rahman, A. W. Malik, R. Buyya, and M. A. Khan, "A federated multi-agent deep reinforcement learning for vehicular fog computing," J. Supercomput., vol. 79, pp. 6141–6167, Oct. 2022, doi: 10.1007/s11227-022-04911-8.
[27] X. Xu, Z. Fang, L. Qi, W. Dou, Q. He, and Y. Duan, "A deep reinforcement learning-based distributed service offloading method for edge computing empowered Internet of Vehicles," Chin. J. Comput., vol. 44, no. 12, pp. 2382–2405, 2021.
[28] J. Zhao, Q. Li, Y. Gong, and K. Zhang, "Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks," IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 7944–7956, Aug. 2019.
[29] T. Hester et al., "Deep Q-learning from demonstrations," in Proc. AAAI Conf. Artif. Intell., vol. 32, 2018, pp. 3223–3230.
[30] C. Chakraborty, K. Mishra, S. K. Majhi, and H. K. Bhuyan, "Intelligent latency-aware tasks prioritization and offloading strategy in distributed fog-cloud of things," IEEE Trans. Ind. Informat., vol. 19, no. 2, pp. 2099–2106, Feb. 2023, doi: 10.1109/TII.2022.3173899.
[31] V. Sethi and S. Pal, "FedDOVe: A federated deep Q-learning-based offloading for vehicular fog computing," Future Gener. Comput. Syst., vol. 141, pp. 96–105, Apr. 2023.
[32] M. Tiwari, I. Maity, and S. Misra, "FedServ: Federated task service in fog-enabled Internet of Vehicles," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 11, pp. 20943–20952, Nov. 2022, doi: 10.1109/TITS.2022.3186401.
[33] D. B. Son, V. T. An, T. T. Hai, B. M. Nguyen, N. P. Le, and H. T. T. Binh, "Fuzzy deep Q-learning task offloading in delay constrained vehicular fog computing," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), 2021, pp. 1–8, doi: 10.1109/IJCNN52387.2021.9533615.
[34] T. D. Braun et al., "A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems," J. Parallel Distrib. Comput., vol. 61, no. 6, pp. 810–837, 2001, doi: 10.1006/jpdc.2000.1714.
[35] X. Chen, H. Zhang, C. Wu, S. Mao, Y. Ji, and M. Bennis, "Optimized computation offloading performance in virtual edge computing systems via deep reinforcement learning," IEEE Internet Things J., vol. 6, no. 3, pp. 4005–4018, Jun. 2019.

Kaushik Mishra (Member, IEEE) received the Ph.D. degree from the Veer Surendra Sai University of Technology, Burla, in 2021. He is currently working as an Assistant Professor with the Department of Computer Science and Engineering, Gandhi Institute of Technology and Management University, Visakhapatnam, India. He has publications in various journals of repute and conference proceedings. His research areas include cloud computing and fog computing. He received two best paper awards at conferences held at NIT, Agartala, India, and ITER, Bhubaneswar, India, in 2020.

Goluguri N. V. Rajareddy is currently pursuing the Ph.D. degree with NIT Silchar and is also working as an Assistant Professor with the Department of Computer Science and Engineering, School of Technology, Gandhi Institute of Technology and Management University (Deemed to be University), Visakhapatnam, India. He has published five research papers in various reputed international journals. He has eight years of full-time teaching experience, and his areas of interest are image processing and deep learning. He is a member of IAENG, UACEE, and IFERP.

Umashankar Ghugar (Member, IEEE) received the Doctoral degree (Full-Time) from Berhampur University, Odisha, in 2021. He is currently working as an Assistant Professor with the School of Technology, Department of CSE, GITAM University, Visakhapatnam. He has published 14 articles, including journal papers, book chapters, and conference papers with international publishers. His research interests are in computer networks and network security in WSN. He is a Reviewer for IEEE ACCESS, IEEE TRANSACTIONS ON EDUCATION, IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, Security and Privacy (Wiley), International Journal of Communication Systems (Wiley), International Journal of Distributed Sensor Networks (Hindawi), International Journal of Knowledge Discovery in Bioinformatics (IGI Global), and International Journal of Information Security and Privacy (IGI Global), and a member of IACSIT, CSTA, and IRED.

Gurpreet Singh Chhabra received the Ph.D. degree in Computer Science and Engineering. He is currently working as an Assistant Professor with the Computer Science and Engineering Department, GITAM School of Technology, GITAM (Deemed to be University), Visakhapatnam. He has 15 years of teaching experience. His qualifications are fortified with a great deal of creativity and problem-solving skill. He has credit for many international papers, patents, book chapters, and books. His research interests include deep learning, machine learning, data science, and fog computing. He is a Life Member of the ISTE and IAENG.

Amir H. Gandomi (Senior Member, IEEE) is a Professor of Data Science and an ARC DECRA Fellow with the Faculty of Engineering and Information Technology, University of Technology Sydney. He is also affiliated with Óbuda University, Budapest, as a Distinguished Professor. Prior to joining UTS, he was an Assistant Professor with the Stevens Institute of Technology, USA, and a Distinguished Research Fellow with the BEACON Center, Michigan State University, USA. He has published over three hundred journal papers and 12 books, which collectively have been cited 40,000+ times (H-index = 91). He has been named one of the most influential scientific minds and received the Highly Cited Researcher Award (top 1% publications and 0.1% researchers) from Web of Science for six consecutive years, from 2017 to 2022. In the recent most impactful researcher list, compiled by Stanford University and released by Elsevier, he is ranked among the top 1,000 researchers (top 0.01%) and the top 50 researchers in the AI and Image Processing subfield in 2021. He is also ranked 17th in the GP bibliography among more than 15,000 researchers. His research interests are global optimization and (big) data analytics using machine learning and evolutionary computation in particular. He has received multiple prestigious awards for his research excellence and impact, such as the 2022 Walter L. Huber Prize, the highest-level mid-career research award in all areas of civil engineering. He has served as an Associate Editor, an Editor, and a Guest Editor for several prestigious journals, including as an Associate Editor for IEEE NETWORK and IEEE INTERNET OF THINGS JOURNAL. He is active in delivering keynotes and invited talks.