A Collaborative Computation and Offloading for Compute-Intensive and Latency-Sensitive Dependency-Aware Tasks in Dew-Enabled Vehicular Fog Computing: A Federated Deep Q-Learning Approach
Abstract—The demand for vehicular networks is growing prolifically as they advance the capabilities and quality of vehicle services. However, a vehicular network cannot solely carry out latency-sensitive and compute-intensive tasks, as the slightest delay may cause a catastrophe. Therefore, fog computing can be a viable solution as an integration to address the aforementioned challenges. Moreover, it complements Cloud computing, as it reduces the incurred latency and ingress traffic by shifting the computing resources to the edge of the network. This work investigated task offloading methods in Vehicular Fog Computing (VFC) networks and proposes a Federated learning-supported Deep Q-Learning-based (FedDQL) technique for optimal offloading of tasks in a collaborative computing paradigm. The proposed offloading method in the VFC network performs computations, communications, offloading, and resource utilization considering the latency and energy consumption. The trade-offs between latency and computing and communication constraints were considered in this scenario. The FedDQL scheme was validated for dependent task sets to analyze the efficacy of this method. Finally, the results of extensive simulations provide evidence that the proposed method outperforms others with an average improvement of 49%, 34.3%, 29.2%, 16.2% and 8.21%, respectively.

Index Terms—Deep reinforcement learning, federated learning, mobile fog computing, q-learning, vehicular fog computing, task dependency, task offloading.

Manuscript received 27 February 2023; revised 5 May 2023; accepted 25 May 2023. Date of publication 5 June 2023; date of current version 12 December 2023. Open access funding provided by Óbuda University. The associate editor coordinating the review of this article and approving it for publication was N. Kumar. (Corresponding author: Amir H. Gandomi.)
Kaushik Mishra, Goluguri N. V. Rajareddy, Umashankar Ghugar, and Gurpreet Singh Chhabra are with the Department of Computer Science and Engineering, GITAM (Deemed to be University), Visakhapatnam 530045, India (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).
Amir H. Gandomi is with the Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia, and also with the University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary (e-mail: [email protected]).
Digital Object Identifier 10.1109/TNSM.2023.3282795
© 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

I. INTRODUCTION

Vehicular networks play a consequential part in smart transportation systems due to the rapid evolvement of the Internet of Things (IoT). These vehicular networks facilitate numerous advanced yet complex applications, such as automatic driving, crash detection, AR&VR-enabled intelligent applications, and other interactive modules for passengers. These applications need intensive computations on resources and interactive communications, which are critical challenges for vehicular networks requiring rich, complex services. Besides, the ingress traffic on the road makes the computations/communications difficult with the limited capabilities of vehicles. The applications on vehicles are required to handle latency-sensitive data without delay, for which the network connectivity must be stable and accelerated to handle such tasks within a deterministic span of time. However, the slightest delay in communication between vehicles may cause a catastrophe. Therefore, the vehicular network cannot solely be responsible for these latency-sensitive and compute-intensive tasks; thus, a high-end computing paradigm-based vehicular network is required.

Cloud computing has been viewed as a viable solution to address these issues for vehicular networks [1], [2]. In a Cloud-based vehicular network, tasks are offloaded to the Cloud for computation and storage due to the high computing capabilities of Cloud VMs. However, the physical gap between the vehicles and the Cloud server results in a significant latency gap, which reduces the performance and efficiency of task offloading. Thus, a decentralized architecture and full-fledged paradigm is required that reduces the incurred latency and copes with compute- and time-intensive and latency-sensitive tasks with complex requirements, complementary to Cloud computing.

Fog-based vehicular computing (VFC) is a cutting-edge computing paradigm to address the innate loopholes of the Cloud-based vehicular network [3], [4]. To tackle the requirements of complex tasks, VFC has been envisioned as a potential solution, providing both on-demand computation and communication resources. Integrating Fog computing with Cloud computing considerably reduces the latency and traffic density as well as improves the users' response time. The computation and communication at/to the Fog computing
work in parallel with the services provided at the edge of the network through the Mobile Edge Computing (MEC) or Mobile Fog Computing (MFC) servers equipped in each Road Side Unit (RSU) and Access Point (AP) [5]. However, these RSUs have much less capacity in terms of power, storage, and computation, and the computation of high-intensive tasks causes high energy consumption at RSUs. In the VFC network, the Fog servers are either coupled with RSUs or deployed as a separate entity. In this work, we consider the decoupled mode of Fog servers due to the flexibility it affords in computing tasks with disparate requirements. The Fog layer encompasses both homogeneous and heterogeneous Fog nodes to cope with compute-intensive and latency-sensitive tasks. The computational capabilities of Fog nodes change depending upon the vehicular specifications. In this architecture, each RSU keeps in touch with all the Fog servers and their corresponding Fog nodes. At a given time, one Fog server may be connected to several RSUs, but each RSU is only ever connected to one Fog server. When an RSU receives a request, it either computes it locally or offloads it to the Fog server for processing.

A. Problem Definition

The VFC imposes several challenges. The first and most challenging issue is unstable network connectivity. Internet connectivity is an indispensable part of vehicular networks, needed for transmitting any request, and unstable connectivity may lead to a catastrophic situation. Therefore, a Dew-enabled Internet of Things is facilitated in this architecture.

The second most prominent challenge is the high consumption of energy by the RSUs [6]. A considerable amount of energy is consumed by the RSUs during the offloading of requests and the processing of some requests locally. The incurred consumption depends on the quality of the medium connected to the Fog servers and the arrival rate of the requests at the RSUs. Although an RSU processes some requests locally, it also consumes energy due to the disparate specifications and requirements of requests. Hence, there is a need for an optimal association among RSUs and Fog servers to reduce energy consumption.

A third challenge is dispersing the loads uniformly across Fog nodes to improve the QoS. Khattak et al. [7] and Kai et al. [8] proposed integrating the principles of Fog computing with the vehicular ad-hoc network; however, load balancing among Fog nodes in terms of QoS was not considered. Though the loads are of different configurations, they must be offloaded uniformly across Fog nodes to prevent overloaded or underloaded conditions. The computation of loads depends on the arrival rate and requirements of each request at the corresponding RSU. The overloaded/underloaded condition arises due to the different arrival rates at the RSUs. To increase QoS (resource usage), collaboration between RSUs and Fog servers must be established to significantly reduce the load imbalance.

The system costs pose another challenge in these vehicular networks. The communication and computation costs together make up the system costs. Earlier, RSUs were powered by renewable energy for offloading, and the MFC servers generated profits from them. However, since the existing resources have become scarcer due to the rapid ingress of traffic along with other present-day challenges, the vehicles can lease resources to facilitate the task offloading.

B. Motivation

Deep learning (DL), a subset of the Machine Learning (ML) domain, is a promising solution to address complex problems. AI is an enabler for DL to mimic the learning process. Deep Reinforcement Learning (DRL) combines DL and RL, making use of DL's noncognitive behaviour and RL's ability to make decisions [9]. DRL interacts with the vehicles directly and obtains the optimal scheduling/offloading strategy mapping. Based on the existing literature, several works have used DRL to find the optimal offloading decision. For instance, Yao et al. [9] proposed a hybrid resource allocation strategy for VFC using reinforcement learning with heuristic information. Qu et al. [10] proposed a DMRO algorithm integrating DNN with Q-learning and meta-learning approaches to identify the optimal offloading decisions. Ning et al. [11] devised a resource management algorithm using DRL for VEC. To improve the effectiveness of vehicular networks (VNs), Maan and Chaba [12] devised a strategy using a Deep Q-network for offloading. To optimize the total utility of VEC, Liu et al. [13] implemented a combined offloading strategy utilizing Q-learning and DRL together. He et al. [14] proposed an offloading method using DRL to improve the Quality of Experience (QoE) for the Internet of Vehicles. However, load balancing was not considered by the aforementioned strategies while allocating the tasks from vehicles, leading to insignificant resource utilization and latency overhead. Traditional ML methods utilize a central server to collect and process the data; however, the central server gets overloaded with computation and communication overheads. Because data privacy is also an indispensable part of data acquisition, the efficiency and accuracy of ML techniques largely depend on the data's size and the central server's capacity, which poses challenges in achieving optimal and accurate results with data confidentiality. To address these issues of traditional ML and DRL techniques, this work proposes a Federated learning-supported Deep Q-learning offloading strategy that facilitates uploading data collaboratively into a global model without sharing the raw data [15].

Our work aimed to analyze the task-offloading strategy in two scenarios: (1) considering cooperative and non-cooperative MFC servers, and (2) considering task dependency. In the first case, when a vehicle passes through an RSU, the tasks related to the respective vehicle are independently computed by the MFC server associated with that RSU. On the contrary, when many vehicles offload their tasks to the RSU, the RSU chooses whether to compute such tasks locally or transfer them to the next RSU (located in the direction of the moving vehicle) deployed at Fog servers for computation; hence, the computation is performed cooperatively. In the second case, we consider dependent tasks for computation.

In vehicular networks, vehicles generate a huge amount of
4602 IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, VOL. 20, NO. 4, DECEMBER 2023
data, which are sent to the Cloud or Fog layer, depending on which one will handle them. The Fog layer is an amalgamation of different Fog nodes with homogeneous and heterogeneous resources. These Fog nodes are computing nodes in the form of virtual machines (VMs) that process requests either collaboratively or locally in this layer. Furthermore, these computing nodes are clustered based on computing capacity into two groups: Collaborative Fog Nodes (CFNs), a collection of both homogeneous and heterogeneous resources able to handle tasks (or requests) with disparate specifications; and Local Fog Nodes (LFNs), a combination of only homogeneous resources capable of handling tasks with minimum completion time or requirements. Moreover, this layer also deploys RSUs and MFC servers to communicate among vehicles. Each RSU is embedded with an MFC server and gathers service requests from vehicles within its range, while each MFC server processes tasks and subtasks through offloading. Here, the nodes are geographically distributed, facilitating the execution of tasks in a distributed way to leverage the centralized architecture of the Cloud datacentre. Tasks with high requirements, or tasks taking longer to execute in the Fog layer, are offloaded to the Cloud for processing and storage. The Cloud datacentre encompasses physical machines (or hosts) and VMs carved out of hosts through virtualization technology. The VMs are the computing entities enabling the scheduling and execution of high-end tasks. The centralized architecture of the Cloud datacentre results in high response time and latency overhead; the Fog layer is introduced in the proposed framework to minimize these issues.

We assume that the tasks are dependent in nature and, hence, partitioned into subtasks. Furthermore, we assume that N compute-intensive and latency-sensitive tasks are connected with each vehicle and will be scheduled and computed within a deterministic completion time. The number of RSUs is represented as R = {1, 2, 3, ..., r, ..., R}, and the group of vehicles is referred to as V = {1, 2, 3, ..., v, ..., V}. The variable T_n^v refers to the n-th task of the vehicle v.

B. Vehicular Task Model

This section introduces dependency-aware task offloading for the vehicular task model. Interdependency among tasks is considered, as the tasks are not atomic. In other words, when a task is broken down into subtasks, the processing of each subtask depends upon its previous subtask's execution. The processing of each subtask is based on its requirements, and each subtask is offloaded to the respective target layer for processing (locally (vehicle) or via Fog or Cloud). The subtasks of a task are represented through a Directed Acyclic Graph (DAG) modelled as G = (τ, ε), where τ is the array of subtasks and ε is the set of directed edges depicting interdependencies between two subtasks. Let I = |τ| represent the total set of subtasks of a task T_n^v. In the DAG, a vertex T_{n,i}^v denotes the i-th subtask of the n-th task of the vehicle v, and a directed edge (T_{n,i}^v, T_{n,j}^v) indicates the interdependency between the i-th subtask and the j-th subtask of T_n^v: the j-th subtask can be executed only if the i-th subtask has been completed, where i, j ∈ I.

As shown in Fig. 1, the task T_2^3 (n = 1, 2, ..., N; v = 1, 2, ..., V) is broken down into seven subtasks, T_{2,1}^3, T_{2,2}^3, T_{2,3}^3, T_{2,4}^3, T_{2,5}^3, T_{2,6}^3, and T_{2,7}^3, of the form T_{n,i}^v, i = 1, 2, ..., 7. The subtask T_{2,1}^3 is the entry subtask and executes first, while T_{2,7}^3 is the exit subtask and executes at the end, after all its predecessor subtasks have been executed. Each subtask T_{n,i}^v is defined with four attributes, i.e., T_{n,i}^v = ⟨L_{n,i}^v, D_{n,i}^v, d_{n,i}^v, C_{n,i}^v⟩, where L_{n,i}^v denotes the length of the subtask in MI, D_{n,i}^v is the latency rate of the subtask, d_{n,i}^v is the deadline (hard, soft, or firm) of the subtask, and C_{n,i}^v is the number of resources required for computation to execute T_{n,i}^v.

In our approach, a task can be categorized into three classes, namely crucial, high-end, and low-end tasks. We assume that the crucial tasks are those which contain some crucial information about the vehicle and, hence, can be computed in the vehicle itself. High-end tasks are associated with some priority and deadline. If a high-end task does not meet its deadline, the outcome must not be considered, or the task is considered as failed. Low-priority tasks also have priorities and deadlines; unlike high-priority tasks, if a low-priority task does not meet its deadline, the outcome is still in use, but it will not be considered if the execution time keeps increasing. If the execution of a task takes longer than usual, the task is forwarded to the upper layer for processing. However, the vehicles must ensure the execution of high-priority tasks over low-priority tasks. A utility in terms of a reward or penalty is associated with both high-priority and low-priority tasks upon meeting or missing deadlines. For a high-priority task, a log function defines the utility of a subtask, given as:

    μ_n^H = { log(1 + τ_n − t_{C_n}),   t_{C_n} ≤ τ_n
            { −Γ^H,                     t_{C_n} > τ_n        (1)

where t_{C_n} denotes the completion time of a subtask T_{n,i}^v, and −Γ^H implies a penalty for not being able to meet the deadline.

For a low-priority task, if the completion time of the executing subtask is less than the defined deadline, then the utility is a reward; otherwise, the utility is a penalty, expressed as:

    μ_n^L = { Γ^L,                          t_{C_n} ≤ τ_n
            { Γ^L e^{−c(t_{C_n} − τ_n)},    t_{C_n} > τ_n        (2)

where Γ^L defines a positive (constant) reward, and c is a constant greater than zero.

In order to offload a subtask to another vehicle (call it a service vehicle), the task vehicle has to pay a unit price denoted as ρ_n. Hence, the computation size of the subtask (C_n) can be estimated as C_n = f_n × t_n, where f_n is the frequency assigned to the service vehicle, and t_n denotes the computing time of the subtask. Finally, the utility of the task vehicle can be expressed as:

    Util_n = 1(T_n = T_H) μ_n^H + 1(T_n = T_L) μ_n^L − ρ_n C_n        (3)

where 1(·) is an indicator function [28], and T_H and T_L denote the high-priority and low-priority subtasks, respectively.
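The dependency and utility model above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation: the edge list mirrors a seven-subtask DAG like Fig. 1 (the exact edges of Fig. 1 are an assumption), and the numeric rewards, deadlines, and prices are invented for demonstration.

```python
import math

# Hypothetical DAG of seven subtasks: edge (i, j) means subtask j may
# run only after subtask i has completed (the dependency rule above).
edges = [(1, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 6), (6, 7)]

def ready(done: set, edges) -> set:
    """Subtasks whose predecessors have all completed and are not yet done."""
    all_nodes = {n for e in edges for n in e}
    return {j for j in all_nodes
            if j not in done and all(i in done for i, k in edges if k == j)}

def utility_high(t_c: float, tau: float, gamma_h: float) -> float:
    """Eq. (1): log reward if the deadline is met, fixed penalty otherwise."""
    return math.log(1 + tau - t_c) if t_c <= tau else -gamma_h

def utility_low(t_c: float, tau: float, gamma_l: float, c: float) -> float:
    """Eq. (2): constant reward if on time, exponentially decayed otherwise."""
    return gamma_l if t_c <= tau else gamma_l * math.exp(-c * (t_c - tau))

def task_utility(is_high: bool, t_c: float, tau: float,
                 gamma: float, c: float, rho: float, comp_size: float) -> float:
    """Eq. (3): class utility minus the price paid for offloaded computation."""
    mu = utility_high(t_c, tau, gamma) if is_high else utility_low(t_c, tau, gamma, c)
    return mu - rho * comp_size

print(sorted(ready({1}, edges)))        # entry subtask done -> [2, 3] are ready
print(utility_low(5.0, 4.0, 1.0, 0.5))  # missed deadline -> decayed reward exp(-0.5)
```

The `ready` helper enforces exactly the DAG rule stated above (a subtask executes only after all its predecessors), which is what an offloading scheduler would iterate over when dispatching subtasks to the vehicle, Fog, or Cloud layers.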
TABLE I
TARGET OFFLOADING LAYERS AS INFERENCES USING FUZZY LOGIC

Table I delineates the required output (target offloading layers) based on the Fuzzy inputs.

energy consumption model, latency-aware computation model, and pricing model. Each model is elucidated below.

1) Energy Consumption Model: The total energy consumed at time t when a vehicle transmits a request either to the Fog layer or the Cloud layer via an RSU is expressed in Eq. (7):

    Ec_i(t) = Ec_i^r(t) + Ec_i^{Fog/Cloud}(t)        (7)

The energy consumed at time t at the RSU r, Ec_i^r(t), while propagating the request from the vehicles is estimated in Eq. (8):

    Ec_i^r(t) = L_i × Ec_nw^r(i, t) + L_i × h_i × (P_idle^r / Cap_idle^r + P_max^r / (Cap_max^r × Util^r))        (8)

where L_i is the length of the i-th request; Ec_nw^r(i, t) denotes the energy consumed to transmit each byte of the request from a vehicle v to the RSU r via the network; h_i is the transmission gain of the channel; P_idle^r and P_max^r are the energy consumed during the idle and active states of the RSU r, respectively; Cap_idle^r and Cap_max^r are the capacities of the RSU r during the idle and active states, respectively; and Util^r is the utilization of the RSU r.

Next, the task is offloaded either to the Fog or the Cloud for computation. Here, energy is consumed while processing and storing the requests as well as sending the respective acknowledgement. Hence, the energy is mathematically expressed as follows:

    Ec_i^{Fog/Cloud}(t) = Ec_i^{P&S} + β × Ec_i^{Cloud} + Ec_ack        (9)

The energy consumed while processing and storing the request either at a Fog node or a Cloud VM, Ec_i^{P&S}, is expressed in Eq. (10):

    Ec_i^{P&S} = L_i × Ec_i^P + L_i × Ec_i^S        (10)

    Ec_i^P = (δ × P_j^active + (1 − δ) × P_j^idle) × St(j, t) / nT(j, t)        (11)

where δ is the ratio (T_active / T_total) of the active time to the total time of the j-th Fog node or Cloud VM; (1 − δ) denotes the elapsed time for the j-th Fog node or Cloud VM; P_j^active and P_j^idle denote the power consumption of the j-th Fog node or Cloud VM during the active and idle states, respectively; St(j, t) is the service time of the j-th Fog node or Cloud VM; nT(j, t) is the number of tasks associated with the j-th Fog node or Cloud VM; and β (= T_req^sent / T_req) is the ratio of the total requests offloaded to the Cloud from Fog nodes upon failure to the total requests sent.

Ec_i^{Cloud} is the energy consumed by the j-th Cloud VM when any Fog node fails to execute, as computed in Eq. (12):

    Ec_i^{Cloud} = Ec_i^{P&S} + Ec_ack        (12)

Ec_ack is the energy consumed while sending the acknowledgement back to the RSU or the corresponding Fog node and is expressed in Eq. (13):

    Ec_ack = L_i × Ec_nw^{r/Fog}(i, t)        (13)

2) Latency-Aware Model: A task's subtasks can be carried out locally (on the vehicle itself) or offloaded to a Cloud server or an MFC server at the Fog layer for processing. At the various phases of processing and offloading, variable rates of latency might occur. In this instance, the processing times of the subtasks at the vehicles, the MFC server, and the Cloud server can be used to determine the latency.

Computing on vehicles: When the tasks are classified and offloaded to the target layers for computation, tasks with their subtasks arrive at the queue of the vehicle for computation. We assume that all the tasks in the queue are in FIFO order and the vehicle processes one subtask at a time; that is, while a subtask is in execution, other subtasks must wait in the queue. This processing involves both processing time and waiting time in the queue. Hence, the total latency incurred while computing a subtask T_{n,i}^v on a vehicle is given by Eq. (14):

    D_local(n,i)^v = C_{n,i}^v / c^v + D_wait(n,i)^v
                   = C_{n,i}^v / c^v + T_start(n,i)^{local(v)} − T_request(n,i)^{local(v)}        (14)

where C_{n,i}^v / c^v denotes the local processing latency; D_wait(n,i)^v is the waiting time of a subtask T_{n,i}^v; C_{n,i}^v is the number of computing resources needed to execute the subtask T_{n,i}^v; c^v is the computation capability of the vehicle v; and T_start(n,i)^{local(v)} and T_request(n,i)^{local(v)} are the starting and requested times of a subtask T_{n,i}^v, respectively.

Computing on the MFC server: The latency at the MFC server is the sum of the transmission delay from the vehicle to the MFC server via the RSU, the processing delay, the waiting delay of the subtasks on the MFC server to be scheduled, the acknowledgement delay back to the vehicle, and the offloading latency to the Cloud upon failure. These factors are formulated as follows.

First, the transmission delay in propagating a request i to the respective Fog node is estimated as:

    D_(n,i)^fog = L_i × TL_nw        (15)

where TL_nw denotes the transmission delay caused by propagating each byte of request i from the vehicle v to an MFC server fn_j, j = 1, 2, ..., m.

Second, the latency caused by processing the request i on an MFC server is denoted as:

    DP^fog = L_i × St(j, t) / nT(j, t)        (16)

Third, transmitting an acknowledgement to the respective vehicle v causes some latency, denoted as:

    D_ack(n,i)^v = L_i × TL_nw        (17)

Next, the waiting latency at the queue of the MFC server is similar to the local waiting delay and is expressed as follows:

    D_wait(n,i)^fog = T_start(n,i)^fog − T_request(n,i)^fog        (18)

Finally, the transmission delay caused by offloading a request i to one of the Cloud servers upon failure is expressed as
device. So, the total transmission delay is the original delay multiplied by two.

By combining Eqs. (15)-(19), we get the total latency caused at the MFC server in Eq. (20):

    D^fog = D_(n,i)^fog + DP_(n,i)^fog + D_wait(n,i)^fog + β × D_(n,i)^Cloud + D_ack^v        (20)

Computing on the Cloud server: The subtask T_{n,i}^v is migrated to the Cloud server after being offloaded from the vehicle v to the RSU r for processing. The transmission latency includes the transmission delay, the processing delay at the Cloud server, and the acknowledgement delay, which are analogous to the MFC server. Hence, the total latency caused by the Cloud server is defined as follows:

    D^Cloud = D_(n,i)^Cloud + DP^Cloud + D_ack(n,i)^v        (21)

computational latency and mean energy consumption while maximizing the total utilization of offloading subtasks in terms of cost, subject to the constraints of communication and computing resources, the nature of the association between the RSUs and Fog servers, the binary mapping of subtasks to compatible Fog servers or Cloud servers, and the availability of the selected vehicle to offload the subtask when a vehicle fails to compute:

    min  β × TL_{n,i}^v(t) + α × Ec(t),   r ∈ R, v ∈ V, n ∈ N        (23)

    max_{C_n}  (1/N) Σ_{n=1}^{N} Σ_{v=1}^{V} [1(T_n = T_H) U_n^H + 1(T_n = T_L) U_n^L − ρ_n C_n]        (24)

subject to the following constraints:

    C1: β + α = 1;        (25a)
    C2: TL_{n,i}^v = D_local(n,i)^v + D^fog + D^Cloud;        (25b)
    C3: X_{i,j}(t) ∈ {0, 1}, ∀i ∈ R, ∀j ∈ V;        (25c)
    C4: 0 ≤ Σ_{i∈R} Σ_{j∈V} X_{i,j}(t) ≤ 1;        (25d)
    C5: Comp_{v,m} ≤ max_{v∈V} comp_cap^v, Comp_{v,m} ∈ r_{n,i}^v;        (25e)
    C6: Comm_{v,m} ≤ max_{v∈V} comm_cap^v, Comm_{v,m} ∈ r_{n,i}^v;        (25f)

where X_{i,j}(t) is the mapping of subtask i to a compatible j-th Fog/Cloud server; and Alloc_comp(t) and Alloc_comm(t) are the allocated computation and communication resources for the subtask i of the task T_{n,i}^v, respectively.

In a dynamic context (unpredictable requests with disparate requirements) where the level of collaboration between RSUs and Fog servers/Cloud servers fluctuates, it is challenging to optimize the objective functions in Eqs. (23) and (24) using conventional optimization approaches such as heuristics or metaheuristics. In other words, it is not easy to find the best solutions for the objectives given the constantly changing level of collaboration between the servers and RSUs due to the unpredictable demands. Hence, the next section proposes a Federated learning-supported Deep Q-learning offloading (FedDQL) strategy to optimize the aforementioned objective functions.

Reward: Based on the execution of the action on the state defined in the formulated environment, a reward is granted. The reward Re(t) is the negation of the weighted total of the average energy consumption and the computational latency at time t, given as [11]:

    Re(t) = −(β × TL_{n,i}^v(t) + α × EC(t))        (29)

An agent must select the action Ac(t), the offloading of subtask i at state S(t), to obtain the optimal reward while minimizing the total energy consumption across RSUs and the computational latency. In the next sections, the conventional Q-learning approach is demonstrated, followed by the proposed FedDQL offloading strategy.
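The latency composition in Eqs. (20)-(21) and the reward in Eq. (29) can be sketched as plain functions. This is an illustrative sketch, not the paper's implementation: all numeric inputs are invented, and the component delays are passed in directly rather than derived from Eqs. (15)-(18).

```python
# Sketch of the total-latency and reward computations above; all
# numeric inputs are illustrative assumptions, not the paper's values.

def mfc_latency(d_tx: float, d_proc: float, d_wait: float,
                d_ack: float, beta: float, d_cloud: float) -> float:
    """Eq. (20): total MFC latency, including the expected Cloud
    offload cost (beta * D_cloud) incurred upon failure."""
    return d_tx + d_proc + d_wait + beta * d_cloud + d_ack

def cloud_latency(d_tx: float, d_proc: float, d_ack: float) -> float:
    """Eq. (21): transmission + processing + acknowledgement delay."""
    return d_tx + d_proc + d_ack

def reward(total_latency: float, energy: float, beta: float, alpha: float) -> float:
    """Eq. (29): Re(t) = -(beta * TL(t) + alpha * Ec(t)); constraint C1
    requires beta + alpha = 1, so this is a negated convex combination."""
    assert abs(beta + alpha - 1.0) < 1e-9, "C1 in Eq. (25a): beta + alpha = 1"
    return -(beta * total_latency + alpha * energy)

d_fog = mfc_latency(d_tx=10.0, d_proc=25.0, d_wait=5.0,
                    d_ack=10.0, beta=0.1, d_cloud=100.0)
print(d_fog)                                     # 10 + 25 + 5 + 10 + 10 = 60.0
print(reward(d_fog, 40.0, beta=0.6, alpha=0.4))  # -(36 + 16) = -52.0
```

Because the reward is the negated weighted objective, an agent that maximizes Re(t) is simultaneously driving down both the latency and the energy terms of Eq. (23), which is the design choice that lets a standard Q-learning update optimize the stated objective.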
where O_DQNN denotes the output of the DQNN and is expressed in Eq. (32):

    O_DQNN = Re(t + 1) + max_{Ac(t+1)} Q(S(t + 1), Ac(t + 1))        (32)

For network stabilization, the transition table is updated with tuples of the form {S(t), Ac(t), Re(t + 1), S(t + 1)}. The DQNN works as follows: the parameters of the primary and target layers are initialized. The DQNN agent gains the state space information from all the RSUs, then identifies an action space using an ε-greedy policy. Next, the transition table is updated with the reward and state space data. Furthermore, the weights are updated, and the loss function specified in Eq. (31) is minimized using a gradient descent approach. The DQNN duplicates the parameters (θ) of the target layer from the primary network at the end of a specific phase.

C. FedDQL Approach

Algorithm 1 FedDQL-Based Offloading Strategy
Input: Cluster of Fog zones Z; participation rate P; set of vehicles V; set of requests T
Output: Optimal mapping of subtasks onto Fog servers with reduced energy consumption and computational latency
Start
1. Set M = ∅, K = P|Z(f)|;
2. For each Fog zone m ∈ Z(f), estimate the transmission cost Tr_m^Up using Eq. (33);
3. Arrange the Fog zones in Z(f) in ascending order of their transmission cost;
4. Select the first K Fog zones from Z(f) and add them to the set M;
5. for f := 0 to F do
   5.1 For each Fog zone m ∈ M, update its parameter set θ_m^f using the DQN approach;
   5.2 Transfer the updated parameters to the centralized entity;
   5.3 The centralized entity aggregates all the local parameters as θ^{f+1} = Σ_{m∈M} (D_m / D) ∗ θ_m^f;
End
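A minimal Python sketch of Algorithm 1's zone-selection and aggregation steps (Eqs. (33) and (35)). The zone names, transmission costs, buffer sizes, and parameter vectors are illustrative assumptions, not values from the paper, and the DQN local update of step 5.1 is omitted.

```python
# Sketch of Algorithm 1: pick the K = P*|Z(f)| Fog zones with the lowest
# uplink transmission cost, then aggregate their local DQN parameters
# weighted by local buffer size (federated averaging, Eq. (35)).

def select_zones(costs: dict, participation: float) -> list:
    """Steps 1-4: the K zones with the lowest transmission cost Tr_m^Up."""
    k = max(1, int(participation * len(costs)))
    return sorted(costs, key=costs.get)[:k]

def federated_round(local_params: dict, buffer_sizes: dict, members: list) -> list:
    """Step 5.3 / Eq. (35): theta_{f+1} = sum_m (D_m / D) * theta_m^f."""
    d_total = sum(buffer_sizes[m] for m in members)   # D = sum of all D_m
    dim = len(local_params[members[0]])
    return [sum((buffer_sizes[m] / d_total) * local_params[m][i]
                for m in members)
            for i in range(dim)]

costs = {"z1": 0.2, "z2": 0.9, "z3": 0.5, "z4": 0.1}   # hypothetical Tr_m^Up
members = select_zones(costs, participation=0.5)        # two cheapest zones
theta = federated_round({"z1": [1.0, 2.0], "z4": [3.0, 6.0]},
                        {"z1": 100, "z4": 300}, members)
print(members)   # ['z4', 'z1']
print(theta)     # [2.5, 5.0]
```

Weighting each zone's parameters by its buffer size D_m means zones holding more local experience pull the global model harder, which is the same size-proportional averaging the FedDQL description below attributes to the unified model.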
Federated learning (FL) is a cutting-edge concept that con-
siders all the components to design a global model that does
not need to share the original data. Herein, the core features other Fog zones. Algorithm 1 shows the pseudocode for the
of FL were incorporated with DQNN to develop the FedDQL FedDQL-based offloading strategy.
global model, which has two primary advantages: (a) privacy In step 1, the array of Fog zones M and the series of involv-
preservation in terms of sharing confidential data, and (b) par- ing Fog zones are initialized. The cost of transmission by each
tial participation of the Fog servers that are energy-deficient. Fog zone is estimated using Eq. (33). Afterwards, each Fog
The Fog geographic region is scattered with clusters of Fog zone is sorted in ascending order based on their cost of trans-
zones consisting of one or more RSUs and homogeneous and mission m. Step 4 identifies the first k number of Fog zones
heterogeneous Fog servers. Each Fog zone has a DQNN agent in FL, which involves a set of F rounds. In each epoch, each
to determine the offloading scheme with the DQNN approach. Fog zone updates their local parameter lists and broadcasts
In the FL architecture, which is a distributed machine learning approach, there is a single model that is responsible for calculating and setting all the global metrics. This model is called the "unified model," and it acts as a central point of coordination for the distributed learning process. In the FedDQL model, the DQNN agent of each Fog zone downloads the global metrics from the unified model and trains its local model accordingly. After training, the metrics are updated and then forwarded to the unified model. Afterwards, the unified model aggregates all the local metrics and forms the global metrics. In this model, all the non-involved Fog zones merely download the unified model to stay updated.

This work considers a set M of involved Fog zones of size K, with K = P|Z(f)|, where P denotes the involvement factor and Z(f) is the cluster of Fog zones. The involved Fog zones are identified according to their uplink transmission cost [22], which is calculated as follows:

Tr_m^{Up} = TP_m \times \frac{l}{A}    (33)

where TP_m denotes the power used by the m-th Fog zone to transmit its local metrics of size l; and A is the rate at which the m-th Fog zone transmits, estimated as:

A = Bw \log_2 \left( 1 + \frac{TP_m \times h}{\sigma^2 + \sum_{P=1, P \neq m}^{|Z(f)|} TP_P \times h} \right)    (34)

where Bw is the channel's bandwidth; h is the channel gain; \sigma^2 is the noise power; and \sum_{P=1, P \neq m}^{|Z(f)|} TP_P \times h is the surrounding noise (interference) caused by the transmissions of the other Fog zones.

Each involved Fog zone trains its local model and uploads it to the unified model to form a global model in steps 5.1 and 5.2. In step 5.3, the unified model obtains a global model (\theta_{f+1}) by performing aggregation at each round f of the local parameters (\theta_f^m), which is expressed as [15]:

\theta_{f+1} = \sum_{m \in M} \frac{D_m}{D} \times \theta_f^m    (35)

where D_m denotes the size of the local buffer of the m-th Fog zone; and D implies the total size of data, estimated as D = \sum_{m \in M} D_m.

V. EXPERIMENTAL ASSESSMENT AND DISCUSSIONS

This section describes the extensive simulations that were performed on a real-world benchmark dataset for different conflicting scheduling parameters to assess the proposed strategy. The simulation parameters were first delineated, then a convergence analysis of the proposed technique was carried out. Afterwards, a comparative performance analysis was conducted for the considered scheduling parameters against some existing works.

A. Setting of the Simulation Environment

iFogSim over CloudSim is used as the simulator to instantiate a VFC simulation environment. This environment encompasses 10 Internet-enabled Vehicles, 3000 dynamic tasks, 296 Fog nodes/VMs, and 16 RSUs. There are multiple MFC servers in this architecture, and each MFC server is connected
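The zone-selection rule of Eqs. (33)–(34) can be sketched in a few lines of Python: compute each zone's Shannon rate under interference from the other zones, derive its uplink cost, and keep the K = P|Z(f)| cheapest zones. This is a minimal sketch; the power levels, channel gain, bandwidth, noise power, and payload size used below are illustrative assumptions, not values from the paper.

```python
import math

def uplink_rate(powers, h, m, bw, sigma2):
    """Eq. (34): Shannon rate of Fog zone m, treating the other zones'
    simultaneous transmissions as interference."""
    interference = sum(tp * h for p, tp in enumerate(powers) if p != m)
    return bw * math.log2(1 + (powers[m] * h) / (sigma2 + interference))

def uplink_cost(powers, h, m, bw, sigma2, l):
    """Eq. (33): uplink transmission cost Tr_m^Up = TP_m * l / A."""
    return powers[m] * l / uplink_rate(powers, h, m, bw, sigma2)

def involved_zones(powers, h, bw, sigma2, l, p_factor):
    """Pick the K = P * |Z(f)| zones with the lowest uplink cost."""
    n = len(powers)
    k = max(1, round(p_factor * n))
    costs = {m: uplink_cost(powers, h, m, bw, sigma2, l) for m in range(n)}
    return sorted(costs, key=costs.get)[:k]

# Illustrative toy values: 4 zones, involvement factor P = 0.5,
# so 2 zones are selected for the federated round.
powers = [0.1, 0.2, 0.4, 0.8]                       # transmit powers (W)
selected = involved_zones(powers, h=1e-3, bw=10e6,  # 10 MHz channel
                          sigma2=1e-9, l=1e6,       # 1 Mbit of local metrics
                          p_factor=0.5)
```

With these toy numbers, `selected` holds the indices of the two zones whose metrics are cheapest to upload; only those zones participate in the aggregation round.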
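The aggregation of Eq. (35) is a buffer-size-weighted average of the local parameters (the FedAvg rule of [15]). A minimal sketch, where the parameter vectors and buffer sizes are illustrative assumptions:

```python
def fed_avg(local_params, buffer_sizes):
    """Eq. (35): theta_{f+1} = sum_m (D_m / D) * theta_f^m.
    local_params: one parameter vector per involved Fog zone.
    buffer_sizes: local buffer size D_m of each zone."""
    d = sum(buffer_sizes)  # D = sum of all D_m
    dim = len(local_params[0])
    return [sum((dm / d) * theta[i] for theta, dm in zip(local_params, buffer_sizes))
            for i in range(dim)]
```

For example, `fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])` returns `[2.5, 3.5]`: the second zone's parameters dominate because its buffer holds 3/4 of the total data.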
TABLE II
SIMULATION PARAMETERS

TABLE III
TECHNICAL SPECIFICATIONS OF FOG NODES/VMS

Fig. 3. Convergence analysis for different learning rates.
Fig. 5. QoS analysis of service rate (ms) for different sets: (a) 1000 × 96, (b) 2000 × 196, and (c) 3000 × 294.
Fig. 7. QoS analysis of latency rate (ms) for different sets: (a) 1000 × 96, (b) 2000 × 196, and (c) 3000 × 294.
Fig. 8. QoS analysis of average energy consumption (KJ) for different sets: (a) 1000 × 96, (b) 2000 × 196, and (c) 3000 × 294.
layers also minimise the latency across computing layers. The latency rate holds significance when there is an increasing number of tasks in a dynamic environment. Figure 7 presents a comparative analysis of the obtained results for the latency rate. The suggested method presents a remarkable improvement over the other models for different specifications. The FedDQL outperforms the other models [23], [30], [31], [32], [33] with an improvement of 15%, 9.4%, 9.9%, 14.7%, and 2.33% on average for the three sets. As the number of requests increases, the proposed method shows notable results over [32]. For a smaller set of requests, both (the proposed method and [32]) perform approximately equally.

4) Performance Analysis of Average Energy Consumption: The degree of energy consumption is a challenging factor for any datacentre. The consumption of energy depends on various factors, such as the specifications of the Fog servers/datacentre, computational capabilities, resource constraints, size of the tasks, etc. In our approach, the energy consumption is evaluated for processing and storing each task on a resource, and for transmitting and acknowledging the requests from/to the computing layers. Efficient utilization of resources could minimize the degree of energy consumption. Figure 8 presents the QoS analysis of the degree of energy consumption for an increasing number of tasks for the proposed method. It is apparent that the suggested method reduces energy consumption more efficiently than the other methods with different degrees of
MISHRA et al.: COLLABORATIVE COMPUTATION AND OFFLOADING 4613
computational specifications. The FedDQL outperforms the other models [23], [30], [32] with an improvement of 39.9%, 34.7%, and 21.6% on average for the three sets.

For service rate, average utilization, latency, and energy consumption, the effectiveness of the proposed FedDQL model has been compared to [23], [30], [31], [32], and [33]: the improvement compared to [33] is 49%, 51%, and 15%; compared to [30] it is 23.2%, 36.8%, 9.4%, and 39.9%; compared to [23] it is 22.6%, 32.4%, 9.9%, and 34.7%; compared to [31] it is 14.7%, 27.7%, and 14.7%; and compared to [32] it is 3.23%, 9.33%, 2.33%, and 21.6%, respectively.

VI. CONCLUSION AND FUTURE STUDY

This paper proposes a novel offloading method for compute-intensive and latency-sensitive dependency-aware tasks in the Dew-enabled collaborative computing framework. The dependency-aware tasks are depicted through a DAG, where the interdependencies among tasks are also modelled. Moreover, a Federated learning-supported Deep Q-learning (FedDQL)-based offloading strategy has been designed for the optimal assignment of tasks to machines. Next, Fuzzy logic is implemented to determine the target offloading layers to prevent starvation and aging. The simulation results showcase the efficacy of FedDQL over alternative offloading algorithms based on several performance metrics with different specifications. Task and machine heterogeneity are taken into account to appraise the effectiveness of the proposed method. The proposed FedDQL method outperforms the others [23], [30], [31], [32], [33] with an average improvement of 49%, 34.3%, 29.2%, 16.2%, and 8.21%, respectively.

A potential direction for future study is task offloading and data sharing while a vehicle is in motion. Data migration between MFC servers, and migration across computing layers in reverse, could also be future scope.

REFERENCES

[1] A. Boukerche and R. E. De Grande, "Vehicular cloud computing: Architectures, applications, and mobility," Comput. Netw., vol. 135, pp. 171–189, Apr. 2018.
[2] M. H. Eiza, Q. Ni, and Q. Shi, "Secure and privacy-aware cloud-assisted video reporting service in 5G-enabled vehicular networks," IEEE Trans. Veh. Technol., vol. 65, no. 10, pp. 7868–7881, Oct. 2016.
[3] A. M. A. Hamdi, F. K. Hussain, and O. K. Hussain, "Task offloading in vehicular fog computing: State-of-the-art and open issues," Future Gener. Comput. Syst., vol. 133, pp. 201–212, Aug. 2022. [Online]. Available: https://doi.org/10.1016/j.future.2022.03.019
[4] S.-S. Lee and S. Lee, "Resource allocation for vehicular fog computing using reinforcement learning combined with heuristic information," IEEE Internet Things J., vol. 7, no. 10, pp. 10450–10464, Oct. 2020, doi: 10.1109/JIOT.2020.2996213.
[5] H. Guo, J. Liu, J. Zhang, W. Sun, and N. Kato, "Mobile-edge computation offloading for ultradense IoT networks," IEEE Internet Things J., vol. 5, no. 6, pp. 4977–4988, Dec. 2018.
[6] W. S. Atoui, W. Ajib, and M. Boukadoum, "Offline and online scheduling algorithms for energy harvesting RSUs in VANETs," IEEE Trans. Veh. Technol., vol. 67, no. 7, pp. 6370–6382, Jul. 2018.
[7] H. A. Khattak, S. U. Islam, I. U. Din, and M. Guizani, "Integrating fog computing with VANETs: A consumer perspective," IEEE Commun. Standards Mag., vol. 3, no. 1, pp. 19–25, Mar. 2019.
[8] K. Kai, W. Cong, and L. Tao, "Fog computing for vehicular ad-hoc networks: Paradigms, scenarios, and issues," J. China Univ. Posts Telecommun., vol. 23, no. 2, pp. 56–96, 2016. [Online]. Available: https://doi.org/10.1016/S1005-8885(16)60021-3
[9] L. Yao, X. Xu, M. Bilal, and H. Wang, "Dynamic edge computation offloading for Internet of Vehicles with deep reinforcement learning," IEEE Trans. Intell. Transp. Syst., early access, Jun. 6, 2022, doi: 10.1109/TITS.2022.3178759.
[10] G. Qu, H. Wu, R. Li, and P. Jiao, "DMRO: A deep meta reinforcement learning-based task offloading framework for edge-cloud computing," IEEE Trans. Netw. Service Manag., vol. 18, no. 3, pp. 3448–3459, Sep. 2021.
[11] Z. Ning, P. Dong, X. Wang, J. J. P. C. Rodrigues, and F. Xia, "Deep reinforcement learning for vehicular edge computing: An intelligent offloading system," ACM Trans. Intell. Syst. Technol., vol. 10, no. 6, pp. 1–24, 2019, doi: 10.1145/3317572.
[12] U. Maan and Y. Chaba, "Deep Q-network based fog node offloading strategy for 5G vehicular adhoc network," Ad Hoc Netw., vol. 120, Sep. 2021, Art. no. 102565. [Online]. Available: https://doi.org/10.1016/j.adhoc.2021.102565
[13] Y. Liu, H. Yu, S. Xie, and Y. Zhang, "Deep reinforcement learning for offloading and resource allocation in vehicle edge computing and networks," IEEE Trans. Veh. Technol., vol. 68, no. 11, pp. 11158–11168, Nov. 2019, doi: 10.1109/TVT.2019.2935450.
[14] X. He, H. Lu, M. Du, Y. Mao, and K. Wang, "QoE-based task offloading with deep reinforcement learning in edge-enabled Internet of Vehicles," IEEE Trans. Intell. Transp. Syst., vol. 22, no. 4, pp. 2252–2261, Apr. 2021.
[15] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. Y. Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proc. 20th Int. Conf. Artif. Intell. Stat., 2017, pp. 1273–1282.
[16] C. Tang, X. Wei, C. Zhu, Y. Wang, and W. Jia, "Mobile vehicles as fog nodes for latency optimization in smart cities," IEEE Trans. Veh. Technol., vol. 69, no. 9, pp. 9364–9375, Sep. 2020, doi: 10.1109/TVT.2020.2970763.
[17] Z. Ning, J. Huang, and X. Wang, "Vehicular fog computing: Enabling real-time traffic management for smart cities," IEEE Wireless Commun., vol. 26, no. 1, pp. 87–93, Feb. 2019, doi: 10.1109/MWC.2019.1700441.
[18] A. Thakur and R. Malekian, "Fog computing for detecting vehicular congestion, an Internet of Vehicles based approach: A review," IEEE Intell. Transp. Syst. Mag., vol. 11, no. 2, pp. 8–16, Mar. 2019, doi: 10.1109/MITS.2019.2903551.
[19] Y. Li, H. Li, G. Xu, T. Xiang, and R. Lu, "Practical privacy-preserving federated learning in vehicular fog computing," IEEE Trans. Veh. Technol., vol. 71, no. 5, pp. 4692–4705, May 2022.
[20] O.-K. Shahryari, H. Pedram, V. Khajehvand, and M. D. TakhtFooladi, "Energy-efficient and delay-guaranteed computation offloading for fog-based IoT networks," Comput. Netw., vol. 182, Dec. 2020, Art. no. 107511, doi: 10.1016/j.comnet.2020.107511.
[21] Z. Zhou, P. Liu, J. Feng, Y. Zhang, S. Mumtaz, and J. Rodriguez, "Computation resource allocation and task assignment optimization in vehicular fog computing: A contract-matching approach," IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3113–3125, Apr. 2019, doi: 10.1109/TVT.2019.2894851.
[22] J. Shi, J. Du, J. Wang, J. Wang, and J. Yuan, "Priority-aware task offloading in vehicular fog computing based on deep reinforcement learning," IEEE Trans. Veh. Technol., vol. 69, no. 12, pp. 16067–16081, Dec. 2020, doi: 10.1109/TVT.2020.3041929.
[23] F. H. Rahman, S. H. S. Newaz, T.-W. Au, W. S. Suhaili, M. A. P. Mahmud, and G. M. Lee, "EnTruVe: ENergy and TRUst-aware virtual machine allocation in VEhicle fog computing for catering applications in 5G," Future Gener. Comput. Syst., vol. 126, pp. 196–210, Jan. 2022.
[24] S. Vemireddy and R. R. Rout, "Fuzzy reinforcement learning for energy efficient task offloading in vehicular fog computing," Comput. Netw., vol. 199, Nov. 2021, Art. no. 108463.
[25] A. R. Hameed, S. ul Islam, I. Ahmad, and K. Munir, "Energy- and performance-aware load-balancing in vehicular fog computing," Sustain. Comput. Inform. Syst., vol. 30, Jun. 2021, Art. no. 100454.
[26] B. Shabir, A. U. Rahman, A. W. Malik, R. Buyya, and M. A. Khan, "A federated multi-agent deep reinforcement learning for vehicular fog computing," J. Supercomput., vol. 79, pp. 6141–6167, Oct. 2022. [Online]. Available: https://doi.org/10.1007/s11227-022-04911-8
[27] X. Xu, Z. Fang, L. Qi, W. Dou, Q. He, and Y. Duan, "A deep reinforcement learning-based distributed service offloading method for edge computing empowered Internet of Vehicles," Chin. J. Comput., vol. 44, no. 12, pp. 2382–2405, 2021.
[28] J. Zhao, Q. Li, Y. Gong, and K. Zhang, "Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks," IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 7944–7956, Aug. 2019.
[29] T. Hester et al., "Deep Q-learning from demonstrations," in Proc. AAAI Conf. Artif. Intell., vol. 32, 2018, pp. 3223–3230.
[30] C. Chakraborty, K. Mishra, S. K. Majhi, and H. K. Bhuyan, "Intelligent latency-aware tasks prioritization and offloading strategy in distributed fog-cloud of things," IEEE Trans. Ind. Informat., vol. 19, no. 2, pp. 2099–2106, Feb. 2023, doi: 10.1109/TII.2022.3173899.
[31] V. Sethi and S. Pal, "FedDOVe: A federated deep Q-learning-based offloading for vehicular fog computing," Future Gener. Comput. Syst., vol. 141, pp. 96–105, Apr. 2023.
[32] M. Tiwari, I. Maity, and S. Misra, "FedServ: Federated task service in fog-enabled Internet of Vehicles," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 11, pp. 20943–20952, Nov. 2022, doi: 10.1109/TITS.2022.3186401.
[33] D. B. Son, V. T. An, T. T. Hai, B. M. Nguyen, N. P. Le, and H. T. T. Binh, "Fuzzy deep Q-learning task offloading in delay constrained vehicular fog computing," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), 2021, pp. 1–8, doi: 10.1109/IJCNN52387.2021.9533615.
[34] T. D. Braun et al., "A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems," J. Parallel Distrib. Comput., vol. 61, no. 6, pp. 810–837, 2001. [Online]. Available: https://doi.org/10.1006/jpdc.2000.1714
[35] X. Chen, H. Zhang, C. Wu, S. Mao, Y. Ji, and M. Bennis, "Optimized computation offloading performance in virtual edge computing systems via deep reinforcement learning," IEEE Internet Things J., vol. 6, no. 3, pp. 4005–4018, Jun. 2019.

Umashankar Ghugar (Member, IEEE) received the Doctoral degree (Full-Time) from Berhampur University, Odisha, in 2021. He is currently working as an Assistant Professor with the School of Technology, Department of CSE, GITAM University, Visakhapatnam. He has published 14 articles, including journal papers, book chapters, and conference papers with international publishers. His research interests are in computer networks and network security in WSN. He is a Reviewer of IEEE ACCESS, IEEE TRANSACTIONS ON EDUCATION, IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, Security and Privacy (Wiley), International Journal of Communication Systems (Wiley), International Journal of Distributed Sensor Networks (Hindawi), International Journal of Knowledge Discovery in Bioinformatics (IGI Global), and International Journal of Information Security and Privacy (IGI Global), and a member of IACSIT, CSTA, and IRED.

Gurpreet Singh Chhabra received the Ph.D. degree from the Department of Computer Science and Engineering. He is currently working as an Assistant Professor with the Computer Science and Engineering Department, GITAM School of Technology, GITAM (Deemed to be University), Visakhapatnam. He has 15 years of teaching experience. His qualifications are fortified with a great deal of creativity and problem-solving skills. He has credit for many international papers, patents, book chapters, and books. His research interests include deep learning, machine learning, data science, and fog computing. He is a Life Member of the ISTE and IAENG.