
Joint Task Offloading and Resource Allocation for

Delay-sensitive Fog Networks


Mithun Mukherjee∗, Suman Kumar†, Mohammad Shojafar‡§, Qi Zhang¶, and Constandinos X. Mavromoustakis∥

∗ Guangdong Provincial Key Lab of Petrochemical Equipment Fault Diagnosis, Guangdong University of Petrochemical Technology, Maoming, China
† Department of Mathematics, IGNTU Amarkantak, MP, India
‡ Department of Mathematics, University of Padua, Padua, Italy
§ Department of Computer Science, Ryerson University, Toronto, Canada
¶ DIGIT and Department of Engineering, Aarhus University, Aarhus, Denmark
∥ Department of Computer Science, Mobile Systems Laboratory (MoSys Lab), University of Nicosia, Nicosia, Cyprus
[email protected], [email protected], mohammad.shojafar@{unipd.it, ryerson.ca}, [email protected], [email protected]

Abstract—Computational offloading has become an important and essential research issue for delay-sensitive task completion at resource-constrained end-users. Fog computing, which extends the computing and storage resources of cloud computing to the network edge, emerges as a potential solution for low-latency task provisioning via computational offloading. In our offloading scenario, each end-user first offloads its task to its primary fog node. When the primary fog node cannot meet the tolerable latency, it may offload to the cloud and/or an assisting fog node to obtain extra computing resources, shortening the computing latency at the expense of additional transmission latency. Therefore, a trade-off needs to be carefully made in the offloading decision. At the same time, in addition to the task data from the end-users under its primary coverage, the primary fog node receives tasks from other end-users via its neighbor fog nodes. Thus, to jointly optimize the computing and communication resources in the fog node, we formulate a delay-sensitive data offloading problem that mainly considers the local task execution delay and the transmission delay. An approximate solution is obtained via Quadratically Constrained Quadratic Programming (QCQP). Finally, extensive simulation results demonstrate the effectiveness of the proposed solution, while guaranteeing minimum end-to-end latency for various task processing densities and traffic intensity levels.

I. INTRODUCTION

Most recently, the International Telecommunication Union (ITU) [1] has broadly categorized the use-cases in the fifth generation (5G) wireless communications/International Mobile Telecommunication (IMT)-2020 networks [2] into evolved mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable and low-latency communications (uRLLC). In brief, on the one hand, eMBB focuses on bandwidth requirements, whereas connection density is one of the main targets in mMTC. On the other hand, uRLLC demands time-critical and highly reliable service provisioning. The emerging and promising uRLLC applications include factory automation, autonomous driving, virtual and augmented reality, and cloud robotics.

In general, the computational resources at end-users are not sufficient to process the entire task data by themselves while meeting the stringent latency requirement of low-latency communications [2]. Undoubtedly, offloading to powerful, computational-resource-enriched cloud servers would be ideal if latency were not a constraint. However, the transmission delay due to the burden on the fronthaul link and the physical distance between end-users and cloud servers limits low-latency task processing in cloud computing. In recent years, to address delay-sensitive service provisioning, fog computing, a form of edge computing [3]–[5], has become widely used, extending the computational and storage resources of cloud computing close to the network edge.

A. Motivation

Specifically, fog computing nodes (we refer to a fog computing node as a fog node for unified terminology) are equipped with higher computational and storage resources compared to the end-users. However, offloading tasks with a hard-latency¹ requirement is indeed a challenging research issue due to the following reasons: a) the computing and storage resources in the fog nodes are limited compared to the cloud, and b) due to the distributed nature of fog computing, the fog node has little knowledge of when task data are offloaded to other assistive fog nodes [6]. It is observed in the literature that the transmission capacity between neighbor fog nodes becomes a significant factor for delay-sensitive task offloading when the number of fog nodes exceeds certain values in a network [7]–[9]. Moreover, due to collaborative task distribution among fog nodes, apart from the end-users under its own coverage, a fog node also receives tasks from its neighboring fog nodes, turning the computing resource allocation into a complex optimization problem.

B. Related Work

For several years, great effort has been devoted to studying computational offloading in fog-cloud networks for delay-sensitive task processing. For example, considering multiple

¹Here, hard latency [2] refers to the scenario in which a packet that arrives beyond the latency deadline has no use for the current time frame and is immediately dropped from the system, resulting in a degradation of reliability as envisioned in the current 3GPP Release 13.

978-1-5386-8088-9/19/$31.00 ©2019 IEEE

Authorized licensed use limited to: NATIONAL INSTITUTE OF TECHNOLOGY KURUKSHETRA. Downloaded on April 25,2022 at 18:02:11 UTC from IEEE Xplore. Restrictions apply.
tasks in a single end-user, an optimization solution for the offloading decision to select either a fog node or a cloud server is suggested in [10] to address both the latency constraint and minimum energy consumption. Afterward, a multi-user scenario was considered in [11]. In addition, a strict latency deadline for each task of the end-users was imposed in [12], [13]. Another approach to minimize the energy consumption among end-users was suggested in [14]. As the above approaches [10]–[14] only considered one fog node, a collaborative approach for computational and transmission resource allocation between fog nodes was not considered; in fact, those solutions simply select the remote cloud server whenever the fog node cannot meet the latency constraint and energy minimization. Compared to the most recent work on latency optimization [15] for multiple users in a network with multiple fog nodes and a remote cloud server, we formulate the offloading decision considering tasks arriving from the multiple end-users under a fog node's own coverage as well as from other end-users via its assistive fog nodes.

Fig. 1. Illustration of a fog network to support low-latency communication scenarios via task data offloading (the ith primary fog node serves Mi end-users and the jth secondary fog node serves Mj end-users; tasks are uploaded to the primary fog node, offloaded to the cloud and/or an assistive fog node, and the results are downloaded).
C. Objective and Contributions

Assuming tasks arrive from more than one end-user, the offloading decision significantly affects the delay-sensitive task completion for all the users. Moreover, to balance the transmission latency between fog nodes against the computational benefit, we use an assisting fog node (we call it the secondary fog node) for the primary fog node. Since a fog node, in general, receives task data both from the end-users under its primary coverage and from other end-users via its neighbor fog nodes, the computing resource allocation becomes a challenging issue for delay-sensitive task allocation. The main contributions of this paper are summarized as follows.

• We consider the offloading scenario where the end-user uploads to the primary fog node the part of the task that cannot be entirely processed locally within the latency deadline. Subsequently, if the primary fog node foresees that the offloaded task cannot be processed by the available computing resources within the tolerable delay, then it further offloads the task to the cloud and/or to the secondary fog node to obtain additional computing resources at the expense of transmission delay.

• We formulate a delay-sensitive task offloading problem that mainly considers the transmission delay (including end-user to primary fog node, primary fog node to secondary fog node, and primary fog node to cloud transmission delay) and the local task execution delay at the end-user, primary, and secondary fog node, if any.

• To solve this challenging problem, we transform it into a Quadratically Constrained Quadratic Programming (QCQP) problem and use CVX, a package for specifying and solving convex programs [16]. We perform extensive simulations to evaluate the performance gain, in terms of delay, of the proposed solution under various task offloading parameter settings.

The rest of the paper is organized as follows. Section II presents the system model. The delay model is discussed in Section III. The task offloading and resource allocation are presented in Section IV. The simulation results are presented in Section V. Finally, conclusions are drawn in Section VI.

II. SYSTEM MODEL

As shown in Fig. 1, we consider a fog network that consists of a set of fog nodes N = {1, 2, . . . , N} and a set of end-users K = {1, 2, . . . , K}. The fog nodes and end-users are uniformly distributed over the network. Taking a large-scale industrial application as a use-case, data such as the state information collected by industrial sensors are transmitted to the fog nodes to assist delay-sensitive decision-making processes, e.g., a manufacturing process in a cyber-physical system (CPS), fault detection in mining, and so on. The system is assumed to be time-slotted, indexed by t ∈ {0, 1, 2, . . .}, where the length of each time slot is ∆t (in s). Further, we assume that only one task arrives at the kth end-user in time slot t. When either the task data size or the processing density, defined as the number of cycles required to process each bit of the workload (in cycles/bit), is sufficiently large, the end-user needs to offload the task. In this situation, the kth end-user uploads a part of or the entire task to the nearest² fog node, which acts as its primary (or master) fog node. We assume that the kth end-user simultaneously executes the local task and offloads the task data to the ith primary fog node at time t. It is further assumed that a single fog node can act as a primary fog node for multiple end-users. We consider that M_i = {1, 2, . . . , M_i}, with Σ_{i=1}^{N} M_i = K, is the set of end-users that select the ith fog node as their primary fog node. Moreover, if the available computing resources are not able to meet the delay requirement, then the primary fog node selects a neighbor fog node within its proximity for the task completion. Nevertheless, if the computation capabilities of the fog nodes cannot meet the required computation for the task execution, then tasks are uploaded to the cloud. Moreover, the CPU in a fog node is able to adjust its clock speed to allocate different amounts of CPU resources per bit [18].

²Primary fog node selection is itself a novel problem (see, e.g., [17]).
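The per-slot bookkeeping of Section II, i.e., D_k(t) = D_k^local(t) + μ_{k,i} D_k^offload(t), can be sketched as follows; the `Task` class and the function names are illustrative choices of mine, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One task arriving at the kth end-user in a time slot (illustrative)."""
    data_bits: float   # D_k(t), total task size in bits
    density: float     # L_k, processing density in cycles/bit

def split_task(task: Task, local_bits: float, mu_ki: int) -> float:
    """Return D_k^offload so that D_k = D_k^local + mu_{k,i} * D_k^offload.

    mu_ki is the binary decision: 1 if the kth end-user selects the
    ith fog node as its primary fog node, else 0 (nothing is offloaded).
    """
    if mu_ki == 0:
        # Without a primary fog node, the whole task must run locally.
        return 0.0
    return max(task.data_bits - local_bits, 0.0)

task = Task(data_bits=1e6, density=1900.0)
offload = split_task(task, local_bits=4e5, mu_ki=1)
# D_k = D_k^local + mu * D_k^offload holds by construction:
assert abs(4e5 + 1 * offload - task.data_bits) < 1e-9
```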

A. Task at the End-user Side

Assuming the time-slotted system, the task D_k(t) (in bits) with a required processing density L_k (in cycles/bit) that arrives at the kth end-user at the beginning of time slot t can be processed starting from time slot (t + 1). A large task that cannot be computed in one time slot can be divided into small sub-tasks, each of which can be computed in a single time slot. Let D_k^local(t) be the amount of task data (in bits) locally computed at the kth end-user. However, due to computational and storage limitations, the total task data may not be processable within the maximum tolerable delay τ_k^max at the end-user alone. In this scenario, the end-user offloads the task data in order to receive the results of the processed task within the tolerable delay. As some tasks, such as OS-level processing, cannot be offloaded, the end-user offloads only those tasks that can be offloaded to avail higher computational resources. We define the following offloading decision variable at the end-user side:

    μ_{k,i} = 1, if the kth end-user selects the ith fog node as primary fog node;
              0, otherwise.

Let D_k^offload(t) be the task data (in bits) offloaded from the kth end-user to the primary fog node. Therefore, we write D_k(t) = D_k^local(t) + μ_{k,i} D_k^offload(t).

B. Task at the Fog Node Side

In general, the ith primary fog node aims to locally process the entire offloaded task of the kth end-user, where k ∈ M_i. However, due to the hard latency constraint, the ith primary fog node is not always able to meet the task completion time by itself for all end-users that offload their tasks to it. Thus, the ith fog node often offloads the kth end-user's task to both a secondary fog node and the cloud. Note that a trade-off exists between the computational and transmission latency among the tasks offloaded to other fog nodes and the cloud. Let D_{k,i}^local(t) denote the amount of the kth end-user's task data locally processed at the ith primary fog node. Further, we denote D_{k,i→j}^offload(t) and D_{k,i→c}^offload(t) as the task data offloaded to the jth fog node and the cloud, respectively. We define the following offloading decision variables for the kth user's task data at the ith primary fog node:

    β_{k,i,j} = 1, if the ith fog node offloads the kth end-user's task data to the jth fog node;
                0, otherwise,

and

    λ_{k,i} = 1, if the ith fog node offloads the kth end-user's task data to the cloud;
              0, otherwise.

Thus, we write D_k^offload(t) = D_{k,i}^local(t) + β_{k,i,j} D_{k,i→j}^offload(t) + λ_{k,i} D_{k,i→c}^offload(t). We further assume that the ith fog node offloads to the cloud only the tasks of end-users k ∈ M_i, and cannot offload to the cloud task data received from other end-users k' ∈ K \ M_i via its J_i neighbor fog nodes, where J_i = {1, 2, . . . , J_i}, J_i ∈ N, is the set of secondary fog nodes that can select the ith fog node to offload their task data. This is because offloading to the cloud and to other fog nodes is coordinated by the primary fog node of the kth end-user. Thus, as illustrated in Fig. 2, the ith fog node receives task data from
1) the end-users k ∈ M_i that select the ith fog node as primary fog node, and
2) other end-users k' ∈ K \ M_i via the J_i neighbor fog nodes.

III. DELAY MODEL

Although the computation delay consists of the local task execution delay, queueing delay, task prefetching delay, and resource allocation delay, in this work we only consider the local task execution delay. Considering the task queue model and the resource allocation delay, which is itself a novel problem, is left for future work.

A. Local Task Execution Delay

In general, the task execution time of any workload depends on the following factors: 1) the cycles required to process the data (in bits) and 2) the CPU clock speed (in cycles/s). Let L_k be the processing density (in cycles/bit) for the kth end-user's task. Thus, the local task execution delay (in s) at the kth end-user side is expressed as

    τ_k^local = L_k D_k^local(t) / f_k(t)   [s],   (1)

where f_k(t) denotes the CPU clock speed of the kth end-user. Similarly, the local task execution delay (in s) for the kth end-user's task at the ith fog node becomes

    τ_{k,i}^{local,fog} = L_k μ_{k,i} D_{k,i}^local(t) / f_{k,i}(t)   [s],   (2)

where f_{k,i}(t) refers to the CPU clock speed allocated by the ith fog node to the kth user's task data processing. We denote f_i^max as the maximum CPU rate of the ith fog node. Therefore, we constrain the total CPU rate (in cycles/s) for the users' local task execution at the ith fog node as

    Σ_{k=1}^{M_i} μ_{k,i} f_{k,i} + Σ_{j=1}^{J_i} Σ_{k'=1}^{|M_j|} β_{k',j,i} f_{k',i} ≤ f_i^max,   (3)

where the first sum is over the end-users k ∈ M_i and the second over the end-users k' ∈ K \ M_i arriving via neighbor fog nodes. Note that we have ignored the processing time at the cloud, since the cloud is equipped with a sufficient amount of computing and storage resources [19].

B. Transmission Delay

We assume that the downlink transmission latency for the result download is negligible compared to the transmission delay of the task data offloaded from the end-users to the primary fog node, the inter-fog task data offloading, and the task data uploaded from the fog nodes to the cloud. Similar to [20], this assumption is due to the small size of the result data and, at the same time, the higher transmission power of the fog nodes compared to the end-user devices.
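Equations (1)–(3) above map directly to code. A minimal sketch follows, with illustrative helper names; the numeric inputs reuse parameter values from the later simulation section only as plausible examples:

```python
def local_exec_delay(L_k: float, D_local: float, f: float) -> float:
    """Eqs. (1)/(2): tau = L_k * D_local / f  [s], with L_k in cycles/bit,
    D_local in bits, and f the CPU clock speed in cycles/s."""
    return L_k * D_local / f

def cpu_rate_feasible(mu, f_own, beta, f_neighbor, f_i_max) -> bool:
    """Eq. (3): total CPU rate granted by fog node i to its own users
    (mu_{k,i} * f_{k,i}) plus to tasks relayed by neighbor fog nodes
    (beta_{k',j,i} * f_{k',i}) must not exceed f_i^max."""
    total = sum(m * f for m, f in zip(mu, f_own)) \
          + sum(b * f for b, f in zip(beta, f_neighbor))
    return total <= f_i_max

# 1 Mbit at 1900 cycles/bit on a 5 GHz fog CPU takes 0.38 s:
tau = local_exec_delay(1900.0, 1e6, 5e9)
assert abs(tau - 0.38) < 1e-12
```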

Fig. 2. Illustration of the local task execution and task offloading at the end-user level, primary fog node level, and remote cloud and secondary fog node level.
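The fog-side split shown in Fig. 2, i.e., D_k^offload(t) = D_{k,i}^local(t) + β_{k,i,j} D_{k,i→j}^offload(t) + λ_{k,i} D_{k,i→c}^offload(t), can be sanity-checked with a small sketch; the fractional split policy below is my assumption for illustration, not the paper's allocation rule:

```python
def fog_split(D_offload: float, local_frac: float, to_fog_j: float,
              beta: int, lam: int) -> dict:
    """Split the bits received by the ith primary fog node among the three
    destinations of Fig. 2: its local CPU, the jth secondary fog node, and
    the cloud. beta and lam are the binary decisions beta_{k,i,j} and
    lambda_{k,i}; the split fractions are illustrative."""
    local = D_offload * local_frac
    fog_j = beta * to_fog_j
    cloud = lam * (D_offload - local - fog_j)  # remainder goes to the cloud
    return {"local": local, "fog_j": fog_j, "cloud": cloud}

parts = fog_split(D_offload=6e5, local_frac=0.5, to_fog_j=2e5, beta=1, lam=1)
# With both decisions set, the three parts recompose D_k^offload exactly:
assert abs(sum(parts.values()) - 6e5) < 1e-9
```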

1) End-user to primary fog node: When the kth end-user uploads its task data to a primary fog node, the transmission delay between the kth end-user and the ith primary fog node becomes

    τ_{k,i}^{transmission} = μ_{k,i} D_k^offload(t) / r_{k,i}(t)   [s],   (4)

where r_{k,i}(t) denotes the achievable transmission rate between the kth end-user and the ith fog node in time slot t.

2) Inter-fog transmission delay: The transmission delay between the ith primary fog node and the jth fog node for the kth end-user's task becomes

    τ_{k,i→j}^{transmission} = β_{k,i,j} D_{k,i→j}^offload(t) / r_{k,i,j}(t)   [s],   (5)

where r_{k,i,j}(t) is the achievable transmission rate between the ith fog node and the jth fog node used for the kth end-user in time slot t.

3) Fog node to cloud transmission delay: Letting r_{k,i,c}(t) be the achievable transmission rate between the ith fog node and the cloud for the kth end-user in time slot t, the transmission delay between the ith fog node and the cloud for the kth end-user's task data is expressed as

    τ_{k,i→c}^{transmission} = λ_{k,i} D_{k,i→c}^offload(t) / r_{k,i,c}(t)   [s].   (6)

Fig. 3. Illustration of the maximum permissible delay at the end-user, primary fog node, secondary fog node, and cloud side for parallel and overlapping local task execution as well as transmission time for the kth user's offloaded task.

IV. OFFLOADING DECISION, COMPUTATIONAL, AND TRANSMISSION RESOURCE ALLOCATION

As illustrated in Fig. 3, we consider that when a primary fog node receives the task data from the end-users, it can start to locally process and, if necessary, offload the task data, ignoring the queueing delay under the assumption of parallel task processing [21]. For the sake of simplicity, we omit t in the rest of the paper. Assuming one task per user, we can allocate the maximum CPU cycles (i.e., f_k^max) to the single task's data processing, i.e., f_k = f_k^max. In fact, the ideal case would be if the entire task data could be processed at the end-user side within its tolerable delay τ_k^max. Moreover, as we do not consider the energy consumption issue, which is itself a novel problem, we allow the end-user to process locally up to the tolerable delay τ_k^max, i.e., τ_k^local ≈ τ_k^max. As a result, the offloaded task data is expressed as μ_{k,i} D_k^offload ≡ D_k − (τ_k^max f_k^max / L_k).

Problem Formulation: We aim to jointly optimize the computing and transmission resource allocation in the fog nodes (both primary and secondary) to guarantee the minimum delay for each end-user's task completion, considering tasks arriving from multiple users. The task offloading vector for the kth user is defined as

Γ_k = [μ_{k,i}, β_{k,i,j}, λ_{k,i}, D_{k,i}^local, D_{k,i→j}^offload, D_{k,j}^local, D_{k,i→c}^offload]. Next, we define the transmission resource allocation vector as r_k = [r_{k,i}, r_{k,i,j}, r_{k,i,c}]. We then formulate the optimization problem as:

    minimize over Γ_k, r_k, f_{k,i}, f_{k,j}:
        τ_{k,i}^{transmission} + max[ τ_{k,i}^{local,fog}, τ_{k,i→j}^{transmission} + τ_{k,j}^{local,fog}, τ_{k,i→c}^{transmission} ]   (7a)

    subject to:
        μ_{k,i}, β_{k,i,j}, λ_{k,i} ∈ {0, 1},   (7b)
        D_k^local + μ_{k,i} D_{k,i}^local + β_{k,i,j} D_{k,i→j}^offload + λ_{k,i} D_{k,i→c}^offload ≥ D_k,   (7c)
        Σ_{k=1}^{M_i} μ_{k,i} r_{k,i} ≤ r_i^max,   (7d)
        Σ_{k=1}^{M_i} β_{k,i,j} r_{k,i,j} + Σ_{k'=1}^{M_j} β_{k',j,i} r_{k',j,i} ≤ r_{i,j}^max,   (7e)
        Σ_{k=1}^{M_i} λ_{k,i} r_{k,i,c} ≤ r_{i,c}^max,   (7f)
        and (3),   (7g)

where the constraint (7b) corresponds to the offloading decision variables for the kth end-user. In particular, the constraint (7c) represents that the total task of the kth user (whether locally executed or offloaded to fog and cloud) must be processed. Moreover, the constraint (7d) corresponds to the total transmission rate between the ith fog node and all its users, which is bounded by the maximum value r_i^max. The constraint (7e) implies that the total transmission rate for all users' task offloading is bounded by the maximum transmission rate between the ith primary fog node and the jth fog node. The constraint (7f) indicates that the total transmission rate between the ith fog node and the cloud for all the users under the ith fog node's primary coverage is limited by the maximum transmission rate r_{i,c}^max allocated to the ith fog node. The constraint (7g) bounds the total computing resources allocated for the tasks locally processed at the ith fog node, both for the end-users under its coverage and for the end-users arriving via its neighbor fog nodes.

We transform the constraint (7b) into the quadratic equation x(x − 1) = 0, where x ∈ {μ_{k,i}, β_{k,i,j}, λ_{k,i}}, and introduce auxiliary variables to convert the optimization problem into a convex QCQP. We use the CVX toolbox [22] to obtain the optimal values, although a feasibility analysis of the proposed solution, itself a novel topic, is left for future work. We introduce the auxiliary variables ζ_{k,i}^L, ζ_{k,j}^O, ζ_{k,j}^L, and ζ_{k,c}^O such that

    τ_{k,i}^{local,fog} ≤ ζ_{k,i}^L,   (8a)
    τ_{k,i→j}^{transmission} ≤ ζ_{k,j}^O,   (8b)
    τ_{k,j}^local ≤ ζ_{k,j}^L,   (8c)
    τ_{k,i→c}^{transmission} ≤ ζ_{k,c}^O.   (8d)

Let α_k = max[ζ_{k,i}^L, ζ_{k,j}^O + ζ_{k,j}^L, ζ_{k,c}^O]. Then we obtain the transformed objective min over w_k of P_k w_k, where P_k = [0_{1×4}, μ_{k,i}, 0, μ_{k,i}, 0_{1×3}, μ_{k,i}, 0, α_k], (·)^T denotes the transpose of a matrix, and

    w_k = [μ_{k,i}, β_{k,i,j}, μ_{k,j}, λ_{k,i}, ζ_{k,i}^L, f_{k,i}, ζ_{k,j}^O, r_{k,i,j}, ζ_{k,j}^L, f_{k,j}, ζ_{k,c}^O, r_{k,i,c}, r_{k,i}]^T

is the 13 × 1 variable vector. Let

    b_{k,i}^L = [L_k D_{k,i}^local, 0_{1×12}]^T,   b_{k,i}^O = [0, D_{k,i→j}^offload, 0_{1×11}]^T,
    b_{k,j}^L = [0, 0, L_k D_{k,j}^local, 0_{1×10}]^T,   b_{k,c}^O = [0_{1×3}, D_{k,i→c}^offload, 0_{1×9}]^T.

Then the matrix forms of (8a)–(8d) are

    (b_{k,i}^L)^T w_k + w_k^T A_{k,i}^L w_k ≤ 0,   (9a)
    (b_{k,i}^O)^T w_k + w_k^T A_{k,i,j}^O w_k ≤ 0,   (9b)
    (b_{k,j}^L)^T w_k + w_k^T A_{k,j}^L w_k ≤ 0,   (9c)
    (b_{k,c}^O)^T w_k + w_k^T A_{k,c}^O w_k ≤ 0,   (9d)

where each A is a 13 × 13 symmetric matrix whose only nonzero entries are −1/2, placed so that the quadratic form yields the corresponding bilinear term: A_{k,i}^L at positions (5, 6) and (6, 5) (giving −ζ_{k,i}^L f_{k,i}), A_{k,i,j}^O at (7, 8) and (8, 7) (giving −ζ_{k,j}^O r_{k,i,j}), A_{k,j}^L at (9, 10) and (10, 9) (giving −ζ_{k,j}^L f_{k,j}), and A_{k,c}^O at (11, 12) and (12, 11) (giving −ζ_{k,c}^O r_{k,i,c}).

Then, we assume that the kth user's task data offloaded from the ith primary fog node to the jth fog node is entirely processed at the jth secondary fog node, i.e., D_{k,i→j}^offload = D_{k,j}^local. Let D_k^1 = [D_{k,i}^local, D_{k,j}^local, 0, D_{k,i→c}^offload, 0_{1×9}]^T, D_k^2 = [D_k^local, 0_{1×12}]^T, and D_k^3 = [0_{1×12}, −D_k]^T. Thus, the constraint (7c) becomes

    w_k^T D_k^1 + (D_k^2)^T e_13 + (D_k^3)^T e_13 = 0,

where e_M denotes the M × 1 all-ones vector. Similarly, the constraints (7d) and (7f) become

    Σ_{k=1}^{M_i} w_k^T R_{k,i} w_k ≤ r_i^max,   (10)

and

    Σ_{k=1}^{M_i} w_k^T R_{k,i,c} w_k ≤ r_{i,c}^max,   (11)

respectively, where R_{k,i} is the 13 × 13 symmetric matrix with 1/2 at positions (1, 13) and (13, 1), so that w_k^T R_{k,i} w_k = μ_{k,i} r_{k,i}, and R_{k,i,c} has 1/2 at positions (4, 12) and (12, 4), so that w_k^T R_{k,i,c} w_k = λ_{k,i} r_{k,i,c}.
Moreover, we rewrite the constraint (7g) as

    Σ_{k=1}^{M_i} w_k^T Q_{k,i} w_k + Σ_{j=1}^{J_i} Σ_{k'=1}^{M_j} w_k^T Q_{k',i,j} w_k ≤ f_i^max,   (12)

where Q_{k,i} is the 13 × 13 symmetric matrix with 1/2 at positions (1, 6) and (6, 1), so that w_k^T Q_{k,i} w_k = μ_{k,i} f_{k,i}, and Q_{k',i,j} has 1/2 at positions (2, 10) and (10, 2), coupling the inter-fog offloading indicator with the allocated CPU rate.

V. SIMULATION RESULTS

This section evaluates the performance of the proposed solution for task offloading in delay-sensitive fog networks with Monte Carlo simulations. Unless specified otherwise, we set the processing density as L_k = 1900 [cycles/byte] for all users, considering x264 constant bit rate (CBR) encoding [23]. The other simulation parameters [14], [19] are summarized in Table I.

TABLE I
SIMULATION PARAMETERS

    Parameter                                                        Value
    Maximum bandwidth (BW)                                           20 MHz
    Maximum transmission rate between each fog node
      and the cloud (r_{i,c}^max)                                    1 Mbps
    Maximum CPU rate for the end-user (f_k^max)                      600 × 10^6 cycles/s
    Maximum CPU rate for the fog node (f_i^max)                      5 × 10^9 cycles/s
    CPU rate allocated per user at the cloud (f_c)                   10 × 10^9 cycles/s
    Total number of fog nodes (F)                                    3
    Total number of end-users (K)                                    5

Fig. 4 illustrates the total latency for different task processing densities. We compare the proposed solution with the two following cases: a) the entire task is processed at the end-user side (i.e., no offloading), and b) the end-user offloads the entire task to the primary fog node, and the primary fog node processes the task data alone without any cooperation with other fog nodes or the cloud. From Fig. 4, it is observed that when the task processing density is low, processing the entire data at the end-user side performs well compared to offloading to the fog nodes. The reason is that the transmission latency due to offloading from the end-user to the fog node dominates the local processing time at the end-user side. However, as the task processing density increases, the proposed solution outperforms local task processing at the end-user side. Moreover, at higher processing densities, a single fog node alone is not able to meet the low-latency requirement. On the other hand, as our proposed solution jointly optimizes the offloading decision to the secondary fog node and the cloud considering both the transmission latency and the computational latency, we obtain significantly lower latency than the stand-alone primary fog node case.

Fig. 4. Total delay for different task processing densities, D_k = 1 MB.

To show the impact of the input data size (i.e., D_k), Fig. 5 demonstrates the total delay for different data sizes at a fixed task processing density. As expected, the total delay for each of the scenarios increases with the arrived task data size. We also conclude from the simulations that the total latency for both the cases where the task data are processed entirely at the end-user side and entirely at the primary fog node side increases linearly with the input data size. Interestingly, however, at larger task data sizes, the proposed solution shows a substantial advantage over the previous cases in attaining the minimum total latency.

Fig. 5. Total delay for different task data sizes, L_k = 1900 [cycles/byte].

Finally, we highlight the hard-latency requirement in fog networks. In this context, we refer to the system reliability as the probability that a task is completed within the tolerable latency.
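The three schemes compared in Figs. 4 and 5 can be mimicked with a back-of-the-envelope delay model. The CPU rates below follow Table I, but the uplink rate `r_up` and the proportional split are my assumptions (single user, no secondary fog node or cloud), so this reproduces only the qualitative behavior, not the paper's curves:

```python
def total_delay(D_bits, L, f_user=600e6, f_fog=5e9, r_up=20e6, scheme="proposed"):
    """Rough total delay [s] for one task of D_bits at density L cycles/bit.
    'local': everything at the end-user; 'fog': upload all, fog computes alone;
    'proposed' (simplified): parallel local + fog processing of a split task.
    r_up stands in for the end-user uplink rate (assumed, not from the paper).
    """
    if scheme == "local":
        return L * D_bits / f_user
    if scheme == "fog":
        return D_bits / r_up + L * D_bits / f_fog
    # Simplified joint scheme: split the bits proportionally to CPU rates,
    # then local execution and (upload + fog execution) run in parallel.
    local_bits = D_bits * f_user / (f_user + f_fog)
    off_bits = D_bits - local_bits
    local_t = L * local_bits / f_user
    fog_t = off_bits / r_up + L * off_bits / f_fog
    return max(local_t, fog_t)

D = 8e6  # a 1 MB task, in bits
for L in (330.0, 1900.0):
    delays = {s: total_delay(D, L, scheme=s) for s in ("local", "fog", "proposed")}
    # In this toy model, the joint split is never worse than fog-only:
    assert delays["proposed"] <= delays["fog"] + 1e-9
```

As in Fig. 4, the gap between the joint split and the single-location schemes widens as the processing density L grows, because computation, not transmission, dominates the total delay.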

[Fig. 6 (plot): probability that the total delay is below the tolerable delay (y-axis, 0.5–0.95) versus the tolerable delay (x-axis, 0.2–2 s), for task processing densities Lk = 330 and Lk = 960 cycles/byte.]

Fig. 6. Latency performance under different tolerable delays, Dk = 1 MB.

From Fig. 6, we observe that the probability that the total delay meets the tolerable delay is below 70% when the tolerable delay is as low as 0.5 s. As the tolerable delay is relaxed, the probability that the total latency stays within its maximum value increases. Moreover, we show that the task processing density has an adverse impact on both latency and reliability. In this way, we have shown the trade-off between reliability and the maximum tolerable delay in the context of hard-latency requirements.
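The trend in Fig. 6 can be illustrated with a small Monte Carlo sketch. The delay model below (an exponentially distributed transmission delay plus a deterministic processing delay of Dk·Lk/f) is a simplified stand-in for the paper's system model, and the data size, CPU frequency, and delay-distribution parameters are hypothetical.

```python
import random

def meet_probability(tolerable_delay_s, L_cycles_per_byte, trials=100_000,
                     D_bytes=1_000_000, f_hz=10e9, mean_tx_s=0.15, seed=1):
    """Estimate P(total delay < tolerable delay) by Monte Carlo.

    Assumed toy model: total delay = random transmission delay
    (exponential with mean mean_tx_s) + deterministic processing delay
    D_bytes * L_cycles_per_byte / f_hz. All parameters are illustrative.
    """
    rng = random.Random(seed)
    proc_delay = D_bytes * L_cycles_per_byte / f_hz  # seconds
    hits = sum(
        1 for _ in range(trials)
        if rng.expovariate(1.0 / mean_tx_s) + proc_delay < tolerable_delay_s
    )
    return hits / trials

# A higher processing density (cycles/byte) lowers the probability of
# meeting the same deadline, mirroring the trend reported for Fig. 6.
p_330 = meet_probability(0.5, 330)
p_960 = meet_probability(0.5, 960)
```

Under these assumptions, raising Lk from 330 to 960 cycles/byte reduces the probability of meeting a 0.5 s deadline, matching the qualitative behavior of the two curves in Fig. 6.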
VI. CONCLUSION

In this paper, we studied the task offloading problem for delay-sensitive fog networks through joint optimization of the partial task offloading decision and both the computational and transmission resource allocation. While offloading a task, apart from the local task execution delay, we considered the transmission delays from an end-user to its primary fog node, from the primary fog node to a secondary fog node, and from the primary fog node to the cloud. Employing convex optimization, the computational and transmission resource allocations are optimized at the fog node. A fog node receives task data not only from the end-users under its primary coverage but also from other end-users via its neighboring fog nodes. Simulation results suggest that the proposed solution outperforms both the stand-alone fog node approach and local processing at the end-user in terms of delay-sensitive task provisioning under numerous application-specific parameter settings. Future work includes delay and resource optimization subject to carrier-grade reliability and latency constraints with finite queue length and interference between fog nodes.
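As a minimal illustration of the kind of convex resource-allocation step described above (not the paper's full formulation, which the authors solve with CVX), consider splitting a fog node's CPU budget F among K offloaded tasks with computation demands c_k (cycles) so as to minimize the worst-case processing delay c_k/f_k. This min-max problem is convex and, in this simplified single-resource case, admits the closed-form optimum f_k = F·c_k/Σ_j c_j, which equalizes all delays. The workloads and CPU budget below are hypothetical.

```python
def min_max_delay_allocation(cycles, F_total):
    """Allocate a CPU budget F_total (cycles/s) across tasks so that the
    maximum processing delay c_k / f_k is minimized.

    For delay_k = c_k / f_k with sum(f_k) <= F_total, the convex min-max
    optimum equalizes all delays: f_k = F_total * c_k / sum(cycles).
    """
    total_cycles = sum(cycles)
    return [F_total * c / total_cycles for c in cycles]

# Hypothetical example: three tasks of 2, 4, and 6 Gcycles on a 10 GHz budget.
cycles = [2e9, 4e9, 6e9]
f = min_max_delay_allocation(cycles, F_total=10e9)
delays = [c / fk for c, fk in zip(cycles, f)]
# All delays are equal at sum(cycles)/F_total = 1.2 s.
```

By contrast, an equal split f_k = F/3 would leave the heaviest task with a 1.8 s delay, so proportional allocation strictly improves the worst case; the paper's actual problem couples this with transmission resources and offloading decisions.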
ACKNOWLEDGMENTS

This research work is partially supported by the Guangdong Provincial Key Lab of Petrochemical Equipment Fault Diagnosis, China, and the AAL vINCI project (Grant No. vINCI/P2P/AAL/0217/0016) supported by the Research Promotion Foundation in Cyprus.

REFERENCES

[1] "Requirements of the IMT-2020 networks," document Y.3101, Accessed Sept. 15, 2018. [Online]. Available: https://www.itu.int/en/Pages/default.aspx
[2] C. Li, C.-P. Li, K. Hosseini, S. B. Lee, J. Jiang, W. Chen, G. Horn, T. Ji, J. E. Smee, and J. Li, "5G-based systems design for tactile Internet," Proc. of the IEEE, pp. 1–18, 2018.
[3] C. Mouradian, D. Naboulsi, S. Yangui, R. H. Glitho, M. J. Morrow, and P. A. Polakos, "A comprehensive survey on fog computing: State-of-the-art and research challenges," IEEE Commun. Surv. Tut., vol. 20, no. 1, pp. 416–464, 1st quarter 2018.
[4] M. Mukherjee, L. Shu, and D. Wang, "Survey of fog computing: Fundamental, network applications, and research challenges," IEEE Commun. Surv. Tut., pp. 1–32, Apr. 2018.
[5] K. Dolui and S. K. Datta, "Comparison of edge computing implementations: Fog computing, cloudlet and mobile edge computing," in Proc. IEEE Global Internet of Things Summit (GIoTS), June 2017, pp. 1–6.
[6] J. Li, T. Zhang, J. Jin, Y. Yang, D. Yuan, and L. Gao, "Latency estimation for fog-based Internet of things," in Proc. IEEE 27th ITNAC, Nov. 2017, pp. 1–6.
[7] H. Zhang, Y. Qiu, X. Chu, K. Long, and V. C. Leung, "Fog radio access networks: Mobility management, interference mitigation, and resource optimization," IEEE Wireless Commun., vol. 24, no. 6, pp. 120–127, Dec. 2017.
[8] Y.-Y. Shih, W.-H. Chung, A.-C. Pang, T.-C. Chiu, and H.-Y. Wei, "Enabling low-latency applications in fog-radio access networks," IEEE Network, vol. 31, no. 1, pp. 52–58, Jan. 2017.
[9] M. Mukherjee, Y. Liu, J. Lloret, L. Guo, R. Matam, and M. Aazam, "Transmission and latency-aware load balancing for fog radio access networks," in Proc. IEEE GLOBECOM, Dec. 2018, pp. 1–6.
[10] M.-H. Chen, B. Liang, and M. Dong, "A semidefinite relaxation approach to mobile cloud offloading with computing access point," in Proc. IEEE SPAWC, June 2015, pp. 1–5.
[11] ——, "Joint offloading decision and resource allocation for multi-user multi-task mobile cloud," in Proc. IEEE ICC, May 2016.
[12] ——, "Multi-user multi-task offloading and resource allocation in mobile cloud systems," IEEE Trans. on Wireless Commun., vol. 17, no. 10, pp. 6790–6805, Oct. 2018.
[13] M.-H. Chen, M. Dong, and B. Liang, "Resource sharing of a computing access point for multi-user mobile cloud offloading with delay constraints," IEEE Trans. on Mobile Comput., pp. 1–13, 2018.
[14] J. Du, L. Zhao, J. Feng, and X. Chu, "Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee," IEEE Trans. Commun., vol. 66, no. 4, pp. 1594–1608, Apr. 2018.
[15] Q. Li, J. Lei, and J. Lin, "Min-max latency optimization for multiuser computation offloading in fog-radio access networks," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Process. (ICASSP), Apr. 2018, pp. 3754–3758.
[16] M. Grant and S. Boyd, "CVX: Matlab software for disciplined convex programming, version 2.1," http://cvxr.com/cvx, Mar. 2014.
[17] E. Balevi and R. D. Gitlin, "Optimizing the number of fog nodes for cloud-fog-thing networks," IEEE Access, pp. 11173–11183, Feb. 2018.
[18] J. Kwak, Y. Kim, J. Lee, and S. Chong, "DREAM: Dynamic resource and task allocation for energy minimization in mobile cloud systems," IEEE J. Select. Areas in Commun., vol. 33, no. 12, pp. 2510–2523, Dec. 2015.
[19] Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, "Power-delay tradeoff in multi-user mobile-edge computing systems," in Proc. IEEE GLOBECOM, Dec. 2016, pp. 1–6.
[20] S.-W. Ko, K. Huang, S.-L. Kim, and H. Chae, "Live prefetching for mobile computation offloading," IEEE Trans. Wireless Commun., vol. 16, no. 5, pp. 3057–3071, May 2017.
[21] J. Liu and Q. Zhang, "Offloading schemes in mobile edge computing for ultra-reliable low latency communications," IEEE Access, vol. 6, pp. 12825–12837, 2018.
[22] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge Univ. Press, Cambridge, U.K., 2004.
[23] A. P. Miettinen and J. K. Nurminen, "Energy efficiency of mobile clients in cloud computing," in Proc. USENIX Conference on Hot Topics in Cloud Computing (HotCloud), June 2010, pp. 4–4.