Abstract—Cooperative offloading in mobile edge computing enables resource-constrained edge clouds to help each other with computation-intensive tasks. However, the power of such offloading cannot be fully unleashed unless trust risks in collaboration are properly managed. Moreover, as tasks are outsourced and processed at the network edge, completion latency usually exhibits high variability that can harm the offered service levels. By jointly considering these two challenges, we propose OLCD, an Online Learning-aided Cooperative offloaDing mechanism, under the scenario where computation offloading is organized based on accumulated social trust. Under co-provisioning of computation, transmission, and trust services, trust propagation is performed along the multi-hop offloading path so that tasks can be fulfilled by powerful edge clouds. We harness Lyapunov optimization to exploit the spatial-temporal optimality of the long-term system cost minimization problem. Through a gap-preserving transformation, we decouple the series of bidirectional offloading problems so that it suffices to solve a separate decision problem for each edge cloud. The optimal offloading control cannot materialize without complete latency knowledge. To adapt to latency variability, we resort to delayed online learning to predict completion latency under long-duration processing, which is fed as input to a queue-based offloading control policy. Such predictive control is specially designed to minimize the loss due to prediction errors over time. We theoretically prove that OLCD guarantees close-to-optimal system performance even with inaccurate prediction, but its robustness is achieved at the expense of decreased stability. Trace-driven simulations demonstrate the efficiency of OLCD as well as its superiority over prior related work.
Index Terms—Mobile edge computing, multi-hop cooperative offloading, trust propagation, completion latency variability
1 INTRODUCTION
…

TABLE 1
Major Notations

$\mathcal{I}, i$; $\mathcal{N}, n$: set and index of users and edge clouds
$\lambda_i$, $\gamma_i$: task size, computation intensity of task $i$
$t_0^i$, $\beta_i$, $\sigma_i$: arrival time, deadline, and trust demand of task $i$
$\mathcal{N}_i$; $\mathcal{N}_n$: $i$'s contributing edge cloud set, $n$'s trusted neighbor set
$d_n^i$, $\varphi$: computation latency, backhaul transmission latency
$E_n$, $B_{nm}$: local computing capacity, offloading capacity
$v_{mn}(t)$, $W_n^i$: trust value, cumulative trust value at $n$
$a_n^i(t)$, $r_n^i(t)$: admission decision variable, processing rate
$\alpha_n^i(t)$, $\beta_{nm}^i(t)$: local computing/offloading decision variables
$\alpha_n(t)$, $\beta_n(t)$: workloads dispatched to the Edge-B and offloaded away
$Q_n^F(t)$, $Q_n^B(t)$: the Edge-F and Edge-B task queues of $n$
$\rho_d$, $\rho_s$: weighting cost parameters of latency and social trust
$D_n^C(t)$, $D_n^T(t)$: computation/transmission latency cost
$C(t)$, $S_n(t)$: system cost, trust risk cost

[Fig. 2. Top: trust propagation along the multi-hop offloading path. Bottom: online offloading control for any intermediate edge cloud (Edge-F: front-end server, Edge-B: back-end cluster).]

Endowed with cloud-like computing capacity, each edge cloud is associated with an access point (e.g., a small cell base station or WiFi access point) covering a dedicated local area, and serves a set $\mathcal{I} = \{1, 2, \ldots\}$ of user-generated computation-intensive tasks¹ in that area. Suppose edge clouds in the neighborhood (e.g., A, B and C in Fig. 1) are connected by backhaul links, which can be used to send task requests or responses between edge clouds [7]. By exploiting cooperation among edge clouds, tasks that arrive at one edge cloud can be either processed locally or offloaded to non-local edge clouds via backhaul links for high-quality services.² Different from most existing one-hop offloading works, a task can be offloaded multiple hops away and finally fulfilled by powerful edge clouds. Our mechanism is also compatible with the edge cloud-to-cloud offloading strategy [9] (i.e., offloading edge clouds' unsatisfied computation tasks to the remote cloud), albeit with high transmission latency and huge bandwidth costs. The system runs in a time-slotted fashion for making decisions, i.e., $t \in \mathcal{T} = \{0, \ldots, T-1\}$, where the slot length (e.g., 1-5 minutes) is on a much slower time scale than that of task arrival and offloading.

2.2 Social Trust Model

Of particular importance is that multi-hop cooperative offloading relies on accumulated social trust relationships to identify trustworthy edge clouds. We introduce a social trust model captured by the directed graph $G = (\mathcal{N}, \mathcal{E})$, where $\mathcal{E} = \{(n, m): e_{nm} = 1, \forall n, m \in \mathcal{N}\}$. Here $e_{nm} = 1$ if and only if edge clouds $n$ and $m$ have a positive trust value with each other, where the trust value perceived by $n$ is denoted by $v_{nm}(t) \in [0, 1]$, characterizing the confidence that $n$ has in $m$'s direct cooperation at slot $t$ based on historical interactions. Here "direct" indicates that social trust relationships are built upon physical relationships,³ i.e., two edge clouds that are physically unreachable have no trust relationship, and two that have a positive mutual trust value are bound to be physical neighbors. Due to heterogeneity in edge clouds' computing capability, $v_{nm}(t)$ may not equal $v_{mn}(t)$. Let $\mathcal{N}_n = \{m \mid e_{nm} = 1, \forall m \in \mathcal{N} \setminus \{n\}\} \subseteq \mathcal{N}$ denote $n$'s trusted neighbor set, capturing the set of edge clouds that it can directly interact with (i.e., physical neighbors with a positive trust value). Consider the trusted MEC service provider who is responsible for dynamic trust management, including trust value update and trusted neighbor set update. Specifically, a new edge cloud is added to $\mathcal{N}_n$ when it builds a trust relationship with $n$, and edge clouds are deleted from $\mathcal{N}_n$ if they lose positive social trust.

As shown in Fig. 1, social trust relationships among edge clouds can be leveraged to facilitate cooperative offloading. For example, since $e_{AB} > 0$ and $e_{BC} > 0$, computation workloads in edge cloud A can be offloaded to edge cloud B, and finally processed in lightly-loaded edge cloud C. Here physical neighbors A and C lack positive mutual trust (i.e., $e_{AC} = 0$) due to a lack of good recent interactions, and thus workloads cannot be offloaded directly from A to C.

2.3 Multi-Hop Task Offloading via Trust Propagation

Mobile users randomly arrive at the system by submitting task requests with diverse deadline and trust demands.⁴ Each of them connects to the edge cloud that covers its vicinity.⁵ Formally, the task corresponding to user $i \in \mathcal{I}$ can be specified by a tuple $\langle \lambda_i, \gamma_i, t_0^i, \beta_i, \sigma_i \rangle$, where $\lambda_i \in [\lambda^{\min}, \lambda^{\max}]$ is the task size (in bits) that needs to be offloaded, and the computation intensity $\gamma_i \in [\gamma^{\min}, \gamma^{\max}]$ is the number of CPU cycles required to process one bit of the task. Upon arriving at slot $t_0^i \in \mathcal{T}$, user $i$ reports the desired deadline $\beta_i$ and social trust $\sigma_i$, capturing the maximum completion latency and trust risk in collaboration that $i$ can tolerate.

For every task, each associated edge cloud can either finish it directly (i.e., local computing) or forward it to another trusted edge cloud in $\mathcal{N}_n$ (i.e., cooperative offloading). Repeating this process, the multi-hop offloading path is eventually formed. For any task $i \in \mathcal{I}$, we use the variable $K \in \{0, 1, \ldots\}$ to capture how many hops $i$ is offloaded in total. As shown in Fig. 2, user $i$ first connects to its local edge cloud $n_0^i \in \mathcal{N}$, and then its task request is offloaded $K$ hops to edge cloud $n_K^i$ for processing.

1. In the rest of this paper, we use "user" and "task" interchangeably.
2. We focus on cooperative offloading among edge clouds, where user requests are entirely offloaded from end devices to requesting edge clouds.
3. We consider the case with fixed physical connections among edge clouds, i.e., physical relationships remain the same. While taking into account trust risks in collaboration, both trust values and trusted neighbor sets are updated based on interaction results, i.e., social trust relationships keep changing.
4. Users usually have heterogeneous service requests regardless of the application. Thus, we make no distinction among MEC applications. In particular, satisfying users' diverse deadline and trust demands has promoted the development of deadline-aware [22] and trust-aware [23] scheduling solutions.
5. We focus on the cooperative offloading case without user mobility. The effect of user mobility on offloading control is discussed in Section 4.6.
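To make the social trust model of Section 2.2 concrete, the short sketch below builds a small directed trust graph and derives each edge cloud's trusted neighbor set $\mathcal{N}_n$. It is a minimal illustration only: the three-cloud topology mirrors Fig. 1, the numeric trust values are assumed, and all identifiers are ours rather than the paper's.

```python
# Minimal sketch of the social trust model in Section 2.2 (assumed example values).
# trust[n][m] is v_nm(t): the confidence that edge cloud n has in m at the current slot.
trust = {
    'A': {'B': 0.9, 'C': 0.0},   # A trusts B; A and C lack positive mutual trust
    'B': {'A': 0.8, 'C': 0.7},
    'C': {'B': 0.6, 'A': 0.0},
}

def trusted_neighbors(trust, n):
    """Return N_n: physical neighbors m of n with a positive trust value (e_nm = 1)."""
    return {m for m, v in trust[n].items() if v > 0}

if __name__ == '__main__':
    for n in trust:
        print(n, '->', sorted(trusted_neighbors(trust, n)))
    # A -> ['B'], B -> ['A', 'C'], C -> ['B']: workloads from A can reach C only via B.
```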
Let $\mathcal{N}_i$ denote the set of edge clouds contributing to the offloading service for user $i$, and $p_i = \{n_0^i, \ldots, n_K^i\}$ denote the permutation of all $K+1$ edge clouds along the offloading path. Under social trust relationships, trust propagation is performed along the multi-hop offloading path. For simplicity, we denote the cumulative trust value of the $k$-hop edge cloud $n_k^i$ as $W_k^i = W_{n_k^i}^i, \forall k \in \{0, \ldots, K\}$. That is,

$$W_k^i = \begin{cases} [1 - \partial]\, v_{n_{k-1}^i n_k^i}(t_k^i)\, W_{k-1}^i, & \text{if } k > 0, \\ v_0^i(t_0^i), & \text{if } k = 0, \end{cases} \quad (1)$$

where $\partial \in [0, 1)$ is the decaying factor that decreases the trust value as the number of offloading hops increases,⁶ $t_k^i$ is the time at which task $i$ is assigned to edge cloud $n_k^i$, and $v_0^i(t_0^i)$ is the trust value between $i$ and the requesting edge cloud $n_0^i$ at slot $t_0^i$.

Taking the trusted cooperative offloading illustrated in Fig. 1 as an example, user $i$ submits its task request to edge cloud A at slot $t_0^i$ with trust value $v_{iA}(t_0^i)$; the task is then offloaded to edge cloud B at slot $t_1^i$ and finally processed in edge cloud C at slot $t_2^i$. Hence, for this offloading service, the offloading hop number is $K = 2$ and the total cumulative trust value is $W_C^i = v_{iA}(t_0^i)\, v_{AB}(t_1^i)\, v_{BC}(t_2^i)$.

2.4 Online Control for Edge Clouds

Given the multi-hop offloading path shown in Fig. 2, let us take a closer look at any intermediate edge cloud and examine how cooperative offloading functions. Similar to cloud datacenters [24], each edge cloud consists of two parts, a front-end server (Edge-F) and a back-end cluster (Edge-B), where Edge-F is responsible for task admission and scheduling, and Edge-B utilizes provisioned computing resources to process tasks dispatched from Edge-F. These stochastic control processes, together with time-varying task arrivals, may bring about dynamics of workloads in Edge-F and Edge-B. We apply queueing theory to handle such dynamics.

We consider the distributed scenario where edge clouds coordinate their online control strategies in an autonomous way. Each edge cloud $n \in \mathcal{N}$ maintains two task queues, whose backlogs⁷ $Q_n^F(t)$ and $Q_n^B(t)$ capture the amount of workload queued in Edge-F and Edge-B at the beginning of slot $t$. The optimal cooperative offloading control cannot materialize without complete backlog information among edge clouds. Due to trust risks in collaboration, edge clouds prefer to interact with their trusted neighbors; that is, each edge cloud is expected to share backlog information with peers in its trusted neighbor set. At each slot, edge clouds exchange both interaction results and communication traffic data. Since the size of the former is usually small and constant, it is acceptable to assume that interaction results are transmitted to edge clouds immediately and that the associated overhead/delay can be ignored without affecting online control. Thus, we only consider communication traffic data hereafter.

2.4.1 Online Control for Edge-F

1) Task Admission: At each slot, heterogeneous task requests arrive at the system. Denote $\mathcal{I}(t) \subseteq \mathcal{I}$ as the set of tasks newly arrived at slot $t$. The instantaneous demand of user $i \in \mathcal{I}(t)$ for edge cloud $n$ can be described as $A_n^i(t) = \lambda_i \mathbf{1}_{\{n_0^i = n\}}$. The amount of workload admitted into the Edge-F of $n$ is $a_n(t) = \sum_{i \in \mathcal{I}(t)} a_n^i(t)$, where $0 \le a_n^i(t) \le A_n^i(t)$ indicates that not all tasks are allowed into the system, so as to prevent system overload. The task admission decisions for the whole system can thus be given by the vector $\boldsymbol{a}(t) = \{a_n^i(t), \forall i \in \mathcal{I}(t), n \in \mathcal{N}\}$.

2) Task Scheduling: In addition to admitting locally arrived tasks, the Edge-F, under multi-hop offloading, also receives tasks offloaded from others. The next control is to determine which tasks are processed locally or offloaded to trusted neighbors, corresponding to local computing and cooperative offloading.

For each task $i \in \mathcal{Q}_n^F(t)$, we use $\alpha_n^i(t) \in \{0, 1\}$ to denote whether $i$ is dispatched to edge cloud $n$'s Edge-B. Obviously, $\alpha_n^i(t) = 1$ represents that $i$ is finally offloaded to $n$ and its multi-hop offloading path ends, and $\alpha_n^i(t) = 0$ otherwise. The amount of workload dispatched from $n$'s Edge-F to Edge-B is thus

$$\alpha_n(t) = \sum_{i \in \mathcal{Q}_n^F(t)} \lambda_i\, \alpha_n^i(t). \quad (2)$$

We introduce the binary variable $\beta_{nm}^i(t)$ to capture cooperative offloading decisions, where $\beta_{nm}^i(t) = 1$ represents that edge cloud $n$ offloads task $i$ to trusted neighbor $m \in \mathcal{N}_n$ at slot $t$, i.e., $m$ will be added to $i$'s offloading path, and $\beta_{nm}^i(t) = 0$ otherwise. The links between edge clouds are assumed to be error-free, since wired backhaul links are typically reliable and only require simple channel coding with substantially lower complexity than computation-intensive tasks [15]. Thus, the backhaul transmission latency does not involve complicated encoding and decoding. According to [25], the transmission latency of the backhaul is proportional to the size of traffic data with scaling factor $\varphi$. We then obtain the unprocessed workload offloaded away from $n$, i.e.,

$$\beta_n(t) = \sum_{i \in \mathcal{Q}_n^F(t)} \sum_{m \in \mathcal{N}_n} \lambda_i\, \beta_{nm}^i(t). \quad (3)$$

The task scheduling decisions in the whole system can be given by $\boldsymbol{s}(t) = \{\alpha_n^i(t), \beta_{nm}^i(t), \forall i \in \mathcal{Q}_n^F(t), m \in \mathcal{N}_n, n \in \mathcal{N}\}$.

6. The decaying factor is introduced to control the weight of the trust value by the "distance" (i.e., offloading hop number) between edge clouds, making the weights assigned to short-distance neighbors larger than those of long-distance ones. Accordingly, the cumulative trust value is monotonically decreasing with the distance, suggesting that edge clouds prefer short-distance neighbors to help with task processing, especially under trust risks. The effect of the decaying factor on trusted cooperative offloading performance is further studied in Section 5.
7. We use $\mathcal{Q}_n^F(t)$ to denote the set of tasks associated with $Q_n^F(t)$. The same holds for $\mathcal{Q}_n^B(t)$ and $Q_n^B(t)$.
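The recursion in (1) can be evaluated hop by hop along the offloading path. Below is a minimal sketch, with the decaying factor and per-hop trust values supplied as plain numbers; the helper name and the example inputs are ours, not the paper's.

```python
def cumulative_trust(v0, hop_trust_values, decay=0.05):
    """Eq. (1): W_0 = v_0(t_0); W_k = (1 - decay) * v_{k-1,k}(t_k) * W_{k-1} for k > 0.

    v0: trust value between the user and its requesting edge cloud n_0.
    hop_trust_values: [v_{n0,n1}(t_1), v_{n1,n2}(t_2), ...] along the path.
    """
    W = v0
    for v in hop_trust_values:
        W = (1.0 - decay) * v * W
    return W

# Fig. 1 style example with K = 2 hops (A -> B -> C); values are illustrative only.
W_C = cumulative_trust(v0=0.95, hop_trust_values=[0.9, 0.85], decay=0.05)
# The task may be offloaded along this path only if W_C meets the user's trust demand sigma_i.
```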
2.4.2 Online Control for Edge-B

The Edge-B processes tasks dispatched from Edge-F. The computing capability, captured by the processing rate, largely depends on two aspects: workloads and computing resources. Let $\varsigma_n(t)$ denote edge cloud $n$'s service limitation, which is determined by the available resources at slot $t$. We introduce the net present value function [4] to expound the relationship between the processing rate and the workload/resource levels, which has been shown to fit measurement results [26]. The processing rate (in CPU cycles per second) can be computed by $r_n(t) = \frac{1}{y}\, x^{\varsigma_n(t) - Q_n^B(t)}$, where $x \in (1, \infty)$ controls the skewness of the relationship between the processing rate and the workload/resource levels, and $y$ captures the speed when the Edge-B is fully loaded. Under the same MEC system structure, the parameters $x$ and $y$ are identical for all edge clouds. The heterogeneity in computing capability primarily comes from differences in workload/resource levels. Intuitively, the less workload and the more available resources, the larger the processing rate.

Our study highlights the intriguing role of fair-share scheduling, which has gained growing attention recently. In particular, Rate Control Protocol (RCP) scheduling has been developed as an adaptive fair-share solution, where every router assigns the same rate to all requests and updates the rate approximately once per slot [27], [28]. Compared to priority-based scheduling, RCP guarantees that all task requests buffered in the same Edge-B share the same resources without preemption, thus making requests finish as quickly as possible while staying stable and fair among requests. For any task $i \in \mathcal{Q}_n^B(t)$, the amount of its workload processed at slot $t$ under RCP scheduling can be described as

$$r_n^i(t) = \min\left\{ \frac{r_n(t)}{\gamma_i\, |\mathcal{Q}_n^B(t)|},\; \lambda_i(\cdot, t) \right\}, \quad (4)$$

where $\lambda_i(\cdot, t) = \lambda_i - \sum_{n \in \mathcal{N}} \sum_{\tau=0}^{t-1} r_n^i(\tau)$ denotes the residual workload at the beginning of slot $t$. Users leave the system as soon as their computation tasks are completely served, i.e., $\lambda_i(\cdot, t) = 0$. Let $d_n^i \in (0, d^{C,\max}]$ denote the latency when task $i$ is served in edge cloud $n$. Due to different workload levels or resource contention [21], the computation latency usually presents high variability, making it unknown until the task is finished. We therefore incorporate online learning into the multi-hop offloading mechanism in Section 4 to predict completion latency.

2.4.3 Task Queues

We adopt the convention that task scheduling and processing at slot $t$ happen at the beginning of the slot, while task acceptance (i.e., admitting tasks that arrived locally or were offloaded from neighbors/Edge-F) happens at the end [29]. Accordingly, the queueing dynamics of the task queues $Q_n^F$ and $Q_n^B$ associated with any edge cloud $n \in \mathcal{N}$ can be described as

$$Q_n^F(t+1) = \max\{Q_n^F(t) - \alpha_n(t) - \beta_n(t),\, 0\} + a_n(t) + \sum_{m \in \mathcal{N}_n} \beta_{mn}(t), \quad (5)$$

$$Q_n^B(t+1) = \max\Big\{Q_n^B(t) - \sum_{i \in \mathcal{Q}_n^B(t)} r_n^i(t),\, 0\Big\} + \alpha_n(t), \quad (6)$$

where $\beta_{mn}(t) = \sum_{i \in \mathcal{Q}_m^F(t)} \lambda_i \beta_{mn}^i(t)$ is the amount of workload offloaded to $n$ from trusted neighbor $m \in \mathcal{N}_n$. The first term on the right-hand side (RHS) of (5) captures the unprocessed workload in Edge-F at slot $t$ after part of the workload is offloaded away or dispatched to Edge-B, and the last two terms describe the workloads that arrived locally and were offloaded from trusted neighbors. The first term on the RHS of (6) denotes the unprocessed workload in Edge-B after part of the workload is processed.

3 PROBLEM FORMULATION

By exploring collaborative computing potentials, the desired trusted cooperative offloading mechanism makes a best effort to serve tasks while providing users with quality of service (QoS) guarantees. We focus on QoS performance in terms of trust risk and completion latency (including transmission and computation latency). In particular, beneficial offloading, user trust demands, and edge clouds' scheduling capacity constraints are respected.

3.1 Constraints of the Problem

1) Scheduling Exclusion Constraint: For a task $i$ queued in Edge-F, edge cloud $n$ can either process it or offload it to one of its trusted neighbors, unless it keeps the task waiting in queue $Q_n^F$, i.e.,

$$\alpha_n^i(t) + \sum_{m \in \mathcal{N}_n} \beta_{nm}^i(t) \le 1, \quad \forall i \in \mathcal{Q}_n^F(t),\, n \in \mathcal{N}. \quad (7)$$

2) Beneficial Offloading Constraint: The decision of edge cloud $n$ to offload task $i \in \mathcal{Q}_n^F(t)$ to trusted neighbor $m$ (i.e., $\beta_{nm}^i(t) = 1$) is beneficial if offloading to $m$ does not incur higher completion latency than local computing, i.e.,

$$d_m^i + \varphi \lambda_i \le d_n^i, \quad \forall i \in \mathcal{Q}_n^F(t),\, m \in \mathcal{N}_n,\, n \in \mathcal{N}, \quad (8)$$

where $\varphi > 0$ is a coefficient representing the backhaul transmission latency for a one-bit task, and $d_n^i$ is the computation latency for task $i$ when processed in $n$.

3) Trust Demand Constraint: Multi-hop offloading relies on social trust relationships. This constraint enforces that task $i$ can be offloaded from edge cloud $n$ to its neighbor $m$ at slot $t$ (i.e., $\beta_{nm}^i(t) = 1$) only if the resulting cumulative trust value $W_m^i$ for offloading to $m$ satisfies the user trust demand $\sigma_i$, i.e.,

$$W_m^i = [1 - \partial]\, v_{nm}(t)\, W_n^i \ge \sigma_i, \quad \forall i \in \mathcal{Q}_n^F(t),\, m \in \mathcal{N}_n,\, n \in \mathcal{N}. \quad (9)$$

Remark. Constraints (8) and (9) indicate that cooperative offloading control is performed under the notions of beneficial offloading and trusted offloading, which is vital to providing high QoS. On this basis, a tradeoff between completion latency and trust risk is established. It is true that 1-hop offloading is superior for low trust risk and low transmission latency, but that is not the whole story. Recall that the MEC paradigm is proposed for computation-intensive tasks, i.e., the computation latency is usually large, especially for resource-constrained edge clouds. In the multi-hop case, however, tasks are more likely to be offloaded to powerful edge clouds lying beyond physical neighbors, thus realizing a large decrease in computation latency at the expense of only slightly increased trust risk and transmission latency.

4) Scheduling Capacity Constraint: We highlight the limited scheduling capacity of edge clouds as follows:

$$\sum_{i \in \mathcal{Q}_n^F(t)} \alpha_n^i(t) \le E_n, \quad \forall n \in \mathcal{N}, \quad (10)$$

$$\sum_{i \in \mathcal{Q}_n^F(t)} \beta_{nm}^i(t) \le B_{nm}, \quad \forall m \in \mathcal{N}_n,\, n \in \mathcal{N}, \quad (11)$$

where (10) indicates that at most $E_n$ tasks are dispatched to Edge-B in one slot to guarantee the processing rate, and (11) specifies the limited offloading capacity of edge cloud $m$ for $n$ by placing an upper bound $B_{nm} \in [0, B]$ on the number of offloaded tasks.
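Revisiting the control dynamics of Section 2.4, the per-slot bookkeeping implied by (4)-(6) is easy to simulate. The sketch below mirrors the fair-share rule of (4) and the max{·,0} queue updates of (5)-(6) for a single edge cloud, with aggregate quantities passed in directly; every name and number is ours and purely illustrative.

```python
def rcp_share(r_n, gamma_i, num_queued, residual_i):
    """Eq. (4): workload of task i served this slot under RCP fair sharing."""
    if num_queued == 0:
        return 0.0
    return min(r_n / (gamma_i * num_queued), residual_i)

def update_queues(QF, QB, alpha_n, beta_n, a_n, beta_in, served):
    """Eqs. (5)-(6): one-slot dynamics of the Edge-F and Edge-B backlogs.

    alpha_n: workload dispatched to Edge-B, beta_n: workload offloaded away,
    a_n: workload admitted locally, beta_in: workload offloaded in from trusted
    neighbors, served: total workload processed by Edge-B this slot.
    """
    QF_next = max(QF - alpha_n - beta_n, 0.0) + a_n + beta_in
    QB_next = max(QB - served, 0.0) + alpha_n
    return QF_next, QB_next

# One example slot with illustrative numbers only.
served = rcp_share(r_n=400.0, gamma_i=200.0, num_queued=4, residual_i=0.6)
QF, QB = update_queues(QF=12.0, QB=8.0, alpha_n=3.0, beta_n=2.0,
                       a_n=4.0, beta_in=1.5, served=served)
```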
…

$$\text{TCO:} \quad \min_{\boldsymbol{a}(t),\, \boldsymbol{s}(t)} \;\; \frac{1}{T} \sum_{t \in \mathcal{T}} C(t) \quad (14)$$
$$\text{s.t.} \quad \text{Constraints (7)-(12).} \quad \text{(14a)}$$

Challenges. There are two challenges that impede the derivation of the optimal online control: (1) spatial-temporal coupled decisions: tasks can be offloaded across edge clouds and the processing duration is affected by scheduling decisions; (2) no complete offline information on completion latency: as the basis for offloading decision making, task completion latency usually presents high variability, especially under stochastic task arrivals. These challenges call for an online optimization approach that can efficiently perform multi-hop offloading with latency prediction knowledge.

…

$$\begin{aligned}
\Delta(\boldsymbol{Q}(t)) + V\,\mathbb{E}\{C(t)\mid \boldsymbol{Q}(t)\} \le\;& B_1 + B_2(t) + \mathbb{E}\Big\{\sum_{n\in\mathcal{N}}\sum_{i\in\mathcal{I}(t)} a_n^i(t)\,\tilde{Q}_n^F(t)\,\Big|\,\boldsymbol{Q}(t)\Big\} \\
&+ \mathbb{E}\Big\{\sum_{n\in\mathcal{N}}\sum_{i\in\mathcal{Q}_n^F(t)} \alpha_n^i(t)\big[\lambda_i Q_n^B(t) - \lambda_i\tilde{Q}_n^F(t) + V\rho_d d_n^i\big]\,\Big|\,\boldsymbol{Q}(t)\Big\} \\
&+ \mathbb{E}\Big\{\sum_{n\in\mathcal{N}}\sum_{m\in\mathcal{N}_n}\sum_{i\in\mathcal{Q}_m^F(t)} \lambda_i\,\beta_{mn}^i(t)\,\tilde{Q}_n^F(t) \\
&\qquad - \sum_{n\in\mathcal{N}}\sum_{i\in\mathcal{Q}_n^F(t)}\sum_{m\in\mathcal{N}_n} \beta_{nm}^i(t)\big[\lambda_i\tilde{Q}_n^F(t) - V\rho_d\varphi - V\rho_s\tilde{v}_{nm}(t)\big]\,\Big|\,\boldsymbol{Q}(t)\Big\},
\end{aligned} \quad (17)$$
where $B_1 = \frac{1}{2}\sum_{n\in\mathcal{N}}\Big[(\lambda^{\max})^2\big(I^{\max} + [N-1]B^{\max}\big)^2 + (\lambda^{\max})^2\big(E_n + [N-1]B^{\max}\big)^2 + (\lambda^{\max} E_n)^2 + \big(\tfrac{r_n^{B,\max}}{\lambda^{\min}\gamma^{\min}}\big)^2\Big]$ is a constant.

… stochastic task arrivals [21]. Hence, offloading solutions that focus on deterministic latency may not be feasible. Instead, …
… at the beginning of slot $t$. Such a learning window model approximates practical scenarios and is also used in [32]. Hence, workload prediction over $\mathcal{T}_t$ turns out to be a set of delayed online learning processes, where for each slot $\tau \in \mathcal{T}_t$ the feedback from $t$ is delivered at the end of $\tau - 1$ and can be applied at slot $\tau$.

We implement the DOGD method to provide effective prediction of future workload levels. To capture workload fluctuation, we assume $Q_n^{B,\max} = \max_{t\in\mathcal{T}} Q_n^B(t)$, which is provided by each edge cloud $n \in \mathcal{N}$ based on prior experience. To bound the sub-optimality gap in overall loss due to imperfect prediction for slot $\tau \in \mathcal{T}_t$, we construct a loss function

$$f_\tau^n\big(\hat{Q}_n^B(\tau)\big) = \big|\hat{Q}_n^B(\tau) - Q_n^B(\tau)\big|, \quad (19)$$

which is a convex function of $\hat{Q}_n^B(\tau) \in [0, Q_n^{B,\max}]$. A loss minimization problem for any edge cloud $n$ over all slots can be characterized as

$$\min_{\hat{Q}_n^B(\tau) \in [0,\, Q_n^{B,\max}]} \;\; \sum_{t\in\mathcal{T}} \sum_{\tau\in\mathcal{T}_t} f_\tau^n\big(\hat{Q}_n^B(\tau)\big). \quad (20)$$

Due to the delayed feedback of $Q_n^B(\tau)$, the loss function $f_\tau^n(\hat{Q}_n^B(\tau))$ is not available before choosing the next predicted workload $\hat{Q}_n^B(\tau+1)$. The natural generalization of OGD to this delayed setting is to process loss functions and apply their gradients once they are delivered. Specifically, at any slot $t$, each edge cloud $n$ makes a workload prediction $\hat{Q}_n^B(\tau)$ for each future slot $\tau \in \mathcal{T}_t$ based on the feedback that it has observed up to $t$, and suffers the loss $f_\tau^n(\hat{Q}_n^B(\tau))$. The update rule for each slot $\tau \in \mathcal{T}_t$ is

$$\hat{Q}_n^B(\tau+1) = \Big[\hat{Q}_n^B(\tau) - \eta_n \nabla f_\tau^n\big|_{\hat{Q}_n^B(\tau)}\Big]_{[0,\, Q_n^{B,\max}]}. \quad (21)$$

… feedback that it has observed from $t$ (line 3). For better cooperative offloading control, each edge cloud is expected to share workload information with its trusted neighbors. Given the predictions $\hat{Q}_n^B(\tau)$ for all slots $\tau \in \mathcal{T}_t$, we can obtain the available processing rate that edge cloud $n \in \mathcal{N}$ can provide, $\hat{R}_n(t) = \{\hat{r}_n(\tau), \forall \tau \in \mathcal{T}_t\}$ (line 6). With the collective rate knowledge $\{\hat{R}_m(t), \forall m \in \{n\} \cup \mathcal{N}_n\}$, edge cloud $n$ can estimate the corresponding computation latency for each task $i \in \mathcal{Q}_n^F(t)$ when processed in the associated edge clouds, i.e., $\hat{\boldsymbol{d}}_n^i = \{\hat{d}_m^i, \forall m \in \{n\} \cup \mathcal{N}_n\}$ (lines 7-13). In the following, we assume that completion latency knowledge (or, more precisely, future workload levels) can be predicted accurately; the case with prediction errors is discussed later.

4.2.3 Regret Analysis

We next analyze the performance of Algorithm 1 in predicting future workloads $\hat{Q}_n^B(\tau)$ by computing the regret bound. Let $\boldsymbol{Q}_n^{B,*}(t) = \{Q_n^{B,*}(\tau), \forall \tau \in \mathcal{T}_t\}$ be the best static predictor in hindsight, obtained by the strategy in [35] with full knowledge of the workloads. We have

$$\text{Regret}_n^T(\text{DOGD}) = \sum_{t\in\mathcal{T}} \sum_{\tau\in\mathcal{T}_t} \Big[ f_\tau^n\big(\hat{Q}_n^B(\tau)\big) - f_\tau^n\big(Q_n^{B,*}(\tau)\big) \Big]. \quad (22)$$

The following theorem upper-bounds the overall regret.

Theorem 1. The regret of DOGD in Algorithm 1 in predicting future workloads, with respect to the best static prediction strategy that uses $\boldsymbol{Q}_n^{B,*}(t), \forall t \in \mathcal{T}$, is upper bounded by

$$\text{Regret}^T(\text{DOGD}) = \sum_{n\in\mathcal{N}} \text{Regret}_n^T(\text{DOGD}) \le \sum_{n\in\mathcal{N}} \Big[ \frac{T b^{\max}}{2\eta_n} + \frac{\eta_n b^{\max}}{2}\,(T + 2D) \Big]. \quad (23)$$

Proof. See Appendix C in the supplemental materials, available online. □
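A minimal sketch of the delayed online gradient step in (19)-(21): once the delayed backlog feedback for a slot arrives, the (sub)gradient of the absolute-error loss reconstructed above is applied and the prediction is projected back onto $[0, Q_n^{B,\max}]$. The function name, step size, delay, and toy feedback sequence are our assumptions, not the paper's settings.

```python
def dogd_step(q_hat, q_observed, eta, q_max):
    """One delayed-OGD update, Eq. (21): subgradient step on f(q_hat) = |q_hat - q|,
    followed by projection onto the feasible interval [0, q_max]."""
    grad = 1.0 if q_hat > q_observed else (-1.0 if q_hat < q_observed else 0.0)
    q_hat = q_hat - eta * grad
    return min(max(q_hat, 0.0), q_max)

# Toy run: the true backlog of slot tau only becomes known 2 slots later.
q_max, eta, delay = 100.0, 5.0, 2
true_backlog = [40, 42, 45, 50, 48, 47, 52, 55]
q_hat, predictions = 30.0, []
for t in range(len(true_backlog)):
    predictions.append(q_hat)           # predict before the feedback is available
    if t >= delay:                      # delayed feedback from slot t - delay arrives now
        q_hat = dogd_step(q_hat, true_backlog[t - delay], eta, q_max)
```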
[Fig. 4. An example of the reduction of the optimal task scheduling instance problem to MCMF.]

4.3 Online Queue-Based Control Policy

Given latency prediction knowledge, OLCD employs an online queue-based control policy to solve the per-slot joint admission and scheduling subproblem with a carefully designed objective specified for individual edge clouds. The solutions to all subproblems constitute a feasible solution to the original control problem P1.

4.3.1 Task Admission Control

Task admission decisions can be made by minimizing the first term of the RHS of (18). Since the admission decisions of different edge clouds are independent, we can concurrently obtain $\boldsymbol{a}_n(t) = \{a_n^i(t), \forall i \in \mathcal{I}(t)\}$ by solving

$$\min_{\boldsymbol{a}_n(t)} \; \sum_{i \in \mathcal{I}(t)} a_n^i(t)\,\big[Q_n^F(t) - \theta_n\big] \quad \text{s.t.} \quad 0 \le a_n^i(t) \le A_n^i(t). \quad (24)$$

The optimal solution thus reduces to a simple threshold rule:

$$a_n^i(t) = \begin{cases} A_n^i(t), & Q_n^F(t) \le \theta_n, \\ 0, & \text{otherwise}. \end{cases} \quad (25)$$

Remark. For edge cloud $n$, a newly arrived task $i$ is admitted into the system, increasing the workload by $A_n^i(t)$, when the backlog $Q_n^F(t)$ is no larger than the threshold $\theta_n$; otherwise it is rejected for system stability. The intuition behind rejecting admission is that current task arrivals go beyond the response capability of the Edge-F, and the best way to avoid long waits for scheduling is for the user to associate with another trusted edge cloud that covers its vicinity.

4.3.2 Task Scheduling Control

In OLCD, each Edge-F is responsible for scheduling tasks to potential edge clouds with high service performance. By the gap-preserving transformation, the scheduling decisions (including local computing and cooperative offloading) of different edge clouds are independent of each other. For edge cloud $n$, the decisions on $\alpha_n^i(t)$ and $\beta_{nm}^i(t)$ can be determined by solving

$$\begin{aligned}
\min_{\boldsymbol{s}_n(t)} \; & \sum_{i \in \mathcal{Q}_n^F(t)} \Big( \alpha_n^i(t)\big[V\rho_d d_n^i + \lambda_i Q_n^B(t) - \lambda_i \tilde{Q}_n^F(t)\big] \\
& \quad + \sum_{m \in \mathcal{N}_n} \beta_{nm}^i(t)\big[V\rho_d\varphi + V\rho_s\tilde{v}_{nm}(t) - \lambda_i\tilde{Q}_n^F(t)\,\max\lceil\tilde{Q}_n^F(t)\rceil^{-1}\big] \Big) \\
\text{s.t.} \; & \text{Constraints (7)-(11).}
\end{aligned} \quad (26)$$

The objective of (26) can be viewed as two sets of edge weights: one for local computing, $J_n^i(t) = \alpha_n^i(t)\big[V\rho_d d_n^i + \lambda_i Q_n^B(t) - \lambda_i \tilde{Q}_n^F(t)\big]$, capturing computation latency, and one for a set of cooperative offloading weights $J_{nm}^i(t) = \beta_{nm}^i(t)\big[V\rho_d\varphi + V\rho_s\tilde{v}_{nm}(t) - \lambda_i\tilde{Q}_n^F(t)\,\max\lceil\tilde{Q}_n^F(t)\rceil^{-1}\big]$, $\forall m \in \mathcal{N}_n$, taking transmission latency and trust risk into account. By provisioning computation, transmission, and trust services, the optimal solution to (26) is expected to achieve a latency versus trust risk tradeoff. Moreover, constraints (7), (10) and (11) enforce a balanced scheduling plan, while (8) and (9) state the prerequisites of beneficial offloading and trusted offloading.

Consider an instantaneous assignment problem for edge cloud $n \in \mathcal{N}$ with trusted neighbor set $\mathcal{N}_n = \{m_1, m_2, \ldots\}$ and queued task set $\mathcal{Q}_n^F(t) = \{i_1, i_2, \ldots\}$. Such a problem can be reduced to a minimum cost maximum flow (MCMF) problem [36], where constraints (7), (8), (9), (10), and (11) guarantee that tasks are properly assigned. Let $G_t^n = (\mathcal{Q}_n^F(t) \cup \mathcal{N}_n \cup \{n, S, D\}, E)$ denote the flow network graph for edge cloud $n$ at slot $t$, where $E$ is the set of edges, and vertices $n$, $S$, $D$ represent the local edge cloud, the source node, and the destination node. There are $|\mathcal{Q}_n^F(t)|$ edges connecting $S$ to all nodes $i_k \in \mathcal{Q}_n^F(t)$ with capacity 1, since every task can be offloaded to at most one edge cloud in one slot. There are also $|\mathcal{N}_n|$ edges connecting all nodes $m_l \in \mathcal{N}_n$ to $D$ with capacity $B_{nm_l}$ under the offloading capacity constraint (11). Similarly, we connect $n$ to $D$, whose capacity is set to $E_n$ due to the limited processing capacity (10). For each task $i_k$, we add an edge from node $i_k$ to every node $m_l \in \mathcal{N}_n$ that satisfies (8) and (9), indicating that the selected neighbors must be trusted enough and able to provide lower latency, which is vital to realizing the QoS guarantee for cooperative offloading. The capacity and cost of each such edge are set to 1 and $J_{nm_l}^{i_k}(t)$. Since tasks can also be processed locally, we add an edge from each node $i_k$ to node $n$ with capacity 1 and cost $J_n^{i_k}(t)$. We set the cost of all other edges in $E$ to 0. Consequently, by finding the minimum-cost maximum flow in $G_t^n$, edge cloud $n$ serves task requests at slot $t$ in a best-effort manner with the minimum system cost in (26).

Take the trusted cooperative offloading system illustrated in Fig. 1 as an example. At slot $t$, with the help of edge clouds B and E (i.e., $\mathcal{N}_n = \{m_1, m_2\}$), D needs to serve four task requests queued in its Edge-F (i.e., $\mathcal{Q}_n^F(t) = \{i_1, i_2, i_3, i_4\}$). Each task $i_k$ is associated with one desired trust value. The corresponding flow network graph $G_t^n$ is shown in Fig. 4. In particular, for task $i_1$, only edge cloud B satisfies its trust demand while providing lower latency. Hence, node $i_1$ can transfer flow to nodes $m_1$ and $n$ with costs $J_{nm_1}^{i_1}(t)$ and $J_n^{i_1}(t)$. The capacities of the two edges are both 1.

By reducing to the MCMF problem on $G_t^n$, we can leverage any algorithm for that problem to determine the optimal offloading decisions in polynomial time. One well-known technique is the Successive Shortest Path (SSP) algorithm proposed by Edmonds and Karp [37].
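The flow-network reduction above can be prototyped with an off-the-shelf min-cost max-flow solver. The sketch below uses networkx (which prefers integer edge weights, so the illustrative costs are integers) rather than an SSP implementation of [37]; the graph layout follows the construction described above, while the task data, costs, capacities, and the assumption that the neighbor lists are already filtered by (8)-(9) are ours for illustration.

```python
import networkx as nx

def schedule_tasks(tasks, neighbors, E_n, B_nm, local_cost, offload_cost):
    """Build the flow network G_t^n of Section 4.3.2 and solve it as MCMF.

    tasks: task ids queued in Edge-F; neighbors[i]: trusted neighbors of task i that
    already pass (8)-(9); E_n: Edge-B capacity (10); B_nm: per-neighbor offload caps (11);
    local_cost[i], offload_cost[(i, m)]: integer edge costs playing the roles of
    J_n^i(t) and J_nm^i(t).
    """
    G = nx.DiGraph()
    for i in tasks:
        G.add_edge('S', i, capacity=1, weight=0)           # each task assigned at most once (7)
        G.add_edge(i, 'n', capacity=1, weight=local_cost[i])
        for m in neighbors[i]:                              # only beneficial and trusted neighbors
            G.add_edge(i, m, capacity=1, weight=offload_cost[(i, m)])
    G.add_edge('n', 'D', capacity=E_n, weight=0)            # constraint (10)
    for m, cap in B_nm.items():
        G.add_edge(m, 'D', capacity=cap, weight=0)          # constraint (11)
    return nx.max_flow_min_cost(G, 'S', 'D')                # flow[i][m] = 1 means "assign i to m"

# Toy instance mirroring Fig. 4: four tasks, two trusted neighbors m1 and m2.
tasks = ['i1', 'i2', 'i3', 'i4']
neighbors = {'i1': ['m1'], 'i2': ['m1', 'm2'], 'i3': ['m2'], 'i4': []}
local_cost = {'i1': 5, 'i2': 6, 'i3': 4, 'i4': 3}
offload_cost = {('i1', 'm1'): 2, ('i2', 'm1'): 3, ('i2', 'm2'): 1, ('i3', 'm2'): 2}
flow = schedule_tasks(tasks, neighbors, E_n=2, B_nm={'m1': 1, 'm2': 1},
                      local_cost=local_cost, offload_cost=offload_cost)
```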
4.3.3 Queue Update Control

The queue backlogs associated with each edge cloud $n \in \mathcal{N}$, $Q_n^F(t)$ and $Q_n^B(t)$, can be updated according to (5) and (6) with the above task admission and scheduling control strategies.

4.4 Accumulated Trust Update Policy

We establish a connection between cooperative offloading design and social trust management, which mainly involves two aspects: trusted neighbor set update (specified earlier) and trust value update. Intuitively, edge clouds may hesitate to interact with less trusted or unknown ones due to trust risks in MEC collaboration. To encourage high-quality service offerings and combat uncooperative strategic behaviors, an accumulated trust value update policy following the intuition of "performance-related incentive" is necessary [38]. Specifically, the MEC service provider collects the locally-generated service result once a task is completed, and dynamically aggregates it to yield the global trust values for all associated cooperative edge clouds. As a result, edge clouds are motivated to build good trust relationships for increased chances of being assisted and selected in future offloading.

4.4.1 Service Valuation for a Single Interaction

For any task $i \in \mathcal{I}$, the provided offloading service corresponds to the process from being admitted into the system at slot $t_0^i$ to being served completely at slot $t^i = t_K^i$ after multiple offloading hops. All edge clouds that contribute to the offloading service (i.e., $\mathcal{N}_i$) can be easily located along the offloading path $p_i = \{n_0^i, \ldots, n_K^i\}$. Given $i$'s deadline demand $\beta_i$ and actual completion latency $d^{i,*} = t^i - t_0^i$, the provider assigns a performance-related rating $\text{Score}^i \in [0, 1]$ to this service offering. Intuitively, the smaller the latency $d^{i,*}$, the higher the rating $\text{Score}^i$. One simple example is as follows:

$$\text{Score}^i = \begin{cases} 1 - \dfrac{d^{i,*}}{2\beta_i}, & 0 \le d^{i,*} < 2\beta_i, \\ 0, & d^{i,*} \ge 2\beta_i. \end{cases} \quad (27)$$

4.4.2 Aging-Based Trust Update

At the end of each slot $t$, the MEC service provider aggregates current and historical interaction results to update the global trust values among all associated edge clouds. Clearly, recent interactions reflect edge clouds' future cooperative performance more accurately than earlier interactions [19]. Hence, in calculating the social trust value between any cooperative edge clouds, we weight the rating for each interaction according to its age, i.e., the number of time slots that have passed since it happened. That is,

$$v_{nm}(t) = \frac{1}{I_{nm}(t)} \sum_{k=1}^{I_{nm}(t)} \text{Score}^k\, \chi^{t_k}, \quad (28)$$

where $I_{nm}(t)$ denotes the number of interactions between edge clouds $n$ and $m$ up to slot $t$, and $\text{Score}^k = \text{Score}^i$ is the rating of the $k$th interaction, associated with user $i$. The aging coefficient $\chi \in (0, 1)$ is used to control the weights of interaction ratings by their ages, denoted by $t_k$, making the weights assigned to recent interactions heavier than those of older ones. The trust values between two edge clouds that have no previous interactions are set to 0. Notice that the above is just one common way to evaluate social trust between edge clouds; many other trust evaluation methods exist in the literature.

4.5 Performance Analysis

OLCD contains three main components: a latency predictor, an online queue-based controller, and a social trust manager. All of them work online together and influence each other. The complete OLCD mechanism, which enables autonomous coordination among edge clouds, is summarized in Algorithm 2. At any slot $t \in \mathcal{T}$, each edge cloud employs online queue-based control in a distributed manner. The Edge-F is responsible for task admission (lines 1-4) and scheduling (lines 9-14), both of which realize the optimized workload allocation that constitutes an optimal online control profile for the per-slot minimization problem P1. Specifically, for each queued task, Edge-F applies Algorithm 1 to predict the completion latency in associated edge clouds, and then determines the scheduling decisions via SSP. Meanwhile, Edge-B processes tasks dispatched from Edge-F (lines 6-8). As the central trust manager, the MEC service provider performs accumulated trust updates to encourage high-quality service offerings (lines 16-21). While online scheduling acts as the core strategy based on latency prediction results, the trust update guarantees the sound operation of cooperative offloading.

Algorithm 2. OLCD at Slot t
1  for each task i ∈ I(t) do
2      for each edge cloud n ∈ N do
3          if n_0^i = n then
4              Derive admission decision a_n^i(t) by (25);
5  for each edge cloud n ∈ N do
6      for each task i ∈ Q_n^B(t) do
7          Derive processing rate r_n^i(t) by (4);
8          λ_i(·, t) ← λ_i(·, t) − r_n^i(t);
9      for each task i ∈ Q_n^F(t) do
10         for each edge cloud m ∈ N_n ∪ {n} do
11             Apply Algorithm 1 to obtain estimated latency d̂_m^i;
12         Apply SSP to obtain decisions α_n^i(t) and β_nm^i(t);
13         if α_n^i(t) = 1 then
14             λ_i(·, t) ← λ_i;
15     Update Q_n^F(t), Q_n^B(t) according to (5), (6);
16 for each task i ∈ I do
17     if λ_i(·, t) = 0 then
18         Update trust value v_nm(t), ∀n, m ∈ N_i by (28);
19         I ← I \ {i};
20 for each edge cloud n ∈ N do
21     Update trusted neighbor set N_n;
Complexity. Take OLCD implemented at slot $t$ as an example. The running time of task admission is $O(N I(t))$. There are $O(N)$ iterations for online predictive scheduling, in each of which OLCD first considers $v_1 = |\mathcal{N}_n| + 1$ candidate scheduling strategies for each task queued in $n$'s Edge-F, predicts the computation latency for each strategy using Algorithm 1 with complexity $O(|\mathcal{T}_t| + v_1 |\mathcal{Q}_n^F(t)|)$, and then applies SSP to obtain the scheduling control with running time $\min\{O(v_2^2 f), O(v_2^3 c)\}$. Here $v_2 = v_1 + 2 + |\mathcal{Q}_n^F(t)|$ is the total number of vertices in $G_t^n$, $f$ is the derived maximum flow, and $c$ is the corresponding minimum cost. For tasks queued in Edge-B, the amount of residual workload is computed with complexity $O(|\mathcal{Q}_n^B(t)|)$. Therefore, the overall complexity of online scheduling is $\min\{O(N[v_1|\mathcal{T}_t| + v_1|\mathcal{Q}_n^F(t)|^2 + v_2^2 f]),\, O(N[v_1|\mathcal{T}_t| + v_1|\mathcal{Q}_n^F(t)|^2 + v_2^3 c])\}$. The running time of calculating trust values for the associated cooperative edge clouds is $O(I \sum_{n \in \mathcal{N}} \sum_{m \in \mathcal{N}_n} I_{nm}(t))$. Thus, OLCD can find a near-optimal solution in polynomial time.

OLCD always exhibits obvious dynamic characteristics, especially under stochastic task arrivals. Next, we prove that even in the dynamic case, OLCD achieves close-to-optimal system cost while guaranteeing system stability and robustness against prediction errors. Before that, we define the perturbation parameter as

$$\theta_n = \frac{D^{\max}}{2}\big(E_n + [N-1]B^{\max}\big), \quad (29)$$

which can be easily determined, since it only requires knowledge of the maximum coefficient of the transmission latency cost, the maximum number of tasks dispatched to the Edge-B, and the maximum number of tasks offloaded away, and requires no statistical knowledge of the system dynamics, e.g., $Q_n^F(t)$ and $Q_n^B(t)$. Such a feature is desirable for practical implementations.

Theorem 2. Suppose $Q_n^F(0) = \theta_n$, $Q_n^B(0) = 0$, $\forall n \in \mathcal{N}$. If the admission decisions $\boldsymbol{a}(t)$, scheduling decisions $\boldsymbol{s}(t)$ and queue updates are performed by Algorithm 2 with $V > 0$, we obtain the following properties of OLCD:

$$\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \sum_{n=1}^{N} \mathbb{E}\{Q_n^F(t) + Q_n^B(t)\} \le \sum_{n=1}^{N} \theta_n + \frac{B_1 + V[C^{\max} - C^*]}{\epsilon}, \quad (30)$$

$$\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\{C^{OLCD}(t)\} \le C^* + \frac{B_1}{V}, \quad (31)$$

where $B_1$ is defined as in Lemma 1, $C^*$ is the optimal system cost of TCO in (14), $C^{\max} = N\rho_d\big(E\, d^{C,\max} + [N-1]B\varphi^{\max}\big) + [N-1]NB\rho_s$ is the largest system cost, and $\epsilon > 0$ is a constant denoting the long-term computation surplus achieved by some stationary strategy.

Proof. See Appendix D in the supplemental materials, available online. □

Remark. This theorem specifies an $[O(1/V), O(V)]$ tradeoff between cost optimality and queueing delay. According to Little's law, the average queueing delay, including transmission and computation latency, is proportional to the queue backlogs. The long-term queue backlog bound in (30) indicates that the overall average queue backlog grows linearly with $V$. OLCD asymptotically achieves the optimal cost performance of the offline problem by letting $V \to \infty$. However, the optimal system cost is achieved at the expense of larger transmission and computation latency: since larger Edge-F and Edge-B queues are required to stabilize the system, convergence is postponed.

With the above time-averaged system performance, we next consider a more realistic scenario. What happens when the scheduling decisions are made based on predicted workloads $\hat{Q}_n^B(t)$ that differ from the actual workloads $Q_n^B(t)$? The following theorem shows that OLCD is robust against workload prediction errors.

Theorem 3. Suppose there exists a constant $\delta$ such that $|\hat{Q}_n^B(t) - Q_n^B(t)| \le \delta$ holds at all slots $t > 0$. Under OLCD, we have

$$\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \sum_{n=1}^{N} \mathbb{E}\{Q_n^F(t) + Q_n^B(t)\} \le \sum_{n=1}^{N} \theta_n + \frac{B_3 + V[C^{\max} - C^*]}{\epsilon}, \quad (32)$$

$$\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\{C^{OLCD}(t)\} \le C^* + \frac{B_3}{V}, \quad (33)$$

where $B_3 = B_1 + N\delta\Big[\sum_{n \in \mathcal{N}} \lambda^{\max} E_n + \frac{r_n^{B,\max}}{\lambda^{\min}\gamma^{\min}}\Big]$.

Proof. See Appendix E in the supplemental materials, available online. □

Remark. Comparing (31) and (33), we conclude that with inaccurate workload prediction, a larger $V$ is needed to achieve the same average system cost as with accurate information. However, such a practice may result in higher average queue backlogs, as can be observed by comparing (30) and (32). Therefore, OLCD works even with inaccurate workload prediction, but its robustness is achieved at the expense of decreased stability.

4.6 Discussions

Discussion of End Devices' Computing Capability. Our work focuses on cooperative offloading, or resource sharing, among edge clouds. Existing research mostly makes the simplifying assumption that user requests are entirely offloaded from end devices to requesting edge clouds [4], [5], [7]. In practice, however, end devices possess increasing computing capability and can execute complex computation tasks. Accordingly, a combination of local device computing and networked resource sharing empowers users with multiple task execution approaches, including local mobile computing, D2D offloading, direct edge cloud offloading, and D2D-assisted edge cloud offloading. It would be worthwhile to further study how to motivate efficient cooperation among end devices. Since mobile devices are carried or owned by users, it is promising to leverage intrinsic social ties among users as a cooperation incentive. We believe that this will serve as the cornerstone for socially-motivated collaborative MEC systems.

Discussion of User Mobility. In this work, we mainly consider a trusted cooperative offloading mechanism within specific geographic regions. By leveraging cooperation among edge clouds, networked resource sharing can be realized via multi-hop offloading. In this case, user mobility does not affect task offloading among edge clouds, since the correspondence between users and requesting edge clouds remains the same; whether user mobility is considered or not has no effect on offloading control. Hence, we choose the simplified version, i.e., a static scenario without user mobility. If users are allowed to connect to other edge clouds, however, the situation changes. When a user moves across different areas, its service usually needs to be migrated to follow it so that the benefits of cooperative offloading are maintained. Moreover, the transmission latency between the user and the edge cloud that hosts its service
may also decrease. A key challenge lies in when and where to migrate the service, which has been studied in earlier works [39]. Notice that a full analysis of dynamic service migration under the effect of user mobility is out of the scope of this work. It will surely be interesting for future research to study user mobility-driven cooperative offloading.

5 PERFORMANCE EVALUATION

5.1 Simulation Setup

We envision an MEC system deployed in a commercial complex where business tenants deploy edge clouds to serve employees. We simulate a 300 m × 300 m commercial complex with 100 edge clouds⁹ distributed by a homogeneous Poisson point process, which is commonly used in previous studies [8]. Unless otherwise specified, each edge cloud is connected to 7 neighbors on average with initial trust values uniformly distributed in [0.9, 1], and the trust decaying factor $\partial$ is set to 0.05. Both the local computing capacity and the offloading capacity are uniformly distributed in [3, 6]. Inspired by [4], we set the skewness parameter $x = 1.04$, the processing speed of fully loaded edge clouds $y = 200$, and the service limitation $\varsigma_n(t)$ uniformly distributed in [120, 150]. The backhaul transmission latency coefficient $\varphi$ is set to 0.1 sec/Kb [25].

We consider the realistic job trace from a Google cluster [41] as computation-intensive tasks. Each job trace specifies a task by a tuple including the time in seconds since the start of data collection, the consumption of CPU and memory, and the task type. Here, the task type, chosen from {0, 1, 2, 3}, is determined by conducting workload characterizations. Notice that our study highlights the role of tasks with diverse trust demands. It is acceptable to regard different task types as corresponding to heterogeneous trust demands {0, 0.2, 0.4, 0.6}, ranging from completely public applications (i.e., $\sigma_i = 0$) to privacy-sensitive ones (i.e., $\sigma_i = 0.6$). We adjust tasks' inter-arrival times based on a Poisson distribution to accommodate stochastic arrivals, and the deadlines are uniformly distributed in [10, 15]. The expected number of CPU cycles and the expected input data size per task are 50 M and 0.25 Mb, with $\gamma_i$ uniformly distributed in [100, 300]. Motivated by the fact that popularity follows a heavy-tailed distribution, we use a Zipf distribution to capture unbalanced workloads [42], and each user is assigned to its nearest edge cloud with trust value 1.

We implement the OLCD mechanism for $T = 1000$ slots and compare it with three benchmarks: (1) Local Execution (LoE) [13]: edge clouds only process locally arrived tasks with the highest social trust; (2) Single-hop Offloading (SiO) [18]: trusted cooperative offloading is confined to one hop, which achieves lower completion latency while guaranteeing an acceptable social trust; (3) Multi-hop offloading ignorant of Trust (MiT) [15]: multi-hop offloading is enabled to minimize completion latency regardless of trust relationships among edge clouds. All algorithms are implemented in Python in the Anaconda simulator, and evaluated on a machine with 64-bit Windows, a 3.2 GHz Intel Core i5 processor, and 16 GB of 1600 MHz DDR3 memory.

9. To ensure that edge clouds are deployed by different tenants, the scale of the commercial complex should be large enough (> 100,000 ft² [8]); only then can the potential of cooperation among edge clouds be fully exerted. According to the 2012 statistics of IREM [40], a typical commercial complex with an area of around 135,710 ft² was occupied by 10 tenants on average. Hence, it is reasonable to assume that there are 100 edge clouds deployed in a 300 m × 300 m commercial complex (968,751.938 ft²).

5.2 Evaluation Results

5.2.1 Run-Time Performance

We first illustrate the performance comparison of system cost, queue backlogs, and satisfaction ratio in terms of the timespan $T$.

Fig. 5 shows that our proposed OLCD achieves the lowest system cost with a high speed of convergence. The intuition is that in OLCD, tasks are often offloaded multiple hops away and finally fulfilled by powerful edge clouds, while in LoE, tasks can only be processed locally. Under limited computing capacity, the latter practice may result in high computation latency, even with the lowest trust risk and transmission latency; that is why the system cost of LoE is the highest. Due to its failure to fully exploit the computing capabilities of cooperative edge clouds, one-hop SiO is inferior to multi-hop MiT and OLCD in reducing system cost. Compared to OLCD, MiT ignores social trust relationships among edge clouds when making offloading decisions, thus leading to a higher trust risk cost. The results reveal the benefits that multi-hop offloading underpinned by social trust relationships brings to the MEC system.

From Fig. 6, we can observe that the queue backlogs under these mechanisms converge to steady-state levels with almost no ripples; that is, both convergence and system stability are maintained. Under perturbed Lyapunov optimization, $Q_n^F(t)$ is "pushed" towards $\theta_n$ to prevent edge cloud $n$ from wasting computing resources and the potential of social trust relationships. In addition, the stochastic control processes also lead to changes in $Q_n^B(t)$. That is why the queue backlogs fluctuate around one fixed value.

Fig. 7 shows that OLCD is superior to the others in guaranteeing a high satisfaction ratio, denoted by the proportion of offloaded tasks that are able to meet their deadlines.
[Fig. 7. Satisfaction ratio versus T.]
[Fig. 8. Average system cost versus average neighbor number.]
[TABLE 2. Average running time for the OLCD mechanism versus benchmarks.]
[Fig. 10. Latency cost and trust risk cost versus maximum hops.]
[Fig. 11. Average system cost versus importance weight V.]
[Fig. 13. Impact of estimated $Q_n^{B,\max}$ on the regret of DOGD.]
[Fig. 14. Differences in cost increase versus V.]
[8] L. Chen, S. Zhou, and J. Xu, "Computation peer offloading for energy-constrained mobile edge computing in small-cell networks," IEEE/ACM Trans. Netw., vol. 26, no. 4, pp. 1619–1632, Aug. 2018.
[9] X. Chen, L. Jiao, W. Li, and X. Fu, "Efficient multi-user computation offloading for mobile-edge cloud computing," IEEE/ACM Trans. Netw., vol. 24, no. 5, pp. 2795–2808, Oct. 2016.
[10] J. Xu, L. Chen, and P. Zhou, "Joint service caching and task offloading for mobile edge computing in dense networks," in Proc. IEEE Conf. Comput. Commun., Apr. 2018, pp. 207–215.
[11] A.-C. Pang, W.-H. Chung, T.-C. Chiu, and J. Zhang, "Latency-driven cooperative task computing in multi-user fog-radio access networks," in Proc. IEEE 37th Int. Conf. Distrib. Comput. Syst., Jun. 2017, pp. 615–624.
[12] T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, "Collaborative mobile edge computing in 5G networks: New paradigms, scenarios, and challenges," IEEE Commun. Mag., vol. 55, no. 4, pp. 54–61, Apr. 2017.
[13] Y. Xiao and M. Krunz, "QoE and power efficiency tradeoff for fog computing networks with fog node cooperation," in Proc. IEEE Conf. Comput. Commun., Apr. 2017, pp. 1–9.
[14] M. Wang, H. Jin, C. Zhao, and D. Liang, "Delay optimization of computation offloading in multi-hop ad hoc networks," in Proc. IEEE Int. Conf. Commun. Workshops, May 2017, pp. 314–319.
[15] X. Lyu, C. Ren, W. Ni, H. Tian, and R. P. Liu, "Distributed optimization of collaborative regions in large-scale inhomogeneous fog computing," IEEE J. Sel. Areas Commun., vol. 36, no. 3, pp. 574–586, Mar. 2018.
[16] L. Pu, X. Chen, J. Xu, and X. Fu, "D2D fogging: An energy-efficient and incentive-aware task offloading framework via network-assisted D2D collaboration," IEEE J. Sel. Areas Commun., vol. 34, no. 12, pp. 3887–3901, Dec. 2016.
[17] T. He, E. N. Ciftcioglu, S. Wang, and K. S. Chan, "Location privacy in mobile edge clouds: A chaff-based approach," IEEE J. Sel. Areas Commun., vol. 35, no. 11, pp. 2625–2636, Nov. 2017.
[18] L. Chen and J. Xu, "Socially trusted collaborative edge computing in ultra dense networks," in Proc. 2nd ACM/IEEE Symp. Edge Comput., Oct. 2017, Art. no. 9.
[19] Y. Lin and H. Shen, "CloudFog: Leveraging fog to extend cloud gaming for thin-client MMOG with high quality of service," IEEE Trans. Parallel Distrib. Syst., vol. 28, no. 2, pp. 431–445, Feb. 2017.
[20] J. Zhu, Z. Zheng, and M. R. Lyu, "DR2: Dynamic request routing for tolerating latency variability in online cloud applications," in Proc. IEEE 6th Int. Conf. Cloud Comput., Jun. 2013, pp. 589–596.
[21] Z. Qiu, J. F. Perez, and P. G. Harrison, "Variability-aware request replication for latency curtailment," in Proc. 35th Annu. IEEE Int. Conf. Comput. Commun., Apr. 2016, pp. 1–9.
[22] Z. Zheng and N. B. Shroff, "Online multi-resource allocation for deadline sensitive jobs with partial values in the cloud," in Proc. 35th Annu. IEEE Int. Conf. Comput. Commun., Apr. 2016, pp. 1–9.
[23] X. Li, H. Ma, W. Yao, and X. Gui, "Data-driven and feedback-enhanced trust computing pattern for large-scale multi-cloud collaborative services," IEEE Trans. Serv. Comput., vol. 11, no. 4, pp. 59–62, Jul. 2018.
[24] Y. Yao, L. Huang, A. B. Sharma, L. Golubchik, and M. J. Neely, "Power cost reduction in distributed data centers: A two-time-scale approach for delay tolerant workloads," IEEE Trans. Parallel Distrib. Syst., vol. 25, no. 1, pp. 200–211, Jan. 2014.
[25] K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, and Y. Zhang, "Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks," IEEE Access, vol. 4, pp. 5896–5907, Aug. 2016.
[26] H. Wang, R. Shea, X. Ma, F. Wang, and J. Liu, "On design and performance of cloud-based distributed interactive applications," in Proc. IEEE 22nd Int. Conf. Netw. Protocols, Oct. 2014, pp. 37–46.
[27] N. Dukkipati and N. McKeown, "Why flow-completion time is the right metric for congestion control," ACM SIGCOMM Comput. Commun. Rev., vol. 36, no. 1, pp. 59–62, Jan. 2006.
[28] C.-H. Tai, J. Zhu, and N. Dukkipati, "Making large scale deployment of RCP practical for real networks," in Proc. IEEE 27th Conf. Comput. Commun., Apr. 2008, pp. 2180–2188.
[29] M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems. San Mateo, CA, USA: Morgan & Claypool, 2010.
[30] Y. Li, W. Dai, J. Bai, X. Gan, J. Wang, and X. Wang, "An intelligence-driven security-aware defense mechanism for advanced persistent threats," IEEE Trans. Inf. Forensics Security, vol. 14, no. 3, pp. 646–661, Mar. 2019.
[31] S. Shalev-Shwartz, "Online learning and online convex optimization," Found. Trends Mach. Learn., vol. 4, no. 2, pp. 107–194, Feb. 2012.
[32] X. Zhang, C. Wu, Z. Li, and F. C. M. Lau, "Proactive VNF provisioning with multi-timescale cloud resources: Fusing online learning and online optimization," in Proc. IEEE Conf. Comput. Commun., Apr. 2017, pp. 1–9.
[33] K. Quanrud and D. Khashabi, "Online learning with adversarial delays," in Proc. 28th Int. Conf. Neural Inf. Process. Syst., Dec. 2015, pp. 1270–1278.
[34] P. Joulani, A. György, and C. Szepesvári, "Online learning under delayed feedback," in Proc. 30th Int. Conf. Mach. Learn., Jun. 2013, pp. III-1453–III-1461.
[35] N. Chen, A. Agarwal, A. Wierman, S. Barman, and L. L. H. Andrew, "Online convex optimization using predictions," in Proc. ACM SIGMETRICS Int. Conf. Meas. Model. Comput. Syst., Jun. 2015, pp. 191–204.
[36] L. Kazemi and C. Shahabi, "GeoCrowd: Enabling query answering with spatial crowdsourcing," in Proc. 20th Int. Conf. Advances Geographic Inf. Syst., Nov. 2012, pp. 189–198.
[37] J. Edmonds and R. M. Karp, "Theoretical improvements in algorithmic efficiency for network flow problems," J. ACM, vol. 19, pp. 248–264, 1972.
[38] X. Li, H. Ma, W. Yao, and X. Gui, "A trust-based framework for fault-tolerant data aggregation in wireless multimedia sensor networks," IEEE Trans. Depend. Sec. Comput., vol. 9, no. 6, pp. 785–797, Nov. 2012.
[39] S. Wang, R. Urgaonkar, M. Zafer, T. He, K. Chan, and K. K. Leung, "Dynamic service migration in mobile edge computing based on Markov decision process," IEEE/ACM Trans. Netw., vol. 27, no. 3, pp. 1272–1288, Jun. 2019.
[40] Institute of Real Estate Management, "Trends in office buildings operations," 2012. [Online]. Available: https://www.irem.org/File%20Library/IREM%20Store/Document%20Library/IESamples/12Samples/2012OfficeBuildTrends.pdf
[41] Google cluster data. [Online]. Available: https://code.google.com/p/googleclusterdata/. Accessed: 2019.
[42] M. E. J. Newman, "Power laws, Pareto distributions and Zipf's law," Contemporary Physics, vol. 46, no. 5, pp. 323–351, Sep. 2005.
[43] Z. Xu, W. Liang, W. Xu, M. Jia, and S. Guo, "Efficient algorithms for capacitated cloudlet placements," IEEE Trans. Parallel Distrib. Syst., vol. 27, no. 10, pp. 2866–2880, Oct. 2016.
[44] D. Zhao, X.-Y. Li, and H. Ma, "How to crowdsource tasks truthfully without sacrificing utility: Online incentive mechanisms with budget constraint," in Proc. IEEE Conf. Comput. Commun., Apr. 2014, pp. 1213–1221.
[45] M. Zeng, Y. Li, K. Zhang, M. Waqas, and D. Jin, "Incentive mechanism design for computation offloading in heterogeneous fog computing: A contract-based approach," in Proc. IEEE Int. Conf. Commun., May 2018, pp. 1–6.
[46] Y. Wang, X. Jia, Q. Jin, and J. Ma, "QuaCentive: A quality-aware incentive mechanism in Mobile Crowdsourced Sensing (MCS)," The J. Supercomput., vol. 72, no. 8, pp. 2924–2941, Aug. 2016.

Yuqing Li received the BS degree in communication engineering from Xidian University, Xi'an, China, in 2014, and is currently working toward the PhD degree in electronic engineering at Shanghai Jiao Tong University, Shanghai, China. Her current research interests include edge/mobile computing, social aware networks, privacy/security, and network economics.

Xiong Wang received the BE degree in electronic information engineering from the Huazhong University of Science and Technology, Wuhan, China, in 2014, and is currently working toward the PhD degree in electronic engineering at Shanghai Jiao Tong University, Shanghai, China. His current research interests include crowdsourcing, data mining, resource allocation, and mobile computing.
Xiaoying Gan received the PhD degree in electronic engineering from Shanghai Jiao Tong University, Shanghai, China, in 2006. She is currently an associate professor in the Department of Electronic Engineering, Shanghai Jiao Tong University. From 2009 to 2010, she worked as a visiting researcher at the California Institute for Telecommunications and Information Technology, University of California San Diego. Her research interests include network economics and wireless resource management. She is a member of the IEEE.

Haiming Jin received the BS degree from Shanghai Jiao Tong University, Shanghai, China, in 2012, and the PhD degree from the University of Illinois at Urbana-Champaign (UIUC), Urbana, IL, in 2017. He is currently a tenure-track assistant professor with the John Hopcroft Center for Computer Science and the Department of Electronic Engineering, Shanghai Jiao Tong University. Before this, he was a post-doctoral research associate with the Coordinated Science Laboratory, UIUC. His research interests include crowd and social sensing systems, reinforcement learning, and mobile pervasive and ubiquitous computing.

Luoyi Fu received the BE degree in electronic engineering from Shanghai Jiao Tong University, China, in 2009, and the PhD degree in computer science and engineering from the same university, in 2015. She is currently an assistant professor in the Department of Computer Science and Engineering, Shanghai Jiao Tong University. Her research interests are in the area of social networking and big data, connectivity analysis, and random graphs.

Xinbing Wang received the BS degree in automation from Shanghai Jiao Tong University, Shanghai, China, in 1998, the MS degree in computer science and technology from Tsinghua University, Beijing, China, in 2001, and the PhD degree with a major in electrical and computer engineering and minor in mathematics from North Carolina State University, Raleigh, in 2006. Currently, he is a professor with the Department of Electronic Engineering, Shanghai Jiao Tong University. His research interests include resource allocation and management in mobile and wireless networks, cross-layer call admission control, and congestion control over wireless ad hoc and sensor networks. He has been a member of the technical program committees of several conferences including ACM MobiCom 2012, ACM MobiHoc 2012, and IEEE INFOCOM 2009-2013. He is a senior member of the IEEE.