
Received 11 January 2021; revised 3 June 2021; accepted 13 June 2021.

Date of publication 17 June 2021; date of current version 6 September 2022.


Digital Object Identifier 10.1109/TETC.2021.3090061

Offloading Decision for Mobile Multi-Access Edge Computing in a Multi-Tiered 6G Network

TIAGO KOKETSU RODRIGUES (Member, IEEE), JIAJIA LIU (Senior Member, IEEE), AND NEI KATO (Fellow, IEEE)

Tiago Koketsu Rodrigues and Nei Kato are with the Graduate School of Information Sciences, Tohoku University, Sendai, Miyagi 980-8577, Japan.
Jiajia Liu is with the National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Cybersecurity, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China.

CORRESPONDING AUTHOR: TIAGO KOKETSU RODRIGUES ([email protected])

This article has supplementary downloadable material available at https://doi.org/10.1109/TETC.2021.3090061, provided by the authors.

ABSTRACT Future telecommunication systems in beyond 5G/6G networks will include a massive number of devices and a wide variety of applications, many of which have steep processing requirements and strict latency limitations. To satisfy such demands, Multi-access Edge Computing will play a key role in the future of cloud systems. Users can offload their applications to edge cloud servers, which are capable of processing their tasks and responding with an output quickly. However, for this to become a reality, it is important to carefully choose a server for each user. This decision is complicated by user mobility and by the fact that users could alternatively connect to the remote cloud or execute applications locally. In this article, we propose a heuristic algorithm for determining the best server for each user in a multiple-mobile-user, multiple-server, multi-tiered scenario. When minimizing the total service delay, our proposal considers the time needed for transmitting and processing tasks as well as the time needed for service setup and migration of data between servers. Moreover, our method attempts to mimic real-life scenarios as faithfully as possible. Finally, analysis shows that our proposal is vastly superior to benchmark methods and even improves upon a solution commonly used in the literature.

INDEX TERMS Multi-access edge computing, cloudlet, multi-tiered system, beyond 5G, 6G, offloading, server selection

I. INTRODUCTION
Multi-access Edge Computing (MEC) is a cloud service standard designed primarily by the European Telecommunications Standards Institute (ETSI) [1]. Its definition replaced the previous Mobile Edge Computing one to emphasize even further the support of multiple access technologies in the network edge. The idea behind MEC is providing cloud servers and computation offloading services from the edge of the network, i.e., near the end users and data sources. This proximity allows for a lower latency and turnaround time, especially when compared with conventional cloud services (where the cloud server is often deployed in the network core, significantly distant from users and data sources) [2]. Consequently, this lower delay allows MEC to work with real-time applications that demand a fast response time [3]. Moreover, by utilizing resources and infrastructure at the edge, MEC is also capable of alleviating the load and congestion at the core of the network [4]. Because of such features, MEC is rightfully expected to serve a fundamental role in Beyond 5G and 6G networks, enabling a plethora of new applications for edge devices [5]. Possible use cases include video analytics, autonomous driving, the Internet of Things, augmented reality, content distribution, and data caching, all of which are significantly improved through MEC [1], [6]–[8].

There are other edge offloading models (e.g., fog computing [9]) and even variations within MEC research itself. However, in this paper, we will consider MEC with cloudlets [10]. MEC, as a standard defined by ETSI, has significant potential for the future. As for cloudlets, they are defined as cloud servers designed for ubiquitous connection. To guarantee that MEC users are always near a cloudlet, these servers are usually deployed in high numbers around the network edge. Such a deployment model has a high economic cost, which is offset by making the cloudlets significantly less powerful than conventional cloud servers. Because each

2168-6750 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
VOLUME 10, NO. 3, JULY-SEPT. 2022

cloudlet services a small fraction of the number of users that cloud servers are built for, such a gap in capacity does not incur a decay in the quality of service.

Regarding the quality of service, it can be measured in MEC by service delay, i.e., the time between a user request and the corresponding results [11]. While the proximity brought by edge services does indeed mean a faster service, it is very important to optimize the system so that service delay is even further improved [12]. Doing so brings many advantages: a lower service delay means more applications can be supported, it means that the resources are being used more efficiently, and it allows for fewer cloudlets to be deployed without losing much in terms of quality of service [3]. Additionally, an extra difficulty introduced by 6G is the massive number of users and how they access the network in a multi-access manner, i.e., multiple users connecting simultaneously to the same access point [5]. This competition for resources in the network edge brought by the multiple access must be properly managed lest some users receive significantly bad service [6], [11]. Thus, neglecting to properly configure and improve the system would be a waste of potential in terms of network operation and ultimately lead to poor performance.

FIGURE 1. User devices, base stations, cloudlets, and the remote cloud all make part of the MEC system, which spans across the network edge and the network core.

System optimization here can be interpreted as selecting which users will connect to each cloudlet, a problem usually referred to as offloading decision [6], [13]. It is important to avoid overloading cloudlets (which would lead to a delay in answering requests) as well as not giving them enough clients (as that means more idle time, which is a waste of resources). Furthermore, MEC users are often also able to choose to either offload their requests to the central cloud server (which means more latency when compared with cloudlets [2] but faster processing time/more resources [14]) or just process them locally (which has no competition with other users or communication delay but has slower processing [15]). Depending on how busy the cloudlets are and on the state of the network, either of these two options can be the more advantageous one. Finally, MEC operates at the edge of the network, where changes in the scenario are frequent and impactful. For example, users can move, which significantly modifies their connection setup and latency to access the service, sometimes even connecting to a completely different access point. All these points, which are illustrated in Figure 1, must be taken into consideration when configuring the system lest the final solution be far from efficient and fail to deliver the best possible quality of service.

With that being said, in this paper, we present a novel algorithm for deciding the offloading destination for users in a MEC system. The main contributions of this research are as follows:
• The scenario considered here includes multiple moving users, multiple cloudlets, and the possibility of offloading to either cloudlets or the centralized cloud server as well as just performing local processing;
• A mathematical model is presented that can calculate the full service delay given this scenario, including fronthaul and backhaul communication, uplink and downlink transmission, and processing;
• The proposed algorithm utilizes the model to find a near-optimal offloading configuration for the moving users while maintaining feasible complexity, despite the high number of variables and solutions of the problem;
• The algorithm is evaluated alongside benchmark solutions as well as the configuration traditionally utilized in the edge cloud research literature, showing that the proposal significantly outperforms the mentioned alternatives;
• Finally, the mathematical model, the algorithm, and the evaluation all consider parameters and technologies expected to be present in 6G networks, thus improving the chances of the contributions of this paper being relevant for the future of the telecommunication industry.

The remainder of this paper is organized as follows. In Section II, a review of the MEC offloading decision literature is given with a focus on what is mainly lacking and how the present paper addresses these shortcomings. Section III presents the assumed scenario and a mathematical model for calculating service delay under such assumptions. Section IV presents an analytical model of the offloading decision problem and our proposed heuristic algorithm based on the mathematical model of Section III. Section V contains the performance evaluation between our proposal and other solutions. Finally, Section VI concludes our findings, followed by the acknowledgment. The Appendix (available online) has an important proof for our proposal.

II. RELATED WORKS
The offloading decision problem is not a new proposition, and it is similar to general server selection problems from research fields older than MEC.


Nonetheless, MEC has various characteristic features in its system model (such as the dynamicity of the edge environment and its corresponding wireless media, and the mobility of the users having significant effects on the performance) that mean that new solutions are needed [6]. Even then, the MEC offloading decision problem has been studied in the literature before. However, the existing works are usually lacking in key features, often to simplify the overall scenario and make the solution less complex and more feasible. We will also mention fog computing [9], as the system models of fog computing and MEC are very similar [6].

One key point that is missing in a lot of research is a realistic model of both the communication and processing elements of the MEC system model [16]. The issue is that such a model would require tracking and calculating important values (such as signal power, path loss, and processing workload) for many users and servers, and recalculating these states as the environment varies with time due to user mobility and its effect on the wireless medium. This would be a complex and computationally demanding model, which motivates researchers to simplify it. Some works utilize constant values for all of (or at least part of) the transmission delay and/or the processing delay [2], [4], [12], [14], [17]. This removes user competition for resources and allows the solution to focus on each user individually. Other works ignore communication between servers and/or migration of tasks between servers [2], [3], [12], [18], removing a key feature from MEC that allows it to reconfigure itself. Finally, some works consider a single user [4], [16], [19], a single server [12], [18], or static users [3], [12], [18], [20]; all these remove one layer of complexity by removing one degree of the possible solutions that would have to be calculated. Even if we extend our related works to those mentioning fog computing, we see that there is a lack of dynamicity [21] and encounter the same strategies of removing layers of complexity of the scenario for easier calculations [22]–[26]. While all those research pieces are valid, it is necessary to mention that they do not properly reflect real scenarios of MEC and therefore would face difficulty when translated to real systems [16].

Another important aspect of MEC that is often glossed over is how MEC systems are inherently multi-tiered [12], [15]. Requests can be executed locally by the user device, at the edge by cloudlets, or at the network core by conventional cloud servers. All options have different benefits and drawbacks related both to processing and transmission. Moreover, with a high number of users (as should be expected with MEC), there should be a lot of competition for resources at all levels, making a balance of workload between tiers of the system even more important. Nonetheless, there is a lack of research in this area, as many works ignore the possibility of local execution [17], [19] or execution at the network core cloud servers [3], [18], [20] or both [2], [4], [14], [15], [27]. Some works on fog computing also ignore some of the tiers of the network [24], [26]. This aspect severely decreases the number of possible solutions and most likely means that a real system based on these solutions would be far from optimal.

Finally, some works represent servers as having a constant capacity, i.e., each server can handle up to a predetermined number of users [3], [15], [17]. This makes sense when talking about memory, as user tasks certainly require memory to be processed, but the memory cost pales in comparison to the processing cost when talking about MEC [14]. With that being the case, a fixed capacity is not an accurate depiction. If the capacity is not carefully chosen (and in those works, there is no mention of how to define this capacity), cloudlets could be overworked or underworked, which can correspondingly lead to higher than optimal delay or less than desirable resource efficiency. Moreover, a fixed user capacity for servers is not able to adapt to the dynamic scenarios of MEC. Even in fog computing research we see the mention of fixed-capacity servers [22], [23], [25]. A more fitting model would be a queue for utilizing the cloudlet processors where arriving tasks are put until a processor is available [19].

In summary, existing works are in general lacking realism in their assumed scenarios, cutting off aspects of the networking model, processing model, or tiered topology, or using a flawed fixed capacity for processing resources. With that in mind, in this paper, we propose a mathematical model of the MEC system that fully models the network and computation elements of the delay. The model also includes multiple users, multiple servers, mobile users, and the possibility of processing tasks locally or at the network core. Finally, the processing model utilizes a queue for the processors at the cloudlets. All these aspects make our model realistic. More importantly, despite the complexity of such a model, we still propose a feasible, near-optimal heuristic solution to the offloading decision problem under all these assumptions.

III. ASSUMED SCENARIO AND MATHEMATICAL MODEL
In this section, we will present our assumed scenario and a mathematical model for calculating the total average service delay of MEC under those assumptions. The model will consider communication and computation and will be used later for creating our multi-tiered offloading decision algorithm.

A. ASSUMED SCENARIO
Our service model is as follows [5], [15]. We assume that all MEC users $u_i \in U$ generate tasks following a Poisson distribution with a constant rate. These tasks (input) must be processed and their results (output) must be presented to the user. For processing the tasks, there are two options: local processing or cloud offloading. If local processing is chosen, there is no communication and the tasks are processed using the user device resources. If cloud offloading is chosen, a second choice must be made: which cloud server will the tasks be sent to. Possible choices are any of the edge cloud servers $k_p \in K$ (hereby called cloudlets) or the remote conventional cloud server (hereby called remote cloud). A


virtual machine will have to be set up first in the selected cloud server to service the user. The user will send its tasks to one of the base stations $v_r \in V$, which in turn will relay the tasks to the selected server. The server will process the task and send back the corresponding results using one of the base stations as a relay as well. Therefore, each user not doing local processing connects to a single base station, a single virtual server, and a single cloud server. Base stations and cloud servers can each be connected to multiple users, but virtual servers are always connected to one single user each. Consequently, cloud servers can host multiple virtual servers. Additionally, users are capable of moving around with time. To adapt to this movement, the system can migrate virtual servers between cloud servers, making it so a user offloads its tasks to a different server. The assumed service model is illustrated in Figure 2, with special differentiation between the different phases of the service model. Note how local execution only has processing delay, whereas cloud offloading additionally has transmission, migration, and setup delay associated with it.

FIGURE 2. Our assumed service model, with the different phases of service delay color-coded and differentiation between cloud offloading and local execution.

B. USER MOBILITY
For starters, we will divide the observed time into J timeslots of equal length t seconds each. Then, each user $u_i \in U$ will have physical coordinates for where they are during each timeslot, i.e., user $u_i$ is at location $(x_{u_i}^{t_j}, y_{u_i}^{t_j})$ during timeslot $t_j$. Moreover, this model can also represent static users (which can be the case for some devices) by having $(x_{u_i}^{t_j}, y_{u_i}^{t_j}) = (x_{u_i}^{t_0}, y_{u_i}^{t_0}),\ 0 \le j \le (J-1)$ (where $t_0$ is the first timeslot). This means that our mathematical model (and our proposed solution) will work with the assumption that the movement of users is known and pre-determined. Although it is a limitation and a loss of generality to not handle random movement, users/user devices generally have specific routines (e.g., going from work to home at specified times), with little variation. Therefore, this is not a far-fetched assumption to make [14], [15], [17]. Additionally, when evaluating the performance (in Section V), the actual movement of the users will be generated randomly. Thus, we can realistically model mobility for all users by changing their position from timeslot to timeslot.

C. PROCESSING DELAY
We will also divide the offloading decision between timeslots, i.e., each user $u_i \in U$ must decide the location $A_{u_i}^{t_j}$ where its tasks will be processed during timeslot $t_j$. Moreover, processing delay will be divided into two intervals: first, the task must wait until a processor is available; second, the task will be executed by the processor. The duration of the waiting period depends on how many tasks there are at the environment chosen to execute the task (i.e., how busy the processors are) [10]. The duration of the execution period depends on how fast the processors are, which should vary from environment to environment [15].

If $A_{u_i}^{t_j} = remote$ then the tasks of $u_i$ will be processed in the remote cloud during timeslot $t_j$. We can assume that the remote cloud has a vast amount of resources and thus the tasks do not have to wait for processors to be free [14]. Because of this, processing delay in these scenarios is determined solely by the average time to execute a task in the remote cloud, represented by $\mu_{remote}$ (in seconds).

On the other hand, because the local environment and cloudlets are more limited in terms of resources, if $A_{u_i}^{t_j} = local$ or $A_{u_i}^{t_j} = k_p$ then the corresponding tasks must queue for using the processors there [14]. Both local and cloudlet environments thus follow an M/M/c queueing model [11], i.e., a model where requests (tasks) arrive following a Poisson process, requests (tasks) are processed following a Poisson process, and there are c servers (processors). Thus, for environment $\epsilon$, if $\epsilon = local$ or $\epsilon = k_p$, the occupation rate of its processors is defined as follows.

$$\rho_\epsilon^{t_j} = \frac{\lambda_\epsilon^{t_j} \cdot \mu_\epsilon}{c_\epsilon} \quad (1)$$

$\lambda_\epsilon^{t_j}$ is the task arrival rate at $\epsilon$ during timeslot $t_j$ (in tasks per second), $\mu_\epsilon$ is the average task execution time at $\epsilon$ (in seconds), and $c_\epsilon$ is how many processors there are at $\epsilon$. Additionally, we will also declare the following variable to simplify the notation of a commonly used expression in this queueing model.

$$\gamma_\epsilon^{t_j} = \frac{\left( c_\epsilon \cdot \rho_\epsilon^{t_j} \right)^{c_\epsilon}}{c_\epsilon!} \quad (2)$$
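The occupation rate of Eq. (1) and the auxiliary term of Eq. (2) are the ingredients of the standard M/M/c (Erlang C) waiting probability used to compute the queueing delay. A minimal Python sketch of that computation (the function name and all numeric parameters are hypothetical, chosen only for illustration):

```python
from math import factorial

def mmc_processing_delay(arrival_rate, exec_time, c):
    """Average M/M/c processing delay: Erlang C queue wait plus execution.

    arrival_rate: task arrival rate at the environment (tasks/s)
    exec_time:    average task execution time (s)
    c:            number of processors
    """
    rho = arrival_rate * exec_time / c            # occupation rate, Eq. (1)
    if rho >= 1:
        raise ValueError("unstable queue: occupation rate must stay below 1")
    gamma = (c * rho) ** c / factorial(c)         # auxiliary term, Eq. (2)
    # standard Erlang C probability that an arriving task has to wait
    wait_prob = gamma / ((1 - rho) * sum((c * rho) ** q / factorial(q)
                                         for q in range(c)) + gamma)
    # mean waiting time plus mean execution time
    return wait_prob * exec_time / ((1 - rho) * c) + exec_time

# hypothetical cloudlet: 8 processors, 20 ms per task, 300 tasks/s
print(mmc_processing_delay(300.0, 0.02, 8))   # ~0.0236 s
```

At 75 percent occupation, queueing adds roughly 3.6 ms on top of the 20 ms execution time; the same function applied with the local processor count and a single user's arrival rate gives the local processing delay.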


Given that, the probability that tasks arriving at environment $\epsilon$ have to wait is given by the formula below.

$$W_\epsilon^{t_j} = \gamma_\epsilon^{t_j} \cdot \left( (1 - \rho_\epsilon^{t_j}) \cdot \sum_{q=0}^{c_\epsilon - 1} \frac{\left( c_\epsilon \cdot \rho_\epsilon^{t_j} \right)^q}{q!} + \gamma_\epsilon^{t_j} \right)^{-1} \quad (3)$$

Thus, we can define the average processing delay of tasks processed at environment $\epsilon$ as follows.

$$P_\epsilon^{t_j} = \frac{W_\epsilon^{t_j} \cdot \mu_\epsilon}{(1 - \rho_\epsilon^{t_j}) \cdot c_\epsilon} + \mu_\epsilon \quad (4)$$

If $\epsilon = local$, then only the tasks of that local user will be executed in the environment. Thus, we have the following definition.

$$\lambda_{local}^{t_j} = \lambda_1 \quad (5)$$

$\lambda_1$ is the task generation rate from a single user (in tasks per second). Analogously, for cloudlet $k_p$ at timeslot $t_j$, the environment $\epsilon = k_p$ will execute all tasks from users in the set defined below.

$$U_{k_p}^{t_j} = \{ u_i \mid A_{u_i}^{t_j} = k_p \} \quad (6)$$

Given this, we have the following definition.

$$\lambda_{k_p}^{t_j} = |U_{k_p}^{t_j}| \cdot \lambda_1 \quad (7)$$

Finally, we can define the average processing delay (in seconds) for tasks generated by user $u_i$ during timeslot $t_j$ as follows.

$$P_{u_i}^{t_j} = \begin{cases} P_{local}^{t_j} & \text{if } A_{u_i}^{t_j} = local \\ \mu_{remote} & \text{if } A_{u_i}^{t_j} = remote \\ P_{k_p}^{t_j} & \text{if } A_{u_i}^{t_j} = k_p \end{cases} \quad (8)$$

D. TRANSMISSION DELAY
As mentioned before, if $A_{u_i}^{t_j} = local$ then no transmission is needed and the transmission delay is 0 for $u_i$ during $t_j$. Otherwise, the transmission will be categorized as uplink or downlink, and fronthaul or backhaul. Uplink refers to sending the task input from the user to the cloud server. Downlink refers to sending the task output from the cloud server to the user. Fronthaul is the communication between user and base station. Finally, backhaul is the communication between the base station and the cloud server. Thus, there are a total of four divisions of the transmission delay.

Fronthaul communication is done through a wireless medium. So, for a transmitter TX and a receiver RX, the signal power sensed at the receiver in Watts during timeslot $t_j$ is given by the formula below.

$$H_{(TX,RX)}^{t_j} = \frac{10^{\left( v_{TX} + G_{TX} + G_{RX} + h - L_{(TX,RX)}^{t_j} \right) / 10}}{1000} \quad (9)$$

$v_{TX}$ is the transmission power of the transmitter (in decibel-milliwatts), $G_x$ is the antenna gain at $x$ (in decibels-isotrope), $h$ is the Rayleigh fading coefficient, and $L_{(TX,RX)}^{t_j}$ is the path loss between receiver and transmitter at timeslot $t_j$ (in decibel-milliwatts). Path loss under millimeter wave communication systems in urban environments is commonly determined (in decibel-milliwatts) through the formula below [5].

$$L_{(TX,RX)}^{t_j} = \iota_{intercept} + 10 \cdot \iota_{exponent} \cdot \log_{10}\left( d_{(TX,RX)}^{t_j} \right) \quad (10)$$

$\iota_{intercept}$ is the path loss floating intercept, $\iota_{exponent}$ is the path loss average exponent, and $d_{(TX,RX)}^{t_j}$ is the distance between transmitter and receiver during timeslot $t_j$ (in meters). Moreover, we assume that users will always connect to the base station that offers the highest signal power, a very common condition of cellular networks [10]. Therefore, the associated base station for user $u_i$ during timeslot $t_j$ is given by the formula below.

$$b_{u_i}^{t_j} = v_r \mid v_r \in V,\ H_{(v_r,u_i)}^{t_j} \ge H_{(v_{r'},u_i)}^{t_j}\ \forall v_{r'} \in V \quad (11)$$

We assume NOMA and beamforming for our scenario, which are reasonable assumptions for 6G MEC networks [5]. Because of this, we can ignore interference both in the uplink and the downlink. Consider the set below of users connected to base station $v_r$ during timeslot $t_j$.

$$U_{v_r}^{t_j} = \{ u_i \mid b_{u_i}^{t_j} = v_r \} \quad (12)$$

The number of available beams of a base station will be divided between the users in the above set that will be transmitting/receiving simultaneously (which we can estimate through the task generation rate). Given that, the Shannon-Hartley theorem says that the transmission rates (in bits per second) during fronthaul uplink and fronthaul downlink are respectively obtained through the equations below.

$$\hat{R}_{u_i}^{t_j} = V_{uplink} \cdot \log_2\left( 1 + \frac{H_{(u_i,\, b_{u_i}^{t_j})}^{t_j}}{\left\lceil |U_{b_{u_i}^{t_j}}^{t_j}| \cdot \lambda_1 / B \right\rceil \cdot N \cdot V_{uplink}} \right) \quad (13)$$

$$\check{R}_{u_i}^{t_j} = V_{downlink} \cdot \log_2\left( 1 + \frac{H_{(b_{u_i}^{t_j},\, u_i)}^{t_j}}{\left\lceil |U_{b_{u_i}^{t_j}}^{t_j}| \cdot \lambda_1 / B \right\rceil \cdot N \cdot V_{downlink}} \right) \quad (14)$$

$V_{uplink}$ and $V_{downlink}$ are the total bandwidth (in Hertz) of the uplink and downlink channels, $B$ is the total number of beams available in a base station, and $N$ is the noise density (in Watts per Hertz).
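To make the chain from Eqs. (9)-(10) into Eq. (13) concrete, the sketch below turns a floating-intercept path loss into a received power and then into a beam-shared Shannon-Hartley rate. The exact form of the beam-sharing term is one plausible reading of Eq. (13), and every numeric parameter (intercept, exponent, bandwidth, beam count) is a hypothetical placeholder, not a value from this paper:

```python
from math import ceil, log2, log10

def received_power_watts(v_tx_dbm, g_tx_db, g_rx_db, h_db,
                         intercept, exponent, distance_m):
    """Eqs. (9)-(10): dBm link budget converted to Watts."""
    path_loss_db = intercept + 10 * exponent * log10(distance_m)  # Eq. (10)
    p_rx_dbm = v_tx_dbm + g_tx_db + g_rx_db + h_db - path_loss_db
    return 10 ** (p_rx_dbm / 10) / 1000                           # Eq. (9)

def fronthaul_rate(power_w, bandwidth_hz, noise_w_per_hz,
                   users_on_bs, task_rate, beams):
    """Shannon-Hartley rate with the base station's beams shared among
    the users estimated to transmit simultaneously (cf. Eq. (13))."""
    sharing = max(1, ceil(users_on_bs * task_rate / beams))
    snr = power_w / (sharing * noise_w_per_hz * bandwidth_hz)
    return bandwidth_hz * log2(1 + snr)

# hypothetical mmWave link: 23 dBm transmitter, 10 dBi antennas, 50 m
p = received_power_watts(23, 10, 10, 0, 61.4, 2.0, 50.0)
# 400 MHz uplink channel, -174 dBm/Hz thermal noise, 20 users, 16 beams
r = fronthaul_rate(p, 400e6, 10 ** (-174 / 10) / 1000, 20, 2.0, 16)
print(p, r)
```

Dividing the payload size by this rate and adding the propagation term then yields the fronthaul delay described next.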


Thus, to calculate the transmission delay of tasks generated by user $u_i$ during timeslot $t_j$ in the fronthaul, we now just need to take into consideration the propagation delay and the size of the payload, resulting in the following formulas, respectively, for uplink and downlink.

$$\hat{F}_{u_i}^{t_j} = \frac{\nu_{uplink}}{\hat{R}_{u_i}^{t_j}} + \frac{d_{(u_i,\, b_{u_i}^{t_j})}^{t_j}}{g} \quad (15)$$

$$\check{F}_{u_i}^{t_j} = \frac{\nu_{downlink}}{\check{R}_{u_i}^{t_j}} + \frac{d_{(u_i,\, b_{u_i}^{t_j})}^{t_j}}{g} \quad (16)$$

$\nu_{uplink}$ and $\nu_{downlink}$ are the packet sizes (in bits) for the task input (uplink) and output (downlink) and $g$ is the speed of light (in meters per second).

Backhaul communication is done through wired connections between the base stations and the servers [16]. In a wired medium, we can assume that bandwidth is abundant enough that the transmission rate is high and propagation delay dominates [15]. This results in the formulas below for the transmission delay of tasks generated by user $u_i$ during timeslot $t_j$ in the backhaul, respectively, for uplink and downlink.

$$\hat{B}_{u_i}^{t_j} = E_{b_{u_i}^{t_j}}^{A_{u_i}^{t_j}} \quad (17)$$

$$\check{B}_{u_i}^{t_j} = E_{b_{u_i}^{t_j}}^{A_{u_i}^{t_j}} \quad (18)$$

$E_{v_r}^{k_p}$ (respectively $E_{v_r}^{remote}$) is the estimated latency (in seconds) between base station $v_r$ and cloudlet $k_p$ (respectively the remote cloud).

Finally, by joining all stages of the transmission, we can calculate the average transmission delay in seconds of tasks generated by user $u_i$ during timeslot $t_j$ through the formula below.

$$T_{u_i}^{t_j} = \begin{cases} 0 & \text{if } A_{u_i}^{t_j} = local \\ \hat{F}_{u_i}^{t_j} + \check{F}_{u_i}^{t_j} + \hat{B}_{u_i}^{t_j} + \check{B}_{u_i}^{t_j} & \text{if } A_{u_i}^{t_j} = remote \\ \hat{F}_{u_i}^{t_j} + \check{F}_{u_i}^{t_j} + \hat{B}_{u_i}^{t_j} + \check{B}_{u_i}^{t_j} & \text{if } A_{u_i}^{t_j} = k_p \in K \end{cases} \quad (19)$$

E. SETUP AND MIGRATION DELAYS
The first time a user connects to the system (i.e., does not utilize local processing for its tasks), a virtual machine must be set up for that user in the server it will offload to [11]. Any tasks that arrive at the server before the virtual machine is set up must be buffered and wait. This waiting period we will define as setup delay. Additionally, no setup is needed when executing tasks locally.

After the virtual machine for a user is set up, it will be kept within that host server until the user starts offloading to a different server, at which point the virtual machine will be migrated to the new server [19]. For example, if a user goes from offloading to server $k_p$ on timeslot $t_j$ to local processing on timeslot $t_{j+1}$, no migration is needed. If on timeslot $t_{j+2}$ the user goes back to offloading to server $k_p$, no migration or setup is needed (as the virtual machine is already in $k_p$). However, if the user instead offloads to $k_{p'}$ on timeslot $t_{j+2}$, then the virtual machine has to be migrated to $k_{p'}$ from $k_p$ (which is the current location of the virtual machine). This is true even if the server is the remote cloud instead of a cloudlet. To represent this, we introduce a new variable to track where the virtual machine of each user is hosted, as shown below.

$$\theta_{u_i}^{t_j} = \begin{cases} A_{u_i}^{t_j} & \text{if } t_j = t_0 \\ A_{u_i}^{t_j} & \text{if } t_j \ne t_0,\ A_{u_i}^{t_j} \ne local \\ \theta_{u_i}^{t_{j-1}} & \text{if } t_j \ne t_0,\ A_{u_i}^{t_j} = local \end{cases} \quad (20)$$

$\theta_{u_i}^{t_j} = local$ represents a user that has no virtual machine set up at any server so far. Similarly to setup delay, if any tasks arrive at the new server before migration is done, then they must wait. This waiting period we will define as migration delay. Finally, as with setup delay, no migration is needed when executing tasks locally.

For both scenarios, the affected tasks are the ones that arrive before the virtual machine is ready at the server. Thus, we will introduce below two variables that respectively indicate whether user $u_i$ has tasks affected by setup delay during timeslot $t_j$ or not, and whether user $u_i$ has tasks affected by migration delay during timeslot $t_j$ or not.

$$W_{u_i}^{t_j} = \begin{cases} 0 & \text{if } A_{u_i}^{t_j} = local \\ 0 & \text{if } t_j \ne t_0,\ \theta_{u_i}^{t_{j-1}} \ne local \\ 1 & \text{otherwise} \end{cases} \quad (21)$$

$$M_{u_i}^{t_j} = \begin{cases} 0 & \text{if } A_{u_i}^{t_j} = local \\ 0 & \text{if } t_j = t_0 \\ 0 & \text{if } \theta_{u_i}^{t_j} = \theta_{u_i}^{t_{j-1}} \\ 1 & \text{otherwise} \end{cases} \quad (22)$$

For the sake of brevity, we will create a variable to identify how long it takes for the virtual machine of user $u_i$ to be ready at timeslot $t_j$. If no setup or migration is needed, this variable is 0 to symbolize that the virtual machine is ready from the start of the timeslot.

$$z_{u_i}^{t_j} = W_{u_i}^{t_j} \cdot S + M_{u_i}^{t_j} \cdot m_{u_i}^{t_j} \quad (23)$$

$S$ represents how long a virtual machine takes to set up (in seconds) and $m_{u_i}^{t_j}$ is how long it takes to migrate a virtual machine from $\theta_{u_i}^{t_{j-1}}$ to $\theta_{u_i}^{t_j}$ (in seconds). We can now calculate how many tasks will wait by considering the uplink transmission delay and task generation rate as well as the amount of time needed for the virtual machine to be ready.
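The bookkeeping of Eqs. (20)-(23) amounts to a small per-user state machine: place the virtual machine on the first offload, keep it through local timeslots, and migrate when the target changes. A sketch under simplifying assumptions (constant setup and migration times, whereas in the model the migration time depends on the source and destination hosts; all labels and values are hypothetical):

```python
def vm_readiness_times(decisions, setup_time, migration_time):
    """Return z (Eq. (23)) per timeslot for one user's decision list.

    decisions: per-timeslot offloading target, 'local', 'remote',
               or a cloudlet id (hypothetical string labels).
    """
    host = None                # no virtual machine placed yet
    z = []
    for a in decisions:
        if a == 'local':       # local execution needs no setup/migration
            setup, migrate = 0, 0
        else:
            setup = 1 if host is None else 0              # Eq. (21)
            migrate = 1 if host not in (None, a) else 0   # Eq. (22)
            host = a                                      # Eq. (20)
        z.append(setup * setup_time + migrate * migration_time)  # Eq. (23)
    return z

# the example from the text: offload to k_p, go local, return to k_p
# (no setup or migration needed), then move to a different cloudlet
print(vm_readiness_times(['kp', 'local', 'kp', 'kp2'], 0.5, 0.2))
# → [0.5, 0.0, 0.0, 0.2]
```

The third timeslot yields zero because the virtual machine was kept at its host during the local timeslot, exactly as the worked example in the text describes.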


delay or the migration delay during timeslot t_j is found through the formula below.

Z_{u_i}^{t_j} = \left\lfloor \left( z_{u_i}^{t_j} - \left( \hat{F}_{u_i}^{t_j} + \hat{B}_{u_i}^{t_j} \right) \right) \cdot \lambda_{u_i} \right\rfloor,    (24)

where \lfloor \cdot \rfloor is the floor function and \lambda_{u_i} is the task generation rate of user u_i.

Given these assumptions, we can jointly define the setup delay and migration delay (in seconds) of user u_i during timeslot t_j as follows.

\bar{M}_{u_i}^{t_j} = \begin{cases} 0, & \text{if } \hat{F}_{u_i}^{t_j} + \hat{B}_{u_i}^{t_j} \geq z_{u_i}^{t_j} \\ \frac{1}{\lambda_{u_i} \cdot L} \sum_{q=0}^{Z_{u_i}^{t_j}} \left( z_{u_i}^{t_j} - \left( \hat{F}_{u_i}^{t_j} + \hat{B}_{u_i}^{t_j} \right) - q \cdot \lambda_{u_i}^{-1} \right), & \text{if } W_{u_i}^{t_j} = 1 \text{ or } M_{u_i}^{t_j} = 1 \\ 0, & \text{otherwise} \end{cases}    (25)

where L is the timeslot length, so that \lambda_{u_i} \cdot L is the total amount of tasks generated by u_i during t_j. Note how if the uplink transmission delay is longer than the time it takes to set up a virtual machine (respectively, migrate the virtual machine), then no tasks are affected by setup delay (respectively, migration delay), as the tasks will arrive only after the virtual machine is already ready. Also, because we want the average setup/migration delay per task generated by u_i during t_j, we divide the sum of the setup/migration delay felt by the affected tasks by the total amount of tasks generated by u_i during t_j.

F. SERVICE DELAY
As mentioned before, we consider the average service delay, which is the average time that users have to wait to get task results after generating the corresponding task. We can define this as below by utilizing the other equations in this section.

\bar{S} = \frac{\sum_{u_i \in U} \sum_{t_j = t_0}^{t_{J-1}} \left( \bar{P}_{u_i}^{t_j} + \bar{T}_{u_i}^{t_j} + \bar{M}_{u_i}^{t_j} \right)}{|U| \cdot J}.    (26)

IV. PROPOSED SOLUTION
In this section, the problem we want to address will be formally presented and analyzed. Then a solution will be provided alongside our proposal's complexity analysis.

A. OBJECTIVE
The goal of our proposal will be minimizing Eq. (26), as this would mean minimizing the average service delay. To do this, we want to select the optimal values of A_{u_i}^{t_j} for all u_i ∈ U and for all 0 ≤ j ≤ (J−1), i.e., select where the tasks of each user will be executed at each timeslot. However, there are (|K|+2)^{|U|·J} solutions for this problem (each user at each timeslot can choose any of the cloudlets, the remote cloud, or local execution), making it unfeasible to analyze all possibilities even with a low time granularity (which leads to low accuracy in regards to mobility) and small amounts of servers and users. Moreover, the minimization of Eq. (26) is an integer programming problem with the decision variables appearing in the denominator and inside logarithms (e.g., Eqs. (3), (13) and (14)). Thus, this is a non-convex and NP-Hard optimization problem [12], meaning we have to search for a non-optimal solution through a heuristic method.

B. SOLUTION MODEL
Let us consider the problem as a set of states, where each state is described by Q_a = {A_{u_i}^{t_j}, ∀u_i ∈ U, ∀t_j, t_0 ≤ t_j ≤ t_{J−1}}. Thus, there are (|K|+2)^{|U|·J} different states Q_a. We will assume there are edges between the states if they fulfill one of the following conditions:
• There is an edge between Q_a and Q_b if A_{u_{i'}}^{t_{j'}} ∈ Q_a = A_{u_{i'}}^{t_{j'}} ∈ Q_b for all u_{i'} ∈ U, u_{i'} ≠ u_i, 0 ≤ j' ≤ (J−1); A_{u_i}^{t_{j'}} ∈ Q_a = A_{u_i}^{t_{j'}} ∈ Q_b for 0 ≤ j' ≤ (J−1), t_{j'} ≠ t_j; A_{u_i}^{t_j} ∈ Q_a = local; and A_{u_i}^{t_j} ∈ Q_b ≠ local. I.e., Q_a and Q_b are identical except for the offloading decision of one user u_i during one timeslot t_j, which is not to offload in Q_a (i.e., execute locally) and is to offload to one of the cloud servers in Q_b. In this case, there is one directed edge from Q_a to Q_b, which we will call connect u_i to cloud server during t_j.
• There is an edge between Q_a and Q_b if A_{u_{i'}}^{t_{j'}} ∈ Q_a = A_{u_{i'}}^{t_{j'}} ∈ Q_b for all u_{i'} ∈ U, u_{i'} ≠ u_i, 0 ≤ j' ≤ (J−1); A_{u_i}^{t_{j'}} ∈ Q_a = A_{u_i}^{t_{j'}} ∈ Q_b for 0 ≤ j' ≤ (J−1), t_{j'} ≠ t_j; and A_{u_i}^{t_j} ∈ Q_b = local. I.e., Q_a and Q_b are identical except for the offloading decision of one user u_i during one timeslot t_j, which is not to offload (i.e., execute locally) in Q_b, regardless of what the decision is in Q_a. In this case, there is one directed edge from Q_a to Q_b, which we will call disconnect u_i during t_j.

This defines all possible states (and solutions) for all problems as well as the transitions between them. Figure 3 illustrates the states and their transitions in a simple scenario.

FIGURE 3. The states/solutions and their transitions for a simple scenario with only one user, one cloudlet and one timeslot.
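The state encoding and the two transition types above can be sketched in a few lines of Python. This is a toy illustration only: the sizes, server names, and the helper functions `connect`/`disconnect`/`num_states` are invented for the sketch, not definitions from the paper.

```python
# A state Q maps (user, timeslot) -> offloading decision, where a decision
# is "local", "remote", or a cloudlet id such as "k0" (toy names).
CLOUDLETS = ["k0"]    # the set K
USERS = ["u0", "u1"]  # the set U
TIMESLOTS = [0]       # J = 1

# Q0: the starting state where every user executes locally in every timeslot.
Q0 = {(u, t): "local" for u in USERS for t in TIMESLOTS}

def connect(Q, user, t, server):
    """Edge 'connect user to cloud server during t': only leaves 'local'."""
    assert Q[(user, t)] == "local"
    Qb = dict(Q)
    Qb[(user, t)] = server
    return Qb

def disconnect(Q, user, t):
    """Edge 'disconnect user during t': valid from any decision,
    so it may connect a state to itself."""
    Qb = dict(Q)
    Qb[(user, t)] = "local"
    return Qb

def num_states(n_cloudlets, n_users, n_slots):
    """Size of the state space: (|K| + 2) ** (|U| * J)."""
    return (n_cloudlets + 2) ** (n_users * n_slots)

print(num_states(len(CLOUDLETS), len(USERS), len(TIMESLOTS)))  # 9
```

For the Figure 3 scenario (one user, one cloudlet, one timeslot), `num_states(1, 1, 1)` returns 3, matching the three possible decisions of a single user in a single timeslot.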

Special note to how the "disconnect u_i during t_j" transition exists even if it connects a state to itself. The main point that makes our algorithm feasible is that we can calculate the service delay in Q_b in constant time if there is an edge from Q_a to Q_b and we know the service delay in Q_a. The proof that this calculation is done in constant time is in the Appendix (available online).

C. HEURISTIC ALGORITHM
Our proposed algorithm works by moving between the possible states of the scenario, starting from Q_0, which we define as the state where all users are doing local execution at all timeslots. Users will bid to change the state from the current one to a new one where the user goes from local execution to offloading to a cloud server. The values of these bids are represented by the difference in service delay between the current state and the new one. The algorithm always accepts the highest bid so that service delay is lowered the most until there is no positive bid left. The procedure is divided into two phases, which will be described below.

For Phase 1, users will make bids to connect to the same cloud server during all timeslots. Thus, for each user u_i, |K| bids will be created, each one to connect to cloudlet k_p for all k_p ∈ K. Then, one more bid is made by each user to connect to the remote cloud. As mentioned before, the value of each bid is calculated from the difference of the service delay in Q_0 and the resulting state, after the user connects to the corresponding cloud server in all timeslots. Such value can be obtained through J state transitions and the sum of the corresponding d_{ba}. All bids with positive values are put in a decreasing sorted pile. The algorithm will then go on to analyze the top bid on the pile. First of all, if the user who made that bid is already connected to a cloud server in the current state, then the bid is discarded and the current state is not changed; this is because the user already had a better bid accepted. If the user is still doing local execution, then an analysis is made to check if the bid is outdated. A bid is considered outdated if there are any differences between the current state now and the old current state when the bid was made in the values of |U_{v̇_r}^{t_j}|, v̇_r = b_{u_i}^{t_j}, 0 ≤ j ≤ (J−1) and, if the bid is for connecting to k_p, the values of |U_{k_p}^{t_j}|, 0 ≤ j ≤ (J−1). If there are differences, then the value of the bid is updated by calculating once more the difference of service delay, this time between the new current state and the resulting state after the corresponding cloud server connection. If the value of the updated bid is still positive, it is re-sorted into the pile. If there are no differences, then the bid is accepted and the current state is updated to reflect the corresponding user connecting to the corresponding server. By the end of Phase 1, users will be properly connected to cloud servers but without considering the possibility of migration of virtual machines.

Algorithm 1. Phase 1: Finding Best State Without Migration
1: Π ← ∅
2: for all u_i ∈ U do
3:   for all A = k_p ∈ K and remote do
4:     Q_s ← Q_0
5:     d ← 0
6:     for all timeslots t_j do
7:       Q_f ← Q_s + connect u_i to A in t_j
8:       d ← d + d_{fs}
9:       Q_s ← Q_f
10:    if d > 0 then add bid (d, u_i, A) to Π
11: Sort Π in decreasing order of d
12: Q_s ← Q_0
13: while Π ≠ ∅ do
14:   (d, u_i, A) ← pop Π
15:   if u_i is offloading in Q_s then
16:     discard (d, u_i, A)
17:   else if (d, u_i, A) is outdated then
18:     d ← 0
19:     Q_# ← Q_s
20:     for all timeslots t_j do
21:       Q_f ← Q_# + connect u_i to A during t_j
22:       d ← d + d_{f#}
23:       Q_# ← Q_f
24:     if d > 0 then insert sort (d, u_i, A) in Π
25:   else
26:     for all timeslots t_j do
27:       Q_s ← Q_s + connect u_i to A during t_j
28: return Q_s

Algorithm 2. Phase 2: Finding Best State With Migrations
Input: current state Q_s
1: for all timeslots t_j do
2:   for all u_i ∈ U do
3:     Q_s ← Q_s + disconnect u_i during t_j
4:   for all u_i ∈ U do
5:     for all A = k_p ∈ K and remote do
6:       Q_f ← Q_s + connect u_i to A in t_j
7:       if d_{fs} > 0 then add bid (d_{fs}, u_i, A) to Π
8:   Sort Π in decreasing order of d
9:   while Π ≠ ∅ do
10:    (d, u_i, A) ← pop Π
11:    if u_i is offloading in Q_s during t_j then
12:      discard (d, u_i, A)
13:    else if (d, u_i, A) is outdated then
14:      Q_f ← Q_s + connect u_i to A during t_j
15:      if d_{fs} > 0 then insert sort (d_{fs}, u_i, A) in Π
16:    else
17:      Q_s ← Q_s + connect u_i to A during t_j
18: return Q_s

Algorithm 3. Proposal: Finding Near-Optimal State
1: Q_s ← Phase 1()
2: Q_s ← Phase 2(Q_s)
3: return Q_s
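To make the bidding loop of Algorithm 1 concrete, the sketch below implements a simplified Phase 1 in Python. The delay numbers and the congestion penalty inside `service_delay` are invented stand-ins for the paper's delay model (Eq. (26)), and instead of the record-based outdatedness check, every popped bid is simply re-evaluated against the current state; it is a sketch of the idea, not the authors' exact procedure.

```python
import heapq

# Toy stand-in for the paper's delay model: fixed per-timeslot delays per
# destination, plus a congestion penalty (all numbers are invented).
DELAY = {"local": 0.50, "k0": 0.12, "k1": 0.30, "remote": 0.25}
USERS = ["u0", "u1", "u2"]
SERVERS = ["k0", "k1", "remote"]   # the bid targets: cloudlets and remote
J = 2                              # timeslots

def service_delay(Q):
    """Average delay over all users and timeslots (stands in for Eq. (26));
    sharing a server adds 0.1 s of delay per extra user on it."""
    total = 0.0
    for t in range(J):
        load = {}
        for u in USERS:
            load[Q[(u, t)]] = load.get(Q[(u, t)], 0) + 1
        for u in USERS:
            s = Q[(u, t)]
            penalty = 0.1 * (load[s] - 1) if s != "local" else 0.0
            total += DELAY[s] + penalty
    return total / (len(USERS) * J)

def connect_all(Q, user, server):
    """Connect 'user' to 'server' in every timeslot (a Phase 1 bid)."""
    return {k: (server if k[0] == user else v) for k, v in Q.items()}

def phase1():
    """Simplified Phase 1: highest positive bid wins; popped bids are
    re-evaluated instead of using the paper's outdatedness records."""
    Q = {(u, t): "local" for u in USERS for t in range(J)}
    pile = []
    base = service_delay(Q)
    for u in USERS:
        for A in SERVERS:
            gain = base - service_delay(connect_all(Q, u, A))
            if gain > 0:
                heapq.heappush(pile, (-gain, u, A))  # max-heap via negation
    while pile:
        _, u, A = heapq.heappop(pile)
        if any(Q[(u, t)] != "local" for t in range(J)):
            continue  # user already won a better bid: discard
        Qf = connect_all(Q, u, A)
        if service_delay(Q) - service_delay(Qf) > 0:  # re-evaluated bid
            Q = Qf
    return Q

Q = phase1()
print(round(service_delay(Q), 2))
```

With these toy numbers, the two highest bids move two users onto cloudlet k0, the third user's k0 bid re-evaluates as negative (the congestion penalty outweighs the gain), and that user ends up on the remote cloud instead, mirroring the keep-the-cloudlets-balanced behavior discussed in Section V.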

For Phase 2, the following routine is repeated for all timeslots t_j, starting from j = 0 and going in order up to j = J − 1. First, the current state is changed so that all users are doing local execution during t_j (offloading decisions for the other timeslots are not modified). This means |U| state transitions, one for each user, disconnecting them. Then, again users will each create |K| + 1 bids, one for each cloudlet k_p and one for the remote cloud, trying to connect to the corresponding cloud server during timeslot t_j. The value of those bids is the difference in service delay between the current state and the resulting state after the connection, each taking a single state transition this time. Once more, positive bids are put on a sorted pile. From here, the pile is analyzed and bids are accepted/discarded exactly like in Phase 1, with the main difference being that the outdated check only needs to check timeslot t_j. At the end of Phase 2, offloading decisions have been made for all timeslots and all users.

Phase 1 is described in Algorithm 1, Phase 2 is described in Algorithm 2, and the overall algorithm is shown in Algorithm 3. Although omitted for simplicity, the outdatedness check (line 17 in Algorithm 1 and line 13 in Algorithm 2) includes comparing the values of |U_{v̇_r}^{t_j}| (and |U_{k_p}^{t_j}| if the bid is for connecting to a cloudlet) of the state when the bid was created and the current state Q_s, in all timeslots in Phase 1 and only timeslot t_j in Phase 2. To make this possible, bids must also have records of these values when they are created/updated. Because our proposal only accepts positive bids, the service delay is always being improved. Moreover, because only the highest bids are accepted, the algorithm attempts to, at each step, decrease the service delay as much as possible. This is not an optimal result but brings about a significant improvement to service delay. A detailed performance analysis can be found in Section V.

D. COMPLEXITY ANALYSIS
As was proven, each state transition can be calculated in constant time, i.e., O(1). Phase 1 creates D = |U| · (|K| + 1) bids, each of which needs to calculate J transitions. Bids can be sorted through quicksort, for an average-case complexity of O(D · log D). Checking if a bid is outdated has O(J) complexity and updating it also has O(J) complexity. In the worst-case scenario, every single time the top bid is checked for outdatedness, it is accepted and makes every single other bid in the pile outdated, meaning they must be checked, updated, and the pile re-sorted before the algorithm resumes. This re-sort can be done in O(D)¹, so the whole procedure takes O(D + 2·J) and it is repeated q·(q + 1)/2 times, where q = D + 2·J. Adding all of this together, the complexity of Phase 1 is found through the formula below.

D \cdot J + D \cdot \log D + \frac{(D + 2 \cdot J) \cdot (D + 2 \cdot J + 1)}{2}.    (27)

¹ In reality, the size of the pile decreases after each bid is accepted, so D is higher than the actual complexity for sorting in this case. However, we will stick to D as the upper bound anyway for simplicity.

Phase 2 has a procedure similar to Phase 1 that is repeated J times. The main difference is that this procedure starts by disconnecting all users, and the bids in Phase 2 need O(1) to be calculated, checked for outdatedness, and updated. With this, the complexity of Phase 2 is found through the formula below.

J \cdot \left( |U| + D + D \cdot \log D + \frac{D \cdot (D + 1)}{2} \right).    (28)

Thus, by adding Eqs. (27) and (28) and simplifying the result, if we consider that |U| >> J, we have that the complexity of the algorithm is O(|U|² · |K|²). This is a huge improvement over the complexity of O((|K| + 2)^{|U|·J}) of analyzing all possible solutions, which is not even feasible for small amounts of users and servers. On the other hand, our algorithm is guaranteed to finish under a time of O(|U|² · |K|²). It is also notable how this algorithm does not require any convergence, as it works with users making bids and winning/losing them. Thus, there are no cases where a particular scenario leads to a slower solution. Due to this, the algorithm can be quickly executed for wide area networks, which should have thousands of users and tens of servers [5].

V. PERFORMANCE EVALUATION
In this section, our proposed algorithm will be compared with two benchmark solutions and one solution typically utilized in the literature. First, the environment used for the analysis will be introduced, as well as the methods utilized for comparison. Then, all methods will be analyzed and their performance will be presented and explained.

A. ANALYSIS ENVIRONMENT
The parameters considered during the analysis in this section are the ones shown in Table 1 unless explicitly stated otherwise. The values in this table are the ones expected for beyond 5G and 6G networks, according to the literature [5]. As such, we can say that the performances shown here are good estimates of how the method will behave under future communication networks.

Besides our proposal, three other methods are also evaluated in order to offer a comparison in performance. They are explained below.
• Full local: all users perform local execution during all of the time slots. As such, there is no offloading to cloud servers nor transmission of data.
• Full remote: all users offload their tasks to the remote cloud during all of the time slots. There is no local execution and the cloudlets are not utilized. Only the remote cloud does task processing under this method.
• Conventional (Conv.): cloudlets have a fixed pre-determined capacity of how many users they can handle. Users will always offload to a cloudlet as long as there is one cloudlet that is not at full capacity. Moreover, users will always offload to the cloudlet that offers the

lowest transmission delay. If all cloudlets are full, the user will select either local execution or offload to the remote cloud, whichever is faster² [3], [15], [17].

The full local and full remote methods are used as a benchmark to show how much improvement the use of MEC can cause, as well as how well our proposal utilizes the multi-tiered system when compared to single-tiered options. These two are simple solutions, as the clients (user devices) can be simply hard-wired to either execute all tasks locally or send all tasks to the remote cloud. The conventional method is derived from existing works in the literature and it is here to show how much our proposal can improve from a fixed-capacity cloudlet model and how such a model is not a good representation of MEC. Both the conventional method and our proposal need a control plane of sorts implemented in a server that can aggregate information from the whole system and use that information to estimate delay values. This control plane, for both methods, will make the offloading decisions for all users and inform them of such (this can be done before the system starts working to avoid overhead if the information is available). Finally, the results shown here are the average across 100 random network topologies to reach an acceptable confidence interval.

² We disregard fronthaul delay when comparing local execution and the remote cloud here for two reasons. First, it is not trivial to determine the optimal amount of users transmitting per base station and which users should be allowed to transmit (in fact, this leads related works to also ignore fronthaul communications). Second, this allows us to additionally measure the impact of fronthaul transmission delay in the overall service delay.

TABLE 1. Parameters of the performance analysis environment.

Number of timeslots: 6
Timeslot length: 100 s
Number of users: 10000
Number of cloudlets: 3
Number of base stations: 10
User mobility probability: 20%
Maximum movement speed: 1 m/s
Single user task creation rate: 0.1 tasks/s
Task execution time (local): 500 ms
Task execution time (cloudlet): 50 ms
Task execution time (remote): 10 ms
Number of processors (local): 1
Number of processors (cloudlet): 16
Number of antennas (base station): 64
Transmission power (base station): 30 dBm
Transmission power (user): 27 dBm
Path loss floating intercept: 75.85
Path loss average exponent: 3.73
Rayleigh fading coefficient: -1.59175
Antenna gain (base station): 24.5 dBi
Antenna gain (user): 8.35 dBi
Noise density: 3.9811 × 10⁻¹⁹ W/Hz
Wireless bandwidth: 1 MHz
Packet size: 256 KB
Backhaul access time (cloudlet): [0, 10] ms
Backhaul access time (remote cloud): 100 ms
Migration time: [0, 500] ms
Virtual machine setup time: 500 ms

FIGURE 4. Service delay for all methods under scenarios with different amounts of users.

B. RESULTS
For the first test, we will analyze how all methods perform under different numbers of users, with results shown in Figure 4. More users mean more tasks. This increase in workload does not affect the full local method because, in that method, users are independent of each other. The full remote method gets worse with more users because there are more users sharing beams in the wireless channel, deteriorating the transmission delay. Regardless, because both methods do not utilize the cloudlets, their service delay is higher than the other solutions due to a worse level of processing delay and transmission delay, respectively. The conventional method performs better than those benchmarks at both values of tested capacity, but the proposal has the lowest service delay. This is because, at 900 capacity, the conventional method has the cloudlets mostly idle and has to rely on the more distant remote cloud even though the cloudlets could still be used. At 3100 capacity, although the remote cloud is barely used (which is why the performance is better than 900 capacity), the cloudlets are overworked and service delay is still not ideal. This is notable as there is a big jump in the service delay at 9500 users, which is when the remote cloud starts being used. Additionally, the conventional method utilizes the remote cloud due to a lower processing delay, but this keeps the base stations overworked. Our proposal addresses these issues by keeping the cloudlets balanced and actually using local execution when there are too many users for the cloudlet, as this also avoids a high wireless transmission delay. Figure 5 shows this, with the use of local execution slowly increasing.

Next, we evaluated the methods while the number of cloudlets was increased, with results shown in Figure 6. More cloudlets mean more resources in the MEC, which should theoretically lower the service delay. Of course, this

FIGURE 5. How our proposal divides tasks between all tiers of MEC with different amounts of users.

FIGURE 7. How our proposal divides tasks between all tiers of MEC with different amounts of cloudlets.

does not affect the full local and full remote solutions, as they do not utilize cloudlets at all and, thus, remain the two worst methods in this analysis. The conventional method and the proposal improve in performance with more cloudlets. Once again, the 3100 capacity performs better than the 900 capacity in most cases because it utilizes cloudlets more. The reason there is a sudden drop at 4 cloudlets for the 3100 capacity method is because at this point the system stops using the remote cloud altogether. Anyway, with a high enough number of cloudlets, even the 900 capacity solution reaches a scenario where all users can connect to cloudlets. In such scenarios, 900 capacity is a better choice, as the cloudlets can handle fewer users and consequently provide a better processing delay. Regardless, both always stay worse than the proposal, which is the quickest one to take full advantage of the cloudlets and utilizes a solution where the workload is balanced and waiting time is minimized (whereas the conventional solution assumes that staying under capacity is enough). Moreover, the proposal also accounts for the fronthaul transmission delay and, even with enough cloudlets, still refrains from putting all users on the edge, as this would worsen the transmission delay too much. This is evidenced in Figure 7, where we can see how the proposal gradually increases the rate of tasks executed in the edge until the gains from a lower processing delay are less impactful than the cost of increasing the wireless transmission delay.

We now vary the time needed to transmit data to the remote cloud. Results are shown in Figure 8. As expected, the full remote method has the most variation, achieving optimal results with low access delay but gradually deteriorating as the access gets slower. Additionally, the full local solution remains unchanged, being the worst until the remote cloud access is so slow that the full remote solution surpasses it. The conventional solution with 900 capacity starts better than its 3100 capacity counterpart because, with fast enough

FIGURE 6. Service delay for all methods under scenarios with different amounts of cloudlets.

FIGURE 8. Service delay for all methods under scenarios with varying remote cloud access time.

FIGURE 9. How our proposal divides tasks between all tiers of MEC with varying remote cloud access time.

FIGURE 11. How our proposal divides tasks between all tiers of MEC with varying task execution time in the local environment.

access, the remote cloud is better than the edge. With a lower capacity of 900, we can utilize the remote cloud more and thus perform better. However, the situation changes when the access gets slower, and consequently the 3100 capacity solution becomes better. After an access delay of 260 ms, local execution is better than the remote cloud and both conventional solutions completely avoid offloading to the remote cloud as a result, which is why there is a big drop at that point for the 900 capacity solution and why both curves of the conventional solution do not change afterwards. Nonetheless, the proposal again performs the best. As shown in Figure 9, the proposal can recognize the benefits of fast access at the beginning and thus offloads all tasks to the remote cloud. However, even at low levels, accessing the remote cloud costs too much in terms of transmission delay, and the proposal prefers to utilize local execution to complement the cloudlet offloading. Once more, the proposal is the best at balancing the workload between cloudlets and recognizing the best option in all scenarios.

Finally, we now vary how long it takes to execute a single task in the local environment. Results can be seen in Figure 10. Here, the full local solution starts as the best and gradually gets worse as local execution time gets slower. Full remote stays unchanged as it never uses local processing. Full remote is the worst solution at the beginning, but it is surpassed in this regard by full local as its execution time gets slower. Meanwhile, similarly to before, the conventional solution with 900 capacity starts better because it uses the edge less and relies on local execution more, which is better than the edge if local processing time is fast enough. It, however, changes place with the 3100 capacity solution once local execution gets too slow. Moreover, after the 220 ms mark, offloading to the remote cloud becomes better than local execution and both conventional solutions abandon local execution completely, which is why their curves do not change after that point and why there is a big jump in the curve for the 900 capacity solution. Regardless, the proposal stays as the best solution, with the reason being clear in Figure 11. The proposal sees that local execution is the best alternative and avoids offloading completely while local execution time is fast enough. As local execution time gets slower, the proposal starts offloading more and more tasks to the edge. Notably, at 300 ms for local execution, the proposal calculates that using cloudlets more than using local processing is better, which is when the two curves cross in the graph. Additionally, at around the 400 ms mark, local processing is slow enough that the proposal calculates that offloading more to the remote cloud in order to use local processing less is more effective. This is why the curve for the remote cloud starts going up after that point. Finally, as expected, the curve for local processing steadily goes down as local execution time becomes longer, since the proposal can calculate the slower response time from using the local environment. The reason why this is done gradually is because of

FIGURE 10. Service delay for all methods under scenarios with varying task execution time in the local environment.

the tradeoff of a worse transmission delay as more users must [5] T. K. Rodrigues, K. Suto, and N. Kato, “Edge cloud server deployment
with transmission power control through machine learning for 6G Internet
connect to the base station. As before, the proposal is the best of Things,” IEEE Trans. Emerg. Top. Comput., early access, Dec. 31,
at balancing workload in the edge and choosing the best 2019, doi: 10.1109/TETC.2019.2963091.
alternative in all scenarios. [6] T. K. Rodrigues, K. Suto, H. Nishiyama, J. Liu, and N. Kato, “Machine
learning meets computation and communication control in evolving edge
and cloud: Challenges and future perspective,” IEEE Commun. Surv. Tut.,
VI. CONCLUSION vol. 22, no. 1, pp. 38–67, Mar. 2020.
MEC systems are a complex environment with multiple [7] G. Piumatti, F. Lamberti, A. Sanna, and P. Montuschi, “Robust robot
tracking for next-generation collaborative robotics-based gaming environ-
parameters to be taken into consideration. Nonetheless, such ments,” IEEE Trans. Emerg. Top. Comput., vol. 8, no. 3, pp. 869–882,
systems have a wide array of resources organized across mul- Jul.–Sep. 2020.
tiple tiers of the network and, if used properly, MEC can be a [8] F. Lamberti, F. Manuri, A. Sanna, G. Paravati, P. Pezzolla, and P. Montu-
schi, “Challenges, opportunities, and future trends of emerging techniques
platform for a vast range of different applications. In this for augmented reality-based maintenance,” IEEE Trans. Emerg. Top.
paper, we proposed a configuration algorithm for deciding Comput., vol. 2, no. 4, pp. 411–421, Dec. 2014.
when and where to offload tasks for mobile MEC users. This [9] S. Ghanavati, J. Abawajy, and D. Izadi, “Automata-based dynamic fault
tolerant task scheduling approach in fog computing,” IEEE Trans. Emerg.
algorithm not only takes into account the time needed to Top. Comput., early access, Oct. 26, 2020, doi: 10.1109/TETC.2020.
transmit and process the tasks in multiple environments but it 3033672.
also considers execution locally, at edge cloudlets, and at a [10] T. G. Rodrigues, K. Suto, H. Nishiyama, N. Kato, and K. Temma, “Cloud-
lets activation scheme for scalable mobile edge computing with transmis-
remote cloud server. This, plus the extra effort put in its sion power control and virtual machine migration,” IEEE Trans. Comput.,
underlying model to mimic as much as possible real-life vol. 67, no. 9, pp. 1287–1300, Sep. 2018.
MEC scenarios, sets our algorithm apart from other offload- [11] T. G. Rodrigues, K. Suto, H. Nishiyama, and N. Kato, “Hybrid method for
minimizing service delay in edge cloud computing through VM migration
ing decision methods in the literature. The realism and the and transmission power control,” IEEE Trans. Comput., vol. 66, no. 5, pp.
better performance should make our model and proposal a 810–819, May 2017.
foundation for 6G MEC research in the future. [12] T. Alfakih, M. M. Hassan, A. Gumaei, C. Savaglio, and G. Fortino, “Task
offloading and resource allocation for mobile edge computing by deep
As shown in the performance analysis, our proposal is bet- reinforcement learning based on SARSA,” IEEE Access, vol. 8, pp.
ter than the benchmark solutions and a conventional solution 54 074–54 084, Mar. 2020.
from the literature. Comparison with benchmarks shows that [13] D. A. Tran, T. T. Do, and T. Zhang, “A stochastic geo-partitioning prob-
lem for mobile edge computing,” IEEE Trans. Emerg. Top. Comput., early
the possibility of using the edge is essential for delivering a access, Mar. 04, 2020, doi: 10.1109/TETC.2020.2978229.
fast service. However, as the comparison with the conven- [14] Y. Ma, W. Liang, J. Li, X. Jia, and S. Guo, “Mobility-aware and delay-sen-
tional method demonstrates, it is important to offer a flexible sitive service provisioning in mobile edge-cloud networks,” IEEE Trans.
Mobile Comput., early access, Jul. 02, 2020, doi: 10.1109/TMC.2020.300
solution, capable of using the cloudlets more or less depend- 6507.
ing on the scenario and accordingly utilizing other tiers in [15] S. Ma, S. Song, J. Zhao, L. Zhai, and F. Yang, “Joint network selection
the network if the workload is too high. Additionally, this and service placement based on particle swarm optimization for multi-
access edge computing,” IEEE Access, vol. 8, pp. 160871–160881, Sep.
must be done taking careful consideration of processing and 2020.
transmission delays, as anything else will lead to less than [16] I. Hadzic, Y. Abe, and H. C. Woithe, “Server placement and selection for
optimal results. This higher flexibility means that all resour- edge computing in the ePC,” IEEE Trans. Serv. Comput., vol. 12, no. 5,
pp. 671–684, Sep.–Oct. 2019.
ces in the system are efficiently used in our proposal, giving [17] D. Sarddar and E. Nandi, “Optimization of edge server selection technique
it a significant advantage over its counterparts. using local server and system manager in content delivery network,” Int. J.
Grid Distrib. Comput., vol. 8, no. 4, pp. 83–90, Aug. 2015.
ACKNOWLEDGMENTS

This work was supported in part by the Research and Development on Intellectual ICT System for Disaster Response and Recovery and in part by the Commissioned Research of the National Institute of Information and Communications Technology (NICT), Japan. This article has supplementary downloadable material available at https://doi.org/10.1109/TETC.2021.3090061, provided by the authors.
TIAGO KOKETSU RODRIGUES (Member, IEEE) is currently an assistant professor with Tohoku University. His research interests include artificial intelligence, machine learning, network modeling and simulation, and cloud systems. From 2017 to 2020, he was the lead system administrator of the IEEE Transactions on Vehicular Technology, overseeing the review process of all submissions and the submission system as a whole. He is currently an editor of the IEEE Transactions on Vehicular Technology and the IEEE Network.

JIAJIA LIU (Senior Member, IEEE) is currently a professor with the School of Cybersecurity, Northwestern Polytechnical University. He has authored or coauthored more than 180 peer-reviewed articles in many prestigious IEEE journals and conferences. His research interests include wireless mobile communications, FiWi, and the Internet of Things. He is currently an associate editor of the IEEE Transactions on Wireless Communications, an editor of the IEEE Network, and a guest editor of the IEEE Transactions on Emerging Topics in Computing and the IEEE Internet of Things Journal. He is a distinguished lecturer of the IEEE Communications Society.

NEI KATO (Fellow, IEEE) is currently a professor and the dean of the Graduate School of Information Sciences, was the director (2015–2019) of the Research Organization of Electrical Communication, and was the strategic adviser to the President of Tohoku University (2013). His research interests include computer networking, wireless mobile communications, satellite communications, ad hoc, sensor, and mesh networks, UAV networks, smart grid, AI, IoT, big data, and pattern recognition. He has authored or coauthored more than 450 papers in prestigious peer-reviewed journals and conferences. He served as the vice-president (Member and Global Activities) of the IEEE Communications Society (2018) and the editor-in-chief of the IEEE Transactions on Vehicular Technology (2017). He is a Clarivate Analytics Highly Cited Researcher for 2019 and 2020, a fellow of The Engineering Academy of Japan, and a fellow of the IEICE.
