
Decision Analytics Journal 8 (2023) 100295

Contents lists available at ScienceDirect

Decision Analytics Journal


journal homepage: www.elsevier.com/locate/dajour

An optimal fog-cloud offloading framework for big data optimization in heterogeneous IoT networks
Sujit Bebortta a, Subhranshu Sekhar Tripathy b, Umar Muhammad Modibbo c,∗, Irfan Ali d

a Department of Computer Science, Ravenshaw University, Cuttack, 753003, Odisha, India
b School of Computer Engineering, KIIT Deemed to be University, Campus 15 Rd, Chandaka Industrial Estate, Patia, Bhubaneswar, Odisha 751024, India
c Department of Operations Research, Modibbo Adama University, P.M.B. 2076, Yola, Nigeria
d Department of Statistics and Operations Research, Aligarh Muslim University, Aligarh 202 002, India

ARTICLE INFO

Keywords:
Big data optimization
Internet of Things
Integer linear programming
Fog computing
Computation offloading
Energy consumption

ABSTRACT

Executing complex and time-sensitive operations has become difficult due to the increased acceptance of Internet of Things (IoT) devices and IoT-generated big data, which can result in problems with power consumption and lag time. The fog computing layer is a distributed computing option that can handle some of these issues. However, fog computing devices' limited computing and power capabilities make it difficult to complete operations under time constraints while minimizing service latency and fog resource energy consumption in delay-sensitive IoT-Fog applications. This paper suggests a dynamic integer linear programming technique for facilitating optimal task offloading that distributes resources from the fog computing layer to IoT devices while considering the constraints on timely task execution and resource availability. The offloading challenge is modeled as an integer linear programming (ILP) problem to ease the burden on finite fog resources and speed up the completion of time-sensitive operations. Given the large dimensionality of tasks in a dynamic environment, a task prioritization method is adopted to minimize tasks' latency and the energy consumption of fog nodes. The findings demonstrate that the suggested approach performs better than benchmark approaches regarding energy utilization and service latency. Overall, the proposed technique offers an efficient and practical response to the issues raised by IoT devices and fog computing while enhancing system efficiency regarding power consumption and latency.

1. Introduction

One of the most meaningful shifts in human history is happening now: the Internet's permeation into every aspect of life. One could consider it a component of future devices, and technological advancements will be made to address the challenges that may arise in the future. Recent years have witnessed a proliferation of mobile applications driven by the proliferation of smartphones, laptops, wearable technologies, and other connected devices, all of which have contributed to the creation of the Internet of Things (IoT) [1]. IoT applications, such as smart homes, smart healthcare, and smart cities, rely on mobile devices for their effectiveness and portability. However, there are still obstacles to implementing IoT and mobile technology [2]. Since the meteoric rise of wireless data usage makes it impossible for overburdened mobile devices to operate substantial IoT applications while meeting users' requirements for ensuring Quality of Service (QoS), additional infrastructure is needed. In order to enhance the efficacy of mobile computation, Mobile Edge Computing (MEC) has emerged as a practical method to enable devices to offload their computational tasks to a neighboring edge computing device [3,4]. Although the MEC system is not as robust as a remote cloud, it can process IoT data much faster owing to its placement at the network's border and close vicinity to mobile gadgets [5]. Each edge device gets tasks with unique specifications. It is up to that device to determine whether to handle the computation locally or to pass it along to a neighboring edge node or the cloud. The Generalized Assignment Problem (GAP) [6] is an NP-hard version of the Task Assignment Problem (TAP) [7]. Reinforcement learning (RL) and deep reinforcement learning (Deep RL) are two modern approaches that employ machine learning techniques along with dynamic programming to address the GAP [8]. Our investigation presents a multi-layer framework with a passive knowledge transfer mechanism to boost the efficiency of existing task offloading frameworks by adopting integer linear programming (ILP)-based optimization approaches. Fig. 1 demonstrates a complete framework for the processing and analysis of Big Data in a fog-based, cloud-supported model. Data collection, information extraction, feature selection, predictive modeling, and the visualization of data are all components of this set of operations.
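As a concrete illustration of the kind of ILP-based offloading decision discussed above, the sketch below builds a toy binary-assignment model: each task either runs on its fog node or is offloaded to the cloud, minimizing total energy subject to per-task deadlines and a shared fog-capacity budget. All numbers, names, and the brute-force "solver" are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
from itertools import product

# Hypothetical toy instance: x[i] = 0 -> task i runs on its fog node,
# x[i] = 1 -> task i is offloaded to the cloud. All figures are illustrative.
tasks = [
    # (fog_time, fog_energy, cloud_time, tx_energy, deadline)
    (4.0, 8.0, 1.5, 2.0, 5.0),
    (2.0, 4.0, 1.0, 3.5, 2.5),
    (6.0, 9.0, 2.0, 1.5, 3.0),
]
FOG_CAPACITY = 6.0  # total fog compute time available (assumed budget)

def solve_offloading(tasks, fog_capacity):
    """Exhaustively solve the 0/1 ILP: minimize total energy subject to
    per-task deadlines and the shared fog-capacity constraint."""
    best = None
    for x in product((0, 1), repeat=len(tasks)):
        fog_load = sum(t[0] for t, xi in zip(tasks, x) if xi == 0)
        if fog_load > fog_capacity:
            continue  # fog layer over-committed
        if any((t[2] if xi else t[0]) > t[4] for t, xi in zip(tasks, x)):
            continue  # some task misses its deadline
        energy = sum(t[3] if xi else t[1] for t, xi in zip(tasks, x))
        if best is None or energy < best[0]:
            best = (energy, x)
    return best

print(solve_offloading(tasks, FOG_CAPACITY))  # -> (7.0, (1, 1, 1))
```

At realistic problem sizes the exhaustive search would be replaced by a dedicated ILP solver; the point here is only the shape of the decision variables, objective, and constraints.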

∗ Corresponding author.
E-mail addresses: [email protected] (S. Bebortta), [email protected] (S.S. Tripathy), [email protected]
(U.M. Modibbo), [email protected] (I. Ali).

https://doi.org/10.1016/j.dajour.2023.100295
Received 2 May 2023; Received in revised form 29 July 2023; Accepted 31 July 2023
Available online 2 August 2023
2772-6622/© 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).

1.1. Motivations and contributions

The Edge computing paradigm in combination with AI is considered a promising technology in the IoT domain. To efficiently learn in interactive environments and make autonomous decisions, machine learning methods that leverage integer linear programming can be explored. These methods utilize mathematical optimization techniques to model decision-making problems and make optimal choices based on constraints and objectives. While reinforcement learning is an effective approach to learning in interactive environments, integer linear programming offers a complementary method to optimize decision-making and task allocation in the IoT ecosystem. Edge computing task offloading based on ILP is a method for making the most efficient use of available computing resources at the network's periphery. The need to reduce latency in edge computing systems is what inspired its development. In contrast to transferring data to a centralized cloud computing environment, data is processed at or near its point of origin in edge computing. This method has the potential to significantly enhance system performance by decreasing latency. However, computing resources are often scarce in edge networks, and how they are distributed has a major effect on system performance. Optimization techniques, when applied to edge computing, can improve resource management by drawing on prior examples to guide future decisions. As the workload and available resources change in real time, the system can learn how to offload jobs to the best possible computing resources. The system's overall efficiency and user experience can both benefit from this. ILP-based task offloading in edge computing is driven by a desire to maximize efficiency in resource allocation, lower latency, and enhance system performance.

Our article describes an edge-computing architecture in which nodes at the network's periphery take on work and decide whether to handle it locally, pass it along to another node, or forward it to a nearby fog node or to the cloud. We first give a mathematical illustration of the task offloading problem and then propose employing ILP optimization methods to improve efficiency further.

To address the issue of making unbiased decisions when local knowledge is limited, our approach enhances the capabilities of agents by outsourcing the offloading decision to agents at a higher layer. We developed a multi-tier offloading system where the fog devices can be viewed as autonomous agents that can offload to a higher-layer agent in certain circumstances. In this system, the higher-layer agent has a system-wide perspective and can make more informed decisions. The proposed framework discussed in this study is based on an ILP optimization approach that optimizes task allocation based on various attributes such as queuing delay, CPU cycles, and energy consumption. The proposed approach offers an effective method for handling the complexity of job scheduling in dynamic situations while reducing the overall energy consumed by fog nodes and improving the average service-time latency of the tasks.

The fundamental contributions of the paper are stated as follows:

• A problem incorporating cooperative dynamic offloading of tasks between the fog and cloud layers is presented. In contrast to more conventional offloading systems, this approach can modify the way tasks are offloaded based on their current context. In addition, we provide an exhaustive overview of the issue.

• We classify heterogeneous applications into four distinct categories based on their need for a constant data stream. Each application type is also given an appropriate delay, and a novel method is designed to transition offloading jobs from the fog layer to the cloud based on the status of the fog layer.

• The simulation results demonstrate that the latency and energy consumption for offloading tasks in the suggested technique are greatly reduced compared to three existing state-of-the-art methods.

Fig. 1. A fog-based intelligent IoT-supported cloud framework.

Here is how the rest of the paper is structured. The related work is discussed in Section 2. In Section 3, we present the system architecture. Section 4 presents our problem formulation for developing the ILP-based offloading scheme and analyzing its efficiency. In Section 5, we discuss the experiments, and in Section 6, we draw the necessary conclusions.

2. Related work

This section discusses the prevailing state of research in the literature. It describes how to delegate computing tasks to the cloud, computational offloading to nearby devices, and related edge computing activity. To alleviate the processing strain on a device, task offloading can be used to move some or all of the device's computing responsibilities to a remote server or cloud infrastructure. In the context of big data, where data sets tend to be large and complicated, this becomes especially crucial for devices with limited processing power and memory. A machine learning approach called reinforcement learning (RL) can help an agent learn how to make decisions in a given setting through repeated trial and error. The optimization of job offloading decisions in big data systems is a typical application of RL.

Computing-intensive apps cannot be supported on low-powered mobile devices without resorting to cloud offloading, as suggested by Mahmoodi et al. [9]. Zheng et al. [10] explored the multi-user computation offloading issue in mobile cloud computing, where users' online and offline states changed often. Ashok et al. [11] conducted a study on offloading computation tasks pertaining to vehicular applications to a remote cloud computing layer and proposed a dynamic approach towards offloading individual modules or components of vehicular applications. He et al. [12] proposed a new big data structure with three layers: the storage, processing, and application layers. Guo et al. [13] raised how to offload cost-effective computation within the bounds of a time-constrained application completion [14]. The study by Geng et al. [15] investigated issues for facilitating energy-efficient offloading of tasks on mobile devices based on multicore architectures. They offered a new heuristic technique to address offloading and scheduling issues simultaneously. Smart devices, according to [16], must be able to generate at least a portion of their processing power from their energy source. In this research, the authors proposed the Web Worker Offloading Framework (WWOF), which is a versatile offloading framework for the JavaScript Web Worker. This platform allows for a smooth transition of work from browsers to the cloud. For optimal tracking sensor node selection and the resulting route to lower energy consumption, a nature-inspired Bat Algorithm was


presented [17]. To solve both related issues simultaneously, Chen et al. [18] suggested an online algorithm termed Task Offloading and Frequency scaling For Energy Efficiency (TOFFEE). Tran and Pompili [19] considered Joint Optimization of Task Offloading (JOTO) as well as compute resource utilization by formulating the offloading problem as a mixed-integer nonlinear program (MINLP). Hao et al. [20] presented the novel idea of task caching. In order to improve upon prior work, the authors suggested a task cache and offloading (TCO) technique that uses an alternating cycle. Chen and Hao [21] employed a software-defined network to facilitate offloading of jobs in highly dense networks. The offloading problem was formulated as an NP-hard mixed-integer nonlinear program.

Task offloading is a crucial research area in the domain of edge computing and big data [22]. When devices on a network have vastly different amounts of processing capacity or memory, task offloading can help level the playing field. The heart of the matter is that a device's handling capacity does not always meet a task's requirements. The uneven distribution of tasks across the network results in an imbalance, leading to varying burdens on different devices [23]. The goal of offloading tasks in this context is to expeditiously complete a task by moving it from a machine with limited computational capacity to one with sufficient computational power. This ensures that network bandwidth is used effectively and expedites the processing of enormous amounts of data [23]. Consequently, task offloading involves a wide range of IoT gadgets [24], edge servers, and cloud services [25]. By combining their powerful data processing and computing capabilities in a distributed and concurrent fashion, they are able to work together to finish complicated tasks more quickly. This has resulted in task offloading being an essential topic of discussion.

There are four crucial factors affecting task offloading: the Task Processor (TP) to which work is sent for execution, the timing of work handoff, the location of servers, and the processing capabilities of servers. The Task Processor plays a crucial role in determining the recipient of the work. Devices make real-time judgments about the appropriate TP for each task, considering the present network requirements, to optimize system performance. The timing of work handoff is also important; work is handed off when the network is deemed suitable for task offloading [26,27]. The location of servers is another key factor that influences task offloading time. The servers deployed on the fog network incur minimal delay for transmitting a task, while those located at the cloud layer incur a significant time delay to upload a task [26,27]. Additionally, computing delay is affected by the processing power of the server, with greater processing power leading to a reduced delay. However, the quantity of tasks being processed on the server is also important, as overburdened servers have less computational capacity to allocate to new tasks.

Many Internet of Things devices, as in the study of Guo and Liu [28], are placed in locations that require sensing or monitoring in order to fulfill application requirements. Due to their narrow range of communication, IoT gadgets remain anonymous in their assigned roles. Therefore, [28] adopted a strategy whereby Unmanned Aerial Vehicles (UAVs) traverse the geographical area where the IoT devices are located as flying edge servers, allowing IoT devices to offload tasks. As a result, the development and functionality of task offloading schemes are profoundly affected by the identification of network parameters. In what follows, we focus exclusively on this component of the investigation. An offloaded task has complex requirements that span multiple dimensions, including time, space, and processing power. Even hosts have more than one dimension. Therefore, delegating the work to matching servers is a viable option. A matching-theoretic offloading technique was consequently proposed by Xu et al. [29].

To achieve optimal task processing across the Thing-Edge-Cloud architecture, load balancing is paramount. A Task Type-based Computation Offloading (TTCO) strategy decision-making technique that takes into account the nature of the tasks has been suggested by Yu et al. [30] to meet the demand for optimal workload handling throughout the Thing-Edge-Cloud architecture. It innovatively categorizes tasks as data-intensive or computation-sensitive, offloading the former to nearby edge servers and uploading the latter to the cloud for execution. This method makes use of the cloud layer's computational resources to conduct operations demanding high computing power, resulting in a smaller transfer burden on the network's infrastructure and smaller distribution delays. Furthermore, the algorithm has been observed to attain Nash equilibrium among the three layers. Their experiments have demonstrated that Yu et al. [30]'s algorithm can provide significant performance gains at a reasonable computational cost, making it a viable solution for load balancing in multi-layer systems.

Revolutionizing efficient use of power along with load balancing in wireless sensor networks is possible through mobile data collection, which has tremendous potential. However, the intricacies of navigating complex network environments can present a significant hurdle. To tackle this problem, Liu et al. [31] introduced a technique called objective-variable tour planning (OVTP), which is custom-built for partitioned WSNs. Their approach to handling complex network environments involves a converging-aware location selection mechanism that identifies the best rendezvous points (RPs) for a short tour.

An innovative machine-learning approach has been developed to manage task offloading in uncertain networks [32]. However, machine learning focuses on grasping the objects that require task offloading to adapt and adjust to the network environment, posing challenges in the Internet of Vehicles (IoV). MVs often encounter new networks, hindering their ability to learn from experience and achieve favorable outcomes in uncertain network scenarios. Therefore, the approach's effectiveness relies on MVs learning multiple times in the same network, which can be a challenging task for many of them, leading to a decrease in their applications' efficiency [32].

The computation offloading technique can prolong battery life and reduce the execution time of computing tasks [33]. However, previous schemes do not differentiate between task types, making it unreasonable to offload all application tasks into the cloud. Therefore, a three-layer task offloading framework named DCC is proposed [34], consisting of the device, cloudlet, and cloud layers. This approach allows for an effective reduction of processing delay by avoiding large data transmissions to the cloud. It has been shown to achieve high performance compared to state-of-the-art computational offloading techniques by implementing a facial recognition system [34]. Edge devices such as smartwatches and smartphones are increasingly becoming more potent and budget-friendly. However, machine learning applications require additional computational resources, which ultimately results in increased energy consumption. To overcome this challenge, offloading jobs to co-located edge nodes such as a fog or femto-cloud can be a viable solution for managing energy- and compute-intensive tasks. With edge computing, which includes smartphones, health sensors, and wearables, it is possible to detect symptoms and confine potential COVID-19 carriers. Furthermore, edge computing employing machine learning can deliver intelligent and opportunistic healthcare (oHealth). In healthcare and safety applications, the authors utilize kNN, NB, and SVC algorithms on an actual data trace. The empirical results provide insight into edge computing machine learning-based task offloading [35].

The intelligent collaborative inference (ICI) approach proposed by Li et al. [36] enables intelligent service partitioning and partial job offloading for computation-intensive services in real-time MEC networks. As the demand for machine learning algorithms grows with the increasing amount of data and processing capabilities, businesses are focusing on deep-learning-based industries. In this study, the researchers examine a Pose-Net model-based service for human stance estimation in the computer vision domain. To test the ICI method, they simulate realistic parameters on hardware platforms using the Python programming


language as well as the TensorFlow library. The ICI strategy provided in this study outshines traditional methods in terms of client energy usage and service frame rate, as demonstrated by their experiments.

Chen et al. [37] have undertaken a study of dynamic MEC task offloading scheduling. Their proposal is a hybrid energy supply strategy that utilizes energy harvesting in IoT devices. In order to reduce system costs, they optimized local computing, offloading duration, and edge computing. DTOME is an online dynamic task offloading solution for MEC that is based on stochastic optimization and a hybrid energy supply. Their approach, which takes into account system cost and queue stability, enables DTOME to offload tasks with ease. The best task offloading approach is achieved through dynamic programming theory. Their simulations demonstrate that DTOME is not only effective, but also more economical than two baseline task offloading techniques.

The benefits of offloading processing and storage to nearby gadgets, a nearby fog network, or a remote cloud can be used by IoT applications [38]. Context-sensitive offloading can assist IoT-enabled services in fulfilling their performance requirements, as claimed by Bajaj et al. ?. Comparing the distinctiveness and performance of frameworks like MAUI, AnyRun Computing (ARC), AutoScaler, Edge computing, and the Context-Sensitive Model for Offloading System (CoSMOS) against EMCO, MobiCOP-IoT, the Autonomic Management Framework, CSOS, and the Fog Computing Framework can aid in proposing future offloading scenarios. Proposals for future offloading strategies are based on the results and limitations of the frameworks. Teng et al. [39] introduced the concept of Flex-MEC, a task offloading architecture that optimizes MEC server task allocation and scheduling (TAS). Flex-MEC reduces latency by sending data to the server for execution after planning, rather than sequentially transmitting, planning, forwarding, and executing. The multi-server multi-task allocation and scheduling (MMAS) problem, which aims to maximize the MEC system profit for TAS planning, is NP-complete and therefore poses a challenge. To tackle this challenge, the MMAS problem is transformed into a non-cooperative game by implementing a distributed strategy. The Nash Equilibrium (NE) is then validated, and an efficient response update algorithm that converges to the NE with low complexity is proposed. On the other hand, the centralized method employs a MEC controller and is driven by a greedy strategy. A series of experiments have demonstrated that these two approaches surpass the others.

3. System model

The IoT computing system that is the subject of this study is depicted in Fig. 2. The system is made up of a number of IoT devices, n fog nodes, and a remote cloud server. Each piece of industrial equipment produces a sizable amount of data and uses a wireless access point to link to a nearby fog node. It is assumed that IoT devices will send their acquired jobs to the nearby fog node for additional processing due to the constrained computing capability and power of those devices. The computation tasks, uploaded by the neighboring devices, are processed by the fog node, which subsequently sends the task-processing results back to the terminal devices.

Fig. 2. The overview of the proposed framework.

Fog nodes must choose the number of tasks to offload to the cloud server because they have limited computational and storage resources, particularly for computation-intensive and time-sensitive operations. In these situations, the data are sent to the cloud server over a fiber-optic link to enhance the user experience and system performance. It is crucial to keep in mind that each task can be broken down into as many tiny subtasks as feasible, which can help to optimize processing time and enhance system efficiency. This study aims to provide valuable insights into the design and optimization of computing systems for the IIoT to enhance their efficiency and performance.

The computing work that a fog node i receives in this study is represented by a tuple (t_i, n_i), where t_i denotes the task's size (in bits) and n_i denotes the number of CPU cycles needed to process a single bit [40,41]. The symbol α_i is used to denote the proportion of the offloaded task's size to the size of the entire task. It should be highlighted that α_i must meet the requirement 0 ≤ α_i ≤ 1. Additionally, the parameters ℰ_i and τ_i stand for the target energy consumption and completion time of fog node i, respectively. There are two primary modes of operation when a fog node processes the computing tasks it receives. In the first, all compute duties are handled locally within the fog node, whereas in the second, all work is offloaded to the cloud server. It should be emphasized that the subsequent sections of this study provide extensive elaboration on the computing time and energy efficiency pertaining to both the local processing mode and the offloading processing mode.

3.1. Local fog computing model

Fog node i's CPU computing speed (measured in cycles per second) is given by c_i^loc. It is assumed that all fog nodes have equivalent computing power. If the parameter n_i represents the computation tasks processed locally at fog computing node i, the corresponding computation time τ_i^loc, according to [41–43], can be expressed as follows:

τ_i^loc = n_i t_i / c_i^loc    (1)

The expression in Eq. (1) reveals that the completion time corresponding to local computing hinges on both the computation task's size and the computational ability of the fog node. Moreover, we introduce a new variable ε_i to signify the energy usage by the CPU for one computation cycle at fog node i. This allows us to define the energy consumption ε_i^loc of computation task t_i as follows:

ε_i^loc = n_i ε_i t_i    (2)

3.2. Cloud computing model

The cloud server was built with the intention of giving fog nodes more computing power. Fog node i sends compute jobs to the cloud server using wireless communication channels to leverage these resources. The cloud server then carries out the calculations required for task t_i and sends the outcomes back to the fog computing node i.

We introduce the idea of the data transmission rate for fog node i to describe the communication process. The speed at which data can be delivered between fog node i and the cloud server is represented by this rate. Given that the transmission rate influences the overall
bits) and 𝑛𝑖 denotes the number of CPU cycles needed to process a by this rate. Given that the transmission rate influences the overall


effectiveness and performance of the fog computing system, it is a crucial metric to take into account and can be given as [41,43],

T_i^fog = B log2(1 + p_i^tx g_i² / (N_0 B))    (3)

The wireless link's bandwidth B, the fog node's transmission power p_i^tx, the wireless channel gain g_i between the fog node and the cloud server, and the channel noise power's spectral density N_0 can all be taken into account when calculating the transmission rate pertaining to the i-th fog node. The transmission rate of the data between the i-th fog node and the cloud computing server is greatly influenced by these characteristics.

The magnitude of the computing task and the amount of time needed for transmission can also be taken into account when calculating the data transmission rate: T_i^fog = t_i / τ_i^tx, where τ_i^tx represents the number of time slots needed for fog node i to offload task t_i; the rate can thus be described as the ratio of task size to transmission time.

As a function of the data transmission rate, the time cost of sending computation job t_i can be stated as τ_i^tx = t_i / T_i^fog. The performance and efficiency of the fog computing system as a whole are impacted by the time required to transmit the computation tasks between the fog computing devices and the cloud server.

The following is an expression for the time cost needed to complete the computation task t_i on the cloud server. This quantity shows how long it takes the cloud server to complete the computation work.

τ_i^exe = n_i t_i / c_i^cloud    (4)

where c_i^cloud represents the cloud server's capacity for computation. The computational output size is frequently much smaller than the input size for many applications, so the time needed to relay the calculation result back to the fog node can be regarded as negligible. As a result, the following equation may be used to define the overall processing time for the calculation task t_i:

t_i = max(t_i^loc, t_i^cloud) + t_i^tx    (5)

Here, t_i^loc stands for the amount of time spent locally computing on fog node i to complete the task t_i, t_i^cloud for the cost of time spent completing the task on the cloud server, and t_i^tx for the amount of time spent transmitting the computation task t_i from fog node i to the cloud computing servers.

The cloud server's processing time and the transmission time for offloaded data are combined to form the expression for the completion time τ_i^cloud as,

τ_i^cloud = τ_i^tx + τ_i^exe = t_i / T_i^fog + n_i t_i / c_i^cloud    (6)

It is crucial to predominantly take into account the completion time, which can be given as the sum of the local computing time, the time cost for cloud

where N_0 is the spectral density of the channel noise power and B is the bandwidth of a wireless link. With respect to x values greater than zero, this function increases and is convex. In other words, the value of f(x) increases as x increases; additionally, f(x) grows with respect to x at an increasing rate. Convexity is the term for this characteristic of the function. In fog computing systems, the convexity of the function is crucial because it aids in determining the best distribution of compute workloads between fog nodes and the cloud server while taking into account the transmission time and the channel noise power.

Eq. (7), which gives the energy consumption of fog node i above, can be rewritten in light of Eq. (8) as,

ε_i^cloud = (t_i / T_i^fog) f(t_i / (g_i² τ_i^tx)) = τ_i^tx f(t_i / (g_i² τ_i^tx))    (10)

To maximize computation offloading in a fog-based environment, the two processing modes, local computing at the fog devices and offloading to the cloud computing server, can be combined. To be more precise, we may create a computation offloading optimization model that enables some computation tasks to be performed locally at the fog computing nodes while offloading the remaining tasks to the cloud server for processing.

By considering a number of variables, including the computational capabilities of fog nodes, data transmission rates, and the spectral density of the channel noise power, this optimization model can be used to determine the best allocation of computation tasks among the fog computing nodes and the cloud server. We can improve the fog computing system's overall speed and efficiency while reducing completion times and energy usage by optimizing the computation offloading procedure.

4. Problem formulation

The IoT-fog computation offloading optimization challenge is formulated in this section. This optimization problem's main goal is to reduce all fog nodes' energy usage while making sure that their task completion times are within acceptable bounds. We must take into account a number of variables in order to accomplish this goal, including the fog nodes' computational power, data transfer speeds, and the energy needed for computation and communication. We can reduce the system's energy consumption while retaining the anticipated completion time by optimizing the offloading of computation jobs between fog nodes and the cloud server.

The optimization problem is presented as a mathematical model that incorporates the system's restrictions and goals. We can determine the best distribution of compute workloads between the fog devices and the cloud server by addressing this optimization problem using the proper algorithms and approaches, which reduces the system's energy
consumption while achieving the anticipated completion time.
server execution, and transmission time, in order to establish the best
According to the definition given in Section 3, the total energy
distribution of compute workloads between fog nodes and the cloud
consumption of the fog node can be written as a function of 𝛼𝑖 when
server. We can increase the effectiveness and performance of the fog
computing system by reducing the overall completion time. the offloading ratio of the fog computing node 𝑖 is equal to 𝛼𝑖 . A
Similarly, it is possible to determine how much energy is used when mathematical term that takes into account the energy consumed by
transmitting compute job 𝑡𝑖 from the fog node 𝑖 as, both local and offloaded compute processes specifically represents the
overall energy consumed by fog node 𝑖. To be more specific, it is
𝑡𝑖
𝜀𝑐𝑙𝑜𝑢𝑑
𝑖 = 𝑝𝑡𝑥 𝑡𝑥
𝑖 𝜏𝑖 = 𝑝𝑡𝑥
𝑖 (7) possible to model energy consumption of 𝑖th fog nodes as a function of
𝑇𝑖𝑓 𝑜𝑔 𝛼𝑖 that accounts for the energy used for both local processing and data
The transmission power 𝑝𝑡𝑥 transfer to and from the cloud server. This mathematical expression
𝑖 can be represented as [42],
( ) enables us to identify the ideal offloading ratio 𝛼𝑖 that optimizes the
𝑡𝑥 1 𝑓 𝑜𝑔
𝑝𝑖 = 𝑓 𝑇𝑖 (8) fog server’s overall energy usage while maintaining the required level
𝑔𝑖2 of compute task completion time.
Here, there are two possible expressions for the transmission rate For each potential offloading ratio, we may assess the energy con-
𝑅𝑖 and the parameter 𝑝𝑡𝑥
𝑖 from Eq. ((8) also
) appears in Eq. (3). sumption of fog node 𝑖 by altering the value of 𝛼𝑖 within the specified
The formula for the function 𝑓 𝑇𝑖𝑓 𝑜𝑔 is given as, range. With this method, we can determine the offloading ratio that
meets the anticipated completion time with the least amount of energy
( )
𝑓 (𝑥) = 𝑁0 𝐵 2𝑥∕𝐵 − 1 (9) consumption. With the help of this optimization strategy, we can keep

S. Bebortta, S.S. Tripathy, U.M. Modibbo et al. Decision Analytics Journal 8 (2023) 100295

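The per-task timing and energy relations in Eqs. (4)–(10) can be sketched numerically as follows. This is only an illustrative sketch: all parameter values below are placeholders of roughly the magnitudes used later in the simulation setup, not values prescribed by the model.

```python
# Illustrative sketch of Eqs. (4)-(10); all parameter values are placeholders.
B = 100e6        # channel bandwidth, bit/s
N0 = 1e-10       # noise power spectral density term, W
g2 = 1.0         # squared channel gain, g_i^2
n_i = 1000       # CPU cycles needed per bit of task data
t_i = 500e3      # task size, bits
c_cloud = 2e8    # cloud computation capacity, cycles/s
tau_tx = 0.02    # transmission time slot chosen for node i, s

def f(x):
    """Eq. (9): power needed to sustain rate x over bandwidth B."""
    return N0 * B * (2 ** (x / B) - 1)

T_fog = t_i / tau_tx              # data transmission rate, T_i^fog
tau_exe = n_i * t_i / c_cloud     # Eq. (4): cloud execution time
tau_cloud = tau_tx + tau_exe      # Eq. (6): completion time when offloaded
p_tx = f(T_fog) / g2              # Eq. (8): required transmission power
eps_cloud = p_tx * tau_tx         # Eq. (7): transmission energy
# Eq. (10) gives the same energy written in terms of tau_tx directly:
assert abs(eps_cloud - (tau_tx / g2) * f(t_i / tau_tx)) < 1e-12
```

Note that enlarging $\tau_i^{tx}$ lowers the required rate and hence, through Eq. (9), the transmission power exponentially; this is the lever the optimization below exploits.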
$$\mathcal{F}\left(\alpha_i, \tau_i^{tx}\right) = \varepsilon_i^{loc}\left(1 - \alpha_i\right) + \varepsilon_i^{cloud} \alpha_i + \Delta\varepsilon_i \tag{11}$$

$\varepsilon_i^{loc}$ is the parameter used to describe the energy used by a fog node to perform a computation locally. Fog node $i$'s energy consumption in its idle state is denoted by the symbol $\Delta\varepsilon_i$, which is equal to $\Delta\varepsilon_i = p_i^{fog} \left| \tau_i^{cloud} \alpha_i - \tau_i^{loc} \left(1 - \alpha_i\right) \right|$. Here, $p_i^{fog}$ depicts the power pertaining to fog node $i$ in its idle state, $\tau_i^{cloud} \alpha_i$ the computation time of the task $t_i$'s offloaded portion, and $\tau_i^{loc}$ the computation time of the task $t_i$'s local portion.

This model's major objective is to reduce the fog nodes' energy consumption while keeping the anticipated time delay [40,41,43,44]. The optimization model considers issues with restricted channel bandwidth and computing power, as well as issues with fog node energy consumption and task completion times [45]. The suggested model provides a comprehensive way to improve the overall effectiveness of the IIoT system by tackling these aspects simultaneously. We construct the optimization problem for our offloading framework as follows:

$$\min_{\alpha_i, \tau_i^{tx}} \mathcal{F}\left(\alpha_i, \tau_i^{tx}\right) = \min_{\alpha_i, \tau_i^{tx}} \sum_{i=1}^{n} \left[ \varepsilon_i^{loc}\left(1 - \alpha_i\right) + \varepsilon_i^{cloud} \alpha_i + \Delta\varepsilon_i \right] \tag{12a}$$

$$\text{s.t.} \quad \max\left\{ \tau_i^{loc}\left(1 - \alpha_i\right), \tau_i^{cloud} \alpha_i \right\} \leq \mathcal{T}_i, \tag{12b}$$

$$\varepsilon_i^{loc}\left(1 - \alpha_i\right) + \varepsilon_i^{cloud} \alpha_i + \Delta\varepsilon_i \leq \mathcal{E}_i, \tag{12c}$$

$$\sum_{i=1}^{n} \frac{t_i}{\tau_i^{tx}} \leq B, \tag{12d}$$

$$\sum_{i=1}^{n} \alpha_i n_i t_i \leq C, \tag{12e}$$

$$0 \leq \alpha_i \leq 1. \tag{12f}$$

The goal of the above optimization model is to reduce the overall energy used by all fog nodes while still guaranteeing that the jobs are finished within the anticipated time frame. Several constraints are incorporated into the optimization model to control energy consumption and to ensure that the channel bandwidth and the computation resources are not exceeded.

When processing computation tasks, the objective function (12a) aims to reduce the overall energy consumption of all fog nodes. The work must be finished within the specified delay $\mathcal{T}_i$, which is guaranteed by constraint (12b). The energy consumed by fog node $i$, including the energy used for performing local computation, transmission, and waiting, is constrained by constraint (12c) to be less than the target energy consumption $\mathcal{E}_i$. The usage of channel bandwidth and compute resources is limited by constraints (12d) and (12e), respectively. The offloading ratio for each computation task is constrained by constraint (12f).

Algorithm 1 describes the proposed optimization procedure for task offloading and resource allocation in this study. For each fog device, the communication network and underlying network settings are initialized. Each IoT device chooses an offloading site at random from the set of $i$th fog nodes during the first time slot. The minimal overall energy usage is calculated subject to constraints such as $\mathcal{T}_i$, $\mathcal{E}_i$, $B$, and $C$ after the initial resource allocation strategy, service response delay, and energy consumption are computed.

Integer Linear Programming (ILP) is used for the task of offloading to fog. The process for choosing the best job distribution and resource utilization in a fog computing environment is visually represented by a flowchart in Fig. 3. Fog computing is the paradigm of offloading compute duties from resource-constrained IoT devices to neighboring fog nodes, which have more computing power. To effectively distribute jobs across fog nodes, the flowchart makes use of ILP to facilitate optimization, as discussed above. It tries to ensure optimal resource utilization while reducing latency, energy use, and network congestion.

Algorithm 1: Proposed optimization algorithm for Fog-Cloud offloading.
    Input:  $n_i$, $t_i$, $\varepsilon_i$, $c_i^{loc}$, $c_i^{cloud}$, $t_i^{tx}$, $t_i^{loc}$, $t_i^{cloud}$.
    Output: $\tau_i^{loc}$, $\varepsilon_i^{loc}$, $\varepsilon_i^{cloud}$, $\tau_i^{exe}$, $\tau_i^{cloud}$.
    1   Estimate computation time:
    2   for $i$ do
    3       $\tau_i^{loc} = n_i t_i / c_i^{loc}$
    4       $\tau_i^{exe} = n_i t_i / c_i^{cloud}$
    5       $\tau_i^{cloud} = \tau_i^{tx} + \tau_i^{exe} = t_i / T_i^{fog} + n_i t_i / c_i^{cloud}$
    6   end
    7   Estimate energy consumed at each layer:
    8   for $i$ do
    9       $\varepsilon_i^{loc} = n_i \varepsilon_i t_i$
    10      $\varepsilon_i^{cloud} = (t_i / T_i^{fog})\, p_i^{tx}$
    11      $\varepsilon_i^{cloud} = (\tau_i^{tx} / g_i^2)\, f(t_i / \tau_i^{tx})$
    12  end
    13  Minimize:
    14      $\mathcal{F}(\alpha_i, \tau_i^{tx})$
    15  return $\mathcal{F}(\alpha_i, \tau_i^{tx})$;
    16  exit

Fig. 3. Illustration of the flowchart pertaining to the proposed offloading framework.

The flowchart begins with the input phase, where the user provides information such as the threshold characteristics of the fog nodes (e.g., processing capacity, energy availability), task size, and parameters like connectivity and bandwidth, as illustrated in Algorithm 1. Next, the flowchart proceeds to the ILP modeling phase, where the problem is formulated as an ILP model. This involves defining the decision variables, the objective function, and the constraints. Decision variables could include task assignment, resource allocation, and communication paths. Once the ILP model is formulated, the flowchart advances to the optimization phase. The ILP solver is invoked to find the optimal solution that minimizes the objective function in Eq. (12) while satisfying the defined constraints.


Table 1
Summary of simulation parameters for experimental setup.
Parameters Value
Fog nodes [5, 25]
Speed of fog nodes 2e6 Cycles/s
No. of cores 4
Maximum latency of fog nodes 600 ms
Energy consumption 1.5 Joules
Cloud Server 1
No. of cores 16
Operating system Cloud Linux
Architecture X86
Speed of cloud server 2e8 Cycles/s
Bandwidth 100 Mbps
Clock speed of compute nodes 8 GHz
Average task size (in KB) [100, 1000]

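The trade-off captured by the model in Eq. (12) can be illustrated for a single fog node before turning to the results. The sketch below replaces the ILP solver with a plain grid search over the offloading ratio, uses parameter magnitudes in the spirit of Table 1, and relaxes the delay bound so that this one task is feasible; it is a toy illustration, not the paper's solver.

```python
# Toy single-node version of Eq. (12): choose the offloading ratio alpha that
# minimizes total energy subject to a delay bound. A grid search stands in
# for the ILP solver; all values are illustrative.
N0, B = 1e-10, 100e6        # noise density (W), bandwidth (bit/s)
g2, n_i = 1.0, 1000         # squared channel gain, cycles per bit
eps_i = 1e-8                # energy per cycle of local computing (J)
p_fog = 1e-3                # idle power of the fog node (W)
c_loc, c_cloud = 2e6, 2e8   # fog / cloud speeds (cycles/s)
t_i = 500e3                 # task size (bits)
tau_tx = 0.05               # fixed transmission slot (s), an assumption here
T_max = 10.0                # relaxed delay bound so the sketch is feasible

def f(x):                   # Eq. (9)
    return N0 * B * (2 ** (x / B) - 1)

def energy_and_delay(alpha):
    tau_loc = (1 - alpha) * n_i * t_i / c_loc         # local computing time
    tau_cloud = tau_tx + alpha * n_i * t_i / c_cloud  # offloaded part, Eq. (6)
    eps_loc = (1 - alpha) * n_i * eps_i * t_i         # local computing energy
    # transmission energy for the offloaded fraction (sketch assumption):
    eps_cloud = alpha * (tau_tx / g2) * f(alpha * t_i / tau_tx)
    delta = p_fog * abs(tau_cloud - tau_loc)          # idle-wait energy
    return eps_loc + eps_cloud + delta, max(tau_loc, tau_cloud)

grid = [a / 100 for a in range(101)]                  # alpha in [0, 1]
best = min(grid, key=lambda a: energy_and_delay(a)[0]
           if energy_and_delay(a)[1] <= T_max else float("inf"))
```

For this parameter set, local cycles are so slow and costly that full offloading wins the search; tightening the delay bound or cheapening local computation moves the optimum ratio inward.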
5. Results and discussions

This section presents and discusses the simulation setup and the corresponding results, considering a fixed range of fog nodes and a cloud computing server.

5.1. Simulation setup

In an IoT system, we consider a fixed range of [5, 25] fog nodes and a single cloud computing server [40,41,43]. In a case where the voltage is fixed, the computation rates of the fog nodes, denoted by $c_i^{loc}$, are uniformly set to $c_i^{loc} = 2 \times 10^6$ cycles/s for $i \in \{1, 2, \ldots, 10\}$. The cloud's processing speed is $c_i^{cloud} = 2 \times 10^8$ cycles/s. We set $B$ to be equal to 100 Mb/s, $C$ to be 8 GHz, $N_0$ to be $10^{-10}$ W, $g_i^2$ to be 1, $n_i$ to be 1000 cycles/bit, $\varepsilon_i$ to be $10^{-8}$ Joules/cycle, and $p_i^{fog}$ to be $10^{-3}$ W. All fog nodes have uniform computation task sizes, which range from 100 to 1000 Kb. The targeted latency $\mathcal{T}_i$ and energy consumption $\mathcal{E}_i$ are 600 ms and 1.5 Joules, respectively (see Table 1).

Fig. 4. Comparison of the average computations for each task and the delay for TOFFEE, JOTO, TCO, and the proposed framework.

Fig. 5. Comparison of the average computations for each task and the energy consumption for TOFFEE, JOTO, TCO, and the proposed framework.

Fig. 6. Comparison of average task size and delay for TOFFEE, JOTO, TCO, and the proposed framework.

The graphs in Figs. 4 and 5 demonstrate a strong association between a task's mean computation quantity, its delay, and its energy cost. Compared to tasks requiring less processing, those requiring more computation take longer to perform and use more energy. In terms of task offloading delays and energy usage, the proposed offloading method beats three competing benchmark offloading systems, namely TOFFEE [18], JOTO [19], and TCO [20]. This is because the proposed model carefully takes into account the computation volume and data size of the task content in order to optimize both task delay and energy cost. As a result, the proposed strategy offers an offloading solution that is optimized and strikes a balance between compute and communication overhead, resulting in enhanced performance.

According to the findings displayed in Figs. 6 and 7, there is a direct relationship between a task's data size and both its energy use and delay. In particular, compared to tasks with smaller data sizes, tasks with larger data sizes typically take longer to process and use more energy. Additionally, with fewer task delays and less energy use, our suggested offloading system outperforms other benchmark offloading strategies like TOFFEE, JOTO, and TCO. This implies that our suggested approach is more successful in reducing the detrimental effects of big data quantities on job duration and energy usage. Our suggested strategy provides an effective solution that improves overall performance pertaining to task length and energy usage by optimizing the offloading technique based on both data size and computation amount.

In this experiment, we looked at how the performance of the suggested algorithm would change if there were more jobs while maintaining the number of fog nodes constant at five. The findings shown in Figs. 8 and 9 demonstrate how the burden, which rises as the number of tasks rises, affects the system's performance. The delay and energy consumed by fog computing nodes were observed to grow as the number of jobs grew, resulting in longer service times for the tasks run on these nodes. This result suggests that the workload and resource availability considerably affect the performance of the proposed algorithm, since an increase in the number of jobs has a major impact on the system's overall performance. To guarantee that fog computing systems operate effectively and efficiently, it is essential to take these characteristics into account while building and implementing them.


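A minimal, hypothetical sketch of the load-aware placement idea discussed in this section, where each arriving task goes to the currently least-loaded fog node instead of a fixed or energy-only choice, can be written as a greedy heap-based balancer. This is a stand-in illustration, not the paper's ILP procedure.

```python
import heapq

def assign_tasks(task_sizes, num_nodes):
    """Greedy load balancing: each task goes to the currently least-loaded
    fog node. A stand-in for the ILP assignment, not the paper's method."""
    heap = [(0.0, n) for n in range(num_nodes)]   # (accumulated load, node id)
    heapq.heapify(heap)
    assignment = []
    for size in task_sizes:
        load, node = heapq.heappop(heap)          # least-loaded node so far
        assignment.append(node)
        heapq.heappush(heap, (load + size, node))
    makespan = max(load for load, _ in heap)      # busiest node's total load
    return assignment, makespan

# More fog nodes -> lower per-node load, the trend seen in Fig. 10.
tasks = [300, 500, 200, 700, 400, 600, 100, 800]  # task sizes in KB, illustrative
_, m5 = assign_tasks(tasks, 5)
_, m10 = assign_tasks(tasks, 10)
assert m10 <= m5
```

With this sample workload, five nodes leave 1200 KB of work queued on the busiest node, while ten nodes cut that to 800 KB, mirroring the delay drop observed as fog nodes are added.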
Fig. 7. Plot for comparison of average task size and the energy consumption for TOFFEE, JOTO, TCO, and the proposed framework.

Fig. 8. Comparison of the number of tasks and the delay for TOFFEE, JOTO, TCO, and the proposed framework.

Fig. 9. Comparison of the number of tasks and the energy consumption for TOFFEE, JOTO, TCO, and the proposed framework.

Fig. 10. Comparison of the different number of fog nodes and the delay for TOFFEE, JOTO, TCO, and the proposed framework.

The suggested method has a shorter delay time when compared to the other algorithms examined in the study, as seen in Fig. 8. This is because, in contrast to the JOTO algorithm, the proposed algorithm evaluates the load on the chosen fog node before carrying out a particular job associated with that node. This technique can reduce latency by evenly distributing the workload among the fog nodes and avoiding overloading any one node, which eventually improves overall performance in terms of job completion time. These findings emphasize the significance of taking computational task offloading into account when designing and implementing fog computing systems, as it can have a significant impact on their performance. Additionally, Fig. 9 demonstrates that the proposed model uses less energy than the other algorithms examined in the study. This is because the proposed framework was specifically created to decrease the IoT-Fog system's energy usage and service delay by optimizing the offloading mechanism based on both the data size and the computation quantity of the task. The proposed approach demonstrated a reduction in energy usage, highlighting its potential to offer an effective solution that enhances the general performance of fog computing systems.

The goal of this experiment was to determine how the system performs when the number of fog computing nodes is increased from 5 to 25 while keeping the number of jobs constant at 100. The results shown in Fig. 10 demonstrate that the delay time decreases as the number of fog nodes rises. This decrease in the delay is observed as a consequence of the increase in the number of fog nodes; hence, more resources become available, reducing the strain on each node and ultimately resulting in lower latency periods. Our suggested approach chooses the most appropriate fog node to lower the load on each node and minimize delay, in contrast to previous offloading algorithms that merely choose the fog node with the lowest energy usage. With this strategy, resources are allocated more effectively, which ultimately reduces delay times.

As shown in Fig. 11, the suggested approach also results in a decrease in energy usage. The proposed method greatly lowers the system's overall energy usage by taking into account the workload on each node and optimizing job distribution. These results highlight the significance of resource offloading and intelligent deep reinforcement learning-based resource allocation in the design and development of fog computing systems. The findings imply that optimizing resource allocation and offloading workloads strategically can boost system performance and energy efficiency.

6. Conclusion and future work

With the increased flow of data and information, cloud computing has grown in popularity. The related method of "fog computing" has a number of benefits, including affordability, network bandwidth, and low latency. In order to solve the issues of energy consumption and average service latency of jobs in a fog computing environment, this research presents a Dynamic Fog-Cloud-based Task Offloading technique for managing IoT-generated big data. Performance measures including network delay and energy usage were identified in order to address the high dimensionality problem of jobs in a dynamic IoT-fog environment.


Table 2
Comparison of mean computations per task (in megacycles) versus delay (in s) for TOFFEE, JOTO, TCO, and the proposed approach.
Mean computations per task (×10⁶ cycles)   TOFFEE   JOTO   TCO   Proposed
0.50 9.4133 10.0144 10.6891 8.1112
1.00 10.8901 11.1263 11.9825 10.0141
1.50 13.0001 13.6824 14.0443 12.0021
2.00 15.1205 16.0181 16.9851 14.6217
2.50 16.4286 17.1124 18.0141 15.8991

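As a quick sanity check on the tabulated results, the mean delay of each scheme in Table 2 and the relative reduction achieved by the proposed framework can be recomputed directly from the table's values:

```python
# Delay columns of Table 2 (in s), rows = 0.50 to 2.50 megacycles per task.
table2 = {
    "TOFFEE":   [9.4133, 10.8901, 13.0001, 15.1205, 16.4286],
    "JOTO":     [10.0144, 11.1263, 13.6824, 16.0181, 17.1124],
    "TCO":      [10.6891, 11.9825, 14.0443, 16.9851, 18.0141],
    "Proposed": [8.1112, 10.0141, 12.0021, 14.6217, 15.8991],
}
means = {k: sum(v) / len(v) for k, v in table2.items()}
best_baseline = min(means[k] for k in ("TOFFEE", "JOTO", "TCO"))
reduction = 100 * (best_baseline - means["Proposed"]) / best_baseline
# The proposed framework's mean delay is ~12.13 s, about 6.5% below the
# best baseline (TOFFEE, ~12.97 s).
```

The same arithmetic applies unchanged to Tables 3–9.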
Fig. 11. Comparison of different number of fog nodes and the energy consumption for TOFFEE, JOTO, TCO, and the proposed framework.

The proposed model was developed by shifting tasks generated at the IoT layer to the subsequent fog-cloud layers using an integer program-based optimization problem formulation. Experimental results showed improved service time and energy utilization when the suggested offloading technique was compared to existing benchmark algorithms. Through the scheduling of tasks that are offloaded to fog nodes, reinforcement learning has been used to reduce energy consumption and task latency. The integer programming approach considered the priority weights of tasks to start the incentive mechanism and prioritized activities based on their characteristics, including queuing time, CPU cycles, energy usage, and latency. The results showed that the proposed model was beneficial in improving system performance and optimizing energy consumption pertaining to fog computing systems. The suggested methodology gives a clear and practical way to deal with the complexity of task scheduling under changing circumstances.

Future studies will examine task offloading with possible mobile users in more complicated deployment settings. We would also focus on emerging deep learning and optimization techniques to create special strategies that can adapt to shifting network conditions and user mobility patterns, in order to enable dynamic and adaptive task offloading based on the location and resource availability of fog nodes as well as user mobility patterns.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

Appendix

This section presents the summary of simulation outcomes, in tabulated form, obtained for the experimentations discussed in this study over different performance parameters. Table 2 provides the comparison of mean computations per task (in megacycles) versus delay (in s) for TOFFEE, JOTO, TCO, and the proposed approach. Table 3 depicts the comparison of mean computations per task (in megacycles) versus energy consumption (in joules). Table 4 shows the comparison of mean size of tasks (in MB) versus delay (in s). Table 5 represents the comparison of mean size of tasks (in MB) versus energy consumption (in joules). Table 6 gives the comparison of the number of tasks versus delay (in s). Table 7 presents the comparison of the number of tasks versus energy consumption (in joules). Table 8 provides the comparison of the number of fog nodes versus delay (in s). Table 9 depicts the comparison of the number of fog nodes versus energy consumption (in joules) for TOFFEE, JOTO, TCO, and the proposed approach.

Table 3
Comparison of mean computations per task (in megacycles) versus energy consumption (in joules) for TOFFEE, JOTO, TCO, and the proposed approach.
Mean computations per task (×10⁶ cycles)   TOFFEE     JOTO       TCO        Proposed
0.50                                       265.2976   289.9144   278.6891   250.1209
1.00                                       313.0921   309.1263   321.9825   295.0125
1.50                                       343.0001   341.6824   364.0443   335.1973
2.00                                       390.1205   386.0181   399.9851   378.2671
2.50                                       448.4286   437.1124   458.0141   422.1027

Table 4
Comparison of mean size of tasks (in MB) versus delay (in s) for TOFFEE, JOTO, TCO, and the proposed approach.
Mean size of tasks (in MB)   TOFFEE    JOTO      TCO       Proposed
5                            12.1186   11.8929   12.6891   11.0143
10                           13.2981   12.9104   13.9825   12.1471
15                           14.0148   13.7829   15.0443   13.0126
20                           15.0279   14.9918   15.9851   14.2168
25                           16.6701   15.9991   17.0141   15.1557

Table 5
Comparison of mean size of tasks (in MB) versus energy consumption (in joules) for TOFFEE, JOTO, TCO, and the proposed approach.
Mean size of tasks (in MB)   TOFFEE     JOTO       TCO        Proposed
5                            322.1186   311.8929   332.6891   311.0143
10                           343.2981   333.9104   353.9825   332.1471
15                           374.0148   365.7829   385.0443   363.0126
20                           395.0279   389.9918   405.9851   384.2168
25                           416.6701   408.9991   427.0141   405.1557

Table 6
Comparison of No. of tasks versus delay (in s) for TOFFEE, JOTO, TCO, and the proposed approach.
No. of tasks   TOFFEE    JOTO      TCO       Proposed
50             32.1186   31.8929   33.6891   31.0143
100            33.2981   32.9104   34.9825   32.1471
150            34.0148   33.7829   35.0443   33.0126
200            35.0279   34.9918   36.9851   34.2168
250            36.6701   35.9991   37.0141   35.1557

Table 7
Comparison of No. of tasks versus energy consumption (in joules) for TOFFEE, JOTO, TCO, and the proposed approach.
No. of tasks   TOFFEE     JOTO       TCO        Proposed
50             632.1186   541.8929   733.6891   531.0143
100            633.2981   542.9104   734.9825   532.1471
150            634.0148   533.7829   835.0443   533.0126
200            635.0279   634.9918   836.9851   534.2168
250            636.6701   635.9991   837.0141   535.1557

Table 8
Comparison of No. of fog nodes versus delay (in s) for TOFFEE, JOTO, TCO, and the proposed approach.
No. of fog nodes   TOFFEE    JOTO      TCO       Proposed
5                  32.1186   21.8929   34.6891   21.0143
10                 30.2981   20.9104   32.9825   19.1471
15                 28.0148   18.7829   30.0443   17.0126
20                 26.0279   16.9918   27.9851   15.2168
25                 22.6701   11.9991   24.0141   10.1557

Table 9
Comparison of No. of fog nodes versus energy consumption (in joules) for TOFFEE, JOTO, TCO, and the proposed approach.
No. of fog nodes   TOFFEE     JOTO       TCO        Proposed
5                  122.1186   219.8929   232.6891   111.0143
10                 173.2981   270.9104   283.9825   162.1471
15                 234.0148   345.7829   355.0443   223.0126
20                 295.0279   389.9918   405.9851   284.2168
25                 336.6701   428.9991   447.0141   325.1557

References

[1] A. Almadhor, A. Alharbi, A.M. Alshamrani, W. Alosaimi, H. Alyami, A new offloading method in the green mobile cloud computing based on a hybrid meta-heuristic algorithm, Sustain. Comput.: Inform. Syst. 36 (2022) 100812.
[2] C. Li, X. Zuo, A.S. Mohammed, A new fuzzy-based method for energy-aware resource allocation in vehicular cloud computing using a nature-inspired algorithm, Sustain. Comput.: Inform. Syst. 36 (2022) 100806.
[3] J. Huang, H. Gao, S. Wan, Y. Chen, AoI-aware energy control and computation offloading for industrial IoT, Future Gener. Comput. Syst. 139 (2023) 29–37.
[4] K. Gasmi, S. Dilek, S. Tosun, S. Ozdemir, A survey on computation offloading and service placement in fog computing-based IoT, J. Supercomput. 78 (2) (2022) 1983–2014.
[5] A.M. Seid, J. Lu, H.N. Abishu, T.A. Ayall, Blockchain-enabled task offloading with energy harvesting in multi-UAV-assisted IoT networks: A multi-agent DRL approach, IEEE J. Sel. Areas Commun. 40 (12) (2022) 3517–3532.
[6] D.R. Morales, H.E. Romeijn, The generalized assignment problem and extensions, in: Handbook of Combinatorial Optimization: Supplement Volume B, Springer, 2005, pp. 259–311.
[7] K. Raza, V. Patle, S. Arya, A review on green computing for eco-friendly and sustainable IT, J. Comput. Intell. Electron. Syst. 1 (1) (2012) 3–16.
[8] A. Shakarami, M. Ghobaei-Arani, A. Shahidinejad, A survey on the computation offloading approaches in mobile edge computing: A machine learning-based perspective, Comput. Netw. 182 (2020) 107496.
[9] S.E. Mahmoodi, R. Uma, K. Subbalakshmi, Optimal joint scheduling and cloud offloading for mobile applications, IEEE Trans. Cloud Comput. 7 (2) (2016) 301–313.
[10] J. Zheng, Y. Cai, Y. Wu, X. Shen, Dynamic computation offloading for mobile cloud computing: A stochastic game-theoretic approach, IEEE Trans. Mob. Comput. 18 (4) (2018) 771–786.
[11] A. Ashok, P. Steenkiste, F. Bai, Vehicular cloud computing through dynamic computation offloading, Comput. Commun. 120 (2018) 125–137.
[12] X. He, K. Wang, H. Huang, B. Liu, QoE-driven big data architecture for smart city, IEEE Commun. Mag. 56 (2) (2018) 88–93.
[13] S. Guo, B. Xiao, Y. Yang, Y. Yang, Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing, in: IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications, IEEE, 2016, pp. 1–9.
[14] X. Lyu, H. Tian, C. Sengul, P. Zhang, Multiuser joint task offloading and resource optimization in proximate clouds, IEEE Trans. Veh. Technol. 66 (4) (2016) 3435–3447.
[15] Y. Geng, Y. Yang, G. Cao, Energy-efficient computation offloading for multicore-based mobile devices, in: IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, IEEE, 2018, pp. 46–54.
[16] G. Zhang, W. Zhang, Y. Cao, D. Li, L. Wang, Energy-delay tradeoff for dynamic offloading in mobile-edge computing system with energy harvesting devices, IEEE Trans. Ind. Inform. 14 (10) (2018) 4642–4655.
[17] D. Chatzopoulos, C. Bermejo, S. Kosta, P. Hui, Offloading computations to mobile devices and cloudlets via an upgraded NFC communication protocol, IEEE Trans. Mob. Comput. 19 (3) (2019) 640–653.
[18] Y. Chen, N. Zhang, Y. Zhang, X. Chen, W. Wu, X.S. Shen, TOFFEE: Task offloading and frequency scaling for energy efficiency of mobile devices in mobile edge computing, IEEE Trans. Cloud Comput. 9 (4) (2019) 1634–1644.
[19] T.X. Tran, D. Pompili, Joint task offloading and resource allocation for multi-server mobile-edge computing networks, IEEE Trans. Veh. Technol. 68 (1) (2018) 856–868.
[20] Y. Hao, M. Chen, L. Hu, M.S. Hossain, A. Ghoneim, Energy efficient task caching and offloading for mobile edge computing, IEEE Access 6 (2018) 11365–11373.
[21] M. Chen, Y. Hao, Task offloading for mobile edge computing in software defined ultra-dense network, IEEE J. Sel. Areas Commun. 36 (3) (2018) 587–597.
[22] X. Zhu, Y. Luo, A. Liu, M.Z.A. Bhuiyan, S. Zhang, Multiagent deep reinforcement learning for vehicular computation offloading in IoT, IEEE Internet Things J. 8 (12) (2020) 9763–9773.
[23] S. Liu, Q. Yang, S. Zhang, T. Wang, N.N. Xiong, MIDP: An MDP-based intelligent big data processing scheme for vehicular edge computing, J. Parallel Distrib. Comput. 167 (2022) 1–17.
[24] M. Huang, A. Liu, N.N. Xiong, J. Wu, A UAV-assisted ubiquitous trust communication system in 5G and beyond networks, IEEE J. Sel. Areas Commun. 39 (11) (2021) 3444–3458.
[25] K. Xiong, Y. Liu, L. Zhang, B. Gao, J. Cao, P. Fan, K.B. Letaief, Joint optimization of trajectory, task offloading, and CPU control in UAV-assisted wireless powered fog computing networks, IEEE Trans. Green Commun. Netw. 6 (3) (2022) 1833–1845.
[26] R. Malik, M. Vu, Energy-efficient computation offloading in delay-constrained massive MIMO enabled edge network using data partitioning, IEEE Trans. Wireless Commun. 19 (10) (2020) 6977–6991.
[27] J. Wang, J. Hu, G. Min, A.Y. Zomaya, N. Georgalas, Fast adaptive task offloading in edge computing based on meta reinforcement learning, IEEE Trans. Parallel Distrib. Syst. 32 (1) (2020) 242–253.
[28] H. Guo, J. Liu, UAV-enhanced intelligent offloading for Internet of Things at the edge, IEEE Trans. Ind. Inform. 16 (4) (2019) 2737–2746.
[29] Q. Xu, Z. Su, M. Dai, S. Yu, APIS: Privacy-preserving incentive for sensing task allocation in cloud and edge-cooperation mobile Internet of Things with SDN, IEEE Internet Things J. 7 (7) (2019) 5892–5905.
[30] M. Yu, A. Liu, N.N. Xiong, T. Wang, An intelligent game-based offloading scheme for maximizing benefits of IoT-edge-cloud ecosystems, IEEE Internet Things J. 9 (8) (2020) 5600–5616.
[31] X. Liu, P. Lin, T. Liu, T. Wang, A. Liu, W. Xu, Objective-variable tour planning for mobile data collection in partitioned sensor networks, IEEE Trans. Mob. Comput. 21 (1) (2020) 239–251.
[32] X. Zhu, Y. Luo, A. Liu, W. Tang, M.Z.A. Bhuiyan, A deep learning-based mobile crowdsensing scheme by predicting vehicle mobility, IEEE Trans. Intell. Transp. Syst. 22 (7) (2020) 4648–4659.
[33] K. Sadatdiynov, L. Cui, L. Zhang, J.Z. Huang, S. Salloum, M.S. Mahmud, A review of optimization methods for computation offloading in edge computing networks, Digit. Commun. Netw. (2022).
[34] H. Lu, C. Gu, F. Luo, W. Ding, X. Liu, Optimization of lightweight task offloading strategy for mobile edge computing based on deep reinforcement learning, Future Gener. Comput. Syst. 102 (2020) 847–861.
[35] M. Aazam, S. Zeadally, E.F. Flushing, Task offloading in edge computing for machine learning-based smart healthcare, Comput. Netw. 191 (2021) 108019.
[36] X. Li, Y. Qin, H. Zhou, Z. Zhang, An intelligent collaborative inference approach of service partitioning and task offloading for deep learning based service in mobile edge computing networks, Trans. Emerg. Telecommun. Technol. 32 (9) (2021) e4263.
[37] Y. Chen, F. Zhao, Y. Lu, X. Chen, Dynamic task offloading for mobile edge computing with hybrid energy supply, Tsinghua Sci. Technol. 28 (3) (2022) 421–432.
[38] Q. Shen, B.-J. Hu, E. Xia, Dependency-aware task offloading and service caching in vehicular edge computing, IEEE Trans. Veh. Technol. 71 (12) (2022) 13182–13197.
[39] H. Teng, Z. Li, K. Cao, S. Long, S. Guo, A. Liu, Game theoretical task offloading for profit maximization in mobile edge computing, IEEE Trans. Mob. Comput. (2022).
[40] H. Gupta, A. Vahid Dastjerdi, S.K. Ghosh, R. Buyya, iFogSim: A toolkit for modeling and simulation of resource management techniques in the Internet of Things, edge and fog computing environments, Softw. - Pract. Exp. 47 (9) (2017) 1275–1296.
[41] R. Mahmud, R. Buyya, Modelling and simulation of fog and edge computing environments using iFogSim toolkit, in: Fog and Edge Computing: Principles and Paradigms, Wiley, New York, NY, USA, 2019, pp. 1–35.
[42] P.M. Shankar, Fading and Shadowing in Wireless Systems, Springer, 2017.
[43] R. Mahmud, S. Pallewatta, M. Goudarzi, R. Buyya, iFogSim2: An extended iFogSim simulator for mobility, clustering, and microservice management in edge and fog computing environments, J. Syst. Softw. 190 (2022) 111351.
[44] D.-S. Chen, R.G. Batson, Y. Dang, Applied Integer Programming: Modeling and Solution, John Wiley & Sons, 2011.
[45] M.I. Bala, M.A. Chishti, Offloading in cloud and fog hybrid infrastructure using iFogSim, in: 2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence), IEEE, 2020, pp. 421–426.

