Optimizing Resource Allocation and Energy Efficiency in Federated Fog Computing for IoT
Abstract—Fog computing significantly enhances the efficiency of IoT applications by providing computation, storage, and networking resources at the edge of the network. In this paper, we propose a federated fog computing framework designed to optimize resource management, minimize latency, and reduce energy consumption across distributed IoT environments. Our framework incorporates predictive scheduling, energy-aware resource allocation, and adaptive mobility management strategies. Experimental results obtained from extensive simulations using the OMNeT++ environment demonstrate that our federated approach outperforms traditional non-federated architectures in terms of resource utilization, latency, energy efficiency, task execution time, and scalability. These findings underline the suitability and effectiveness of the proposed framework for supporting sustainable and high-performance IoT services.
Index Terms—Fog computing, federated learning, resource allocation, energy efficiency, Internet of Things (IoT)
I. INTRODUCTION

The rapid growth of the Internet of Things (IoT) and the proliferation of connected devices have led to significant challenges in managing computation, communication, and energy resources in distributed environments. Fog computing has emerged as a promising paradigm to address these challenges by extending cloud capabilities to the network edge [1]. Frameworks such as FogNetSim++ have provided simulation tools for distributed fog environments, enabling researchers to model complex scenarios with heterogeneous resources [1], [11].

Recent studies have leveraged fog computing for diverse applications. For instance, sustainable smart farming has been achieved by utilizing distributed simulation techniques to optimize resource usage and reduce latency in agricultural settings [2]. In parallel, trajectory design for UAV-based data collection has been investigated using clustering models to enhance data acquisition in smart farming scenarios [3]. Similarly, multi-level resource sharing frameworks have been proposed to enable collaborative fog environments for smart cities [4].

Despite these advances, challenges remain in effectively managing energy consumption and ensuring quality of service (QoS) in highly dynamic environments. xFogSim has addressed some of these challenges by proposing a distributed fog resource management framework that supports sustainable IoT services [24]. Moreover, mobility-aware solutions have been developed to cater to Industrial IoT (IIoT) scenarios, highlighting the importance of adaptive resource allocation in environments with high device mobility [6].

In recent years, there has been a growing interest in integrating federated learning into distributed frameworks to preserve data privacy and enhance the trustworthiness of learning systems. Surveys and comprehensive reviews have discussed the challenges and prospects of trustworthy federated learning [7], [10]. Furthermore, novel algorithms have been proposed to optimize differential privacy and client selection in federated settings [8], [9]. Such efforts not only secure data exchange but also improve the efficiency of edge and fog computing frameworks.

In addition to the aforementioned approaches, further enhancements have been made in vehicular networks and IoV scenarios. Global aggregation node selection and dynamic client selection strategies have been proposed for federated learning in the Internet of Vehicles (IoV) [12], [13], while fuzzy-based task offloading mechanisms address latency-sensitive applications [14]. Moreover, educational tools such as extended network simulators have been developed to enhance Fog/Edge computing education, further emphasizing the need for robust simulation platforms [15].

Alongside these technological advancements, efforts have been made to integrate game theory and explainable AI (XAI) for improved sample and client selection in split federated learning [16]. Complementary to this, cybersecurity education has seen innovative approaches, including hands-on DNS spoofing attack labs using virtual platforms [17], [18]. Furthermore, emerging applications in medical diagnosis using machine learning [19] and quantum-enhanced convolutional neural networks for image classification [20] illustrate the broad impact of these research directions.

This paper builds upon these contributions by proposing a unified framework that integrates advanced fog resource management with federated learning techniques. Our approach leverages predictive scheduling, energy harvesting integration, and secure client selection mechanisms to ensure sustainable and reliable IoT services. The remainder of the paper is organized as follows: Section II reviews the related work, Section III details the proposed framework, Section IV presents simulation results and performance evaluation, and Section V concludes the paper with future research directions.
II. LITERATURE REVIEW

Fog computing has emerged as a critical solution to address latency and resource management challenges in IoT environments. Several studies have contributed significantly to this domain, particularly in resource allocation, energy efficiency, and federated fog computing.

Malik et al. [24] introduced xFogSim, a distributed resource management framework designed specifically for sustainable IoT services. Their work emphasizes multi-objective optimization that considers trade-offs between cost, availability, and performance, particularly in fog federations.

In parallel, Gupta et al. [25] proposed iFogSim, a pioneering framework for simulating fog computing scenarios, emphasizing latency and network congestion management. However, it does not support fog federation, a gap addressed later by Malik et al.

A comprehensive review by Yousefpour et al. [26] provides insights into fog computing and related edge computing paradigms, highlighting key issues and future directions for large-scale deployments.

Mao et al. [27] examined mobile edge computing extensively, focusing on communication perspectives that directly influence latency and throughput in edge environments. Their survey highlights the significance of communication efficiency for effective edge deployments.

Dastjerdi and Buyya [28] further discussed challenges and solutions within fog computing, providing a solid foundation regarding service delivery, latency management, and resource allocation strategies.

Recent works by Ni et al. [29] introduced advanced resource allocation techniques based on priced timed Petri nets, significantly improving resource efficiency within fog environments.

On the topic of mobility, Xiao and Krunz [30] explored distributed optimization strategies for energy efficiency in fog computing, particularly within tactile internet contexts, emphasizing latency-sensitive applications.

Hong et al. [31] investigated mobile fog computing models, proposing architectures designed to support large-scale IoT applications efficiently, highlighting mobility support and real-time processing capabilities.

Pu et al. [32] introduced D2D fogging, an innovative task offloading framework that leverages device-to-device collaboration, enhancing energy efficiency and reducing latency significantly.

Further advancements were made by Gao et al. [33], who proposed FogRoute, a delay-tolerant network model designed specifically for fog computing scenarios, addressing critical data dissemination challenges.

Brogi and Forti [34] focused on QoS-aware deployment strategies for IoT applications in fog infrastructures, providing a foundational approach to ensure service quality through optimized resource allocation.

Sonmez et al. [35] presented EdgeCloudSim, an effective simulation environment for evaluating the performance of edge computing systems, incorporating mobility and handover management.

Tuli et al. [36] developed FogBus, integrating blockchain technology with fog computing to address data integrity and security in IoT applications, underscoring privacy-preserving methodologies.

Li et al. [37] proposed Virtual Fog, a virtualization-enabled fog computing framework, enhancing scalability and flexibility in IoT deployments.

Finally, Coutinho et al. [38] introduced FogBed, a rapid-prototyping emulator enabling real-world fog and cloud infrastructure simulations, particularly effective in healthcare IoT applications.

The aforementioned studies collectively highlight ongoing advancements in fog computing, emphasizing the significance of resource allocation, mobility management, and energy efficiency in federated fog environments.
III. SYSTEM MODEL

In this section, we describe the system model used in our proposed framework, emphasizing the mathematical formulations, definitions of key variables, and the related algorithms for resource management within the federated fog computing environment.

A. System Architecture

We consider a federated fog computing environment represented by an undirected graph G = (N, E), where the node set N consists of users U, fog nodes F, and broker nodes B, i.e., N = U ∪ F ∪ B. The edge set E represents communication links between these nodes. We assume L fog locations, each containing multiple fog nodes managed by a broker.
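To make the architecture concrete, the sketch below models this topology as a small in-memory graph. It is a minimal illustration under our own naming (FogNode, Broker, and FederatedFogTopology are hypothetical identifiers) and is not the framework's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FogNode:
    node_id: str
    location: int      # index of the fog location l_i it belongs to
    exec_rate: float   # service capability, used later as the execution rate
    fail_prob: float   # P_f, failure probability of this node

@dataclass
class Broker:
    broker_id: str
    location: int
    managed: list = field(default_factory=list)   # fog nodes under this broker

class FederatedFogTopology:
    """Undirected graph G = (N, E) with N = U ∪ F ∪ B."""

    def __init__(self):
        self.users = set()        # U
        self.fog_nodes = {}       # F, keyed by node id
        self.brokers = {}         # B, keyed by broker id
        self.edges = set()        # E, unordered pairs of node ids

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

    def add_fog_node(self, node, broker):
        self.fog_nodes[node.node_id] = node
        self.brokers.setdefault(broker.broker_id, broker).managed.append(node)
        self.connect(node.node_id, broker.broker_id)   # broker manages the node

# Example: one location with a broker and a single fog node.
topo = FederatedFogTopology()
topo.add_fog_node(FogNode("f1", location=0, exec_rate=1.8, fail_prob=0.05),
                  Broker("b1", location=0))
```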
B. Notations and Definitions

The primary notations and definitions utilized in this system model are presented in Table I.

TABLE I
KEY NOTATIONS

Symbol    Definition
N         Set of nodes
U         Set of users
F         Set of fog nodes
B         Set of broker nodes
E         Set of edges
L         Set of fog locations
λ         Average request arrival rate
ϕ_i       Queuing capacity at fog location i
ϑ_i       Execution rate at fog location i
t_i       Average waiting time at fog location i
c_d       Cost of service delay
c_q       Queuing cost
h         Network delay
t_r       Response time
P_f       Failure probability of fog nodes
A         Availability of resources
optimized resource allocation. C. Mathematical Formulation
Sonmez et al. [35] presented EdgeCloudSim, an effective The average arrival rate λ for mobile devices k ∈ N can be
simulation environment for evaluating the performance of calculated as:
edge computing systems, incorporating mobility and handover λ= λk (1)
management. k∈N
Fog location i accepts requests based on queuing capacity Algorithm 1 Fog Resource Allocation (FRA)
ϕi : 1: S ← ∅
1 if ϕi > λ, 2: F ← list of available fog nodes
ϕi = ϕi (2) 3: for each request do
λ otherwise
4: Select node with minimal response time tr
The execution rate ϑi is calculated as: 5: Validate cost p and availability A
6: Update S with optimal node
ϑi = λ · ϕ i (3)
7: end for
Using queuing theory, average waiting time ti at fog loca- 8: return S
tion i is derived as:
κλ 1 Algorithm 2 Availability Calculation
ti = + (4)
κρi − λ ρi 1: X, Y ← 1
2: Compute node failure probability Pf
Service delay cost cd is given by:
3: for each fog node f do
cd = h + cq (5) 4: Update cumulative failure probability
5: end for
Queuing cost cq is computed by: 6: Calculate total availability A = 1 − Y
ϑi 7: return A
cq = q (6)
λ − ϑi
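As a worked illustration of Eqs. (1)-(6), the snippet below evaluates the queuing terms for a single fog location. All numeric inputs (per-device rates, κ, ρ_i, the unit queuing price q, and the network delay h) are assumed values chosen for the example, not the settings used in our simulations.

```python
# Worked evaluation of Eqs. (1)-(6) for one fog location; all numbers are
# illustrative assumptions, not values from the paper's experiments.
device_rates = [0.5, 0.6, 0.4]                 # per-device arrival rates lambda_k
lam = sum(device_rates)                        # Eq. (1): aggregate arrival rate
phi_i = 1.2                                    # queuing capacity of location i

accept = 1.0 if phi_i > lam else phi_i / lam   # Eq. (2): admission ratio
vartheta_i = lam * accept                      # Eq. (3): execution rate

kappa, rho_i = 2.0, 1.8                        # assumed queuing parameters
t_i = (kappa * lam) / (kappa * rho_i - lam) + 1.0 / rho_i   # Eq. (4): waiting time

q, h = 0.05, 0.02                              # assumed unit queuing price and network delay
c_q = q * vartheta_i / (lam - vartheta_i)      # Eq. (6): queuing cost (needs lam > vartheta_i)
c_d = h + c_q                                  # Eq. (5): service delay cost

print(f"lambda={lam:.2f}  t_i={t_i:.3f}  c_q={c_q:.3f}  c_d={c_d:.3f}")
```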
Network delay h, considering distances d, is modeled as:

h = \beta_1 d_{b,u} + \sum_{a \in B_{\mathrm{leased}}} d_{a,b}    (7)

Total cost c_r at fog location l_i is:

c_r = \sum_{\forall k,\, k \in l_i} p_x + \sum_{\forall k,\, k \notin l_i} p_y    (8)

Response time t_r consists of local t_r^{local} and leased t_r^{leased} resources:

t_r = t_r^{local} + t_r^{leased}    (9)

t_r^{local} and t_r^{leased} are defined respectively as:

t_r^{local} = \frac{1}{\sum_{x \in l_i} 1/t_r^x} + \beta_2 \cdot d(b, u) + d_h \cdot c_c    (10)

t_r^{leased} = \frac{1}{\sum_{x \in l_i} 1/t_r^x} + \sum_{i=1}^{n} c_d    (11)
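The cost and response-time terms in Eqs. (7) and (9)-(11) can be evaluated in the same style. The helpers below follow our reading of the aggregation term in Eqs. (10) and (11), in which per-node response times are combined harmonically; the β coefficients, distances, and per-node times are placeholder values used only for illustration.

```python
# Illustrative evaluation of Eqs. (7) and (9)-(11); every input is an assumed value.
def network_delay(beta1, d_broker_user, leased_broker_dists):
    """Eq. (7): h = beta1 * d_{b,u} + sum of distances to leased brokers."""
    return beta1 * d_broker_user + sum(leased_broker_dists)

def local_response(node_times, beta2, d_broker_user, d_h, c_c):
    """Eq. (10): harmonic aggregation of per-node times plus network terms."""
    return 1.0 / sum(1.0 / t for t in node_times) + beta2 * d_broker_user + d_h * c_c

def leased_response(node_times, delay_costs):
    """Eq. (11): aggregated leased-resource time plus accumulated delay costs c_d."""
    return 1.0 / sum(1.0 / t for t in node_times) + sum(delay_costs)

h = network_delay(beta1=0.5, d_broker_user=0.12, leased_broker_dists=[0.30, 0.45])
t_local = local_response([0.8, 1.1, 0.9], beta2=0.01, d_broker_user=0.12, d_h=0.02, c_c=0.5)
t_leased = leased_response([1.6, 1.9], delay_costs=[0.22, 0.18])
t_r = t_local + t_leased                       # Eq. (9): total response time
print(f"h={h:.3f}  t_local={t_local:.3f}  t_leased={t_leased:.3f}  t_r={t_r:.3f}")
```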
Availability A is calculated as:

A = 1 - \prod_{k} P_f(k)    (12)

D. Algorithms for Resource Management

The key algorithms involved are:

Algorithm 1 (Fog Resource Allocation - FRA) handles dynamic allocation of resources based on response time, cost, and availability. It is detailed in Algorithm 1.

Algorithm 1 Fog Resource Allocation (FRA)
1: S ← ∅
2: F ← list of available fog nodes
3: for each request do
4:    Select node with minimal response time t_r
5:    Validate cost p and availability A
6:    Update S with optimal node
7: end for
8: return S
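A compact sketch of the FRA loop is given below. It assumes each candidate node exposes precomputed response-time, cost, and availability values, and it uses illustrative thresholds (max_cost, min_availability) that Algorithm 1 leaves unspecified; both the data layout and the thresholds are assumptions.

```python
# Sketch of Algorithm 1 (FRA): greedy selection by minimal response time,
# validated against cost and availability. Thresholds are assumed, not from the paper.
def fog_resource_allocation(requests, fog_nodes, max_cost=1.0, min_availability=0.95):
    schedule = {}                                    # S <- empty set
    for req in requests:
        # Rank candidates by response time t_r (ascending).
        for node in sorted(fog_nodes, key=lambda n: n["t_r"]):
            # Validate cost p and availability A before committing.
            if node["cost"] <= max_cost and node["availability"] >= min_availability:
                schedule[req] = node["id"]           # update S with the chosen node
                break
        else:
            schedule[req] = None                     # no feasible node for this request
    return schedule

nodes = [
    {"id": "f1", "t_r": 0.9, "cost": 0.4, "availability": 0.99},
    {"id": "f2", "t_r": 0.7, "cost": 1.3, "availability": 0.97},
    {"id": "f3", "t_r": 1.2, "cost": 0.6, "availability": 0.93},
]
print(fog_resource_allocation(["r1", "r2"], nodes))
```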
Algorithm 2 (Availability Calculation) computes the availability of selected fog nodes, factoring in node and broker failures. Detailed steps are given in Algorithm 2.

Algorithm 2 Availability Calculation
1: X, Y ← 1
2: Compute node failure probability P_f
3: for each fog node f do
4:    Update cumulative failure probability
5: end for
6: Calculate total availability A = 1 − Y
7: return A
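Reading the cumulative update in Algorithm 2 as a running product of independent per-node failure probabilities, consistent with Eq. (12), a minimal sketch looks as follows; the failure probabilities are example inputs only.

```python
# Sketch of Algorithm 2 under the assumption that per-node failures are
# independent, so the cumulative failure probability is a running product.
def availability(failure_probs):
    y = 1.0                          # Y <- 1
    for p_f in failure_probs:        # for each fog node f
        y *= p_f                     # update cumulative failure probability
    return 1.0 - y                   # A = 1 - Y

print(availability([0.02, 0.05, 0.10]))   # 0.9999 for three redundant nodes
```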
The detailed formulations and algorithms in this section establish the foundational models that our proposed framework leverages for efficient resource allocation and management within the federated fog environment.

IV. EXPERIMENTAL SETUP

In this section, we detail our experimental environment, simulation parameters, and the scenarios employed to evaluate the performance of our proposed fog computing framework.

A. Simulation Environment and Parameters

The experiments are conducted using the OMNeT++ simulation environment integrated with the INET framework. The parameters used in simulations are summarized in Table II.

TABLE II
SIMULATION PARAMETERS

Parameter                 Value
Fog locations             5
Fog nodes per location    2-5
Broker nodes              1 per fog location
Cloud data centers        2
Wireless sensors          200-1000
Wireless access points    20-100
Mobility models           Linear, circular, random waypoint
Communication link        10 Gbps (Broker-to-broker)
Mobile device range       250 m
Request arrival rates     0.5 s, 1.0 s, 1.5 s
Packet error rate         10^-3
Simulation duration       500 s
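For orientation, the scenario sweep implied by Table II can be written as a simple configuration structure. The dictionary and grid below only mirror the table for illustration; they are not the actual OMNeT++/INET configuration used in the experiments, and the intermediate sensor-count steps are assumed.

```python
from itertools import product

# Illustrative scenario grid mirroring Table II; not the actual omnetpp.ini used.
SIM_PARAMS = {
    "fog_locations": 5,
    "fog_nodes_per_location": (2, 5),      # lower and upper bound
    "brokers_per_location": 1,
    "cloud_data_centers": 2,
    "broker_link_gbps": 10,
    "mobile_device_range_m": 250,
    "packet_error_rate": 1e-3,
    "sim_duration_s": 500,
}

arrival_rates_s = [0.5, 1.0, 1.5]
sensor_counts = [200, 400, 600, 800, 1000]          # step size assumed
mobility_models = ["linear", "circular", "random_waypoint"]

# One simulation run per combination of arrival rate, device count, and mobility model.
scenarios = list(product(arrival_rates_s, sensor_counts, mobility_models))
print(f"{len(scenarios)} scenarios to run")          # 3 x 5 x 3 = 45
```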
B. Performance Metrics

We evaluate the proposed framework using the following performance metrics:

• Resource utilization
• Latency
• Energy consumption
• Task execution time
• Scalability

Fig. 1. Resource utilization with varying numbers of users.
Fig. 3. Energy consumption across different fog nodes over time.
Fig. 4. Distribution of task execution times for small and large tasks.