2 An Efficient Machine Learning-Based Resource Allocation Scheme For SDN-Enabled Fog Computing Environment
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 72, NO. 6, JUNE 2023
0018-9545 © 2023 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: PSG COLLEGE OF TECHNOLOGY. Downloaded on July 06,2024 at 14:29:55 UTC from IEEE Xplore. Restrictions apply.
SINGH et al.: EFFICIENT MACHINE LEARNING-BASED RESOURCE ALLOCATION SCHEME 8005
that utilizes several distributed edge-fog nodes and servers to hold local datasets to train an algorithm without transferring the data samples. The main advantages of using CML are that it consumes less power, preserves privacy, and does not interfere with device performance.

Resource allocation is used to control data centers, which reduces carbon emissions and enhances resource utilization. Resource allocation is of two types: optimization-based and auction-based. Auction-based resource allocation is related to the market pricing technique; it extends the demand and quantity of fog nodes with the support of bid requests [5], [6]. In optimization-based approaches, resource allocation is treated as a double-matching concern because fog nodes and cloud servers are connected to end-users [7].

In this article, an optimal resource allocation technique is proposed using CML in an SDN-enabled fog computing environment. The unification of both helps to improve results with respect to network bandwidth usage, energy consumption, and time-based attributes. The fog servers are programmed to train the model locally and to utilize the cloud service to train the model if the local server is overwhelmed. The fundamental idea is to create a global model shared by all nodes by training local model(s) on local datasets and periodically transferring information and attributes between these local nodes. In this direction, SDN also plays an important role in networking: it automatically re-routes around an outage and keeps the connections up and running as per the requirements. It also enables us to enhance the effectiveness and performance of the network. In addition, it is more flexible, scalable, and efficient than conventional networking, allowing the fog environment to save time and energy and improve the performance of the overall system.

A. Motivation

In the fog environment, the devices are heterogeneous in nature, as different types of network connections are used to connect different devices. This makes the resource allocation process more complex in comparison to the cloud computing environment. The resource allocation problem is challenging in fog computing due to the dynamic behavior and unstable characteristics of the fog environment [8]. For IoT applications, the resource requirement for computing nodes has different demands in terms of storage capacity, bandwidth, and computing power. Due to the dynamic resource demand at the IoT layer, it is essential to estimate the resource allocation of the network. Various authors have proposed different techniques for resource allocation and management in fog computing. However, most of the proposals evaluated different parameters related to fog computing while implementing resource allocation techniques but did not consider the integration of SDN with CML. Various network layers can be considered using SDN to enable the network to be fully programmable and adaptable while reducing network costs and congestion [9], [10]. CML is used to cooperate on model training within fog nodes even when the cloud is not available. This integration of both improves the results in terms of bandwidth usage, power consumption, and time-based attributes. To fill these research gaps, this article proposes an optimal technique to achieve the goal of resource allocation in a fog computing environment. The following are the major contributions of this article.
- We propose a resource allocation scheme with SDN in the scalable fog computing environment.
- We then implement the model for optimal resource allocation with CML for IoT-fog-cloud computing applications.
- We compare and analyze the results with other existing techniques in the literature and find that the proposed scheme outperforms the existing techniques on various performance evaluation metrics.

The rest of the article is organized as follows. Section II discusses the related work based on the allocation of resources in the fog computing environment. The system model is discussed in Section III. Section IV elaborates the proposed scheme in detail. In Section V, the implementation and evaluation of the proposed scheme are discussed in different network scenarios, along with the dataset used in the experiments, the experimental setup, and the performance evaluation using various parameters. Lastly, Section VI concludes the article.

II. RELATED WORK

Agarwal et al. [11] discussed an elastic resource allocation architecture in fog computing. The main focus of the proposed technique was to minimize response time, maximize throughput, and optimize resource usage to reduce the limitations of cloud computing. Ni et al. [5] proposed a dynamic resource allocation method for fog computing based on Priced Timed Petri Nets to enhance fog resources and users' Quality of Service (QoS) requirements. The proposed technique achieved more efficiency than static resource allocation planning in price and job completion time.

Xu et al. [7] proposed a Dynamic Resource Allocation Method (DRAM) to balance the load of user tasks in fog computing. The framework was designed for dynamic migration of services and static resource allocation to manage load balance for fog systems. Birhanie et al. [12] recommended a vehicular fog computing architecture for vehicle parking to serve users' demands with the available resources. The Markov Decision Process (MDP) and dynamic programming are applied to resolve the issue of resource allocation in the fog computing environment. Wu et al. [13] also proposed a VFC architecture to decrease the computing time of different vehicular applications.

Jia et al. [14] examined the issue of resource allocation in three-tiered fog networks. The authors suggested a double-matching technique based on cost efficiency for resource allocation in a fog network. Zhang et al. [15] presented the security-related hurdles in current resource allocation procedures and suggested robust privacy-preserving resource allocation schemes for the fog framework. Jiang et al. [16] also proposed a secure computing framework for resource allocation capable of treating computing requests and allocating resources at the fog server. The method could withstand security intrusions of particular nodes, fog servers, and other computing devices.
network with {R1, R2, . . ., Rn} regions. Those fog regions are connected with cloud C1 through different FZ network zones. Table I lists the notations used.

TABLE I: LIST OF NOTATIONS

4) Local and Fog Training Model: The end users generally have devices with low computation and storage capacity. End-user i has a local dataset Di and wants to offload a part di of the whole dataset to the fog server. The dataset left with the end-user is processed on the end-user device, and ωi represents the weight vector. The goal of user i is to minimise the training loss l, as described in (4), in order to obtain the optimal weight vector ωi:

    min_{ωi ∈ R^n} Σ_{k ∈ D̂i} l(ωi, xk, yk)        (4)
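The minimisation in (4) can be sketched as plain gradient descent over the on-device samples. The linear model, squared-error loss, and optimizer below are illustrative assumptions; the paper does not fix a specific loss l or training procedure:

```python
import numpy as np

def local_train(X, y, epochs=5000, lr=0.1):
    """Minimise the local training loss of (4) by gradient descent,
    returning the local weight vector w_i (mean squared error of a
    linear model is an assumed stand-in for the unspecified loss l)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean loss
        w -= lr * grad
    return w

# toy local dataset D_i: targets follow y = 2*x0 + 1*x1 exactly
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([2.0, 1.0, 3.0, 5.0])
w_i = local_train(X, y)  # approaches [2.0, 1.0]
```

In the proposed scheme this update runs on the end-user device or fog node; only the weight vector, never the raw samples, is shared upward.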
Firstly, the Fog Servers (FS) are deployed in each fog region to maintain the record of available resources in a scalable fog environment. Each FS has a Fog Server Manager (FSM) to keep a record of fog resources and their capabilities and to check the availability of resources. The FSM checks the availability of resources within the fog region and takes care of the complete matching of tasks with the available resources. The different fog regions and devices are connected using the SDN and share resources as per requirement.

Secondly, classification and ML techniques are used to classify the user tasks and prepare the local model using CML. The next step identifies how many resources (like memory, storage, computing, bandwidth, etc.) are required to complete each job, and the execution time of each resource. The records of classified jobs and resource consumption are maintained at the FSM. The mapping of required resources to each job is done as per the need. Jobs are subdivided as per the available resources because a fog node may have insufficient capacity to handle one complete job. The priority list of each sub-task is prepared as per Algorithm 2.

The FSM allocates the resources to tasks or jobs under the following conditions: a) if the resources are accessible as per the job, the FS allocates the resources, processes the job, and sends an acknowledgement (ACK) to the user; b) if only some resources are available, the whole job is divided into sub-tasks per resource and a priority is assigned to each sub-task: high-priority tasks are processed first, and the other tasks can wait for some time within the fog region; c) if all sub-divided tasks have high priority, some tasks are migrated to a near-neighbor region for processing with the help of SDN services, which processes the sub-tasks and sends an ACK to the SDN after completion; the SDN can then ACK to the FSM to complete the job; d) if resources are unavailable at the fog layer, the FSM migrates the job to the Cloud Service Provider (CSP). This may increase the
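The decision flow a)-d) can be summarised in a few lines of code. The dataclasses, the integer resource model, and the returned action names below are all illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Job:
    demand: int            # resource units required by the job
    priority: str = "low"

@dataclass
class FogRegion:
    capacity: int          # free resource units in this fog region
    neighbor_free: bool    # does a neighbouring region (via SDN) have capacity?

def fsm_allocate(job: Job, region: FogRegion) -> str:
    """Sketch of the FSM decision flow, conditions a)-d) in the text."""
    # a) enough resources: allocate, process the job, ACK to the user
    if region.capacity >= job.demand:
        return "process_in_region"
    # b) partial resources: split into sub-tasks, run high priority first
    if region.capacity > 0 and job.priority != "high":
        return "split_and_queue_subtasks"
    # c) all sub-tasks high priority: migrate some to a neighbour via SDN
    if region.capacity > 0 and region.neighbor_free:
        return "migrate_to_neighbor_via_sdn"
    # d) no fog resources: offload the job to the Cloud Service Provider
    return "offload_to_csp"

print(fsm_allocate(Job(demand=4), FogRegion(capacity=8, neighbor_free=False)))
# prints "process_in_region"
```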
    WG ← Σ_{n=1}^{N} (|Sn| / |S|) · wn        (8)
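Equation (8) is a federated-averaging step in the style of [36]: each node's local weight vector is scaled by its dataset's share |Sn|/|S| of all samples. A minimal NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def aggregate(local_weights, sizes):
    """Global aggregation of (8): W_G = sum over nodes n of (|S_n|/|S|) * w_n."""
    total = float(sum(sizes))                      # |S|: all samples across nodes
    w_global = np.zeros_like(local_weights[0], dtype=float)
    for w_n, s_n in zip(local_weights, sizes):
        w_global += (s_n / total) * np.asarray(w_n, dtype=float)
    return w_global

# two fog nodes: node 1 trained on 30 samples, node 2 on 10
w1, w2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
W_G = aggregate([w1, w2], [30, 10])  # → [0.75, 0.25]
```

Nodes with more data thus pull the global model proportionally harder, which is the design choice behind the |Sn|/|S| weighting.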
Algorithm 2: Prepare Priority of Task Lists.
Algorithm 4: Process of Global-Model Aggregation With Local Nodes.
TABLE II: SAMPLE DATA
5) Sgpt "Alamine Aminotransferase": having a range of 10 to 2000
6) Alkphos "Alkaline Phosphatase": having a range of 63 to 2110
7) Sgot "Aspartate Aminotransferase": having a range of 10 to 4929
8) TP "Total Proteins": having a range of 2.7 to 9.6
9) A/G Ratio "Albumin & Globulin Ratio": having a range of 0.3 to 2.8
10) ALB "Albumin": having a range of 0.9 to 5.5
11) "class" (selector): represents the disease, where "1" means liver disease is present and "2" signifies liver disease is not present.

Table II illustrates the details of sample data for 10 patients, and the entire dataset plot is depicted in Fig. 4. The graphical representation of the dataset values shows how the different attributes, like TB, DB, Sgpt, Alkphos, Sgot, TP, ALB, and A/G Ratio, vary over different ranges: Total Bilirubin varies between 0.4 and 75, Direct Bilirubin between 0.1 and 19.7, Alamine Aminotransferase between 10 and 2000, Aspartate Aminotransferase between 10 and 4929, Total Proteins between 2.7 and 9.6, Albumin & Globulin Ratio between 0.3 and 2.8, and Albumin between 0.9 and 5.5.

Experimental Setup: A real testbed setup was used for the implementation of the proposed method. The following hardware devices were used in the experiments.
- IoT Device: an IoT device used to record the health data as input.
- Gateway: SDN-configured controller with WiFi and LAN connectivity, smart mobiles, router, etc.
- Fog Server Manager / Master Node: Lenovo ThinkPad T440p, Intel Core i5 4th Gen 4300M CPU @ 2.6 GHz, 4 GB DDR3 RAM, Intel Integrated HD Graphics 4600, Wi-Fi 802.11 b/g, 64-bit Ubuntu 20.04, Apache Server 2.4.41, MySQL 8.0.23, Java SE Runtime Environment.
- Cloud: setup of a cloud server for heavy computing.
- Worker / Fog Nodes: Raspberry Pi 3B, quad-core 1.2 GHz Broadcom BCM2837 64-bit CPU, 1 GB LPDDR2 SDRAM, wireless LAN, Bluetooth Low Energy, Raspberry Pi OS with desktop (kernel version 5.10), Apache Server 2.4.41, JRE, and MySQL 8.0.23.

To test the performance of the proposed scheme at the fog node with CML, the dataset was divided into two parts: the first 70% of the dataset was used to train the model, and the remaining 30% was used to test the model parameters. The numerical parameters used are shown in Fig. 4. During the experiments, the performance of the different characteristics, such as time-based attributes, power consumption, and network bandwidth usage, was evaluated using the liver disease data within the proposed fog computing environment.

The time-based attributes (time delay, execution time, and latency) were compared for different fog settings.
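The 70/30 split can be reproduced in a few lines of NumPy. The shuffling, seed, and toy record layout below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def split_70_30(X, y, seed=42):
    """Split a dataset into 70% training and 30% testing partitions,
    as used to evaluate the CML model at the fog node."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))       # shuffle record indices
    cut = int(0.7 * len(X))             # first 70% for training
    tr, te = idx[:cut], idx[cut:]
    return X[tr], X[te], y[tr], y[te]

X = np.arange(20).reshape(10, 2)              # 10 toy patient records
y = np.array([1, 2, 1, 2, 1, 2, 1, 2, 1, 2])  # class: 1 = disease, 2 = no disease
X_tr, X_te, y_tr, y_te = split_70_30(X, y)    # 7 training, 3 test records
```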
TABLE III: CONFIGURATION OF DIFFERENT DEVICES IN IFOGSIM
Fig. 15. Time consumption for local and global model training concerning the fraction of the offloaded dataset.
Fig. 12. Comparison of the results of execution time with different techniques.

Fig. 10 shows the energy usage of fog nodes. The x-axis displays the number of fog nodes used in the simulation, and the y-axis shows the amount of energy consumed. Fig. 10 also illustrates the energy used by NBIHA, SJF, FCFS, MPSO, TRAM, and the proposed model [31]. The comparison is made based on different techniques proposed by different authors.

    Nu = Σi Lai × m        (10)

Fig. 11 represents the comparison of the network usage of the proposed model with other existing approaches. The x-axis shows the number of fog nodes used in the simulation, and the y-axis shows the amount of network bandwidth used. Fig. 11
shows the network usage by SJF, TRAM, FCFS, MPSO, and the proposed model. If the number of nodes is 10, the proposed model uses more network bandwidth than SJF and almost the same as the other techniques. When the number of fog nodes increases to 20, 30, or more, the network usage is less than that of TRAM, MPSO, SJF, and FCFS.
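Read literally, (10) accumulates a latency term Lai scaled by a factor m for each tuple i. A tiny sketch under that reading, where Lai is taken to be the latency observed for tuple i and m its network length; both interpretations are assumptions, since the paragraph defining (10) is not reproduced in this excerpt:

```python
def network_usage(latencies, m):
    """Nu = sum_i La_i * m : total network usage as the sum of
    per-tuple latency La_i scaled by the tuple's network length m."""
    return sum(la * m for la in latencies)

usage = network_usage([0.5, 1.0, 1.5], m=2.0)  # 6.0
```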
Fig. 18. Comparison of results based on the processing time.

Execution Time: The amount of time a process spends to carry out a specific task, including time spent carrying out run-time or device services on its behalf, is referred to as the task's execution time. It is the difference between a task's start and end times to complete the whole process. Fig. 12 represents
the comparison of the execution time of the proposed model with other existing approaches. The x-axis displays the number of fog nodes used in the simulation, and the y-axis shows the amount of execution time (in milliseconds). Fig. 12 shows the comparison of the execution time of SJF, TRAM, FCFS, and MPSO with the proposed model. If the number of nodes is 10, the execution time is almost the same as that of MPSO and less than that of TRAM, FCFS, and SJF. When the number of fog nodes increases to 20, 30, or more, the execution time of our model remains less than the others.

C. Result Analysis and Discussions

In this section, we discuss the results of the proposed model. The experimental testing was performed based on two scenarios. In the first scenario, the testing was done on the real testbed setup using the FogBus framework. The liver disease dataset was used to train the CML base model for allocating resources within the fog environment. The performance was evaluated based on the different characteristics: time delay, execution time, latency, network bandwidth consumption, and power consumption. Each characteristic's performance was compared for different fog settings: with a master node only, one fog region, more than one fog region, and only cloud computing infrastructure. We compared the results with other authors' techniques used for resource allocation in fog computing environments. The response times (ms) of different techniques like TACRM [40], Logistic Regression, and Multi-criteria [18] are compared with the proposed model and shown in Fig. 16. The results are also compared based on the time delay (ms) with 10 application requests against the authors' proposed techniques like the Resource Ranking and Provisioning (ReRaP) algorithm [24], Resource-aware algorithms [41], Latency-aware [42], Multi-criteria (MC) based, and Quality of Experience (QoE)-aware [43] techniques. Fig. 17 shows the comparison concerning time delay with the proposed model. The processing time of the proposed model with 10 application requests is reduced compared to the results of the ReRaP algorithm [24], Resource-aware algorithms [41], and Latency-aware [42] techniques used for resource allocation in fog computing. Fig. 18 shows the comparison of processing time (ms) with other authors' techniques.

In the second scenario, iFogSim was used to implement and evaluate the performance of the proposed model. The experiment was conducted in simulation to measure processing time, delay, and energy consumption. The results were computed using different numbers of fog nodes in the simulation environments. The processing time, delay, and energy consumption results were compared with other authors' techniques like NBIHA, SJF, FCFS, MPSO, and TRAM. Fig. 13 shows the CML model fraction for computing resource allocation concerning power consumption, time, and loss. The time consumption and loss are minimized when more resources are utilized for data training. On the other hand, power consumption increases with the amount of computing resources allotted. Fig. 14 shows that the power consumption and loss reduced as the amount of offloaded data rose, because the power consumption for the model training is more than the transference rate. However, the total time is raised concerning the offloading data. Fig. 15 shows the time taken for the global and local training to train the CML model at the fog and cloud layers with respect to the fraction of the offloaded dataset. Local model training is done at the fog and IoT level, and global model training is done at the cloud layer. The time consumption for local model training is less compared to global model training.

VI. CONCLUSION

This article proposes an optimal resource allocation technique using collaborative machine learning in an SDN-enabled fog computing environment. The fog servers are enabled to train the model locally first; if overloaded, they use the cloud server to train the local model. This helps the fog environment to save time and energy for better performance. The time, energy consumption, and extra network usage are recovered using the proposed optimal resource allocation technique. The proposed system model has considered the uplinks and dataset offloading in the fog-cloud ecosystem. The implementation of the proposed work is performed using the FogBus framework and the iFogSim simulator for the fog computing framework to evaluate and validate the performance. The results are evaluated based on the network bandwidth usage, energy consumption, and time-based attributes like latency, delay, and execution time. Finally, the results are also compared with the other existing techniques. The experimental results show that the overall system's performance is superior in terms of various evaluation metrics. To overcome the limitations of the proposed work, testing can be done on a large-scale fog environment by adding more fog regions and devices at the fog and IoT layers. In future work, the proposed techniques will be tested on different datasets by adding more sensors and devices at the fog and IoT layers in a real testbed environment.

REFERENCES

[1] E. Liu, E. Effiok, and J. Hitchcock, "Survey on health care applications in 5G networks," IET Commun., vol. 14, no. 7, pp. 1073-1080, 2020.
[2] N. Kumar, S. Misra, and M. S. Obaidat, "Collaborative learning automata-based routing for rescue operations in dense urban regions using vehicular sensor networks," IEEE Syst. J., vol. 9, no. 3, pp. 1081-1090, Sep. 2015.
[3] R. Yu and P. Li, "Toward resource-efficient federated learning in mobile edge computing," IEEE Netw., vol. 35, no. 1, pp. 148-155, Jan./Feb. 2021.
[4] C. W. Zaw, S. R. Pandey, K. Kim, and C. S. Hong, "Energy-aware resource management for federated learning in multi-access edge computing systems," IEEE Access, vol. 9, pp. 34938-34950, 2021.
[5] L. Ni, J. Zhang, C. Jiang, C. Yan, and K. Yu, "Resource allocation strategy in fog computing based on priced timed petri nets," IEEE Internet Things J., vol. 4, no. 5, pp. 1216-1228, Oct. 2017.
[6] N. Kumar, J. J. P. C. Rodrigues, and N. Chilamkurti, "Bayesian coalition game as-a-service for content distribution in internet of vehicles," IEEE Internet Things J., vol. 1, no. 6, pp. 544-555, Dec. 2014.
[7] X. Xu et al., "Dynamic resource allocation for load balancing in fog environment," Wireless Commun. Mobile Comput., vol. 2018, pp. 1-15, 2018.
[8] C. Mouradian, D. Naboulsi, S. Yangui, R. H. Glitho, M. J. Morrow, and P. A. Polakos, "A comprehensive survey on fog computing: State-of-the-art and research challenges," IEEE Commun. Surv. Tut., vol. 20, no. 1, pp. 416-464, Jan.-Mar. 2018.
[9] J. Gao et al., "A blockchain-SDN-enabled internet of vehicles environment for fog computing and 5G networks," IEEE Internet Things J., vol. 7, no. 5, pp. 4278-4291, May 2020.
[10] R. Chaudhary, N. Kumar, and S. Zeadally, "Network service chaining in fog and cloud computing for the 5G environment: Data management and security challenges," IEEE Commun. Mag., vol. 55, no. 11, pp. 114-122, Nov. 2017.
[11] S. Agarwal, S. Yadav, and A. K. Yadav, "An architecture for elastic resource allocation in fog computing," Int. J. Comput. Sci. Commun., vol. 6, no. 2, pp. 201-207, 2015. [Online]. Available: http://csjournals.com/IJCSC/PDF6-2/31.Swati.pdf
[12] H. M. Birhanie, M. A. Messous, S. M. Senouci, E. H. Aglzim, and A. M. Ahmed, "MDP-based resource allocation scheme towards a vehicular fog computing with energy constraints," in Proc. IEEE Glob. Commun. Conf., 2018, pp. 1-6.
[13] X. Wu, S. Zhao, R. Zhang, and L. Yang, "Mobility prediction-based joint task assignment and resource allocation in vehicular fog computing," in Proc. IEEE Conf. Wireless Commun. Netw., 2020, pp. 1-6.
[14] B. Jia, H. Hu, Y. Zeng, T. Xu, and Y. Yang, "Double-matching resource allocation strategy in fog computing networks based on cost efficiency," J. Commun. Netw., vol. 20, no. 3, pp. 237-246, 2018.
[15] L. Zhang and J. Li, "Enabling robust and privacy-preserving resource allocation in fog computing," IEEE Access, vol. 6, pp. 50384-50393, 2018.
[16] J. Jiang, L. Tang, K. Gu, and W. Jia, "Secure computing resource allocation framework for open fog computing," Comput. J., vol. 63, no. 4, pp. 567-592, 2020.
[17] H. Bashir, S. Lee, and K. H. Kim, "Resource allocation through logistic regression and multicriteria decision making method in IoT fog computing," Trans. Emerg. Telecommun. Technol., vol. 33, no. 2, Art. no. e3824, 2019.
[18] R. K. Naha and S. Garg, "Multi-criteria-based dynamic user behaviour aware resource allocation in fog computing," vol. 1, no. 1, pp. 1-31, 2019. [Online]. Available: http://arxiv.org/abs/1912.08319
[19] Q. Li, J. Zhao, Y. Gong, and Q. Zhang, "Energy-efficient computation offloading and resource allocation in fog computing for internet of everything," China Commun., vol. 16, no. 3, pp. 32-41, 2019.
[20] R. Bukhsh, N. Javaid, S. Javaid, M. Ilahi, and I. Fatima, "Efficient resource allocation for consumers' power requests in cloud-fog-based system," Int. J. Web Grid Serv., vol. 15, no. 2, pp. 159-190, 2019.
[21] L. H. Kazem, "Efficient resource allocation for time-sensitive IoT applications in cloud and fog environments," Int. J. Recent Technol. Eng., vol. 8, no. 3, pp. 2356-2363, 2019.
[22] A. Fatima, N. Javaid, M. Waheed, T. Nazar, S. Shabbir, and T. Sultana, "Efficient resource allocation model for residential buildings in smart grid using fog and cloud computing," Adv. Intell. Syst. Comput., vol. 773, pp. 289-298, 2019.
[23] X. Chen, Y. Zhou, L. Yang, and L. Lv, "User satisfaction oriented resource allocation for fog computing: A mixed-task paradigm," IEEE Trans. Commun., vol. 68, no. 10, pp. 6470-6482, Oct. 2020.
[24] R. K. Naha, S. Garg, A. Chan, and S. K. Battula, "Deadline-based dynamic resource allocation and provisioning algorithms in fog-cloud environment," Future Gener. Comput. Syst., vol. 104, pp. 131-141, 2020. [Online]. Available: https://doi.org/10.1016/j.future.2019.10.018
[25] S. K. Mani and I. Meenakshisundaram, "Improving quality-of-service in fog computing through efficient resource allocation," Comput. Intell., vol. 36, no. 4, pp. 1527-1547, 2020.
[26] K. Gu, L. Tang, J. Jiang, and W. Jia, "Resource allocation scheme for community-based fog computing based on reputation mechanism," IEEE Trans. Computat. Social Syst., vol. 7, no. 5, pp. 1246-1263, Oct. 2020.
[27] S. Tong, Y. Liu, M. Cheriet, M. Kadoch, and B. Shen, "UCAA: User-centric user association and resource allocation in fog computing networks," IEEE Access, vol. 8, pp. 10671-10685, 2020.
[28] X. Huang, W. Fan, Q. Chen, and J. Zhang, "Energy-efficient resource allocation in fog computing networks with the candidate mechanism," IEEE Internet Things J., vol. 7, no. 9, pp. 8502-8512, Sep. 2020.
[29] Z. Chang, L. Liu, X. Guo, and Q. Sheng, "Dynamic resource allocation and computation offloading for IoT fog computing system," IEEE Trans. Ind. Inform., vol. 17, no. 5, pp. 3348-3357, May 2021.
[30] B. Cao, Z. Sun, J. Zhang, and Y. Gu, "Resource allocation in 5G IoV architecture based on SDN and fog-cloud computing," IEEE Trans. Intell. Transp. Syst., vol. 22, no. 6, pp. 3832-3840, Jun. 2021.
[31] H. Rafique, M. A. Shah, S. U. Islam, T. Maqsood, S. Khan, and C. Maple, "A novel bio-inspired hybrid algorithm (NBIHA) for efficient resource management in fog computing," IEEE Access, vol. 7, pp. 115760-115773, 2019.
[32] Y. Gao, L. Liu, X. Zheng, C. Zhang, and H. Ma, "Federated sensing: Edge-cloud elastic collaborative learning for intelligent sensing," IEEE Internet Things J., vol. 8, no. 14, pp. 11100-11111, Jul. 2021.
[33] R. S. Bali and N. Kumar, "Secure clustering for efficient data dissemination in vehicular cyber-physical systems," Future Gener. Comput. Syst., vol. 56, pp. 476-492, 2016.
[34] N. Kumar, R. Iqbal, S. Misra, and J. J. Rodrigues, "Bayesian coalition game for contention-aware reliable data forwarding in vehicular mobile cloud," Future Gener. Comput. Syst., vol. 48, pp. 60-72, 2015.
[35] E. Ahvar, S. Ahvar, S. M. Raza, J. Manuel Sanchez Vilchez, and G. M. Lee, "Next generation of SDN in cloud-fog for 5G and beyond-enabled applications: Opportunities and challenges," Network, vol. 1, no. 1, pp. 28-49, 2021. [Online]. Available: https://www.mdpi.com/2673-8732/1/1/4
[36] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proc. Artif. Intell. Statist., 2017, pp. 1273-1282.
[37] S. Tuli, R. Mahmud, S. Tuli, and R. Buyya, "FogBus: A blockchain-based lightweight framework for edge and fog computing," J. Syst. Softw., vol. 154, pp. 22-36, 2019.
[38] J. Singh, S. Bagga, and R. Kaur, "Software-based prediction of liver disease with feature selection and classification techniques," Procedia Comput. Sci., vol. 167, pp. 1970-1980, 2020. [Online]. Available: https://doi.org/10.1016/j.procs.2020.03.226
[39] D. Dua and C. Graff, "UCI machine learning repository," 2019. [Online]. Available: http://archive.ics.uci.edu/ml
[40] W. B. Daoud, M. S. Obaidat, A. Meddeb-Makhlouf, F. Zarai, and K. F. Hsiao, "TACRM: Trust access control and resource management mechanism in fog computing," Hum.-centric Comput. Inf. Sci., vol. 9, no. 1, pp. 1-18, 2019. [Online]. Available: https://doi.org/10.1186/s13673-019-0188-3
[41] M. Taneja and A. Davy, "Resource aware placement of IoT application modules in fog-cloud computing paradigm," in Proc. IFIP/IEEE Symp. Integr. Netw. Serv. Manage., 2017, pp. 1222-1228.
[42] A. R. Benamer, H. Teyeb, and N. B. Hadj-Alouane, "Latency-aware placement heuristic in fog computing environment," in Proc. Move Meaningful Internet Syst. OTM Confederated Int. Conf., 2018, pp. 241-257.
[43] R. Mahmud, S. N. Srirama, K. Ramamohanarao, and R. Buyya, "Quality of experience (QoE)-aware placement of applications in fog computing environments," J. Parallel Distrib. Comput., vol. 132, pp. 190-203, 2019.