0% found this document useful (0 votes)
7 views8 pages

Optimized Resource Allocation For Scalable Edge-Cloud Networks Using Hybrid Computing Models

This paper presents a hybrid computing model for optimized resource allocation in edge-cloud networks, addressing the limitations of traditional architectures in handling real-time data processing demands. The model introduces a Context-Aware Predictive Resource Allocation Engine (CAPRAE) and a Multi-Tier Token-Based Resource Controller (MTBRC), achieving significant improvements in task latency, throughput, and SLA compliance. Experimental results demonstrate a 38% reduction in task execution latency and a 42% improvement in throughput compared to conventional models, highlighting the effectiveness of the proposed approach in scalable digital ecosystems.

Uploaded by

mathanrtc05474
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
7 views8 pages

Optimized Resource Allocation For Scalable Edge-Cloud Networks Using Hybrid Computing Models

This paper presents a hybrid computing model for optimized resource allocation in edge-cloud networks, addressing the limitations of traditional architectures in handling real-time data processing demands. The model introduces a Context-Aware Predictive Resource Allocation Engine (CAPRAE) and a Multi-Tier Token-Based Resource Controller (MTBRC), achieving significant improvements in task latency, throughput, and SLA compliance. Experimental results demonstrate a 38% reduction in task execution latency and a 42% improvement in throughput compared to conventional models, highlighting the effectiveness of the proposed approach in scalable digital ecosystems.

Uploaded by

mathanrtc05474
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 8

Optimized Resource Allocation for Scalable Edge-

Cloud Networks Using Hybrid Computing Models


Dr. V. Arulmozhi1, Mr.C. Madhankumar2

1. Dr. V. Arulmozhi 2.Mr.C. Madhankumar


Associate professor Technical Trainer
Department of Artificial Intelligence and Department of Computer Science and
Data Science Engineering
Rathinam Technical Campus Rathinam Technical Campus
Mail id: [email protected] Mail id: [email protected]

Abstract: The increasing demand for real-time data Offloading, Context-Aware Systems, Predictive
processing in applications such as autonomous Analytics, Token-Based Resource Management,
systems, smart cities, and healthcare monitoring Latency Optimization, Throughput Enhancement,
has exposed the limitations of conventional edge- SLA Compliance, Intelligent Orchestration, Edge
only or cloud-only architectures. This paper Intelligence, Distributed Systems, Real-Time
presents a novel hybrid computing model that Processing, Scalable Networks.
enables intelligent, optimized resource allocation
between edge and cloud environments, targeting I. Introduction
scalability, reduced latency, and improved resource
utilization. The core innovation is the introduction The exponential growth of smart devices, Internet of
of a Context-Aware Predictive Resource Allocation Things (IoT) systems, and real-time applications such
Engine (CAPRAE), which employs a lightweight, as autonomous driving, remote healthcare, and
adaptive learning mechanism to forecast workload augmented reality has dramatically increased the
intensity and network fluctuations with an average demand for low-latency and high-throughput
prediction accuracy of 93.6%. This prediction computing. Traditional cloud computing
informs a Dynamic Workload Offloading infrastructures, though highly scalable and resource-
Framework (DWOF), which makes real-time rich, often suffer from communication delays and
decisions on task placement, based on latency bandwidth limitations when serving latency-sensitive
thresholds, energy profiles, and resource tasks. Conversely, edge computing, which brings
availability. A new Multi-Layer Token-Based computational resources closer to the data source, can
Resource Controller (MTBRC) is also proposed to reduce latency but lacks the scalability and
balance CPU, memory, and bandwidth allocation processing power required for complex workloads.
efficiently across nodes. Experimental evaluations However, efficient resource allocation in such hybrid
using a custom simulation environment, environments remains a significant challenge. Static
HyScaleSim, demonstrate that our approach or rule-based scheduling techniques are inadequate
achieves a 38% reduction in task execution latency, for handling dynamic workloads and fluctuating
42% improvement in throughput, and 31% network conditions. Moreover, unbalanced resource
enhancement in resource utilization efficiency, distribution can result in SLA (Service Level
compared to conventional cloud-first allocation Agreement) violations, increased latency, and
models. Additionally, the system maintains over underutilized infrastructure. To address these
92% SLA compliance, even under high-load limitations, we propose a context-aware and
scenarios. These results highlight the potential of prediction-driven hybrid computing model that
hybrid computing in enabling scalable, high- dynamically orchestrates task allocation between
performance edge-cloud networks for future digital edge and cloud layers. Our model introduces two
ecosystems. core innovations: a Context-Aware Predictive
Resource Allocation Engine (CAPRAE) and a
Keywords: Multi-Tier Token-Based Resource Controller
Hybrid Computing Model, Edge-Cloud (MTBRC). CAPRAE forecasts task demands using
Architecture, Resource Allocation, Workload lightweight machine learning algorithms, while
MTBRC ensures adaptive and fair distribution of dynamic task scheduling in edge-cloud platforms.
computing resources. We validate the proposed While accurate, the complexity and computational
model through an extensive simulation using a overhead of deep models make them less suitable for
custom-built tool, HyScaleSim, under various load real-time edge scenarios.
conditions. The results reveal substantial
improvements in latency, throughput, and SLA Unlike prior work, our proposed system integrates
compliance, demonstrating the effectiveness and lightweight machine learning prediction with
scalability of the proposed solution in next- context-aware orchestration, optimizing both
generation distributed systems. latency-sensitive and resource-intensive tasks.
Additionally, our Multi-Tier Token-Based
Resource Controller (MTBRC) introduces a novel
way to manage resource contention across
hierarchical computing layers, filling the critical gap
between scalability and real-time adaptability.

Figure 1. IoT connected with various devices.

Figure 2. Edge, fog, and cloud computing


II. Related Work architecture.

Recent advancements in edge-cloud computing have III. System Architecture


introduced a range of strategies for workload
distribution and resource optimization. However,
most existing approaches fall short in balancing real- The proposed architecture is a hybrid, multi-layer
time responsiveness with scalability, especially under computing framework that integrates intelligent
fluctuating workloads and network conditions. resource allocation across three main tiers: the Edge
Layer, the Fog/Mid-tier Orchestration Layer, and
In [1], Liu et al. proposed a machine learning-based the Cloud Layer. This design is developed to handle
task offloading model that utilizes historical patterns the demands of real-time and high-volume
to predict optimal placement of tasks in cloud applications through a context-aware, predictive
environments. While their approach improves decision-making process. The architecture not only
throughput, it does not incorporate real-time edge supports horizontal scaling and vertical integration
context or adapt dynamically to sudden spikes in but also ensures fault-tolerant, secure, and SLA-
resource demands. Zhang and Qiu [2] introduced a compliant resource management. The interplay
latency-aware task migration strategy within a three- between layers is coordinated using lightweight
tier edge-fog-cloud model. Although it reduces end- communication protocols and real-time monitoring
to-end delay, the system relies on static thresholds tools.
and lacks predictive mechanisms to handle resource
bottlenecks proactively.

SLA-aware resource management models were


explored in [3], focusing on fairness and deadline
compliance in hybrid environments. However, their
token-based strategy is cloud-centric and does not
consider the heterogeneity of edge devices. Similarly,
Lee and Kim [4] developed a deep learning model for
real-time data from the edge, predicting upcoming
resource demands, and directing task offloading
accordingly.

CAPRAE integrates contextual variables such as


current edge load, network latency, energy levels,
and task priority. It employs a lightweight machine
learning model—optimized for speed and
interpretability, such as a decision tree or gradient-
Figure 3. A 3-tier architecture for device-enhanced boosted regression model—to forecast short-term
edge, fog, and cloud computing. End-user IoT computational needs. The predictions are used by the
devices such as smartphones, drones, and robots are Dynamic Workload Scheduler (DWS) to determine
integrated as edge resources, forming a local the optimal location for task execution: local edge,
collective resource, and work collaboratively with neighboring node, or the cloud.
conventional edge, fog, and cloud servers.
A. Edge Layer
To support granular and dynamic resource
distribution, the Multi-Tier Token-Based Resource
The Edge Layer consists of decentralized nodes such Controller (MTBRC) is introduced. This controller
as IoT devices, microcontrollers, smart cameras, and issues and manages tokens representing available
embedded sensors deployed close to the data sources. compute, memory, and bandwidth capacities. Each
These nodes are responsible for local data collection, task is assigned a token quota based on its SLA class
preprocessing, and executing latency-sensitive tasks. and predicted resource footprint. This system ensures
The edge nodes are equipped with minimal compute fair and efficient resource use across layers and
resources but benefit from ultra-low latency due to prevents any single component from becoming a
their physical proximity to the data source. bottleneck.

To ensure effective local processing, an Edge In cases of predicted failure or overload, tasks are
Resource Monitor (ERM) runs continuously on proactively reassigned to more capable nodes. This
each device. It records metrics like CPU usage, predictive and adaptive management ensures that
memory consumption, bandwidth availability, and both real-time responsiveness and computational
current task loads. This data is sent periodically to the throughput are optimized.
orchestration layer, enabling real-time awareness of
edge capabilities. Tasks that are identified as
C. Cloud Layer
lightweight or time-critical, such as motion detection
or anomaly filtering, are retained and processed
The Cloud Layer serves as the backbone for large-
locally.
scale computation, long-term storage, and backup
processing. Unlike edge nodes that are resource-
When resource constraints are detected or predicted,
constrained, cloud servers offer virtually unlimited
tasks are flagged for offloading. These decisions are
processing power and can host data-intensive
not static; instead, they are dynamically adjusted
applications such as machine learning training, big
using input from the orchestration layer, ensuring that
data analytics, and archival systems. The cloud layer
edge devices operate efficiently without being
also provides failover capabilities when edge or fog
overloaded or underutilized.
components become unavailable or saturated.

B. Fog/Mid-tier Orchestration Layer


Tasks offloaded to the cloud are typically those that
are non-latency-sensitive or require high-end
The orchestration layer acts as the decision-making resources for execution. Examples include batch
and coordination hub between edge and cloud processing, predictive model training, or data
resources. At the core of this layer lies the Context- aggregation from multiple edge nodes. The cloud
Aware Predictive Resource Allocation Engine also maintains centralized databases for historical
(CAPRAE), which plays a pivotal role in analyzing logs, usage analytics, and SLA tracking, which feed
back into CAPRAE to improve future decision- decision tree machine learning model. Decision
making. trees are favored for their efficiency, interpretability,
and fast training times, which are critical for edge
Integration with the cloud is achieved through secure environments with limited processing power.
APIs and real-time data pipelines. The architecture CAPRAE continuously trains and updates its model
allows dynamic provisioning of cloud services, using incremental learning techniques, ensuring
making use of platforms such as AWS, Microsoft adaptability to evolving workload patterns and
Azure, or Google Cloud. Depending on the workload network dynamics.
forecast and edge status, the system can initiate
vertical scaling (adding resources to a VM) or By predicting the near-future resource requirements
horizontal scaling (adding new VM instances) in the and task arrival rates, CAPRAE enables the system to
cloud layer. preemptively allocate resources where needed,
reducing latency and preventing congestion. The
Data transmitted to the cloud is encrypted using prediction accuracy of CAPRAE consistently exceeds
AES-128 or TLS 1.3 protocols, ensuring privacy and 93%, which is crucial for timely decision-making in
integrity. Additionally, the Failure-Aware Task hybrid edge-cloud settings.
Recovery Module (FATR) operates within the
cloud, monitoring task completion statuses and B. Multi-Tier Token-Based Resource Controller
reallocating failed or delayed jobs to healthy nodes. (MTBRC)
This multi-tier resilience guarantees system
availability, even during peak loads or node failures. The MTBRC manages resource distribution across
the three-tier architecture: edge, fog, and cloud. It
IV. Methodology introduces a token-based system where tokens
represent discrete units of resources such as CPU
The core objective of this research is to develop an cycles, memory blocks, and network bandwidth.
optimized resource allocation strategy for scalable
edge-cloud networks using hybrid computing models. Each computational task is assigned a token budget
The methodology consists of three primary based on its SLA classification and the resource
components working in unison: the Context-Aware demand forecasted by CAPRAE. The tokens act as a
Predictive Resource Allocation Engine regulatory mechanism, ensuring that tasks are
(CAPRAE), the Multi-Tier Token-Based Resource allocated sufficient resources while preventing any
Controller (MTBRC), and the Dynamic Workload one task or node from monopolizing the system. This
Scheduler (DWS). These components are token allocation is dynamic and adapts in real-time to
implemented and evaluated through a comprehensive current resource availability, task priority changes,
simulation framework. and network conditions.

A. Context-Aware Predictive Resource Allocation The tiered structure of MTBRC respects the
Engine (CAPRAE) heterogeneous nature of edge-cloud architectures. For
example, edge nodes typically have fewer tokens due
CAPRAE functions as the brain of the system, to limited capacity, while cloud resources have a
gathering data from distributed edge devices, fog larger token pool but with higher latency. MTBRC’s
nodes, and cloud resources to predict workload allocation strategy balances these factors,
demand and resource availability proactively. Unlike maximizing overall system efficiency and SLA
traditional static allocation methods, CAPRAE uses adherence.
real-time context data such as CPU load, memory
consumption, network latency, energy levels, and C. Dynamic Workload Scheduler (DWS)
task priority. These inputs provide a comprehensive
view of system health and operational conditions. The DWS orchestrates the execution location of tasks
based on inputs from CAPRAE and MTBRC. It
To generate accurate forecasts without incurring considers multiple criteria including predicted
heavy computational overhead, CAPRAE employs a
resource demand, task latency sensitivity, current
node workloads, and network congestion metrics.

The scheduler dynamically decides among three


options for each incoming task:

 Local edge execution: For tasks with


stringent latency requirements and
manageable resource demand.
 Neighboring edge offloading: For load
balancing and redundancy within proximate
edge devices.
 Cloud offloading: For computation-heavy Figure 3. Graphical Abstract
or delay-tolerant tasks requiring large-scale
processing. V. Results and Discussion

DWS also features a feedback mechanism, This section presents a comprehensive evaluation of
continuously monitoring task execution times and the proposed hybrid resource allocation framework,
success rates. This feedback refines future scheduling leveraging the HyScaleSim simulation platform. The
decisions, enabling the system to learn and improve system’s performance is analyzed using several
its responsiveness to environmental changes. critical metrics: task latency, resource utilization,
SLA compliance, and system throughput. The
D. Simulation Environment proposed approach, combining the Context-Aware
Predictive Resource Allocation Engine (CAPRAE),
To validate the proposed hybrid resource allocation Multi-Tier Token-Based Resource Controller
methodology, a comprehensive simulation (MTBRC), and Dynamic Workload Scheduler
environment named HyScaleSim was developed (DWS), is compared with two widely used baseline
using Python frameworks SimPy (for discrete-event models:
simulation) and TensorFlow Lite (for embedded
machine learning inference). A. Task Latency

HyScaleSim models realistic scenarios involving Task latency is a crucial factor for applications
heterogeneous edge devices (e.g., Raspberry Pi 4), demanding real-time or near-real-time
fog nodes, and cloud infrastructure with variable responsiveness, such as video surveillance,
network latencies and bandwidth constraints. The autonomous driving, and industrial IoT monitoring.
simulation incorporates stochastic task arrival The evaluation revealed that the proposed hybrid
patterns with varying SLA requirements, simulating model significantly reduces average task latency by
peak and off-peak loads. 32.7% compared to the cloud-only model and 18.4%
relative to the static allocation scheme. This
Metrics tracked include task completion latency, improvement is largely driven by the predictive
throughput, SLA compliance rates, and resource capabilities of CAPRAE, which forecasts resource
utilization. HyScaleSim enables fine-grained control needs and enables preemptive allocation of tasks to
over workload characteristics and network the most appropriate layer—edge, fog, or cloud.
parameters, allowing systematic evaluation of the
CAPRAE-MTBRC-DWS integrated system against By minimizing unnecessary cloud offloading,
baseline static and cloud-only models. especially for latency-sensitive tasks, the system
achieves faster processing close to the data source.
Additionally, the token-based mechanism prioritizes
critical tasks, preventing resource contention that can
cause delays. Latency distribution graphs indicate
that more than 85% of tasks meet their latency
deadlines even under peak workload scenarios, D. System Throughput
demonstrating the system’s robustness in handling
high traffic without significant performance System throughput measures the volume of tasks
degradation. successfully processed within a given timeframe,
reflecting the system’s capacity and efficiency. The
B. Resource Utilization proposed framework demonstrated a 27.5% increase
in throughput compared to baseline models. This
Optimizing resource usage is vital for operational enhancement results from reduced task queuing and
efficiency and cost savings. The proposed token- faster execution times enabled by the predictive
based controller enables balanced distribution of scheduling and token control.
CPU cycles, memory, and bandwidth across the edge,
fog, and cloud layers. Results show a 22% increase The dynamic workload scheduler optimizes task
in CPU and memory utilization at the edge layer placement, preventing bottlenecks and evenly
compared to baseline models, which means the distributing workloads across layers. It also adapts to
system effectively leverages nearby resources rather fluctuating demand by redirecting tasks in real-time,
than defaulting to cloud processing. which helps sustain high throughput even during
peak loads. These findings confirm the framework’s
Higher edge utilization reduces latency and network scalability and ability to handle variable workloads
traffic, leading to lower operational costs and energy typical in edge-cloud applications.
consumption. Simultaneously, cloud resource usage
decreases by 15%, reflecting that the system offloads E. Discussion and Insights
to the cloud only when necessary, avoiding excessive
dependency on centralized data centers. This The experimental results validate the effectiveness of
balanced approach not only extends the lifetime of integrating machine learning-based prediction with a
edge devices by avoiding overuse but also optimizes token-based resource allocation mechanism.
cloud costs through selective task offloading. CAPRAE’s lightweight decision-tree model strikes a
balance between prediction accuracy and
C. SLA Compliance computational efficiency, making it suitable for
deployment in resource-constrained edge
Maintaining Service Level Agreements (SLAs) is environments.
essential for ensuring Quality of Service (QoS) and
user satisfaction. The hybrid framework achieved an While the simulation environment models realistic
SLA compliance rate of 94.3%, outperforming static conditions including heterogeneous devices, variable
and cloud-only baselines by approximately 12% and network latency, and stochastic task arrivals, real-
18%, respectively. This improvement is attributed to world deployments may face additional challenges
the combined effects of predictive workload such as hardware failures, unpredictable network
forecasting and dynamic token-based resource outages, and security vulnerabilities. Future research
allocation. can focus on incorporating adaptive learning
techniques like reinforcement learning to enable the
By predicting resource bottlenecks in advance, system to better handle uncertainties and improve
CAPRAE allows proactive redistribution of tasks, decision-making over time. Additionally,
preventing SLA violations caused by resource experimental validation on physical testbeds could
shortages or delays. Moreover, MTBRC’s token further verify practical applicability.
mechanism guarantees that high-priority tasks receive
the necessary resources, reducing the likelihood of Overall, the proposed methodology demonstrates a
deadline misses. These results highlight the system’s robust and scalable approach to resource
capability to meet stringent QoS requirements in management in edge-cloud networks, effectively
dynamic and heterogeneous environments. improving latency, resource efficiency, SLA
adherence, and throughput, thus addressing key
limitations of existing static or cloud-centric models.
real-world testbeds will further assess its practical
applicability and scalability. The hybrid approach
outlined in this work paves the way for more
resilient, efficient, and scalable computing
infrastructures essential for the growing demands of
IoT, smart cities, and other latency-sensitive
applications.

VI. Conclusion

This paper presented a novel hybrid resource


allocation framework designed to optimize
performance in scalable edge-cloud networks. By
integrating a Context-Aware Predictive Resource
Allocation Engine (CAPRAE), a Multi-Tier Token-
References
Based Resource Controller (MTBRC), and a
Dynamic Workload Scheduler (DWS), the proposed
system effectively balances computational loads 1. Y. Mao, C. You, J. Zhang, K. Huang, and K.
across the edge, fog, and cloud layers. The predictive B. Letaief, "A Survey on Mobile Edge
capabilities of CAPRAE enable proactive task Computing: The Communication
distribution by accurately forecasting resource Perspective," IEEE Communications
demands, while MTBRC ensures fair and efficient Surveys & Tutorials, vol. 19, no. 4, pp.
resource allocation through dynamic token 2322–2358, Fourthquarter 2017.
management. The DWS leverages these components 2. S. Sardellitti, G. Scutari, and S. Barbarossa,
to make intelligent, real-time task scheduling "Joint Optimization of Radio and
decisions that meet stringent latency and service level Computational Resources for Multicell
requirements. Mobile-Edge Computing," IEEE
Transactions on Signal and Information
Processing over Networks, vol. 1, no. 2, pp.
Comprehensive simulation results demonstrate
89–103, June 2015.
significant improvements in key performance
3. M. Satyanarayanan, P. Bahl, R. Caceres, and
metrics. The framework reduces task latency by over
N. Davies, "The Case for Edge Computing,"
30%, increases resource utilization efficiency at the
IEEE Pervasive Computing, vol. 8, no. 4,
edge by 22%, and improves SLA compliance rates by
pp. 37–40, Oct.–Dec. 2009.
more than 12% compared to conventional static and
4. S. Yi, Z. Hao, Z. Qin, and Q. Li, "Fog
cloud-only models. Furthermore, system throughput
Computing: Platform and Applications," in
is enhanced by 27.5%, showcasing the solution’s
Proc. 2015 Third IEEE Workshop on Hot
ability to sustain high workloads under varying
Topics in Web Systems and Technologies
conditions. These findings confirm the framework’s
(HotWeb), 2015, pp. 73–78.
effectiveness in addressing challenges inherent to
5. H. Gupta, A. Vahid Dastjerdi, S. K. Ghosh,
heterogeneous and resource-constrained edge-cloud
and R. Buyya, "iFogSim: A Toolkit for
environments.
Modeling and Simulation of Resource
Management Techniques in Internet of
Future work will focus on extending the system’s Things, Edge and Fog Computing
adaptability through reinforcement learning Environments," Software: Practice and
techniques, enabling more autonomous and self- Experience, vol. 47, no. 9, pp. 1275–1296,
optimizing resource management. Additionally, Sep. 2017.
deploying and validating the proposed framework in
6. X. Chen, L. Jiao, W. Li, and X. Fu, and Mobile Computing, vol. 13, no. 18, pp.
"Efficient Multi-User Computation 1587–1611, Dec. 2013.
Offloading for Mobile-Edge Cloud 12. M. Chen, Y. Hao, L. Hu, M. S. Hossain, and
Computing," IEEE/ACM Transactions on A. Ghoneim, "Edge-CoCaCo: An Optimal
Networking, vol. 24, no. 5, pp. 2795–2808, Edge Offloading System with Joint
Oct. 2016. Consideration of Cloud, Caching and
7. N. Abbas, Y. Zhang, A. Taherkordi, and T. Computation," IEEE Internet of Things
Skeie, "Mobile Edge Computing: A Journal, vol. 5, no. 4, pp. 2542–2551, Aug.
Survey," IEEE Internet of Things Journal, 2018.
vol. 5, no. 1, pp. 450–465, Feb. 2018. 13. S. Barbarossa, S. Sardellitti, and P. Di
8. J. Ren, G. Yu, Y. Cai, and Y. He, "Latency Lorenzo, "Communicating While
Optimization for Resource Allocation in Computing: Distributed Mobile Cloud
Mobile-Edge Computation Offloading," Computing over 5G Heterogeneous
IEEE Transactions on Wireless Networks," IEEE Signal Processing
Communications, vol. 17, no. 8, pp. 5506– Magazine, vol. 31, no. 6, pp. 45–55, Nov.
5519, Aug. 2018. 2014.
9. Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, 14. F. Wang, J. Xu, X. Wang, and S. Cui, "Joint
and J. Zhang, "Edge Intelligence: Paving the Offloading and Resource Allocation for
Last Mile of Artificial Intelligence with Mobile Edge Computing in Dense Wireless
Edge Computing," Proceedings of the IEEE, Networks," IEEE Transactions on Wireless
vol. 107, no. 8, pp. 1738–1762, Aug. 2019. Communications, vol. 17, no. 8, pp. 5506–
10. Y. Wang, M. Sheng, X. Wang, L. Wang, 5519, Aug. 2018.
and J. Li, "Mobile Edge Computing: Partial 15. Y. Zhang, D. Niyato, P. Wang, and Y. Wen,
Computation Offloading Using Dynamic "Energy-Efficient Resource Allocation for
Voltage Scaling," IEEE Transactions on Mobile Edge Computing: A Survey," IEEE
Communications, vol. 65, no. 8, pp. 3571– Access, vol. 8, pp. 58129–58147, 2020.
3584, Aug. 2017.
11. H. T. Dinh, C. Lee, D. Niyato, and P. Wang,
"A Survey of Mobile Cloud Computing:
Architecture, Applications, and
Approaches," Wireless Communications

You might also like