
Chapter 1: Introduction

1.1 Background

Edge computing is a paradigm that has reshaped modern computing architecture, changing how data processing and management are carried out by bringing computation and storage closer to the sources of data, whether end users or IoT devices, rather than concentrating them in cloud data centers. This architectural evolution has been driven by the enormous growth in device-generated data and the resulting need to process it in real time. Unlike the traditional cloud computing model, which typically suffers from high latency and limited bandwidth, edge computing reduces both the physical and logical distances between data sources and processing units, enabling faster responses and far more efficient bandwidth use. Mao et al. explained that this proximity enables applications such as autonomous vehicles, smart manufacturing, and augmented reality, which require real-time decisions with minimal latency. Liu et al. further emphasize that edge computing reinforces scalability, distributing the processing burden of the millions of connected devices in a rapidly expanding Internet of Things (IoT) ecosystem.

Other benefits of edge computing include reduced latency and greater real-time capability. By moving processing closer to data sources, edge computing also substantially improves privacy and security: sensitive information can be processed locally without being transmitted to a central server, minimizing the risk of data leakage and unauthorized access. Xiao et al. (2019) observe that because sensitive data does not need to travel far, breach risks are reduced. The system's distributed nature also provides a higher level of fault tolerance: the failure of an individual node, or the loss of connectivity to the central cloud, has minimal effect on the overall system because nearby edge nodes can pick up the slack through localized processing and decision-making. This architectural robustness, enhanced privacy, and real-time processing capability make edge computing a cornerstone of next-generation computing frameworks. However, these advantages come with considerable challenges. One of the most critical issues is resource constraints. Unlike cloud data centers, which offer vast computational, memory, and energy resources, edge devices typically have very limited capacity. Many edge devices, such as IoT sensors and mobile devices, have minimal processing power and storage and cannot perform complex computations locally. As Abbas et al. point out, balancing resource allocation in a resource-constrained environment is difficult, especially when workloads are highly variable. This is further exacerbated by the heterogeneous nature of edge devices, which range from high-performance edge servers to low-power IoT nodes, each class requiring its own optimization approach. Khan et al. (2019) further emphasize that the dynamic nature of edge environments adds to the complexity: fluctuating network conditions, workload variability, and device mobility make for an unpredictable and challenging operating environment.

Code efficiency is therefore crucial, and proper resource management in the edge environment is essential. Efficient code execution forms the basis for optimal system performance under resource-constrained conditions. Poorly optimized code increases execution time and energy consumption and underutilizes computational resources, defeating the purpose of edge computing. As more data is processed at the edge, even minor inefficiencies in code execution can cause major system bottlenecks and waste substantial energy. Lin et al. (2019) argue that traditional optimization techniques, such as manual code refactoring and static resource allocation, cannot cope with the dynamic and distributed nature of edge environments. Many existing approaches rely either on traditional optimization heuristics that fail to adapt to evolving network conditions, changing workloads, and device heterogeneity, or on static optimization methodologies that ignore the unique constraints of edge computing. For example, while offloading code to the cloud or to nearby edge nodes is a common strategy for mitigating resource limitations, it adds latency and bandwidth overhead, especially under unstable network conditions (Yu et al., 2017). Lin et al. (2019) further note that hand-tuning code for every device individually is infeasible at scale, given the enormous diversity of devices in the edge ecosystem. To overcome these limitations, integrating intelligent and adaptive techniques, particularly machine learning, into edge computing systems has become an increasingly active research area. Machine learning offers the ability to analyze real-time data and make dynamic decisions about resource allocation, task scheduling, and code execution strategies. Hassan et al. found that machine learning can optimize resource utilization by predicting workload patterns and allocating resources dynamically to minimize energy consumption and maximize throughput. Among the many subfields of machine learning, reinforcement learning has proven one of the most promising for handling the dynamic and distributed nature of edge computing. Unlike traditional methods, reinforcement learning models can learn from their environment and quickly adapt their strategies in real time, making them well suited to edge scenarios characterized by constant variability.

Beyond addressing resource constraints, machine learning can optimize code efficiency in several ways, such as finding better execution paths or balancing trade-offs between competing performance metrics. ML-based adaptive task scheduling algorithms may defer less critical operations and prioritize latency-sensitive tasks to maximize system responsiveness. Similarly, federated learning methods train machine learning models across distributed edge devices without transferring raw data, offering a privacy-preserving way to optimize code execution and resource management. Such techniques improve system efficiency while respecting the decentralized, privacy-centric character of edge computing. In summary, edge computing is a paradigm shift in computing architectures, enabling real-time processing, latency reduction, scalability, and privacy. However, resource constraints, dynamic environments, and device heterogeneity raise challenges that traditional optimization methods cannot solve. Improving code efficiency through adaptive and intelligent techniques, especially those based on machine learning, will be crucial to fully exploiting the potential of edge computing. As the literature shows, a machine learning-driven approach to these specific challenges offers a promising pathway by which edge computing systems can remain efficient, scalable, and responsive in increasingly resource-constrained situations.
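To make the adaptive-scheduling idea concrete, the sketch below orders tasks by a priority score and defers non-critical work whenever a workload predictor reports high load. It is a minimal illustration only: the task names, the load threshold, and the predictor value are hypothetical assumptions, not part of this research.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float                    # lower value = more latency-sensitive
    name: str = field(compare=False)   # name is not used for ordering

def schedule(tasks, predicted_load):
    """Run latency-sensitive tasks first; defer non-critical ones when the
    (model-predicted) load exceeds a threshold."""
    heap = list(tasks)
    heapq.heapify(heap)                # min-heap: most urgent task on top
    run, deferred = [], []
    while heap:
        task = heapq.heappop(heap)
        if predicted_load > 0.8 and task.priority > 1.0:
            deferred.append(task.name)  # postpone non-critical work
        else:
            run.append(task.name)
    return run, deferred
```

Under low load the scheduler simply emits tasks in priority order; under high load, only latency-sensitive tasks (priority at or below 1.0 in this toy) are executed immediately.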

1.2 Problem Statement

Edge computing has strengthened systems' ability to process data close to where it is created, addressing many of the significant latency and bandwidth limitations of centralized clouds. Yet the efficiency of edge computing applications faces serious challenges in resource-constrained and dynamic environments. Traditional code optimization efforts, a combination of code refactoring with static resource allocation or reservation, underperform in this context because they cannot adapt to dynamically changing workloads, heterogeneous device capabilities, and variable network conditions. Liu et al. (2019) show that these shortcomings significantly hinder performance in latency-sensitive systems, such as autonomous driving, where even a slight processing delay can compromise in-situ computational outcomes. The increased latency, coupled with inefficient use of computational resources, not only degrades system performance but also aggravates energy consumption, posing a serious barrier to the scalability and sustainability of edge computing systems.

One major limitation of traditional approaches is their reliance on static, deterministic optimization strategies. Cao et al. (2020) point out that such methods are ill-suited to edge environments, which are dynamic, distributed, and span devices from high-performance edge servers to resource-constrained IoT nodes. This heterogeneity demands adaptive optimization techniques that can adjust at runtime to variations in device capabilities and environmental conditions. Hartmann and Hashmi (2022) further show that when resource allocation and code execution are inefficient, much of the available capacity goes underutilized, reducing throughput and compromising quality of service, especially in critical domains such as smart healthcare and industrial IoT systems. The inability to optimize dynamically limits edge computing's potential to deliver high-quality, reliable services across diverse application areas.

While these challenges are increasingly recognized, existing research still lacks intelligent, adaptive optimization frameworks built explicitly for edge computing environments. Classic optimization techniques work well in static, well-defined contexts but do not scale efficiently to dynamic edge scenarios. Yang et al. (2019) indicate that most current work emphasizes resource management and task offloading without adequately tackling real-time code execution optimization. This gap is significant given the increasing complexity and variability of edge systems, which call for more sophisticated solutions with learning and adaptation capabilities.
A promising path forward is to leverage machine learning, and reinforcement learning in particular, to improve code efficiency in edge environments. Deep Q-learning (DQN), a subclass of reinforcement learning, has proven effective in dynamic decision-making and optimization problems, yet its application to edge computing has not been widely explored. Although reinforcement learning has already seen successful applications in network routing and energy management, its use for optimizing code execution and resource allocation in edge systems is still in its infancy. Cao et al. (2020) argue that DQN, with its capability to learn an optimal policy from environmental feedback, could provide a robust solution for real-time code efficiency optimization that overcomes the limitations of traditional methods.
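Deep Q-learning replaces a lookup table with a neural network, but the policy-learning loop it approximates is the classical Q-learning update. As an illustration of that loop, the toy sketch below uses a tabular Q-function with hypothetical states and offloading actions; it is a minimal sketch of the principle, not the framework developed in this thesis.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

Q = defaultdict(float)                   # Q[(state, action)] -> estimated long-run reward
ACTIONS = ["run_local", "offload_to_edge", "offload_to_cloud"]  # hypothetical actions

def choose_action(state):
    """Epsilon-greedy: usually exploit the best-known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning step: nudge Q(s, a) toward reward + GAMMA * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In a DQN, `Q` would be a neural network trained on batches of (state, action, reward, next state) transitions, which is what lets the method scale to the continuous, high-dimensional states of a real edge environment.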

Moreover, multi-objective optimization with DQN makes it possible to pursue several performance indicators at once: reducing execution time, shrinking energy consumption, and maximizing resource utilization. Hartmann and Hashmi further point out that such multi-objective optimization is integral for applications, such as smart healthcare, that need both low latency and efficient energy use. However, integrating DQN into an edge computing framework must be considered carefully, with particular attention to edge-specific constraints such as scarce computation and the need for low-latency decision-making. Yang et al. (2019) pinpoint that further research is needed to determine how the DQN concept can be adapted to these constraints so that it yields practical performance gains at the edge.
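One common way to encode such multi-objective optimization is a weighted scalar reward that penalizes execution time and energy while rewarding utilization. The sketch below is illustrative only; the weights and units are arbitrary assumptions, not tuned values from this research.

```python
def reward(exec_time_s, energy_j, utilization,
           w_time=0.5, w_energy=0.3, w_util=0.2):
    """Weighted multi-objective reward: lower execution time and energy are
    better (negative terms), higher resource utilization is better (positive
    term). Weights are illustrative assumptions, not results of this thesis."""
    return -w_time * exec_time_s - w_energy * energy_j + w_util * utilization
```

Shifting the weights changes the agent's priorities: a latency-critical deployment would raise `w_time`, while a battery-powered sensor network would raise `w_energy`.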

In short, the inefficiency of conventional code optimization techniques and the absence of an intelligent, adaptive framework are the significant challenges facing edge computing. Innovative approaches must adapt to real-time changes while optimizing code execution across heterogeneous systems. This is where reinforcement learning, primarily through DQN, shows considerable promise, and it remains a largely unexplored area. Developing and validating a DQN-based optimization framework to fill these gaps may open new frontiers for improving the efficiency, scalability, and sustainability of edge computing systems.

1.3 Research Objectives

Because edge computing is still maturing, ensuring code efficiency across distributed, resource-constrained environments is of prime importance. These dynamic and complex environments demand optimization methodologies that traditional approaches cannot provide. This research addresses that gap using advanced machine learning methods, with a special focus on Deep Q-learning. Specifically, this work aims to significantly improve the performance, scalability, and resource utilization of edge computing applications through a reinforcement learning optimization framework that is developed, tested, and deployed in this work. The specific goals of this research are outlined below.

Primary Objective

Design and develop a reinforcement learning optimization framework based on Deep Q-learning to increase code efficiency in edge computing applications.

Specific Objectives

RO1: Develop a realistic simulation environment, fine-tuned for testing the DQN-based optimization framework.

This objective focuses on developing a detailed simulation environment that reproduces realistic edge computing conditions, including fluctuating network latencies, varying computational loads, and differing resource availability, so that the DQN model can be continuously tested and refined under controlled conditions.
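A minimal sketch of what such a simulation environment might expose is shown below: each step randomly perturbs network latency, computational load, and free memory within plausible bounds. All parameter values are illustrative assumptions; the actual environment developed for RO1 would be far richer.

```python
import random
from dataclasses import dataclass

@dataclass
class EdgeSimState:
    latency_ms: float       # current network latency
    cpu_load: float         # fraction of compute in use, 0..1
    free_memory_mb: float   # remaining memory on the node

class EdgeSim:
    """Toy simulator: each step applies Gaussian noise to latency, load, and
    memory, loosely mimicking the variability described in RO1."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)  # seeded for reproducible runs
        self.state = EdgeSimState(latency_ms=20.0, cpu_load=0.3,
                                  free_memory_mb=512.0)

    def step(self):
        s = self.state
        s.latency_ms = max(1.0, s.latency_ms + self.rng.gauss(0, 5))
        s.cpu_load = min(1.0, max(0.0, s.cpu_load + self.rng.gauss(0, 0.1)))
        s.free_memory_mb = min(1024.0, max(0.0,
                               s.free_memory_mb + self.rng.gauss(0, 32)))
        return s
```

An optimization agent would observe the returned state each step and choose an action (for instance, run locally or offload), which is exactly the controlled feedback loop RO1 is meant to provide.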

RO2: Investigate, through case studies, the effectiveness of DQN in improving primary code efficiency metrics, including execution time, energy consumption, and resource utilization.

This objective quantitatively assesses how the proposed DQN-based optimization framework improves objective performance metrics. A set of experiments within the simulation environment will measure improvements in total execution time, energy efficiency, and overall resource utilization, thereby establishing the practical benefits of the proposed approach.

RO3: Validate the performance and scalability of the DQN-based optimization framework in real-world edge computing scenarios.

This objective deals with deploying the DQN-based framework in real edge computing environments, such as IoT networks or mobile edge devices. Doing so builds confidence in the validity of the simulation results, showing that the framework adapts to real-world conditions and consistently improves code efficiency.

RO4: Compare the DQN-based optimization framework with traditional code optimization techniques, underlining the approach's strengths and possible limitations.

This objective positions the DQN-based framework within the broader context of existing optimization strategies, pinpointing the areas in which the DQN approach outperforms traditional approaches and the areas in which improvement is needed.

RO5: Investigate the scalability of the proposed DQN optimization framework across a wide range of edge computing applications, assessing its adaptability and potential for broad implementation.

This objective examines whether the DQN framework is scalable and versatile enough to support an array of edge computing scenarios. The study will establish this by considering the framework's adaptability across applications and its potential for wide, general use in the varied edge computing environments it targets.

Collectively, these research objectives target the development and validation of a novel machine learning framework to enhance code efficiency in edge computing environments. Meeting these concrete goals would realize the genuine opportunities these technologies promise for advancing edge computing and deliver substantial, scalable solutions for optimizing code execution in increasingly complex and dynamic environments. Doing so will also demonstrate the feasibility of reinforcement learning in this context, setting the stage for further innovation in edge computing optimization.

1.4 Research Questions

In light of these considerations, several key questions emerge in the quest for greater code efficiency in edge computing. Central among them are understanding the factors that affect performance and examining the plausible use of advanced ML techniques, such as Deep Q-learning, to optimize those factors. The following research questions derive from the objectives identified in the previous section and were developed to ensure the investigation meets its aim of producing new insights and practical solutions. These questions will help dissect the issues surrounding edge computing, assess the role of reinforcement learning, and evaluate the benefits of the proposed optimization framework.

Main Research Question

In what ways can reinforcement learning, particularly Deep Q-Learning, improve code efficiency in edge computing environments?

Specific Research Questions

RQ1: What are the key factors that affect code efficiency in edge computing, and how do they vary across environments?

This question seeks to identify the main variables that affect code efficiency in edge computing. It will probe how computational load, network latency, resource availability, and device heterogeneity shape performance outcomes, building a foundational understanding of the challenges experienced in these environments.

RQ2: How can Deep Q-Learning optimize code execution under the dynamic conditions of edge computing environments?

This question investigates the applicability of DQN to managing and optimizing under the dynamic and often unpredictable conditions of edge computing. It explores how DQN can be applied to make real-time decisions that improve code execution efficiency across different scenarios.
RQ3: What are the measurable impacts of the DQN-based optimization on key performance metrics such as execution time, energy consumption, and resource utilization?

This question empirically evaluates the DQN framework's improvements in execution time, energy efficiency, and resource utilization, providing quantitative evidence of its impact on edge computing performance.

RQ4: How efficient, scalable, and adaptive is the DQN-based optimization framework compared with traditional code optimization techniques?

This question compares the proposed DQN framework with existing optimization methods to understand the relative benefits and drawbacks of each in greater detail. It will highlight the unique advantages of using reinforcement learning in the edge computing context without overlooking the limitations that may exist.

RQ5: What are the challenges, and the possible solutions, in scaling the DQN-based optimization framework across different types of edge computing applications?

This question addresses the constraints on scaling the DQN framework across varied edge computing scenarios. It will determine how the framework can best adapt and seek solutions that keep its performance consistently effective across different applications and settings.

The questions above are framed so that their answers yield a comprehensive investigation of whether reinforcement learning can deliver effective code efficiency at the edge. By answering them, the research will establish findings on the main factors driving performance in such environments, the feasibility of adopting Deep Q-Learning realistically in these scenarios, and what that means for the future of edge computing. These questions form the basis of a focused, systematic investigation that brings valuable knowledge to the field.

1.5 Significance of the Study

This research is of considerable academic and practical importance because it contributes to the development of edge computing by applying reinforcement learning, specifically DQN, in an innovative way. It also helps address critical challenges in code efficiency and resource optimization from both the theoretical and practical perspectives of intelligent systems in distributed computing environments.

From an academic point of view, this work extends the use of reinforcement learning to the context of edge computing. While DQN has been used successfully in domains like network routing and energy management, its use for code execution optimization in edge computing has not been well explored. This study fills a significant lacuna in the literature by showing how DQN can be adapted to the unique constraints of edge environments, such as limited computational resources, dynamic network conditions, and heterogeneous device capabilities. By placing DQN at the heart of edge computing frameworks, it furthers knowledge of adaptive optimization techniques in widely distributed systems and provides new insights into how reinforcement learning can dynamically optimize code execution paths, manage resources, and balance trade-offs between competing performance metrics such as latency, energy consumption, and resource utilization. Such contributions extend the theoretical basis of reinforcement learning and provide a roadmap for its practical deployment in complex computing scenarios.


In practice, the study could significantly transform how edge computing systems function in real applications. The proposed DQN-based framework offers an adaptive solution for optimizing code efficiency in real time, one of the most active topics in edge computing research. It dynamically adapts to changes in workload, network conditions, and device capability so that edge systems run efficiently under dynamic conditions. This is especially useful for latency-sensitive applications in IoT networks, smart cities, and autonomous systems, where processing delays or inefficient resource usage can have critical implications.

The present study also advances scalability and sustainability in edge computing environments. The proposed DQN-based framework optimizes resource allocation and reduces energy consumption, contributing to greener, more sustainable computing systems. This is especially relevant amid the rapidly accelerating proliferation of IoT devices and edge systems, which demand efficient and scalable solutions. The framework's ability to handle diverse and complex scenarios makes it practical for managing the burgeoning demands of edge computing infrastructures so that they remain resilient and responsive as complexity grows.

In all, the study makes a twin contribution. Academically, it deepens understanding of reinforcement learning applications in edge computing, closing critical gaps between theory and practice in optimization methods for distributed systems. Practically, it puts forward an adaptive framework able to tackle real-world problems in IoT networks, smart cities, and autonomous systems, with significant improvements in scalability and sustainability and better resource management across edge computing ecosystems. These contributions make the research a valuable addition to the field, with broad implications for both theoretical exploration and practical implementation.
1.6 Scope and Limitations

This work focuses on improving code efficiency in edge computing environments using reinforcement learning, namely Deep Q-learning. It develops an adaptive optimization framework for the peculiar difficulties of the edge computing environment: scarce resources, dynamic network conditions, and heterogeneous device capabilities. The primary focus is on enhancing three performance metrics, execution time, energy consumption, and resource utilization, thereby avoiding the grave inefficiencies that taint traditional optimization methods. The research covers both simulation and deployment phases, ensuring that the proposed framework undergoes strict testing in a controlled environment and is verified in realistic edge computing scenarios. This dual evaluation approach provides a complete picture of the framework's effectiveness and scalability across a wide variety of environments.

The use of DQN as the core reinforcement learning algorithm reflects its demonstrated ability to handle dynamic, complex decision-making processes of the kind found in edge computing. Building on this capability, the research aims to optimize code execution paths and adaptively choose the best resource allocation strategy at runtime, making the framework well suited to latency-sensitive, resource-constrained applications. Realistic edge computing scenarios will be simulated in environments such as iFogSim or CloudSim, while actual deployments will be validated for performance in practical settings: IoT networks and mobile edge devices.

However, this study has its limitations. Achieving real-time adaptability across highly heterogeneous edge environments is the first significant challenge. The framework must dynamically adapt its optimization strategy across highly heterogeneous edge devices, ranging from high-performance servers to low-power IoT sensors. Although DQN is well suited to adaptive decision-making, it may struggle in such an extremely heterogeneous environment because of the complexity of modeling those scenarios comprehensively.

Another limitation is the computational overhead of implementing DQN on resource-constrained edge devices. While reinforcement learning algorithms such as DQN are powerful optimizers, they usually require heavy computational resources for training and decision-making, which may be unsuitable for low-power devices with limited computational and energy budgets. Such challenges may be mitigated through model simplification or distributed learning, but these add another layer of complexity to the study.

Finally, the scope of the study is intentionally limited to code optimization and does not include other significant edge computing issues such as security and data management. Although code optimization improves performance, the considerable challenges of securing edge environments and ensuring data privacy and integrity lie outside the scope of this research. These aspects are essential in their own right but demand dedicated investigation, presenting a frontier for further study.

In short, this research presents a targeted study of how DQN can best be used to optimize code efficiency at the edge, with a scope spanning simulation-based development and real-world validation. While these contributions are essential to addressing inefficiencies at the edge, the limitations related to real-time adaptability and computational overhead, and the exclusion of security and data management, delineate the boundaries of the research. They also ensure that avenues remain for further study in those areas and hence for the continued advancement of edge computing optimization frameworks.

1.7 Thesis Structure

This thesis is structured to describe a systematic progression from the stated research problem through the development and testing of the proposed method for enhancing code efficiency in edge computing environments using reinforcement learning. The successive chapters are as follows:

Chapter 1: Introduction

This chapter presents the background and context of edge computing, its associated challenges, and, most importantly, the issues related to code efficiency. It states the problem, defines the research objectives and questions, and sets out the significance, scope, and limitations of the study. In doing so, it lays the groundwork for the research and justifies the rationale behind the proposed DQN-based optimization framework and its applicability to the inefficiencies of edge computing systems.

Chapter 2: Literature Review

The second chapter broadly reviews the literature on edge computing, code optimization, machine learning, and reinforcement learning. It explores the theoretical and practical aspects of these domains, outlines gaps in current research, and justifies the necessity of this study. Particular attention is paid to the limitations of traditional optimization techniques and the potential of Deep Q-learning for tackling the dynamic, resource-constrained nature of edge environments.

Chapter 3: Methodology
This chapter presents the research design and methodology for developing and testing the

proposed DQN-based framework. It covers the simulation environment and tools used to model

edge computing conditions, the development of the DQN algorithm, and the key components of

the framework: the state space, the action space, and the reward function. The procedures for

training and testing in both simulation-based and real-world edge computing scenarios are also

outlined, ensuring the rigor and comprehensiveness of the evaluation.
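To make these components concrete, the key elements of a DQN-style formulation (state, actions, reward, and action selection) can be sketched as follows. This is a minimal illustrative sketch only, not the thesis's actual implementation: the state features, action names, and reward weights below are assumptions chosen for illustration.

```python
from dataclasses import dataclass
import random


@dataclass
class EdgeState:
    # Hypothetical state features an edge node might expose to the agent
    cpu_utilization: float     # fraction in [0, 1]
    memory_utilization: float  # fraction in [0, 1]
    latency_ms: float          # observed task latency


# Hypothetical discrete action space of candidate optimization decisions
ACTIONS = ["offload_to_neighbor", "compress_payload", "reduce_precision", "no_op"]


def reward(prev: EdgeState, new: EdgeState, energy_joules: float,
           w_latency: float = 1.0, w_energy: float = 0.5) -> float:
    """Reward latency improvement, penalize energy spent (weights are assumptions)."""
    latency_gain = prev.latency_ms - new.latency_ms
    return w_latency * latency_gain - w_energy * energy_joules


def epsilon_greedy(q_values: dict, epsilon: float = 0.1) -> str:
    """Standard epsilon-greedy selection over the discrete action space."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)  # explore
    return max(ACTIONS, key=lambda a: q_values.get(a, 0.0))  # exploit


if __name__ == "__main__":
    before = EdgeState(0.8, 0.6, 120.0)
    after = EdgeState(0.7, 0.6, 90.0)
    # latency gain of 30 ms minus energy penalty of 0.5 * 10 J
    print(reward(before, after, energy_joules=10.0))  # 25.0
```

In a full DQN, the dictionary of Q-values would be replaced by a neural network mapping states to action values, trained from replayed transitions; the sketch only fixes the interfaces the later chapters discuss.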

Chapter 4: Results and Analysis

This chapter presents the research results, describing in detail the performance achieved by the

DQN-based framework in optimizing code efficiency. It includes analyses of key metrics such as

execution time, energy consumption, resource utilization, and scalability. Comparative results

between the proposed framework and traditional optimization methods indicate the advantages

and disadvantages of the DQN approach. Visualizations, statistical analyses, and discussions of

the framework's effectiveness across diverse edge computing environments are also presented.

Chapter 5: Discussion

This chapter interprets the results in the context of the research questions and objectives. It

discusses the implications of the findings, highlighting the study's contributions to both the

academic and practical fields. The chapter also examines the challenges faced during the

research, such as computational overhead and real-time adaptability, and outlines possible

solutions and directions for future research.

Chapter 6: Conclusion
The final chapter summarizes the key findings and contributions of the study, reiterating its

importance in advancing edge computing. It highlights the practical implications of the proposed

framework and its potential to address real-world challenges in IoT networks, smart cities, and

autonomous systems. The limitations of the study are also discussed, together with

recommendations for future research, underlining the need for continued exploration of

intelligent optimization techniques in edge computing.

This structure ensures a logical flow from problem identification and justification of the

research to the development and validation of the proposed solution. Each chapter builds on the

previous one, culminating in a comprehensive analysis and discussion of the research findings

and their implications.

References

Abbas, N., Zhang, Y., & Taherkordi, A. (2017). Mobile edge computing: A survey. IEEE

Internet of Things Journal.


Abreha, H. G., Hayajneh, M., & Serhani, M. A. (2022). Federated learning in edge

computing: A systematic survey. Sensors, 22(2), 450.

Cao, K., Liu, Y., Meng, G., & Sun, Q. (2020). An overview of edge computing research.

IEEE Access.

Hartmann, M., & Hashmi, U. S. (2022). Edge computing in innovative healthcare systems:

Review, challenges, and research directions. Wiley Online Library.

Hassan, N., Yau, K. L. A., & Wu, C. (2019). Edge computing in 5G: A review. IEEE Access.

Khan, W. Z., Ahmed, E., Hakak, S., & Yaqoob, I. (2019). Edge computing: A survey. Future

Generation Computer Systems.

Lin, L., Liao, X., Jin, H., & Li, P. (2019). Computation offloading toward edge computing.

Proceedings of the IEEE.

Liu, F., Tang, G., Cai, Z., & Zhang, X. (2019). A survey on edge computing systems and

tools. Proceedings of the IEEE.

Liu, S., Liu, L., Tang, J., & Yu, B. (2019). Edge computing for autonomous driving:

Opportunities and challenges. Proceedings of the IEEE.

Mao, Y., You, C., Zhang, J., & Huang, K. (2017). Mobile edge computing: Survey and

research outlook. ResearchGate.

Xiao, Y., Jia, Y., Liu, C., & Cheng, X. (2019). Edge computing security: State of the art and

challenges. Proceedings of the IEEE.

Yang, R., Yu, F. R., & Si, P. (2019). Integrated blockchain and edge computing systems: A

survey, research issues, and challenges. IEEE Communications Surveys & Tutorials.


Yu, W., Liang, F., He, X., & Lin, J. (2017). A survey on edge computing for the Internet of

Things. IEEE Access.
