
Literature Survey Draft

The rise of the Internet of Things (IoT) has increased the demand for efficient
resource management, particularly in smart cities, where IoT devices generate vast
amounts of data that must be processed in real time. Edge computing, which
processes data closer to these devices, has emerged as a solution to reduce latency
and improve efficiency. However, edge nodes have limited computational resources,
creating the need for intelligent resource allocation mechanisms. In the proposed
system, machine learning algorithms prioritize IoT tasks based on urgency and
resource consumption. This dynamic allocation aims to optimize the performance of
edge nodes while ensuring minimal delays, especially in critical applications like
digital health and traffic management systems [1].

The methodology of the proposed system involves a task classification model
utilizing k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Logistic
Regression to categorize tasks into high, medium, and low priority levels. Requests
from IoT devices are assigned to edge nodes based on real-time resource
monitoring, ensuring efficient resource use. The system employs Docker for
simulating the edge environment and the MQTT protocol for communication
between devices. A key
innovation is its failure control mechanism, which reassigns tasks from failed nodes
to functioning ones, thereby maintaining service continuity. The results show that
the kNN model outperformed the other algorithms, achieving 92% accuracy, with
balanced CPU and memory usage across edge nodes, even under varying loads [1].
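
As a rough illustration of this classification step (not the authors' implementation), the
sketch below compares the three classifiers using scikit-learn. The feature set (urgency,
CPU demand, memory demand), the synthetic labelling rule, and all parameter values are
assumptions made here for illustration only.

# A minimal sketch (assumed details, not the published system): classify tasks
# into low / medium / high priority from urgency and resource-demand features,
# and compare kNN, SVM, and Logistic Regression as described in the survey above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic task records: [urgency, cpu_demand, mem_demand], each in [0, 1].
X = rng.random((600, 3))
# Illustrative labelling rule: a weighted score split into three priority classes.
score = 0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.2 * X[:, 2]
y = np.digitize(score, bins=[0.35, 0.6])  # 0 = low, 1 = medium, 2 = high

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {accuracy_score(y_test, model.predict(X_test)):.2f}")

In the actual system, the predicted priority would then drive which edge node a request is
dispatched to, based on the real-time resource monitoring described above.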

Despite the system's success, there are challenges in handling the complexity of
real-time data monitoring and the computational costs associated with machine
learning models. The system may encounter difficulties scaling as more edge nodes
and requests are introduced. To mitigate these drawbacks, optimization of machine
learning algorithms is necessary, potentially reducing computational overhead.
Moreover, robust failure management mechanisms are essential for enhancing
reliability in high-demand scenarios. These improvements could further refine
resource allocation and ensure uninterrupted service in critical IoT applications,
making the system more scalable and efficient in smart city environments [1].

The rise of IoT devices in smart city applications, such as real-time traffic
management, has led to a demand for effective task scheduling and resource
allocation techniques. Traditional cloud computing infrastructures are often
insufficient due to latency caused by the geographical distance between IoT devices
and cloud data centers. To address this, edge and fog computing have been
introduced, allowing data to be processed closer to the source. However, managing
IoT tasks and resources at the edge remains challenging, particularly in maintaining
low latency and optimal resource utilization. In this context, task scheduling
methods play a pivotal role, and existing solutions often address this issue from a
limited perspective. The work in [2] proposes a novel solution using game theory to schedule
IoT-edge tasks, considering both the preferences and constraints of IoT devices and
edge nodes.

The authors of [2] introduce an autonomous, multi-factor IoT-Edge scheduling method based on
game theory. Their methodology comprises two primary components: first, an
interaction mechanism where IoT devices and edge nodes evaluate each other
based on factors like latency and resource usage; second, the implementation of
centralized and distributed scheduling models. A unique Preference-Based Stable
Matching (PBSM) algorithm is proposed, which allows both IoT devices and edge
nodes to express their preferences in pairing with one another. The proposed
solution's goal is to reduce latency while maximizing resource utilization at the
edge. Simulation results show that the approach outperforms widely used Min-Min
and Max-Min scheduling algorithms, providing up to 40% better efficiency in
resource usage and a significant reduction in task execution time.
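
The excerpt does not spell out the PBSM algorithm itself, so the following is only a generic
deferred-acceptance (Gale-Shapley style) stable-matching sketch between IoT devices and edge
nodes. The device and node names, and the assumed preference criteria (devices rank nodes by
estimated latency, nodes rank devices by resource demand), are illustrative placeholders
rather than details taken from [2].

# A minimal stable-matching sketch: IoT devices "propose" to edge nodes in order
# of preference; each node keeps only the device it ranks highest so far.
def stable_match(device_prefs, node_prefs):
    """device_prefs / node_prefs: dict mapping id -> ordered list of preferred partners."""
    # Precompute each node's ranking of devices for O(1) comparisons.
    node_rank = {n: {d: r for r, d in enumerate(prefs)} for n, prefs in node_prefs.items()}
    free = list(device_prefs)            # devices that still need a node
    next_choice = {d: 0 for d in device_prefs}
    match = {}                           # node -> currently assigned device

    while free:
        d = free.pop(0)
        n = device_prefs[d][next_choice[d]]
        next_choice[d] += 1
        if n not in match:
            match[n] = d                 # node was unassigned: accept the proposal
        elif node_rank[n][d] < node_rank[n][match[n]]:
            free.append(match[n])        # node prefers the new device: swap
            match[n] = d
        else:
            free.append(d)               # node keeps its current device
    return {d: n for n, d in match.items()}

# Illustrative preferences for three devices and three edge nodes.
device_prefs = {
    "dev1": ["edgeA", "edgeB", "edgeC"],
    "dev2": ["edgeA", "edgeC", "edgeB"],
    "dev3": ["edgeB", "edgeA", "edgeC"],
}
node_prefs = {
    "edgeA": ["dev2", "dev1", "dev3"],
    "edgeB": ["dev3", "dev1", "dev2"],
    "edgeC": ["dev1", "dev2", "dev3"],
}

print(stable_match(device_prefs, node_prefs))
# e.g. {'dev2': 'edgeA', 'dev3': 'edgeB', 'dev1': 'edgeC'}

Rebuilding such preference lists as latency and load measurements fluctuate is exactly the
kind of overhead that the next paragraph identifies as a limitation in large-scale deployments.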

While the results demonstrate significant improvements, the proposed solution has
some limitations. The reliance on preference-based matching can complicate
implementation, especially in large-scale IoT environments with frequent
fluctuations in task and resource demands. Furthermore, centralized scheduling
models may introduce single points of failure, reducing the system's scalability and
resilience. To address these issues, future research could explore hybrid models that
combine centralized and distributed scheduling, dynamic preference adjustment
algorithms, and the integration of machine learning techniques to predict task
demands and optimize resource allocation in real time. Incorporating fault-tolerant
mechanisms would also improve the system's robustness, ensuring consistent
performance in large-scale IoT networks [2].

References:

1. G. Alves Araújo, S. F. da Costa Bezerra, and A. R. da Rocha, "Resource Allocation
   Based on Task Priority and Resource Consumption in Edge Computing," Journal of
   Internet Services and Applications, vol. 15, pp. 1–25, Sept. 2024.
2. A. Bandyopadhyay, V. Mishra, S. Swain, K. Chatterjee, S. Dey, S. Mallik, A. Al-
Rasheed, M. Abbas, and B. O. Soufiene, "EdgeMatch: A Smart Approach for
Scheduling IoT-Edge Tasks with Multiple Criteria Using Game Theory," IEEE
Access, vol. 12, pp. 7609–7621, Jan. 2024.
