Literature Survey Draft
The rise of the Internet of Things (IoT) has increased the demand for efficient
resource management, particularly in smart cities, where IoT devices generate vast
amounts of data that need to be processed in real time. Edge computing, which
processes data closer to these devices, has emerged as a solution to reduce latency
and improve efficiency. However, edge nodes have limited computational resources,
creating the need for intelligent resource allocation mechanisms. In the system
proposed in [1], machine learning algorithms prioritize IoT tasks based on urgency and
resource consumption. This dynamic allocation aims to optimize the performance of
edge nodes while ensuring minimal delays, especially in critical applications like
digital health and traffic management systems [1].
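As a rough sketch of how such priority-driven allocation might work (the exact model used in [1] is not detailed here, so the weighted scoring rule, task names, and capacity values below are illustrative assumptions), a scheduler can rank tasks by combining urgency with resource demand and then admit tasks greedily until an edge node's capacity is exhausted:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class IoTTask:
    priority: float
    name: str = field(compare=False)
    cpu_demand: float = field(compare=False)   # fraction of one edge node's CPU

def score(urgency: float, cpu_demand: float, w_urgency: float = 0.7) -> float:
    """Hypothetical weighted priority: urgent, lightweight tasks rank first.
    The negation makes higher-priority tasks the smallest heap elements."""
    return -(w_urgency * urgency + (1.0 - w_urgency) * (1.0 - cpu_demand))

def allocate(tasks, node_capacity: float):
    """Greedily admit the highest-priority tasks that still fit on the node."""
    heap = list(tasks)
    heapq.heapify(heap)                        # min-heap on the negated score
    scheduled, remaining = [], node_capacity
    while heap:
        task = heapq.heappop(heap)
        if task.cpu_demand <= remaining:
            scheduled.append(task.name)
            remaining -= task.cpu_demand
    return scheduled

tasks = [
    IoTTask(score(0.9, 0.3), "ambulance-telemetry", 0.3),
    IoTTask(score(0.4, 0.5), "traffic-camera-batch", 0.5),
    IoTTask(score(0.7, 0.2), "patient-vitals", 0.2),
]
print(allocate(tasks, node_capacity=0.8))
# urgent, lightweight tasks are scheduled first; the bulk job waits
```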
Despite the system's success, there are challenges in handling the complexity of
real-time data monitoring and the computational costs associated with machine
learning models. The system may encounter difficulties scaling as more edge nodes
and requests are introduced. To mitigate these drawbacks, the machine learning
models need to be optimized to reduce their computational overhead.
Moreover, robust failure management mechanisms are essential for enhancing
reliability in high-demand scenarios. These improvements could further refine
resource allocation and ensure uninterrupted service in critical IoT applications,
making the system more scalable and efficient in smart city environments [1].
The rise of IoT devices in smart city applications, such as real-time traffic
management, has led to a demand for effective task scheduling and resource
allocation techniques. Traditional cloud computing infrastructures are often
insufficient due to latency caused by the geographical distance between IoT devices
and cloud data centers. To address this, edge and fog computing have been
introduced, allowing data to be processed closer to the source. However, managing
IoT tasks and resources at the edge remains challenging, particularly in maintaining
low latency and optimal resource utilization. In this context, task scheduling
methods play a pivotal role, and existing solutions often address this issue from a
limited perspective. The work in [2] proposes a novel solution using game theory to schedule
IoT-edge tasks, considering both the preferences and constraints of IoT devices and
edge nodes.
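To illustrate the flavour of preference-based task-to-node matching, the sketch below uses a simplified one-to-one deferred-acceptance procedure; the actual game-theoretic formulation in [2] may use many-to-one matching and richer constraints, and the task and node names here are hypothetical:

```python
def match_tasks_to_nodes(task_prefs, node_prefs):
    """Deferred-acceptance matching between IoT tasks and edge nodes,
    where each side ranks the other and tasks 'propose' in preference order."""
    # rank[node][task] = position of task in that node's preference list
    rank = {n: {t: i for i, t in enumerate(prefs)} for n, prefs in node_prefs.items()}
    free = list(task_prefs)              # tasks not yet assigned
    next_choice = {t: 0 for t in task_prefs}
    engaged = {}                         # node -> currently assigned task

    while free:
        task = free.pop(0)
        node = task_prefs[task][next_choice[task]]
        next_choice[task] += 1
        current = engaged.get(node)
        if current is None:
            engaged[node] = task         # node was idle, accept the task
        elif rank[node][task] < rank[node][current]:
            engaged[node] = task         # node prefers the new task
            free.append(current)         # displaced task tries its next choice
        else:
            free.append(task)            # node rejects; task tries its next choice
    return {t: n for n, t in engaged.items()}

task_prefs = {"t1": ["edge_a", "edge_b"], "t2": ["edge_a", "edge_b"]}
node_prefs = {"edge_a": ["t2", "t1"], "edge_b": ["t1", "t2"]}
print(match_tasks_to_nodes(task_prefs, node_prefs))
# {'t2': 'edge_a', 't1': 'edge_b'} -- both sides end up with a stable assignment
```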
While the results demonstrate significant improvements, the proposed solution has
some limitations. The reliance on preference-based matching can complicate
implementation, especially in large-scale IoT environments with frequent
fluctuations in task and resource demands. Furthermore, centralized scheduling
models may introduce single points of failure, reducing the system's scalability and
resilience. To address these issues, future research could explore hybrid models that
combine centralized and distributed scheduling, dynamic preference adjustment
algorithms, and the integration of machine learning techniques to predict task
demands and optimize resource allocation in real time. Incorporating fault-tolerant
mechanisms would also improve the system's robustness, ensuring consistent
performance in large-scale IoT networks [2].
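A minimal sketch of such a fault-tolerant step, assuming a simple least-loaded failover policy rather than anything specified in [2] (the node names and load metric are purely illustrative):

```python
def rebalance_on_failure(assignments, healthy_nodes):
    """Toy fault-tolerance step: tasks assigned to failed nodes are
    re-dispatched to the least-loaded node that is still healthy."""
    load = {n: sum(1 for node in assignments.values() if node == n)
            for n in healthy_nodes}
    recovered = {}
    for task, node in assignments.items():
        if node not in healthy_nodes:            # original node has failed
            target = min(load, key=load.get)     # pick the least-loaded survivor
            recovered[task] = target
            load[target] += 1
        else:
            recovered[task] = node               # healthy assignments are kept
    return recovered

assignments = {"t1": "edge_a", "t2": "edge_b", "t3": "edge_b"}
print(rebalance_on_failure(assignments, healthy_nodes=["edge_a", "edge_c"]))
# tasks on the failed edge_b are spread across the remaining healthy nodes
```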
References: