
Received 6 August 2023, accepted 19 September 2023, date of publication 25 September 2023, date of current version 2 October 2023.

Digital Object Identifier 10.1109/ACCESS.2023.3318553

MTD-DHJS: Makespan-Optimized Task Scheduling Algorithm for Cloud Computing With Dynamic Computational Time Prediction
PALLAB BANERJEE1, SHARMISTHA ROY1, ANURAG SINHA2, MD. MEHEDI HASSAN3 (Member, IEEE), SHRIKANT BURJE4, ANUPAM AGRAWAL5, ANUPAM KUMAR BAIRAGI3 (Senior Member, IEEE), SAMAH ALSHATHRI6, AND WALID EL-SHAFAI7,8 (Senior Member, IEEE)
1 Department of Computing and Information Technology, Usha Martin University, Ranchi 835103, India
2 Department of Computer Science and Information Technology, Indira Gandhi National Open University (IGNOU), New Delhi 110068, India
3 Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
4 Department of Electronics and Telecommunication, Christian College of Engineering and Technology, Bhilai, Chhattisgarh 490026, India
5 Department of Electrical and Electronics Engineering, Bhilai Institute of Technology (BIT), Durg, Chhattisgarh 491001, India
6 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
7 Security Engineering Laboratory, Computer Science Department, Prince Sultan University, Riyadh 11586, Saudi Arabia
8 Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt

Corresponding authors: Samah Alshathri ([email protected]), Md. Mehedi Hassan ([email protected]), and
Walid El-Shafai ([email protected])
This work was supported by Princess Nourah bint Abdulrahman University Researchers, Princess Nourah bint Abdulrahman University,
Riyadh, Saudi Arabia, under Project PNURSP2023R197.

ABSTRACT Cloud computing has revolutionized the management and analysis of data for organizations,
offering scalability, flexibility, and cost-effectiveness. Effective task scheduling in cloud systems is crucial
to optimize resource utilization and ensure timely job completion. This research presents a novel method
for job scheduling in cloud computing, employing the Johnson Sequencing algorithm across three servers.
Originally developed for scheduling tasks in a manufacturing context, the Johnson Sequencing method
has proven successful in resolving task scheduling challenges. Here, we adapt this method to address
job scheduling among three servers within a cloud computing environment. The primary objective of the
algorithm is to minimize the makespan, representing the total time required to complete all tasks. This
study considers a scenario where a diverse set of jobs, each with varying processing durations, needs
to be distributed across three servers using the Johnson Sequencing method. The algorithm strategically
determines the optimal order for task execution on each server while accounting for job interdependencies
and processing times on the individual servers. To put the Johnson Sequencing algorithm into practice
for cloud computing job scheduling, we propose a three-step approach. First, we construct a precedence
graph by analyzing the relationships among jobs. Subsequently, the precedence graph is transformed into a
two-machine Johnson Sequencing problem by allocating jobs to servers. Finally, we employ the Dynamic
Heuristic Johnson Sequencing method to determine the best order of jobs on each server, effectively
minimizing the makespan. Through comprehensive simulations and testing, we compare the performance of
our suggested Dynamic Heuristic Johnson Sequencing technique with existing scheduling algorithms. The
results demonstrate significant improvements in terms of makespan reduction and resource utilization when
employing our proposed method with three servers. Furthermore, our approach exhibits remarkable scalability and effectiveness in resolving complex job scheduling challenges within cloud computing settings.

The outcomes of this research contribute to the optimization of resource allocation and task management in cloud systems, offering potential benefits to a wide range of industries and applications.

The associate editor coordinating the review of this manuscript and approving it for publication was Zhiwu Li.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/ (IEEE Access, Volume 11, 2023).

INDEX TERMS Johnson sequencing, dynamic heuristic Johnson sequencing analysis, makespan, priority scheduling, round robin scheduling, FCFS scheduling.
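As background for the sequencing method named in the index terms, the classic two-machine Johnson's rule can be sketched in a few lines. This is a minimal illustration of the textbook baseline, not the paper's MTD-DHJS extension, and the job times are invented for demonstration:

```python
def johnson_two_machine(jobs):
    """Order jobs on two machines (A then B) to minimize makespan.

    Classic Johnson's rule: jobs whose time on A does not exceed their
    time on B go first, sorted ascending by A-time; the remaining jobs
    go last, sorted descending by B-time.
    `jobs` maps job name -> (time_on_A, time_on_B).
    """
    front = sorted((j for j in jobs if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j in jobs if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

# Hypothetical processing times, not taken from the paper's experiments.
jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2), "J4": (6, 6), "J5": (7, 5)}
print(johnson_two_machine(jobs))  # ['J3', 'J1', 'J4', 'J5', 'J2']
```

Jobs with a short first-machine time are pushed to the front so machine B is kept busy early, which is what makes the rule makespan-optimal for the two-machine case.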

I. INTRODUCTION

The contemporary landscape of technology and science has been significantly influenced by the remarkable relevance of cloud computing. This transformative technology empowers users with access to resources on a "pay per use" basis, leading to efficient resource allocation and utilization. The diverse cloud models incorporate several scheduling algorithms and virtual machine (VM) allocation processes to cater to varying demands. However, ensuring seamless resource provisioning in response to fluctuating workloads remains a formidable challenge for cloud service providers [1]. To enhance system efficiency and meet service level agreements, effective resource management strategies, dynamic resource allocation, and strategic planning are of paramount importance in orchestrating the capacity workflow. This paper delves into the intricacies of resource management and workload optimization within cloud computing environments, with a focus on addressing customer demands while efficiently utilizing the resources available in data centers (DCs), which comprise a collection of numerous physical devices.

Cloud technology facilitates the generation of virtual machines on physical computers based on user requests [2]. Customer requirements for cloud services are influenced by a multitude of factors, encompassing deadline constraints, cost considerations, compensation rates, start times, execution durations, and the number of virtual machines needed [3]. Efficient cloud computing entails managing multiple applications concurrently and effectively distributing diverse resources. Capacity management systems play a critical role in allocating resources among various applications, recycling resources from completed tasks, and optimizing their deployment to meet demand [4].

Cloud service providers (CSPs) meticulously employ resource management methods, as resources such as RAM, memory, processors, input/output (I/O) devices, additional data centers (DCs), and network traffic are inherently limited. Consequently, a pay-per-use-demand model is adopted to furnish users with specific resource quantities, thus preventing resource underutilization and overutilization [5].

To maximize resource utilization and system efficiency, cloud computing necessitates the implementation of robust scheduling methodologies [6]. Cloud service providers strive for seamless access to skilled partners who can augment their services and enhance the overall cloud infrastructure. This paper explores diverse resource management and scheduling strategies to optimize cloud service provisioning, considering the intricacies of resource allocation, application deployment, and the dynamic nature of user demands. The proposed approaches aim to achieve improved resource utilization, better service quality, and enhanced customer satisfaction in cloud computing environments.

Cloud computing has revolutionized the way customers interact with cloud tiers, allowing them to introduce programs and reap benefits based on their specific needs [7]. A key player in this ecosystem is the cloud broker, which provides a platform to collect user information, analyze it, and communicate with Cloud Service Providers (CSPs) on behalf of customers while also offering billing services. The cloud broker's information-integrating capabilities can be seamlessly integrated into any cloud networking add-on [8], enabling customers to monitor the execution times of their requests, track resource utilization, and assess waiting times.

To improve user experience and optimize resource allocation, this research explores the integration of Johnson Queuing theory and scheduling techniques [9]. By leveraging these methods, wait times for user requests can be effectively reduced, enhancing overall service efficiency. Cloud brokers seek streamlined access to expert providers' cloud services to augment their service offerings [10]. In turn, clients can benefit from leveraging the Cloud Merchant platform to gather advantages and introduce tailored programs at the cloud level, facilitated by the cloud broker's comprehensive support in information handling and service interactions.

This paper delves into the intricacies of cloud brokering, analyzing its role in enhancing resource management, optimizing service delivery, and streamlining user interactions in cloud computing environments. By exploring the potential of Johnson Queuing theory and scheduling techniques, this study aims to offer novel insights into improving cloud services and enriching customer experiences. The proposed framework, coupled with the Cloud Merchant's capabilities, has the potential to shape a more efficient and user-centric cloud computing landscape.

In the context of cloud computing, cloud brokers play a pivotal role by providing valuable information-integrating capabilities to any cloud resource add-on [11]. These capabilities empower users to monitor various operations, such as the execution duration of each user request, the utilization of data facilities, and the waiting time for each request. To optimize user request scheduling and reduce wait times, cloud computing leverages scheduling techniques like Johnson scheduling and queuing theory.

This research addresses the task-scheduling challenge in cloud computing environments through the adaptation of a modified dynamic Round Robin scheduling algorithm [12]. The algorithm is aimed at enhancing task scheduling efficiency, benefiting both cloud service providers (CSPs) and users. Cloud infrastructures typically comprise numerous data centers housing multiple physical machines, each hosting several virtual machines (VMs) responsible for executing client tasks with diverse Quality of Service (QoS) requirements.
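The introduction leans on queuing theory when reasoning about request wait times. As a self-contained illustration (this is the standard M/M/1 result from queueing theory, not a formula derived in the paper), the mean queueing delay for a single server with Poisson arrivals at rate λ and exponential service at rate μ is Wq = λ / (μ(μ − λ)):

```python
def mm1_wait(arrival_rate, service_rate):
    """Mean time a request waits in queue for a single M/M/1 server.

    Standard result: Wq = lam / (mu * (mu - lam)), valid only when the
    server is stable, i.e. arrival_rate < service_rate.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    return arrival_rate / (service_rate * (service_rate - arrival_rate))

# Hypothetical load: 8 requests/s arriving at a server that handles 10/s.
print(round(mm1_wait(8.0, 10.0), 3))  # 0.4 -> 0.4 s average queueing delay
```

The formula makes the broker's trade-off visible: as utilization λ/μ approaches 1, the waiting time grows without bound, which is why schedulers try to keep per-server load balanced.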


By integrating cloud broker services with dynamic scheduling techniques, this study seeks to improve cloud resource management, optimize task scheduling, and enhance overall user experience. The proposed approach is expected to foster better resource allocation, reduced wait times for user requests, and improved utilization of cloud resources. The findings of this research contribute to the advancement of cloud computing practices, offering potential benefits to both cloud service providers and end-users seeking efficient and reliable cloud services.

Cloud services are offered on a pay-as-you-go basis [13]. However, several factors contribute to delays in processing client requests, including holding periods, return time for client solicitations, processor waste, and resource inefficiencies. Addressing these challenges necessitates effective task scheduling and resource allocation strategies. The Task Scheduling Problem (TSP), an NP-hard computational challenge, plays a critical role in efficiently allocating processing resources to application tasks [14].

Distributed computing has emerged as a virtualized paradigm where programs are executed transparently, shielded from the complexities by the cloud infrastructure. In parallel with essential utilities like water and energy, cloud computing has acquired significant importance [15]. It offers dynamic provisioning of resources and a robust platform to address various challenges, including efficient request management under the pay-as-you-go model [16]. Owing to its reliability, scalability, and cost-effectiveness, cloud computing has gained immense popularity in tackling diverse computational challenges [17].

Services in cloud computing are provided to clients based on mutual understanding between the client and the Cloud Service Provider (CSP). These services are executed across a set of tasks, giving rise to the concept of re-serving, wherein tasks may be reallocated for optimal efficiency. This research aims to optimize client experience and service efficiency in cloud computing by exploring dynamic resource allocation and task scheduling methodologies. By addressing issues such as delays in processing client solicitations and efficient processor and resource utilization, this study seeks to contribute valuable insights to the advancement of cloud computing practices, leading to improved service quality and customer satisfaction.

Task scheduling in the context of cloud computing presents a challenging computational problem, known to be NP-complete [18]. The objective of task scheduling is to optimize specific parameters such as makespan, resource utilization, and power consumption by determining the order in which tasks are executed on virtualized machines. Cloud-specialized companies deploy diverse machine types in their data centers to provide timely services. However, no data center possesses unlimited resources to meet all client demands, especially during peak hours. Consequently, multiple data centers collaborate, offering various services to clients, leading to the emergence of multi-cloud environments as a prevailing trend in distributed computing.

However, scheduling tasks in multi-cloud environments becomes considerably more complex due to each cloud having its own task scheduler. The term "makespan" denotes the total time taken from task submission to task completion. Ensuring timely task completion is vital, but resource utilization, particularly with virtual machines, must also be optimized to maximize resource efficiency. Existing task scheduling algorithms tend to prioritize either the schedule or resource utilization. Striking a balance between these competing objectives is crucial to achieve optimal outcomes [19].

This research aims to address the task scheduling challenge in multi-cloud environments by devising novel algorithms that strike a balance between makespan reduction and enhanced resource utilization. By leveraging state-of-the-art scheduling techniques and resource allocation strategies, the study seeks to offer valuable insights into the optimization of task execution in multi-cloud environments. The findings of this research contribute to advancing the field of cloud computing, ultimately enhancing service efficiency and user experience across diverse cloud-based applications.

A. OBJECTIVE OF THE STUDY
In this paper, we have comprehensively explored various task scheduling algorithms in the context of cloud computing. Specifically, we investigated First-Come-First-Serve (FCFS), Round Robin, and Priority Scheduling using a single server, as well as FCFS and Johnson Sequencing using a two-server machine. Additionally, we delved into FCFS and Dynamic Heuristic Johnson Sequencing using a multi-server machine in the preceding sections.

The objective of this study is to propose a novel model aimed at minimizing the processing time of jobs within cloud computing environments. To achieve this, we utilized Gantt charts to analyze a specific sample of jobs or tasks and determine their total execution time. Experimental analysis, presented through tables, enabled the identification of the total execution time for each task.

Notably, our proposed Dynamic Heuristic Round Robin Scheduling approach exhibits significant advantages. It effectively reduces several key performance metrics, including the system's total turnaround time (TAT), average waiting time (AWT), mean number of tasks waiting in the queue, mean number of tasks waiting in the system, average waiting time of tasks in the queue, and average waiting time of tasks in the system.

By employing advanced scheduling techniques and dynamic heuristics, we contribute to the optimization of task execution in cloud computing environments. The findings of this research pave the way for more efficient resource utilization, reduced waiting times, and improved overall system performance.
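The waiting-time and turnaround metrics listed in the objective above can be made concrete with a small FCFS calculation. The burst times below are invented for illustration and are not the paper's experimental data:

```python
def fcfs_metrics(burst_times):
    """Average waiting time and turnaround time under FCFS on one server.

    All tasks are assumed to arrive at time 0; the waiting time of a
    task is the sum of the bursts scheduled before it, and its
    turnaround time is its waiting time plus its own burst.
    """
    waiting, turnaround, elapsed = [], [], 0
    for burst in burst_times:
        waiting.append(elapsed)
        elapsed += burst
        turnaround.append(elapsed)
    n = len(burst_times)
    return sum(waiting) / n, sum(turnaround) / n

# Hypothetical bursts (time units) for five tasks.
awt, atat = fcfs_metrics([4, 3, 5, 2, 6])
print(awt, atat)  # 7.4 11.4
```

Re-ordering the same bursts (for example, shortest first) changes both averages, which is exactly the degree of freedom the scheduling algorithms compared in this paper exploit.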


The proposed model presents a promising step towards enhancing the efficiency and effectiveness of cloud computing services, benefiting both cloud service providers and end-users. However, further research and validation are essential to assess the proposed model's performance under diverse cloud scenarios and workloads. Moreover, real-world implementation and experimentation will be critical to ascertain its practical applicability and efficiency in actual cloud computing environments. As cloud technology continues to evolve, ongoing research in task scheduling remains crucial to meet the growing demands of cloud-based applications and services. By addressing these research gaps, we can unlock the full potential of cloud computing and ensure its continued success in supporting diverse industries and applications.

B. PROBLEM STATEMENT
In a cloud computing environment, challenges arise due to resource heterogeneity, uncertainty, and dispersion, leading to issues in resource allocation that remain unaddressed by existing policies. To mitigate these challenges and ensure efficient workload distribution, the use of an edge load balancer becomes imperative. The edge load balancer aims to evenly distribute workloads across available processors, minimizing congestion and delays.

In this context, network routes play a critical role in enhancing network resilience, and traffic sharing is employed in routing and assignment processes to bolster network robustness. As depicted in Figure 1, the block diagram illustrates that the proposed approach exhibits superior performance compared to existing dynamic load balancing methods. Johnson's pioneering study focused on a seemingly simple problem: managing n jobs on two machines, A and B, with strict constraints on job sequencing and execution. In the flow shop model, every job must be processed on both machines, first on machine A and then on machine B.

Addressing resource allocation challenges in cloud environments necessitates innovative solutions that consider the intricacies of resource heterogeneity and network dynamics. The proposed edge load balancer and routing strategies offer promising avenues to enhance resource utilization, reduce latencies, and achieve efficient task scheduling in cloud computing settings. Nevertheless, further research and empirical evaluations are vital to validate the proposed approaches' effectiveness and scalability under diverse workload conditions and real-world cloud scenarios. By continuously exploring advancements in resource allocation policies and load balancing techniques, we can better address the complexities of cloud computing and provide optimized services to users and organizations alike.

C. MAJOR CONTRIBUTIONS
The device structure has been meticulously designed to accommodate a modified dynamic heuristic round-robin scheduling method, aimed at minimizing waiting times. The rapid parallel processing of data is imperative for effective distributed resource allocation, project assignment, and data analysis in cloud computing environments. To optimize resource provisioning and scaling and align them with the assigned network file devices, a foundational level of workload stability is essential.

i. The DHJS algorithm has proven to be a widely used solution for resolving complex engineering and scheduling problems, with the goal of maximizing efficiency and cost-effectiveness. In this research, we present a novel method that accurately enhances resource availability in the context of parallel processing demands within cloud environments [20].
ii. Leveraging insights from earlier scheduling approaches, this study proposes a two-level strategy to improve work scheduling performance and reduce inefficiencies through the Johnson Bayes design principle for task scheduling. By integrating Johnson's rule and Heuristic Dynamic Round Robin, we consider the unique characteristics of multiprocessor scheduling, leading to accelerated algorithm convergence. Johnson's rule is strategically employed throughout the decoding process to minimize the makespan on each machine.
iii. As a result of this approach, a diverse set of virtual machines (VMs) is created, expediting the generation of virtual machines within the context of task scheduling. The second level entails dynamic task matching with specific VMs, necessitating dynamic task scheduling methods. Test findings validate that the proposed methods effectively balance resource demands and enhance cloud scheduling performance when compared to existing approaches. Addressing task scheduling remains one of the prominent challenges in cloud computing.
iv. A significant contribution of our research lies in the reduction of resource management costs for numerous tenant user infrastructures, achieved through an equitably dispersed data center approach [21].

Our research showcases the potency of the combined Johnson's rule and Heuristic Dynamic Round Robin algorithm, which caters to the nuances of multiprocessor scheduling. With the incorporation of new components and processes, we have accelerated the algorithm's convergence during the decoding process, leading to enhanced performance. To ascertain the effectiveness of the DHJS algorithm, we conducted comparisons with the list scheduling technique and an improved version through comprehensive simulations. The results unequivocally affirm the DHJS algorithm's reliability and efficacy in addressing task scheduling challenges within cloud computing environments.

D. PAPER ORGANIZATION
In this study, we conducted a comprehensive examination of various distinctive scheduling algorithms to discern the relevant qualities that merit consideration and those that may be deemed less relevant in specific systems.
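Returning to the two-machine flow shop described in the problem statement, the makespan of any fixed job order follows a simple recurrence: a job may start on machine B only after it leaves machine A and after B finishes the previous job. The sketch below is a generic illustration with invented job times, not the paper's MTD-DHJS procedure:

```python
def flow_shop_makespan(order, times):
    """Makespan of a job order on a two-machine flow shop (A then B).

    `times` maps job -> (a, b). Machine A processes jobs back to back;
    machine B starts each job only once it has left A and B itself is
    free, so completion on B is max(prev B finish, A finish) + b.
    """
    done_a = done_b = 0
    for job in order:
        a, b = times[job]
        done_a += a                       # A runs the jobs consecutively
        done_b = max(done_b, done_a) + b  # B waits for A and for itself
    return done_b

# Hypothetical times; a Johnson-style order versus a naive one.
times = {"J1": (3, 6), "J2": (5, 2), "J3": (1, 2)}
print(flow_shop_makespan(["J3", "J1", "J2"], times))  # 12
print(flow_shop_makespan(["J2", "J1", "J3"], times))  # 16
```

Comparing a Johnson-style order with an arbitrary one makes visible the makespan gap that sequencing rules of this kind are designed to close.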


FIGURE 1. Block diagram of the proposed model, illustrating the components and resources of the real-time, three-server scheduling system for a cloud environment.

The literature review encompasses diverse perspectives and is thoughtfully organized across the subsequent sections. Firstly, we provide an extensive evaluation of numerous scheduling techniques that have been extensively discussed in the literature over the past decade. This section serves as a comprehensive repository of valuable insights into the strengths and limitations of each scheduling approach. Then, we systematically organize prior task scheduling initiatives based on the adopted methodologies, tools, and parameter-based metrics. Lastly, we conclude the paper by highlighting the key findings of our study and offering suggestions for future research directions. By identifying research gaps and potential areas of exploration, this section aims to inspire further advancements in the field of task scheduling for cloud computing environments.

II. RELATED WORK
Task scheduling is a critical concern in distributed computing environments, particularly in cloud computing. Effective scheduling strategies aim to minimize task wait times and enhance overall cloud functionality to maximize benefits. The objective of employing various scheduling algorithms is to identify an appropriate task order that minimizes the overall task execution time. Given the distributed and heterogeneous nature of cloud environments, traditional scheduling algorithms may not be directly applicable. Thus, it becomes essential to develop scheduling algorithms specifically tailored for cloud systems [22]. By addressing the unique challenges posed by cloud environments, these customized scheduling algorithms can optimize resource allocation, reduce latencies, and improve overall system performance, ultimately leading to enhanced benefits for cloud service providers and users alike. Effective task scheduling is instrumental in harnessing the full potential of cloud computing and meeting the growing demands of diverse applications and services in the digital era. As researchers, exploring novel and efficient scheduling algorithms tailored to cloud environments is crucial to continuously enhance cloud services and drive advancements in the field of distributed computing.

In the domain of VM selection for application scheduling, Naik et al. [23] have proposed an innovative hybrid multi-objective heuristic technique, integrating the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and the Gravitational Search Algorithm (GSA). By combining the strengths of both NSGA-II and GSA, this hybrid approach aims to enhance the efficiency and effectiveness of the scheduling process. While GSA utilizes good solutions to search for optimal answers and avoid algorithmic stagnation, NSGA-II widens the exploration area through comprehensive investigation. The primary objective of this hybrid algorithm is to achieve superior job scheduling outcomes, focusing on three key aspects: maximizing the number of scheduled jobs, minimizing overall energy consumption, and simultaneously attaining the shortest response time and lowest cost.
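For readers unfamiliar with the machinery behind NSGA-II, its non-dominated sorting rests on a Pareto-dominance test over the objective vector. The sketch below is a generic illustration of that test, with the maximized objective negated so that every entry is minimized; the scores are invented and this is not the authors' hybrid algorithm:

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (all minimized).

    u dominates v when it is no worse in every objective and strictly
    better in at least one -- the comparison NSGA-II's non-dominated
    sorting applies pairwise to rank candidate schedules.
    """
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

# Hypothetical schedules scored as (energy, response_time, -jobs_scheduled);
# the maximized objective (jobs scheduled) is negated to minimize uniformly.
s1 = (120.0, 3.5, -40)
s2 = (150.0, 3.5, -38)
s3 = (110.0, 4.0, -40)
print(dominates(s1, s2))  # True: s1 is no worse anywhere, better in two
print(dominates(s1, s3))  # False: s3 uses less energy, so neither dominates
```

Schedules that no other schedule dominates form the Pareto front, from which a multi-objective scheduler picks its trade-off between throughput, energy, response time, and cost.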


By jointly optimizing these multiple objectives, the proposed algorithm seeks to strike a balance between performance metrics, enabling better VM selection for application scheduling. It is important to note that existing scheduling algorithms across VMs do not address the specific requirements and objectives considered in this hybrid approach. Thus, the proposed NSGA-II and GSA hybrid technique introduces a novel and promising solution to address the complexities of VM selection and application scheduling, with potential implications for optimizing cloud computing performance and resource utilization. However, further research and evaluation are needed to validate the efficacy and scalability of this hybrid algorithm under varying workload conditions and across diverse cloud computing environments. As researchers, we continue to explore innovative methodologies and algorithms to advance the field of cloud computing and ensure the provision of efficient and cost-effective cloud services to users and organizations.

Keshk et al. [24] introduced the Modified Ant Colony Optimization for Load Balancing (MACOLB) method to efficiently distribute incoming workloads among virtual machines (VMs) in cloud computing environments. The MACOLB method employs a workload balancing strategy that considers the processing capacities of VMs. The distribution of jobs to VMs is done in descending order based on their processing capabilities, with tasks allocated first to the most powerful VM and so on. The primary objectives of the MACOLB method include reducing the makespan (i.e., the total time to complete all tasks), achieving a balanced system load, and optimizing resource allocation for batch jobs in public cloud environments. By effectively balancing workloads and resource allocation, the MACOLB method aims to enhance system performance and response times, ultimately leading to improved cloud service quality. Despite its strengths, one notable limitation of the MACOLB method lies in its approach to workload sharing across VMs. This weakness could potentially impact the algorithm's overall efficiency and resource utilization in certain scenarios. As researchers, we recognize the significance of addressing such limitations to advance load balancing techniques and ensure optimal resource allocation in cloud computing environments. Future research could focus on refining the MACOLB method to overcome its limitations, as well as conducting comparative evaluations with other state-of-the-art load balancing algorithms. Moreover, investigating the scalability and performance of the MACOLB method under different cloud workloads and configurations will contribute to a comprehensive understanding of its applicability and effectiveness. By continuously exploring innovative load balancing methodologies, we aim to enhance the efficiency

To address the VM scheduling problem and optimize system performance, Maguluri et al. introduced a novel approach that departs from traditional assumptions. Their methodology encompasses two key components: the joint-shortest-queue (JSQ) routing technique and the Myopic MaxWeight scheduling policy. By categorizing VMs into distinct groups, corresponding to specific resource pools such as processor, storage, and space, the researchers aimed to efficiently allocate incoming requests. The JSQ routing technique plays a pivotal role in this approach, directing incoming connections to virtual servers with the shortest queue lengths. Moreover, considering the user-requested VM type, the incoming requests are intelligently distributed across the virtual servers, leading to an enhanced allocation strategy. A significant contribution of the work is the development of virtually throughput-optimal rules, which can be achieved by selecting appropriately long frame lengths, as demonstrated through theoretical research [25]. Notably, the simulation results reinforce the efficacy of the proposed rules in generating favorable latency outcomes, further validating the potential of this approach in optimizing system performance. However, while the findings are promising, there remain opportunities for further investigation and analysis. Future research could explore the scalability and robustness of the JSQ routing technique and Myopic MaxWeight scheduling policy under varying workload conditions and diverse cloud computing environments. Additionally, comparative studies with other state-of-the-art VM scheduling approaches could provide valuable insights into the relative advantages and limitations of this novel methodology. As researchers, our goal is to continuously advance the field of VM scheduling and resource allocation in cloud computing environments. By exploring innovative techniques and conducting empirical evaluations, we aim to contribute to the ongoing efforts in enhancing the efficiency, responsiveness, and overall performance of cloud services, ultimately benefiting cloud service providers and end-users alike.

In the realm of public cloud computing, numerous heuristic algorithms have been developed and employed to effectively schedule diverse jobs. Some of the most noteworthy advancements in heuristic methods include the First Come, First Serve (FCFS) algorithm, the Min-Max algorithm, the Min-Min algorithm, and the Sufferage heuristic. Additionally, Greedy Scheduling, Shortest Task First (STF), Sequence Scheduling, Balance Scheduling (BS), Opportunistic Load Balancing, and Min-Min Opportunistic Load Balancing are among the other significant breakthroughs in this domain [14], [26], [27]. These heuristic algorithms play a crucial role in task scheduling within the public cloud, aiming to optimize various performance metrics such as job completion times, resource utilization, and system efficiency. Each algorithm approaches the scheduling problem from a differ-
and resource utilization of cloud computing systems, thereby ent perspective, employing specific rules and strategies to
offering more reliable and responsive cloud services to achieve the desired objectives. As researchers, it is imper-
end-users. ative to continuously explore and evaluate the efficacy of

VOLUME 11, 2023 105583


P. Banerjee et al.: MTD-DHJS: Makespan-Optimized Task Scheduling Algorithm for Cloud Computing

these heuristic algorithms under varying workloads and cloud environments. Comparative studies that assess the strengths and weaknesses of different algorithms will aid in identifying the most suitable approach for specific use cases and cloud service scenarios. Furthermore, devising novel heuristic algorithms that address emerging challenges in cloud computing, such as scalability, energy efficiency, and load balancing, will further advance the state-of-the-art in cloud task scheduling. Our ongoing efforts in refining and developing heuristic algorithms will contribute to the continuous enhancement of cloud computing services, offering more efficient and reliable solutions to meet the evolving demands of users and organizations in the digital era. By leveraging these heuristic advancements, we can unlock the full potential of the public cloud, ensuring optimal resource allocation and improved overall performance for cloud service providers and users alike.

Furthermore, a novel scheduling method that takes into account resource constraints was employed, resulting in an improved task acceptability ratio and a reduced task failure ratio. The modified Round Robin (MRR) scheduling approach not only aims to minimize latency but also effectively addresses issues related to starvation, ensures fairness, and facilitates high availability [28]. In addition, researchers enhanced resource consumption through the implementation of a more intelligent Round Robin (RR) scheduling model [29]. By incorporating intelligence into the RR scheduling, the new model optimizes resource allocation and utilization, contributing to improved system efficiency and performance. These advancements in scheduling methods underscore the significance of addressing the complex challenges in cloud computing environments. By integrating considerations of resource limitations, the MRR approach enhances overall task management and success rates, offering a more efficient and reliable scheduling strategy for cloud service providers and users. Furthermore, the intelligent RR scheduling model opens up possibilities for optimizing resource consumption, leading to better resource utilization and overall system performance. As researchers, we acknowledge the importance of continuously investigating and refining scheduling algorithms to meet the evolving demands of cloud computing. The ongoing pursuit of innovative scheduling techniques will contribute to the continual improvement of cloud services, enhancing user experiences and optimizing resource allocation for a wide range of applications and workloads. By leveraging these advancements, we can further harness the potential of cloud computing, driving advancements in the field and offering valuable solutions to diverse industries and domains.

The algorithm proposed in [30] exhibits notable advantages over the existing frequency adaptive divided sequencing (BATS) and enhanced differential evolution algorithm (IDEA) systems in terms of turnaround time and reaction time. The study utilizes actual scientific operations from CyberShake and epigenomics as representative tasks in the evaluation of the algorithm's performance. The outcomes demonstrate that the suggested technique significantly improves system efficiency, offering more favorable turnaround times and reaction times compared to BATS and IDEA. The effective utilization of resources in the public cloud is a key highlight of this research. By optimizing task scheduling and resource allocation, the proposed algorithm contributes to enhanced resource efficiency and overall system performance. These findings have important implications for cloud service providers, as improved turnaround times and reaction times can lead to better service quality and user satisfaction. As researchers, we recognize the potential impact of these results and acknowledge the need for further exploration and validation in various cloud computing environments. Additional comparative studies with other state-of-the-art scheduling algorithms will help to establish the competitiveness and applicability of the proposed technique across diverse cloud workloads and scenarios. Furthermore, investigating the scalability and robustness of the algorithm under different conditions will provide valuable insights into its practicality and effectiveness in real-world cloud deployments. Our commitment to advancing cloud computing research drives us to continue exploring innovative methodologies that optimize resource utilization, improve system performance, and ultimately enhance the delivery of cloud services to users and organizations. By leveraging the advantages of this proposed algorithm, we can contribute to the continual evolution and refinement of cloud computing technologies, addressing the ever-growing demands of the digital era.

In the domain of offline cloud scheduling algorithms, deep reinforcement learning methodologies have garnered attention as promising approaches [31]. Notably, DeepRM and DeepRM2 have been enhanced to address resource scheduling challenges by extending their capabilities beyond handling CPU and memory parameters alone. Instead, these updated approaches encompass a broader range of scheduling strategies, including the shortest job first (SJF), longest job first (LJF), attempts-based, and random methods. The incorporation of reinforcement learning techniques in cloud scheduling reflects a growing interest in leveraging artificial intelligence for solving optimization problems in cloud computing environments. Deep reinforcement learning offers a powerful paradigm for learning optimal scheduling policies through interactions with the environment and reward-driven decision-making. To further advance the application of deep reinforcement learning in offline cloud scheduling, ongoing research should focus on addressing various challenges and complexities. Comparative evaluations with traditional scheduling algorithms will help elucidate the advantages and limitations of the proposed approaches. Moreover, investigating the impact of varying workload characteristics and cloud system configurations on the performance of deep reinforcement learning algorithms will contribute to a more comprehensive understanding of their effectiveness and adaptability.

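To make the contrast between the baseline ordering policies mentioned above concrete, the short Python sketch below compares the average waiting time produced by FCFS, SJF, and LJF orderings on a single server. The burst times are hypothetical values chosen for illustration, not data from the cited studies.

```python
def avg_waiting_time(burst_times):
    """Average waiting time when jobs run back-to-back in the given order."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)  # each job waits for all jobs scheduled before it
        elapsed += burst
    return sum(waits) / len(waits)

jobs = [6, 2, 8, 3, 4]  # hypothetical burst times, listed in arrival order

fcfs = list(jobs)                 # First Come, First Serve: arrival order
sjf = sorted(jobs)                # Shortest Job First
ljf = sorted(jobs, reverse=True)  # Longest Job First

for name, order in [("FCFS", fcfs), ("SJF", sjf), ("LJF", ljf)]:
    print(f"{name}: {avg_waiting_time(order):.1f}")
# SJF gives the smallest average wait (6.2), LJF the largest (12.2),
# with FCFS in between (9.8) for this workload.
```

This illustrates why SJF is a common baseline for learned schedulers: running short jobs first provably minimizes average waiting time on a single machine, while LJF bounds the other extreme.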

As researchers, our commitment to exploring cutting-edge technologies and methodologies drives us to continually push the boundaries of knowledge in cloud computing. By harnessing the potential of deep reinforcement learning in cloud scheduling, we can pave the way for more efficient and intelligent resource allocation, ultimately enhancing the quality of cloud services for users and offering practical solutions for cloud service providers.

Real-time achievement of Quality of Service (QoS) for resource allocation poses significant challenges. To address this, a supervised machine learning approach is employed in [32], which compares evaluated data of the current state with statistical data. Based on historical circumstances, resources are then allocated according to the best or nearly best option using this novel design methodology. This methodology is specifically focused on distributing unused spectrum efficiently. In various application scenarios, such as textual, picture, multimedia, traffic, medical, and big data, classification mining methods play a vital role. In [33], a parallelizing structure is proposed, significantly reducing the required resource quantity in terms of space for implementing an axis-parallel binary Decision Tree Classifier (DTC).

For sequence classification, a pioneering approach is presented in [34], where rules composed of intriguing patterns discovered from a collection of tagged episodes and supporting class labels are utilized. The interest of a pattern in a specific class of sequences is gauged by combining pattern cohesiveness and support. The study describes two alternative approaches for developing a classifier and effectively employs the discovered patterns to create a reliable classification model. The structures generated by the proposed program accurately describe the patterns and have demonstrated superior performance compared to other machine learning algorithms in training sets.

Additionally, [34] proposes a Bayesian classification method utilizing class-specific characteristics for automated text categorization, offering a valuable contribution to text classification tasks. The utilization of machine learning methodologies in these studies showcases their potential to enhance resource allocation, achieve more accurate classifications, and improve automated text categorization. As researchers, it is essential to continually explore and refine these approaches, considering their effectiveness under various conditions and exploring their scalability for real-world cloud computing and data-intensive applications. The integration of machine learning in resource allocation and data classification will further drive advancements in cloud computing and various domains relying on data-driven decision-making.

In [35], a novel method for rapid and precise data classification is proposed, capable of learning classification rules even from a potentially small sample of pre-classified data. The foundation of this approach lies in the "Logical Analysis of the Data" (LAD) methodology. Notably, the suggested method surpasses the conventional LAD algorithm in terms of both accuracy and reliability. Detailed result comparisons and overviews are provided in Table 1 for a comprehensive understanding of the performance improvements achieved by the proposed approach. The novel classification method presented in this research addresses critical challenges in data analysis and classification tasks, where accurate and efficient classification from limited labeled data is of utmost importance. By leveraging the principles of LAD, the proposed method demonstrates enhanced precision and robustness, offering valuable insights for applications in various domains, including cloud computing, artificial intelligence, and data analytics. Future research directions may involve exploring the scalability and generalizability of the proposed method to handle larger datasets and diverse data types. Comparative evaluations with other state-of-the-art classification algorithms will provide further validation of its efficacy and superiority. Additionally, investigating the interpretability of the generated classification rules and their applicability to real-world datasets will offer valuable insights into the practicality and reliability of the approach for real-world applications. As researchers, we recognize the significance of advancing data classification techniques to meet the growing demands of data-driven decision-making in today's digital era. The continued exploration and refinement of novel classification methodologies will contribute to the continual improvement of data analysis, offering valuable contributions to scientific research and industrial applications alike. Further result overviews are elaborated in Table 1.

Upon analyzing the data presented in Table 1, several key observations and insights have been garnered, which contribute to the advancement of our proposed method for task scheduling in cloud environments. Cloud performance can be effectively measured through real-time and collaborative activities, allowing for predictions of throughput and downtime using batch systems. To ensure timeliness and fairness, a real-time, dynamic monitoring system can be employed to grade task deadlines. Business and efficiency considerations emerge as primary focal points within the third category. Our primary goal is to minimize the execution time while considering various guidelines for task and performance mapping. In market-oriented objectives, cost becomes the sole consideration. Static scheduling offers the flexibility to utilize a wide array of accepted scheduling techniques, such as round robin, First-Come-First-Serve (FCFS), Shortest Job First (SJF), and priority-based approaches. Meanwhile, dynamic load balancing harnesses the potential of various metaheuristic optimization techniques, including simulated annealing, particle swarm optimization (PSO), ant colony optimization, and dynamic list scheduling.

In dynamic scheduling, computation time is modified as tasks are completed, enabling adaptability in the number of tasks, server positions, and resource allocations. However, the delivery time of jobs cannot be determined prior to submission. This type of task-based scheduling is frequently utilized for recurring activities, where tasks are executed


TABLE 1. Research gaps and their advantages and disadvantages.


TABLE 2. Categorization based on scheduling technique.

upon completion. The subcategories of dynamic scheduling encompass group methods and web-based scheduling, involving techniques such as grouping, group-style queuing, and timed task completion.

Table 2 categorizes algorithms based on their scheduling techniques, highlighting distinctions among job-based, static, dynamic, workflow-oriented, and cloud-based approaches. Among these categories, the Johnson Sequencing algorithm emerges as particularly well-suited for different environments, demonstrating its versatility and efficacy in job-based, dynamic, workflow, and cloud-based scenarios.

As researchers, we recognize the value of a comprehensive understanding of various scheduling techniques and their implications in cloud computing. The findings from Table 1 provide valuable insights to guide our proposed method, enabling the development of a robust and efficient task scheduling approach. Moving forward, we will further investigate the adaptability and performance of the Johnson Sequencing algorithm in diverse cloud computing settings, aiming to enhance resource allocation, execution time, and overall cloud system efficiency.

III. FLOW CHART OF DIFFERENT SCHEDULING ALGORITHMS
A. FLOW CHART OF FCFS SCHEDULING ALGORITHM
The First-Come-First-Served (FCFS) task scheduling algorithm operates on the principle of executing tasks in the order they arrive, following a non-preemptive approach. The average waiting time and total turnaround time for each task are influenced by the size and timing of their arrival. In a cloud computing environment, multiple clients request resources from the data center controller, and these requests are directed to the FCFS virtual machine load balancer. As depicted in Fig. 1 [36], [37], [38], [39], the FCFS virtual machine load balancer executes the tasks based on the order of client request arrival.

The FCFS algorithm has been widely studied and implemented in cloud computing due to its simplicity and fair allocation of resources based on arrival times. However, it may lead to inefficient resource utilization and longer waiting times for tasks with varying sizes and priorities. To address these limitations, researchers have explored other

task scheduling algorithms, such as Round Robin, Priority Scheduling, and Johnson Sequencing, among others, each offering distinct advantages and tailored approaches to optimize cloud resource allocation and performance. Further investigations into the performance of these algorithms under various scenarios and workload conditions are essential to enhance task scheduling efficiency in cloud computing environments.

B. PRIORITY SCHEDULING ALGORITHM
The Priority Scheduling algorithm operates on the principle of executing tasks based on their assigned priorities, with higher priority tasks being executed before lower priority ones. This scheduling technique is commonly used in operating systems where a multitude of tasks require execution, and their priorities determine the order of execution. Priority Scheduling can also be implemented as a preemptive algorithm, allowing a task with a higher priority to preempt the execution of lower priority tasks, as illustrated in Fig. 2 [40], [41], [42], [43].

The priority-based approach in task scheduling is advantageous for real-time systems and applications where certain tasks must be given precedence over others based on criticality or urgency. However, the use of priority scheduling may lead to potential issues like starvation, where lower priority tasks may suffer from prolonged delays in execution. Balancing priority levels and considering task characteristics are vital to ensure fair allocation of resources and prevent situations of indefinite postponement for low-priority tasks. As research on task scheduling in cloud computing continues to evolve, it is essential to explore the performance of various scheduling algorithms, including Priority Scheduling, under different scenarios, workload distributions, and system configurations. This investigation will contribute to a comprehensive understanding of the strengths and weaknesses of each approach, facilitating the development of more efficient and robust scheduling strategies for cloud-based environments.

FIGURE 2. Flow chart of FCFS algorithm.

C. FLOW CHART OF PRIORITY SCHEDULING ALGORITHM
The Priority Scheduling Algorithm is a fundamental task scheduling approach that prioritizes tasks based on their assigned priorities. A flow chart is a visual representation of the algorithm's steps, providing a clear and concise overview of its functioning. Below, we present the flow chart of the Priority Scheduling Algorithm, detailing each step in the process:
• Initialization: The algorithm begins by initializing the list of tasks and their corresponding priorities. Each task is represented by a process or job, and its priority is assigned based on predefined criteria, such as task importance, deadline constraints, or user-defined preferences.
• Sort Tasks: Next, the list of tasks is sorted based on their priorities in descending order. This sorting ensures that higher priority tasks appear at the top of the list, while lower priority tasks are placed towards the bottom.
• Execution: The algorithm proceeds with executing tasks in accordance with their priority order. The task with the highest priority is selected first for execution. The execution process may vary depending on whether the algorithm is preemptive or non-preemptive.
  – Preemptive Priority Scheduling: If the algorithm is preemptive, the currently running task may be interrupted by a higher priority task. The system checks for any higher priority tasks that arrive during the execution of a task. If a higher priority task is found, the current task is preempted, and the higher priority task is scheduled for execution.
  – Non-Preemptive Priority Scheduling: In non-preemptive mode, the current task is allowed to complete its execution before the next task with the highest priority is selected and scheduled.
• Task Completion: Once a task is completed, the algorithm proceeds to the next task in the priority order. The process continues until all tasks are executed.
• Task Arrival: During task execution, new tasks may arrive in the system. If the algorithm is preemptive, the arriving task's priority is compared with the priority of the currently executing task. If the arriving task has a higher priority, it preempts the ongoing task, and the new task is scheduled for execution.
• Task Termination: As tasks complete their execution, they are removed from the list, and the algorithm continues to select the next task with the highest priority for execution.
• Completion Check: The algorithm continues executing tasks until all tasks in the list are completed. Once all tasks have been executed, the scheduling process terminates.

The flow chart of the Priority Scheduling Algorithm in Fig. 3 provides an intuitive representation of the scheduling procedure, making it easier to understand and analyze its behavior. It serves as a valuable tool for researchers and


practitioners in the field of task scheduling, helping them evaluate the algorithm's performance and identify potential areas for improvement and optimization.

FIGURE 3. Flow chart of priority scheduling algorithm.

D. FLOW CHART OF ROUND ROBIN SCHEDULING ALGORITHM
The Round-Robin (RR) scheduling algorithm is a fundamental preemptive scheduling technique employed in various computing systems. In RR, each task is allocated a fixed time quantum or time slice by the processor. The tasks are executed in a First-Come-First-Serve (FCFS) manner, and they are given a chance to run for the duration of the time quantum. Once the time quantum is exhausted, the task is preempted, and the processor switches to the next task in the queue. The preemption and task switching continue until all tasks in the system have completed their execution.

The RR scheduling algorithm is widely used in operating systems and distributed computing environments due to its simplicity and fair resource allocation. It ensures that each task gets a fair share of the processor's time, preventing any single task from monopolizing the CPU for an extended period. By employing a fixed time quantum, RR strikes a balance between responsiveness and efficiency in task execution. The key features of the Round-Robin scheduling algorithm are as follows:
• Preemptive Scheduling: RR operates as a preemptive scheduling algorithm, which means that tasks can be interrupted and rescheduled even before their time quantum is fully utilized. This allows for a dynamic and responsive allocation of resources.
• Time Quantum: The time quantum is a critical parameter in the RR algorithm. It determines how long each task is allowed to run before being preempted. The choice of an appropriate time quantum influences the balance between system responsiveness and context switching overhead.
• FCFS Order: Tasks are arranged in a queue based on their arrival time, and the RR algorithm follows the FCFS order for task execution. This ensures that tasks are served in the order they arrived, maintaining fairness in resource allocation.
• Preemption Handling: When a task's time quantum expires, the processor saves its state and switches to the next task in the queue. The preempted task is placed back at the end of the queue to await its turn again.

Overall, the Round-Robin scheduling algorithm, shown in Fig. 4, provides a practical approach to task scheduling in various computing environments. However, its performance can be influenced by the choice of the time quantum, the nature of the tasks being executed, and the overall system load. Researchers continue to explore variations and optimizations of RR to enhance its effectiveness and adaptability to different scenarios [25], [44], [45], [46].

FIGURE 4. Flow chart of round robin scheduling algorithm.

E. DYNAMIC HEURISTIC JOHNSON SEQUENCING ALGORITHM (DHJS)
The Dynamic Heuristic Johnson Sequencing (DHJS) algorithm, shown in Fig. 5, is a novel approach that combines dynamic burst time computation and the Johnson sequencing technique to optimize task scheduling in a multi-server environment. The DHJS algorithm begins by calculating the dynamic time quantum for the tasks based on the mid burst time of the task and the maximum burst time. This time quantum is then used in a Round Robin scheduling approach. Subsequently, the Johnson sequencing algorithm is applied to determine the optimal execution sequence of tasks.
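The classical two-machine Johnson's rule that DHJS builds on can be sketched in a few lines of Python. The burst times below are illustrative, and `dynamic_quantum` is only one plausible reading of the "mid burst time and maximum burst time" quantum described above, not the paper's exact formula; a classical extension handles three machines by first combining them into two virtual machines (G = M1 + M2, H = M2 + M3) when Johnson's dominance condition holds.

```python
def dynamic_quantum(bursts):
    # One plausible reading of the dynamic time quantum (assumption):
    # the mean of the mid (median) and maximum burst times.
    mid = sorted(bursts)[len(bursts) // 2]
    return (mid + max(bursts)) / 2

def johnson_two_machine(jobs):
    """Johnson's rule for a two-machine flow shop: jobs with a short
    first-machine time run early, jobs with a short second-machine
    time run late; this minimizes the two-machine makespan."""
    front, back = [], []
    for name, (t1, t2) in sorted(jobs.items(), key=lambda kv: min(kv[1])):
        if t1 <= t2:
            front.append(name)    # earliest free slot from the front
        else:
            back.insert(0, name)  # latest free slot from the back
    return front + back

def makespan(order, jobs):
    """Completion time of the last job on machine 2 for a given order."""
    end1 = end2 = 0
    for name in order:
        t1, t2 = jobs[name]
        end1 += t1                   # machine 1 runs jobs back to back
        end2 = max(end2, end1) + t2  # machine 2 waits for machine 1
    return end2

jobs = {"J1": (3, 6), "J2": (7, 2), "J3": (4, 5)}  # illustrative burst times
order = johnson_two_machine(jobs)
print(order, makespan(order, jobs))  # ['J1', 'J3', 'J2'] with makespan 16
```

For these burst times, the Johnson order achieves a makespan of 16, versus 19 for the naive order J1, J2, J3, which is the kind of reduction the makespan-oriented design above targets.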


In the scheduling process, three server machines, denoted as M1, M2, and M3, with task indices J = 1, 2, and 3, are used. Tasks are scheduled based on the computed time quantum. The scheduling algorithm involves finding the minimum value in a matrix of tasks, which determines the machine with the shortest processing time for a given task. The task with the minimum processing time is selected to execute first on the corresponding machine.

FIGURE 5. Flow chart of dynamic Johnson sequencing algorithm.

The execution process follows a sequence where each task is executed by one machine at a time. For example, if a task is executed by machine M1, the remaining processing time of that task is passed on to machine M2. After M2 completes its execution, the task is then forwarded to machine M3 to finish the remaining processing time. This process ensures efficient utilization of server resources and minimizes task execution time.

Once all tasks are scheduled and executed, they are passed through the M/M/c/K queuing model for further analysis. The M/M/c/K system evaluates the performance and resource utilization of the executed tasks. Finally, the completed tasks are delivered to the customers.

The DHJS algorithm presents a dynamic and heuristic approach to address complex task scheduling challenges in multi-server environments. By combining burst time computation, Johnson sequencing, and queuing model analysis, it aims to optimize task execution and resource allocation, leading to improved efficiency and timely task delivery to customers. Further research and experimentation are encouraged to validate and refine the performance of the DHJS algorithm in various real-world cloud computing scenarios.

IV. TASK SCHEDULING MODELLING IN CLOUD COMPUTING
In the realm of cloud computing, customers are presented with a plethora of choices for their specific tasks. To efficiently manage these tasks and minimize delays, the standard queuing model is employed to arrange the scheduled jobs in the most optimal order. In this research, we leverage the Dynamic Heuristic Johnson Sequencing (DHJS) technique to map the system of models, while also employing queuing models to implement service pricing in a cloud environment. Given the simplicity of determining the service time from the Gantt chart for each individual task, we consider a batch of tasks with part-time characteristics. Our primary objective is to reduce the number of customers waiting in the queue, thereby enhancing the overall efficiency of the system. To achieve this, we adopt the M/M/c/K queuing model, which effectively reduces the total delay time for both the system and the queue.

By integrating the DHJS technique with the queuing models, we aim to provide an effective and reliable solution for optimizing task scheduling in cloud computing. This approach has the potential to significantly improve resource utilization, reduce customer waiting times, and enhance overall system performance. The experimental evaluation and comparative analysis of the proposed methodology against existing techniques will be crucial in establishing its efficacy and demonstrating its benefits in real-world cloud computing scenarios [47], [48], [49]. Further research in this direction will pave the way for more sophisticated and robust task scheduling solutions, benefiting both cloud service providers and end-users alike.

A. SYSTEM MODEL DESIGN
In this research endeavor, the system model design is represented through a comprehensive schematic representation. The diagram portrays the organizational phase and the queuing model, forming the basis for our investigation. To optimize job order during planning, we have employed the Dynamic Heuristic Johnson Sequencing (DHJS) algorithm. For queuing algorithms that address part-and-parcel waiting times, we have adopted the M/M/c/K paradigm.

Figure 6 illustrates our overall design paradigm, wherein multiple clients seek services from the cloud provider, encompassing platform-, infrastructure-, storage-, resource-, and software-based offerings. The access management process is initiated after assessing the capabilities of the cloud agent, acting as an intermediary service. Following customer access authorization, SLA (Service Level Agreement) details and user information are reported to the service provider. Subsequently, the monitor module collects resource data and tasks from the user for a predetermined time period [50], [51], [52], [53].

The analyzer module identifies the available resources and sends requests if they are accessible, ensuring the provision of additional SLA services as needed. The DHJS algorithm


scheduler module handles job repair and determines the optimal order for their execution, leading to a reduction in average wait times. Subsequently, the M/M/c/K queuing system, which will be discussed in detail later, facilitates the transmission of these jobs within the system.

By efficiently utilizing services and costs, our approach aims to maximize system resource utilization while minimizing complexity and delays [8], [54], [55], [56]. This comprehensive design framework holds the potential to significantly enhance task scheduling in cloud computing, resulting in improved performance and user satisfaction. Through rigorous experimental evaluation and comparative analysis, we seek to establish the effectiveness of our proposed methodology and contribute to advancing the field of cloud computing resource management.

as the summation of individual job service times (Si) divided by the total number of jobs (n). Additionally, we evaluate the system's stability through the utilization factor (ρ), represented as the ratio of arrival rate (λ) to service rate (µ), where ρ = λ/µ ≤ 1 indicates the ideal (stable) state of the system [62].

By employing these queuing models and performance metrics, our research aims to optimize job scheduling in cloud computing, reducing delays and enhancing overall resource utilization. Through extensive experimental analysis, we intend to demonstrate the effectiveness and efficiency of our proposed approach and contribute valuable insights to the field of cloud resource management.

In our research, we utilize various mathematical equations to analyze and model the queuing system for job scheduling in the cloud environment. These equations play a crucial role in understanding the performance metrics and optimizing resource allocation. Let us discuss each equation and its significance in detail.

Equation (1) represents the relationship between the arrival rate (λ) and the average inter-arrival time (E[τ]). The average inter-arrival time denotes the mean time interval between consecutive job arrivals.

λ = 1/E[τ], where E[τ] = average inter-arrival time (1)

In Equation (2), we define the exponential distribution density function a(t), where λ represents the rate parameter and t is the time at which consumers initiate transactions.

a(t) = λe^(−λt) (2)
FIGURE 6. System design of M/M/c/K queuing system. The Poisson distribution is expressed in Equation (3),
where P(x) denotes the probability of x job arrivals occurring
within a specified time interval. The parameter λ indicates the
B. QUEUING MODEL arrival rate.
In this study, we leverage the queuing model to calcu-
late waiting lines and optimize job scheduling in the cloud P(x) = (λ^x ∗ e^ − (λ))/x!, for = 0, 1, 2 . . . n (3)
environment. The queuing model provides a fundamental where x is the passing of time and P(x) is the arrival
framework by specifying the service process, arrival process, probability.
maximum capacity of locations, and services. It assumes that The average service time, denoted by E(S), is calculated
each job is processed exponentially within the sample, while in Equation (4) by taking the sum of service times (Si ) for
the user’s demand is transmitted to the server according to the each individual job and dividing it by the total number of
Poisson distribution. Specifically, we adopt a non-preemptive jobs (n) [32]. Equation (5) computes the service rate (µ),
system based on the M /M /c/K queuing model, considering which is the reciprocal of the average service time. It rep-
two service centers (SCs) and five places of capacity [57], resents the rate at which tasks are processed by the system.
[58], [59]. Pn
Si
The integration of job scheduling method and queuing Average Service time E(S) = i=0 (4)
n
model streamlines our research approach. Inter-arrival times
and
are treated as independent, identically distributed variables,
1 1
following an exponential distribution for arriving shipments, Service rate will beµ = µ= (5)
denoted by Kendall’s notation. Similarly, service times are E (S) E (S)
exponentially distributed to represent the service distribution. In Equation (6), the probability of the system being idle
The client arrival pattern is considered based on the Poisson (Po ) is calculated. It accounts for the situation when there are
distribution, with λ as the rate parameter and the interval after no tasks in the system, and the system is in an idle state.
the task’s complete execution denoted as µ [59], [60], [61]. #−1
"
XS−1 1  λ n 1 λ S
  

To assess the performance of the system, we utilize the Po = + (6)
n=0 n! µ S! µ Sµ − λ
expected waiting time, denoted by E(S), which is calculated

VOLUME 11, 2023 105591


P. Banerjee et al.: MTD-DHJS: Makespan-Optimized Task Scheduling Algorithm for Cloud Computing
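The steady-state quantities discussed in this section can be checked numerically. The sketch below assumes the unbounded-queue M/M/c special case (the paper's M/M/c/K additionally imposes a capacity limit K), and the arrival and service rates used are illustrative, not taken from the paper's experiments.

```python
import math

def mmc_metrics(lam, mu, S):
    """Steady-state metrics of an M/M/c queue with arrival rate lam,
    per-server service rate mu, and S servers (cf. eqs. (1)-(10);
    requires rho = lam/(S*mu) < 1 for stability)."""
    rho = lam / (S * mu)              # utilization factor
    a = lam / mu                      # offered load
    # Probability that the system is idle, eq. (6)
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(S))
                + a**S / math.factorial(S) * (S * mu / (S * mu - lam)))
    # Average number of tasks waiting in the queue, eq. (7)
    lq = (a**S * lam * mu) / (math.factorial(S - 1) * (S * mu - lam)**2) * p0
    ls = lq + lam / mu                # eq. (8): tasks in the system
    wq = lq / lam                     # eq. (9): mean wait in the queue
    ws = wq + 1.0 / mu                # eq. (10): mean time in the system
    return {"rho": rho, "P0": p0, "Lq": lq, "Ls": ls, "Wq": wq, "Ws": ws}

# Illustrative values: 4 tasks/s arriving, 2 servers each serving 3 tasks/s.
m = mmc_metrics(lam=4.0, mu=3.0, S=2)
```

For these values the utilization factor is ρ = 2/3 < 1, so the system is stable and the formulas apply.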

The average number of tasks waiting in the queue (Lq) is computed in Equation (7). It reflects the average number of jobs waiting in the queue for processing.

Lq = [ 1/(S−1)! ] · (λ/µ)^S · [ λµ/(Sµ − λ)^2 ] · P0 (7)

The average number of tasks in the system (Ls) is determined in Equation (8) by adding the average number of tasks waiting in the queue to the average number of tasks being processed.

Ls = Lq + λ/µ (8)

Equation (9) calculates the average waiting time of tasks in the queue (Wq), the average time a job spends waiting in the queue before being processed.

Wq = Lq / λ (9)

Finally, the average waiting time of tasks in the system (Ws) is computed in Equation (10) by adding the average waiting time in the queue to the average service time.

Ws = Wq + 1/µ (10)

By applying these mathematical equations, we gain valuable insights into the queuing model's behavior, system performance, and waiting times, enabling us to optimize job scheduling and resource allocation in the cloud environment.

C. CALCULATION OF AVERAGE WAITING TIME AND TOTAL TURNAROUND TIME
In our study, we employ two important performance metrics to evaluate the efficiency of our task scheduling algorithm: the Average Waiting Time (AWT) and the Average Turnaround Time (TAT). These metrics provide valuable insights into the overall performance and responsiveness of the scheduling system. Let us discuss each metric and its calculation formula.

(i) Average Waiting Time (AWT): The Average Waiting Time represents the average time a task spends waiting in the queue before it is processed. It is calculated by taking the difference between the starting time of each task (stTKi) and its arrival time (atTKi) and summing these differences over all tasks in the system.

AWT = Σ_{i=1}^{n} (stTKi − atTKi) (11)

(ii) Average Turnaround Time (TAT): The Average Turnaround Time indicates the average time taken for a task to complete its execution, from its arrival to its finish. To calculate the TAT, we take the difference between the finish time of each task (ftTKi) and its arrival time (atTKi) and sum these differences over all tasks in the system.

TAT = Σ_{i=1}^{n} (ftTKi − atTKi) (12)

By using these formulas, we can precisely evaluate the performance of our task scheduling algorithm. The lower the AWT and TAT, the more efficient and responsive the system is in processing tasks and reducing overall waiting times. These metrics play a crucial role in optimizing resource allocation and enhancing the user experience in cloud computing environments.

D. OBJECTIVE OF THE STUDY
In this research, we have devised a system incorporating the Dynamic Heuristic Johnson Sequencing (DHJS) algorithm with three servers in the cloud environment to minimize service time. The system caters to a batch of diverse jobs, and to calculate the service time, a Gantt chart is created. The Gantt chart displays the total execution time of each task. By applying the dynamic heuristic Johnson sequencing rule to the system, we have observed a reduction in both the average number of clients within the queue and the number of clients inside the machine. Additionally, the average waiting time within the machine and the queue has been reduced.

The implementation of the DHJS algorithm has proven effective in enhancing the overall efficiency and performance of the cloud-based task scheduling system. By strategically sequencing the jobs on the servers, we have achieved significant improvements in reducing waiting times and optimizing resource allocation. The Gantt chart serves as a valuable tool for visualizing and analyzing the execution timeline of tasks, which further aids in the evaluation of system performance.

The dynamic nature of the DHJS algorithm allows for adaptability and responsiveness to changing conditions and varying job characteristics. This adaptability ensures that the system can efficiently handle diverse workloads and prioritize tasks based on their specific requirements.

As a result of this research, the proposed system holds promise for improved resource utilization, reduced waiting times, and enhanced customer satisfaction in cloud computing environments. The findings of this study contribute to the advancement of cloud-based task scheduling techniques, paving the way for more efficient and effective cloud services for a wide range of applications and industries. Further testing of the system on larger and more diverse datasets will be undertaken to validate and refine its performance in real-world cloud computing scenarios.

V. ALGORITHMS OF TASK ALLOCATIONS IN CLOUD
A. FCFS ALGORITHM PSEUDO CODE
Below is the pseudo code for the FCFS (First-Come-First-Serve) algorithm. It represents the step-by-step process of performing First-Come-First-Serve scheduling on a set of processes with their burst durations. The algorithm calculates the waiting time and turnaround time for each process and then computes the average waiting time and
average turnaround time for all the processes. This FCFS algorithm follows the principle of serving processes in the order they arrive, without preemption. The resulting average waiting time and average turnaround time provide important performance metrics for evaluating the efficiency of the FCFS scheduling technique.

Algorithm: First-Come-First-Serve (FCFS) Scheduling
Input: Processes and their burst durations (bt[]).
Output: Average waiting time (avg_wt) and average turnaround time (avg_tat).
1. Initialize variables:
   - total_waiting_time ← 0
   - total_turnaround_time ← 0
   - number_of_processes ← length of bt[] (total number of processes)
2. Set the waiting time of the first process (Process 1) to 0:
   wt[0] ← 0
3. Calculate the waiting time of each subsequent process (Process i) using the formula:
   wt[i] ← bt[i − 1] + wt[i − 1] for i = 1 to number_of_processes − 1
4. Calculate the turnaround time of each process (Process i) using the formula:
   turnaround_time[i] ← bt[i] + wt[i] for i = 0 to number_of_processes − 1
5. Calculate the total waiting time and total turnaround time:
   total_waiting_time ← sum of all elements in wt[]
   total_turnaround_time ← sum of all elements in turnaround_time[]
6. Calculate the average waiting time (avg_wt) and average turnaround time (avg_tat):
   avg_wt ← total_waiting_time / number_of_processes
   avg_tat ← total_turnaround_time / number_of_processes
7. Output the results: display avg_wt and avg_tat
End of Algorithm

The primary objective of this study is to analyze the FCFS scheduling method to determine the average waiting time and average turnaround time for a set of n processes with their respective burst timings. FCFS is a basic and widely used scheduling technique, also known as First In, First Out (FIFO), which prioritizes the execution of processes based on their arrival order. In this method, the first process to arrive is the first one to be executed, and subsequent processes wait until the preceding one completes its execution.

Assuming all processes arrive at the same time (arrival time = 0), we can calculate the completion time, turnaround time, and waiting time using the following formulas:
• Completion Time: the moment a process finishes its execution.
• Turnaround Time: the time interval between the completion of a process and its arrival time. The turnaround time of a process i is calculated as: Turnaround Time (Process i) = Completion Time (Process i) − Arrival Time (Process i)
• Waiting Time (WT): the interval between the turnaround time and the burst time of a process. The waiting time of a process i is calculated as: Waiting Time (Process i) = Turnaround Time (Process i) − Burst Time (Process i)

By applying the FCFS scheduling technique and employing the above calculations, we can obtain the average waiting time and average turnaround time, which are essential metrics for assessing the efficiency and performance of the FCFS scheduling algorithm [62], [63], [64], [65], [66].

B. ALGORITHM OF PRIORITY SCHEDULING
The priority scheduling algorithm is governed by the following parameters: BT(i), WT(i), and RT(i), representing the burst time, waiting time, and remaining time of process i, respectively. In the context of the algorithm, "Scheduling" denotes the currently running process, "DNP" represents the "do not preempt" flag, "Queue" signifies the wait state, and "Schedule" denotes the waiting queue. At each cycle, the priority scheduling method evaluates whether a new event has occurred, such as the arrival or completion of a process. If the new event is an arrival, the method checks whether the queue is empty and whether any processes are currently active. If no processes are executing, the DNP flag is reset to 0, and the new process becomes the currently active one. If processes are already executing, the new process is added to the waiting list, i.e., the queue. If, on the other hand, the new event corresponds to the completion of a process, the DNP flag is set to 0. After these checks, the algorithm proceeds as follows: if the waiting queue is not null but no process is currently executing, the method selects the task with the least remaining time, denoted RT(k), from the waiting queue and schedules it to be executed next. Through this priority scheduling algorithm, processes are assigned execution based on their remaining time, prioritizing the shortest remaining time for execution. This approach aims to optimize overall efficiency and reduce waiting times for processes [67], [68], [69].

1. search for a new task;
2. if (event_stat == "arrival")
3.   if ((queue, k) == (empty, null))
4.     set the new arrival to k; put dnp = 0;
5.   end of if block
6. end of if block
7. else
8.   add new arrival to queue;
9. end of else block
10. else if (event_stat == "complete")
11.   change k to null; put dnp = 0;
12. end of if block
13. if
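The pseudo code above can be rendered directly in executable form; the sketch below follows its steps, assuming, as the paper does, that all processes arrive at time 0 (variable names mirror the pseudo code and are otherwise illustrative).

```python
def fcfs(bt):
    """First-Come-First-Serve: per-process waiting and turnaround times
    computed from burst durations bt[], all arrivals at time 0."""
    n = len(bt)
    wt = [0] * n                              # step 2: first process waits 0
    for i in range(1, n):                     # step 3: wt[i] = bt[i-1] + wt[i-1]
        wt[i] = bt[i - 1] + wt[i - 1]
    tat = [bt[i] + wt[i] for i in range(n)]   # step 4: tat[i] = bt[i] + wt[i]
    avg_wt = sum(wt) / n                      # steps 5-6: averages
    avg_tat = sum(tat) / n
    return wt, tat, avg_wt, avg_tat

# Illustrative burst times (not taken from the paper's experiments).
wt, tat, avg_wt, avg_tat = fcfs([24, 3, 3])
# wt = [0, 24, 27], tat = [24, 27, 30]
```

With these burst times the averages are avg_wt = 17 and avg_tat = 27, illustrating how a long first burst inflates the waiting time of every later process under FCFS.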
((queue, k) == (non_empty, null))
14.   identify process k and its minimum value
15.   rk = min_i{rt(i)};
16.   put k to k; put dnp = 0;
17. end of if block
18. else if ((queue, k) == (non_empty, non_null) & (dnp == 0))
19.   if (rt(k) ≤ e ∗ bt(k)) set dnp = 1;
20.   end of if block
21. end of if block
22. else
23.   find process k with the maximum value wk = max_i{wt(i) − q ∗ bt(i)}
24. end of else block
25. if (wk > 0)
26.   add k to queue;
27.   put k = k; put dnp = 1;
28. end of if block
29. else
30.   identify process k and its minimum value: rk = min_i{rt(i)};
31. end of else block
32. if (rk < RT(C))
33.   add C to Queue;
34.   set C = k; set DNP = 0;
35. end of if block
36. end of else block
37. end of else block
38. end

C. ROUND ROBIN
Round-robin scheduling is a preemptive computer system algorithm that operates based on a regular interrupt known as the "clock tick." Tasks are selected for execution in a predetermined sequence, with each task being granted a fixed amount of CPU time during each timer tick. In this scheduling approach, all jobs are treated equally and take turns waiting in a queue for their allocated time slice on the CPU. Tasks are not allowed to execute continuously until completion; instead, they are "pre-empted," meaning their execution is halted midway.

The use of a pre-emptive scheduler introduces certain considerations and overheads. When a task is preempted, its current state must be saved so that it can resume execution smoothly when it is given permission to run again. This involves performing a full context save, which includes preserving all relevant flags, registers, and other memory locations. While this ensures a seamless transition for the task, it is essential to assess the implications of frequent task switching on system performance.

Moreover, developers must account for the impact of preemption on time-sensitive portions of programs. Certain critical sections of code must not be interrupted in order to maintain the correctness and reliability of the system's operations. As such, careful consideration of the scheduling algorithm and its implementation is necessary to strike a balance between maximizing system throughput and ensuring responsiveness to time-sensitive tasks.

In summary, round-robin scheduling offers fairness and time-sharing capabilities, but the overhead of frequent task switching and the need for context saving must be taken into account during system design and development [70], [71], [72], [73].

1. all processes are placed in ascending order in the ready queue
2. // num_pro = total number of processes
3. // i = loop counter variable
4. // bt = burst time
5. tq = assigned by CPU // tq = time quantum
6. while (rdy_queue != null) // rdy_queue = ready queue
7.   assign tq to (1 to num_pro) processes: for i = 0 to num_pro loop
8.     pi -> tq
9.   end of for loop
10. end of while loop
11. any processes that remain open are given tq
12. determine the processes' remaining burst time
13. calculate avg_turn_t, avg_wt_t, ncs
14. // avg_turn_t = average turnaround time
15. // avg_wt_t = average waiting time
16. // ncs = number of context switches
17. end

VI. JOHNSON SEQUENCING
The M/M/c/K scheduling problem can be described as follows. Given a set of machines m = {A, B} and a set N = {1, 2, . . . , n} representing the jobs to be processed, each job must follow two specific protocols:
1) Machine A processes the first operation of the job.
2) Machine B performs the second operation of the same job.
The second operation starts immediately after the first one is completed. All the time required for job K, which belongs to set N, is spent sequentially on Machine A and Machine B. Let σ: N → N be a permutation of jobs, and let π represent the set of all n! possible permutations. The function Fk(σ) denotes the flow time of job K in a particular permutation σ. In other words, we aim to find the arrangement of jobs that ensures the shortest possible time for the longest job flow duration in the overall process. This optimization is critical
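The selection rule at the heart of this pseudo code — dispatch the waiting task with the least remaining time, rk = min_i{rt(i)} — can be sketched as follows. This is a simplified, non-preemptive simulation that omits the DNP flag and event handling of the full pseudo code; all tasks are assumed to arrive at time 0, and the burst times are illustrative.

```python
def srt_schedule(burst):
    """Repeatedly dispatch the waiting task with the least remaining time,
    mirroring the selection rule r_k = min_i rt(i) of the pseudo code.
    Returns the dispatch order and the average waiting time."""
    remaining = dict(enumerate(burst))          # task id -> remaining time rt(i)
    clock, order, waits = 0, [], []
    while remaining:
        k = min(remaining, key=remaining.get)   # task with least remaining time
        waits.append(clock)                     # time task k spent in the queue
        clock += remaining.pop(k)               # run task k to completion
        order.append(k)
    return order, sum(waits) / len(burst)

order, avg_wait = srt_schedule([6, 8, 7, 3])
# order = [3, 0, 2, 1]
```

Dispatching shortest-remaining-time first yields an average wait of 7 here; any other order over these burst times gives a higher average, which is the intuition behind the pseudo code's selection rule.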
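The round-robin loop described above can be sketched as follows: each ready process runs for at most one time quantum per turn, is preempted when the quantum expires, and is requeued at the tail. The quantum and burst times are illustrative, and ncs counts context switches due to preemption, one of the overheads discussed above.

```python
from collections import deque

def round_robin(bt, tq):
    """Round-robin with fixed time quantum tq: each ready process runs for
    at most tq units per turn, then is preempted and requeued.
    Returns (completion_order, ncs), where ncs counts preemptions."""
    ready = deque((i, b) for i, b in enumerate(bt))  # FIFO ready queue
    order, ncs = [], 0
    while ready:
        i, rem = ready.popleft()
        if rem > tq:                     # quantum expires: preempt and requeue
            ready.append((i, rem - tq))
            ncs += 1
        else:                            # process finishes within its slice
            order.append(i)
    return order, ncs

order, ncs = round_robin([5, 3, 8], tq=4)
# order = [1, 0, 2]
```

A larger quantum reduces ncs (less switching overhead) but makes the scheduler behave more like FCFS; the trade-off is exactly the design consideration noted above.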
in achieving efficient and balanced job scheduling across the two machines to enhance overall system performance. In summary, the M/M/c/K scheduling problem involves finding the best permutation of jobs that minimizes the maximum flow time for concurrent processing on Machine A and Machine B. Further investigation and analysis are needed to propose effective algorithms or heuristics that solve this problem efficiently in real-world applications.

Fmax(σ∗) = max_{k∈N} Fk(σ∗) = min_{σ∈π} max_{k∈N} Fk(σ) (13)

Johnson's conclusion presents an effective solution for addressing the problem, supported by the application of Johnson's rule, which provides an adequate optimality condition for sequencing pairs of jobs. The algorithm's computational complexity is O(n log(n)), enabling its efficient application to all instances generated using Johnson's rule, as demonstrated in the subsequent analysis. Additionally, the algorithm minimizes the programming effort required on the computer, enhancing its practicality and usability.

However, it is worth noting that the complexity of the M/M/c/K scheduling problem remains an open question: it is currently unknown whether the problem is NP-hard or whether a polynomial-time solution exists [74]. Despite the success of Johnson's rule in optimizing sequences, the challenge of finding all optimum sequences in the context of M/M/c/K scheduling remains to be addressed. Further research and investigation are required to determine the true nature of the problem and to explore potential solutions.

- Algorithm-1
Step 1: Determine the priority index Pk of each of the k jobs using the following formula:

Pk = sign(ak − bk − ε) / min(ak, bk) (14)

where sign represents the signum function, and Johnson's rule can be employed to construct all optimum sequences by introducing an infinitesimal amount ε; here, ε is any non-zero real number.

Step 2: Create a list of task sequences based on their priority indices and rank them in ascending order. Let σ(k) specify the job index in the kth position of the sequence σ. The resulting sequence, denoted π∗ = (σ(1), σ(2), . . . , σ(n)), should satisfy the following condition:

Pσ(1) ≤ Pσ(2) ≤ . . . ≤ Pσ(n) (15)

By applying this algorithm, one optimal sequence is generated, but it comes at the cost of increased computational complexity. Step 1 involves straightforward algebraic procedures for priority index computation. Step 2 involves sorting the sequences, which can be achieved using an effective sorting algorithm that runs in O(N log N) time, as commonly employed in current practice.

- Notation and definitions of Johnson Sequencing
In this study, we adopt the standard notation widely utilized in the literature to represent the permutation flow-shop problem. Let J be the set of tasks to be scheduled, and let Π denote the array containing all feasible combinations of tasks in the form J = 1, 2, 3, . . . , n. For any given schedule, we use the notation (1), (2), . . . , (n) to represent the permutation, where (j) indicates the job placed at position j in the permutation π. The optimal schedule obtained through the application of Johnson's rule is referred to in [75], [76], and [77]. Similarly, the best order determined by Johnson's rule is denoted ϕ. Additionally, we assume that the tasks in set J are arranged in the same order as they occur in the permutation. For instance, task TK1 represents the first job in Johnson's series, TK2 the second job, and so on. Therefore, we can deduce that i precedes j (i < j) or i succeeds j (i > j) based on their positions in the permutation [78], [79], [80], [81].

In the following statements and propositions, we present the mathematical formulations relating response times and makespan for the permutation flow-shop problem.

Statement 1: Let Iπ(j) denote the total amount of idle time, in milliseconds, that the second server machine M2 has experienced up to the handling of the jth job in the sequence π, as calculated in (16):

Iπ(j) = max{ 0, Σ_{l=1}^{j} aπ(l) − Σ_{l=1}^{j} bπ(l) } (16)

Statement 2: The task TK that has the maximum value of I in the ideal solution ϕ is referred to as a critical task, as expressed in (17):

Iϕ(TK) = max_{l∈J} Iϕ(l) (17)

Assumption: We assume that the response times are not zero; hence Iϕ(TK) > 0.

Proposition 1: The response time of the critical job, Iϕ(TK), reduces the makespan Cmax. The makespan Cmax of a given permutation π can be computed as follows:

Cmax(π) = max_{1≤l≤n} [ Σ_{i=1}^{l} xπ(i) + Σ_{i=l}^{n} yπ(i) ] (18)

From (18), it follows that there exists a position s, 1 ≤ s ≤ n, such that:

Cmax(π) = Σ_{i=1}^{s} xπ(i) + Σ_{i=s}^{n} yπ(i) (19)

This can be rewritten as:

Cmax(π) = Σ_{i=1}^{s} xπ(i) − Σ_{i=1}^{s−1} yπ(i) + Σ_{i=1}^{n} yπ(i) (20)

Here Σ_{i=1}^{n} yπ(i) does not depend on the permutation π. Therefore, the makespan Cmax(π) is equivalent to:

Cmax(π) = max_{1≤l≤n} [ Σ_{i=1}^{l} xπ(i) − Σ_{i=1}^{l−1} yπ(i) ] + Σ_{i=1}^{n} y_i (21)

The best makespan for F2|prmu|Cmax is equivalent to the minimum because it represents an ideal timetable. The reduction in the makespan is denoted by Iϕ(k), indicating that any
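Algorithm-1 is a priority-index formulation of the classic two-machine Johnson rule. For illustration, the classic rule and the resulting flow-shop makespan can be sketched as follows; the job tuples (name, a_k, b_k), where a_k and b_k are the times on Machines A and B, are illustrative and not taken from the paper.

```python
def johnson_sequence(jobs):
    """Classic two-machine Johnson rule: jobs with a_k <= b_k come first
    in increasing a_k; the remaining jobs come last in decreasing b_k."""
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return front + back

def makespan(seq):
    """Two-machine flow-shop makespan: Machine B starts a job only after
    both the previous B-operation and the job's A-operation finish."""
    end_a = end_b = 0
    for _, a, b in seq:
        end_a += a                       # A processes jobs back to back
        end_b = max(end_b, end_a) + b    # B waits for A if necessary
    return end_b

jobs = [("J1", 3, 6), ("J2", 7, 2), ("J3", 4, 4), ("J4", 5, 3)]
seq = johnson_sequence(jobs)
# sequence: J1, J3, J4, J2
```

Both the partition and the sorts run in O(n log n), matching the complexity stated for Algorithm-1 above.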
other ideal response, whether it adheres to Johnson's rule or not [57], [58].

- Critical analysis of the job
In this research, we present a significant finding: a critical job always maintains a fixed position in an optimal schedule for any instance of the problem. We analyze two distinct scenarios in which jobs have varying processing times on different machines:
i) when there are no ties between the processing times of critical jobs and other tasks, i.e., xj ≠ yj for all j ∈ J;
ii) no processing-time ties between critical jobs.

We start by comparing it with any sequence that contains job k at p(Tk). If p(Tk) = p(Tk), then we demonstrate that Cmax() will be greater than Cmax at its optimum. To establish this, we refer to two cases: Case 1.1, where job k appears at position p(Tk) before its initial position (p(Tk) < p(Tk)), and Case 1.2, where job k occupies a position p(Tk) in an arbitrary sequence (Tk) after the position it holds (p(Tk) > p(Tk)). The sequences of job arrangements obtained through Johnson's rule are referred to as the sets SPT and LPT, as stated in the introduction.

Our analysis provides valuable insights into the optimal scheduling of critical jobs in scenarios where there are no processing-time connections or ties. Further investigation is needed to explore additional conditions and applications in which these findings can be effectively utilized to optimize job scheduling in real-world contexts.

Iπ(TK) = Iϕ(TK) − Σ_{l∈Pϕ^TK − Pπ^TK} (xl − yl) (22)

In this case, for any job l in TK, and in accordance with the requirement of Theorem 1, there are no ties (xl, xTK). Since TK is part of the SPT set in ϕ, there exist tasks that are not included in the predecessors of k in ϕ but belong to the predecessors of TK in ϕ. These positions are properly designated by TK. We can establish the following relationship between I(k) and I(l) based on (16) [71]: I(k) < I(l) if and only if xl < xTK for all l in TK.

By analyzing these subcases and relationships, we gain a deeper understanding of the implications for the optimal scheduling of jobs in different permutations and their effects on the maximum job flow time Fmax(π). Further exploration of these findings can lead to novel approaches for job sequencing optimization in scheduling problems.

Iπ(TK) = Iϕ(TK) − Σ_{l∈Pϕ^TK ∩ Pπ^TK} (xl − yl) (23)

As the research proceeds, it becomes evident that certain cases have a significant impact on the optimal job sequencing and the cumulative idle time of job k. Let us further analyze the two subcases under Case (1.1.i.b). This analysis of the idle time and job sequencing helps us gain insights into the optimal scheduling of jobs and the impact of different permutations on the overall performance. It allows for the identification of critical points where the job makespan increases or decreases, thereby providing valuable information for improving scheduling strategies. Further investigation into these subcases can lead to more refined and efficient approaches to solving scheduling problems.

Iπ(j) = Iϕ(TK) − Σ_{l∈Pϕ^TK ∩ Sπ^TK} (xl − yl) − yTK + Σ_{l∈Pϕ^TK ∩ Sπ^TK} (xl − yl) − (xj − yj) + xj (24)

By analyzing these subcases under Case (1.1.ii.a), we gain a deeper understanding of how certain task sequences can lead to an increased makespan and suboptimal scheduling. These insights can be valuable in developing improved algorithms for job sequencing and scheduling to enhance the overall efficiency and performance of systems.

Iϕ(TK) = Σ_{l∈Pϕ^TK ∩ Pπ^TK} (xl − yl) + Σ_{l∈Pϕ^TK − Pπ^TK} (xl − yl) + xTK (25)

The first term is associated with the tasks common to ω and π that precede TK, while the second term accounts for the ancestors of TK in ω that are part of Pkω but excluded from the ancestors in π.

From these correspondences and inequalities, we obtain important insights into the relative positioning of tasks in ω and π, which provide valuable clues for identifying non-optimal permutations. These observations enable us to analyze the impact of different arrangements on the makespan and determine the optimal job sequence, leading to more efficient scheduling strategies. By leveraging such findings, we can further improve the performance of job sequencing algorithms and contribute to enhanced task execution and overall system efficiency.

Iϕ(TK) = Σ_{l∈Pϕ^TK ∩ Pπ^TK} (xl − yl) + xTK (26)

and

Σ_{l∈Pϕ^TK ∩ Pπ^TK} (xl − yl) + xTK < Σ_{l∈Pπ^j} (xl − yl) + xj (27)

The right-hand side of (27) is comparable to Iπ(l).

Iϕ(TK) < Σ_{l∈Pϕ^TK ∩ Pπ^TK} (al − bl) + aTK + (aj − bj) (28)

The right-hand side of (28) is equivalent to Iπ(TK). Consequently, I(TK) > Iω(TK), and such an arrangement π cannot be ideal. This argument readily generalizes to more than one job satisfying the same condition as l [72].

Situation 1: All the tasks in Pω^TK ∩ Sπ^TK belong to the SPT set. In this case, we have the following relation:

Iπ(TK) = Iϕ(TK) + Σ_{l∈Pϕ^TK ∩ Sπ^TK} (xl − yl) = Σ_{l∈Sϕ^TK ∩ Pπ^TK} (xl − yl) (29)
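The idle-time quantity Iπ(j) introduced in Statement 1 (eq. (16)) underlies all of these comparisons and can be computed directly; a minimal sketch, with illustrative processing times (a_l, b_l) on machines M1 and M2:

```python
def m2_idle(seq):
    """Cumulative idle time of the second machine up to each job j of the
    sequence, per eq. (16): I(j) = max(0, sum_{l<=j} a_l - sum_{l<=j} b_l)."""
    sum_a = sum_b = 0
    idle = []
    for a, b in seq:
        sum_a += a
        sum_b += b
        idle.append(max(0, sum_a - sum_b))
    return idle

# Illustrative (a_l, b_l) pairs, not taken from the paper's experiments.
idle = m2_idle([(4, 2), (3, 6), (5, 1)])
```

The job attaining the maximum of this list is the critical task of Statement 2.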
As noted above, xl > yl for l ∈ Sω^TK ∩ Pω^TK; moreover, xl < yl for l ∈ Pω^TK ∩ Sπ^TK. Clearly, I(TK) > Iω(TK) in (29), and consequently π cannot be ideal.

Situation 2: All the positions of Pω^TK ∩ Sπ^TK belong to LPT. Let l be the last job of Pω^TK ∩ Sπ^TK, i.e., the last task among the successors of TK in π that belongs to the ancestors of TK in ω. First, we assume that only the tasks of Pω^TK ∩ Sπ^TK are placed immediately after TK in π, which means that no tasks from Sω^TK appear in this partial sequence. We express Iπ(l) in terms of Iω(TK) as follows:

Iπ(j) = Iϕ(TK) − Σ_{l∈Pϕ^TK ∩ Sπ^TK} (xl − yl) − yTK + Σ_{l∈Sϕ^TK ∩ Pπ^TK} (xl − yl) + Σ_{l∈Pϕ^TK ∩ Sπ^TK} (xl − yl) − (xj − yj) + xj (30)

In this situation, the ancestors of k in ω are excluded and the new ancestors of TK in π are incorporated. Additionally, the tasks between TK and l are taken into account. Since Pω^TK ∩ Sπ^TK includes task h, (xl − yl) must be subtracted for (30) to hold.

Since TK ∈ LPT, we have xl > yl for h ∈ Sω^TK ∩ Pπ^TK. Likewise, yl > yTK, since l precedes TK in ω and l, TK ∈ LPT. Consequently, I(l) > Iω(TK), and the makespan is increased:

Iπ(j) = Σ_{l∈Pπ^j} (xl − yl) + xj (31)

On the other hand, the cumulative idle time of job k in ξ may be written as follows:

Iϕ(TK) = Σ_{l∈Pπ^j} (xl − yl) + Σ_{l∈Sϕ^TK ∩ Pπ^TK} (xl − yl) + xTK (32)

Since TK ∈ SPT, we have xj < yj for j ∈ Pω^TK. In addition, xl > yTK because TK ∈ l. From (31) and (32), we get that Iπ(j) > Iϕ(j), which means that the makespan is increased.

Algorithm: Dynamic Heuristic Johnson Sequencing using 3 Servers
Input: jobs ((b11, b21), (b12, b22), . . . , (b1n, b2n)) waiting in a queue, in a sequence
Output: an optimal schedule σ
Step 1:
1. the ready queue stores each process in descending order
   // num_pro = number of processes // a = array of jobs waiting in the ready queue
   // i & j = loop counter variables // bst_t = burst time
2. while (rdy_q != null) // rdy_q = ready queue
   // tim_q = time quantum // minbt = minimum burst time of a job // maxbt = maximum burst time of a job
3.   tq = (maxbt_i − minbt_j) / 2
   end of while loop // tq equals bt if there is just one job
4. assign tim_q to (1 to num_pro) jobs: for i = 0 to num_pro loop
5.   bi -> tqn // assign bi time to every job using the 3 servers // tqn = new time quantum
6. Job1 ← {Jj ∈ J: b1j ≤ b2j}
7. Job2 ← {Jj ∈ J: b1j > b2j}
Step 2:
8. place the tasks in Job1 in descending order of the rates of degradation b1j, calling this sequence σ(1)
9. sort the jobs in Job2 according to the non-increasing rates of degradation b2j, calling this sequence σ(2)
Step 3:
10. σ ← (σ(1) | σ(2))
11. return σ

VII. EXPERIMENTAL ANALYSIS
This research study involved a carefully selected set of five tasks, each with specific processing times, as documented in Table 3. To create a simulation that emulates real-world cloud computing scenarios and to evaluate the efficacy of the proposed strategy, a cloud computing environment was utilized. Detailed configurations of the data centers for the simulation experiments are presented in Table 3, providing essential information such as size, functionality, and the presence of hosts, processing elements, and data centers. Each component of the data centers contributes unique throughput, storage, and space-allocation algorithms tailored to hosts and virtual machines. The matrix entries in Table 3 correspond to the five tasks, presenting their respective execution lengths.

The simulation scenario is designed to encompass diverse communication linking storage and computing resources, and the number of files required for each job. The data files for each task were randomly distributed within the range of values defined by x and y, with x set to 1, representing the minimum value, and y representing the maximum value. Consequently, each task may require between one and x files, with each file having a minimum size of 100 MB. To facilitate distribution, the data files were duplicated across multiple storage systems.

For the experimental analysis, several scheduling algorithms were considered: FCFS, Round Robin, and Priority Scheduling using a single machine; FCFS and Johnson sequencing using two machines; and FCFS and DHJS using three machines. Subsequently, the M/M/c/K model was employed to calculate important performance metrics such as average waiting time, total turnaround time, Lq, Ls, Wq, and Ws.

The outcomes of the experiments demonstrated that DHJS using three servers exhibited superior performance compared to the other algorithms. These findings highlight the efficiency and effectiveness of the proposed DHJS approach in cloud computing environments. The ability of the DHJS method to reduce waiting times and improve overall
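The partition-and-concatenate steps of the DHJS box (Job1/Job2 split, the two orderings, and σ ← σ(1)|σ(2)) can be sketched as follows; the burst-time and time-quantum bookkeeping of the earlier steps is omitted. Note one labeled assumption: the classic Johnson rule orders the first subsequence by non-decreasing b1j, and the sketch uses that direction, whereas the pseudo code says "descending."

```python
def dhjs_order(jobs):
    """Sketch of the DHJS ordering steps: split jobs (name, b1, b2) into
    Job1 (b1 <= b2) and Job2 (b1 > b2), order each subsequence, and
    concatenate sigma = sigma(1) | sigma(2)."""
    job1 = [j for j in jobs if j[1] <= j[2]]
    job2 = [j for j in jobs if j[1] > j[2]]
    sigma1 = sorted(job1, key=lambda j: j[1])    # assumption: ascending b1j
    sigma2 = sorted(job2, key=lambda j: -j[2])   # non-increasing b2j
    return sigma1 + sigma2

# Illustrative jobs (name, b1j, b2j), not taken from the paper's experiments.
sched = dhjs_order([("J1", 2, 5), ("J2", 6, 3), ("J3", 4, 4), ("J4", 7, 1)])
```

Under this ordering, jobs that load the second machine heavily are pushed to the front and jobs that finish quickly on it to the back, which is what drives the makespan reduction reported for DHJS.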
range of factors, including data size, task loads, wideband turnaround time makes it a promising option for enhancing
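The time-quantum and sequencing steps of the listing can be sketched in Python. This is an illustrative reading of steps 4 and 7-14 only (function and variable names are ours, and the surrounding simulation environment is not reproduced):

```python
def dynamic_time_quantum(bursts):
    """Step 4 of the listing: tq = (max burst - min burst) / 2.
    When only one job is waiting, tq equals that job's burst time."""
    if len(bursts) == 1:
        return bursts[0]
    return (max(bursts) - min(bursts)) / 2

def johnson_partition(b1, b2):
    """Steps 7-14: split jobs into Job1 (b1j <= b2j) and Job2 (b1j > b2j),
    order each group by non-increasing rate, and return sigma(1) | sigma(2)."""
    jobs = range(len(b1))
    job1 = [j for j in jobs if b1[j] <= b2[j]]                # step 7
    job2 = [j for j in jobs if b1[j] > b2[j]]                 # step 8
    sigma1 = sorted(job1, key=lambda j: b1[j], reverse=True)  # step 10
    sigma2 = sorted(job2, key=lambda j: b2[j], reverse=True)  # steps 11-12
    return sigma1 + sigma2                                    # step 14
```

For example, with waiting bursts [0.25, 0.35, 0.21] the quantum is (0.35 - 0.21)/2 = 0.07.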

VOLUME 11, 2023 105597


P. Banerjee et al.: MTD-DHJS: Makespan-Optimized Task Scheduling Algorithm for Cloud Computing

TABLE 3. Processing time matrix.

task scheduling and resource utilization in cloud computing scenarios.

In this research, the proposed DHJS-based framework is subjected to analysis using the M/M/c/K queuing model to investigate the dynamics of waiting queues, as illustrated in Figure 1. The input data for this study comprises users' requests for execution in the context of cloud computing, following the service-provider paradigm. Consequently, it becomes the responsibility of the cloud service provider to efficiently manage and schedule diverse demands. Notably, most existing research focuses on scheduling jobs after they have been added to an existing task list. However, the crucial aspect lies in how the service provider handles incoming tasks, as this marks the true beginning of the task planning and resource management process.

To conduct the experiments, data from the Cybershake process serves as the entry jobs for the proposed system. The Cybershake scientific methodology, employed by the Southern California Earthquake Centre (SCEC), is visualized in Figure 7 and utilizes the Probabilistic Seismic Hazard Analysis (PSHA) method to characterize earthquake hazards. Moreover, the generation of green strain tensors (GSTs) is an integral part of this process. Table 4 presents a comprehensive list of the Cybershake seismogram synthesis jobs along with their respective dimensions and execution periods.

The Cybershake scientific workflow sample tasks involve complex computational procedures, resulting in resource-intensive tasks. In particular, the process of seismogram synthesis requires significant computational time and consumes a substantial portion of the Cybershake's overall execution time. Furthermore, these tasks demand substantial computing resources, including both CPU and memory time.

By employing the proposed DHJS approach and utilizing the M/M/c/K model, this research aims to shed light on how cloud computing environments can optimize task scheduling, reduce waiting times, and enhance resource utilization. Through the analysis of the waiting queues and task performance metrics, valuable insights can be gained to improve the overall efficiency and effectiveness of cloud computing systems.

The seismogram synthesis tasks employed in this study represent a computationally challenging aspect of the Cybershake scientific workflow. These tasks significantly contribute to the overall execution time of the Cybershake process. Due to their complexity, seismogram synthesis tasks demand substantial computing resources, encompassing both CPU and memory time. The task sizes, denoted as 68, 50, 100, and 1000, have been carefully selected to serve as sample tasks for the Cybershake scientific workflow. These task sizes exemplify the various difficulties and computational intensity inherent in seismogram synthesis, further highlighting the significance of optimizing scheduling and resource allocation in cloud computing environments to enhance overall performance.

FIGURE 7. Cybershake scientific workflow.

The scientific procedure utilized in the Cybershake workflow encompasses five distinctive phases, each serving a crucial role in the analysis and characterization of earthquake hazards through the Probabilistic Seismic Hazard Analysis (PSHA) method:
1. Extract GST: The first stage involves the extraction of Green strain tensor (GST) data. This step is vital to prepare the data for subsequent processing and analysis.
2. Seismogram Generation: Among all the tasks within the Cybershake technique, seismogram generation stands out as the most computationally intensive phase. It dominates the overall execution time of the Cybershake process.
3. ZipSeis: During this phase, the data compiled from previous stages undergoes processing and compression, optimizing storage and facilitating further analysis.
4. PeakValCalcOkaya: In this stage, the peak values of the generated seismograms are calculated. This step is of paramount importance for subsequent analysis and interpretation of seismic data.
5. ZipPSA: The final phase involves the compilation and processing of the analyzed information to yield the desired results. This stage culminates in the comprehensive characterization of earthquake hazards.
The successful execution of each of these phases contributes to the overall effectiveness and accuracy of the Cybershake scientific process, enabling the Southern California Earthquake Centre (SCEC) to better understand and mitigate


TABLE 4. Cyber shake synthesis tasks.

earthquake risks. The utilization of the Probabilistic Seismic Hazard Analysis method and the careful management of computationally intensive tasks in this workflow are essential steps towards enhancing seismic hazard assessment and preparedness.

A. SCIENTIFIC METHODOLOGY FOR EPIGENOMICS
The scientific methodology employed in epigenomics, which aims to automate genome sequencing, is presented in Figure 8. This comprehensive procedure comprises various resource-intensive tasks critical for achieving accurate genome analysis. The resulting data is then transmitted to the Mag network for further processing and analysis. Considering the presence of several time-consuming activities within this process, we have chosen to conduct our experimental analysis using a carefully selected sample of five tasks from the overall dataset, as illustrated in Table 5. To prioritize the execution of these tasks, we have assigned specific priorities based on their respective sizes. Notably, the task with the smallest size has been assigned the highest priority, given the expectation that it will be completed faster than the other tasks in the set. This prioritization strategy is crucial for efficient resource allocation and optimal workflow management in the context of epigenomic analysis.

FIGURE 8. Epigenomics scientific workflow.

B. CALCULATION OF FCFS, PRIORITY AND ROUND ROBIN SCHEDULING ALGORITHM USING SINGLE MACHINE
In this section, we present the computations and analysis of the First-Come-First-Serve (FCFS), Priority, and Round-Robin scheduling methods on a single machine. To evaluate the performance of these scheduling algorithms, we calculate several key metrics, including the Average Waiting Time (AWT), Total Turnaround Time (TAT), Average Number of Tasks in the System (Ls), Average Number of Tasks in the Queue (Lq), and Average Waiting Time of a Task in the Queue (Wq). These metrics provide valuable insights into the efficiency and effectiveness of each scheduling approach and help in understanding their impact on task execution and resource utilization. By employing the relevant formulas, specifically equations 1 to 12, we can derive these important performance indicators, enabling a comprehensive assessment of the scheduling methods under consideration.

C. CALCULATION OF FCFS ALGORITHM BY USING SINGLE MACHINE
In the First-Come-First-Serve (FCFS) algorithm on a single machine, we begin by calculating the mean service time using equations 1 and 2. Next, we determine the service time for

TABLE 5. Sample of five tasks from the overall dataset for our experimental analysis.
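The M/M/c/K quantities named in Section B (Lq, Ls, Wq, Ws) can be computed from the standard finite-capacity queueing formulas. The sketch below follows the textbook M/M/c/K model rather than any paper-specific variant; the inputs lam, mu, c and K are illustrative:

```python
from math import factorial

def mmck_metrics(lam, mu, c, K):
    """Standard M/M/c/K metrics.
    lam: arrival rate, mu: per-server service rate,
    c: number of servers, K: system capacity (queue + service)."""
    a = lam / mu                       # offered load
    # Unnormalized state probabilities p_n for n = 0..K.
    p = [a**n / factorial(n) if n < c
         else a**n / (factorial(c) * c**(n - c))
         for n in range(K + 1)]
    norm = sum(p)
    p = [x / norm for x in p]
    Lq = sum((n - c) * p[n] for n in range(c, K + 1))  # mean queue length
    lam_eff = lam * (1 - p[K])                         # accepted arrival rate
    Wq = Lq / lam_eff                                  # mean wait in queue
    Ws = Wq + 1 / mu                                   # mean time in system
    Ls = Lq + lam_eff / mu                             # mean number in system
    return Lq, Ls, Wq, Ws
```

As a sanity check, for c = 1 and a large capacity K the results converge to the classical M/M/1 values Lq = rho^2/(1 - rho) and Ls = rho/(1 - rho).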


each process based on the Gantt chart depicted in Figure 9. With this information, we can proceed to compute the mean service time and the average service rate. Upon analyzing the results, it is observed that tasks TK1 to TK5 require an execution time ranging from 0 to 1.27 milliseconds, which is relatively high compared with a multi-server machine.

To comprehensively analyze the FCFS algorithm's performance, we employ equations 1 to 12 and present the results in Tables 12 to 16. Additionally, we utilize equations 7 to 10 to calculate the average waiting time and total turnaround time for each task. These metrics serve as valuable performance indicators and are instrumental in evaluating the efficiency and effectiveness of the FCFS algorithm in the single-machine setup. The calculations and results are thoroughly discussed, and further insights are gained from equations (31) and (32), which allow for a comprehensive assessment of the average waiting time and total turnaround time for each task in the FCFS algorithm.

FIGURE 9. Gantt chart of FCFS scheduling using single server.

For TK1: service time is (0.25-0) = 0.25,
TK2: (0.60-0.25) = 0.35,
TK3: (0.81-0.60) = 0.21,
TK4: (0.90-0.81) = 0.09,
TK5: (1.27-0.90) = 0.37.
So mean service time = 1.27/5 = 0.254 ms, and the service rate is µ = 1/E(s) = 3.93700787 tasks/ms, by using equations (1) and (2).

Waiting time calculation of FCFS scheduling:
Task TK1: 0 ms
Task TK2: (0.25-0.02) = 0.23 ms
Task TK3: (0.60-0.04) = 0.56 ms
Task TK4: (0.81-0.07) = 0.74 ms
Task TK5: (0.96-0.09) = 0.87 ms
Average waiting time (AWT) = (0 + 0.23 + 0.56 + 0.74 + 0.87)/5 = 0.48 ms, by using equation (11).

Turnaround time calculation of FCFS scheduling:
Task TK1: (0.25-0) = 0.25 ms
Task TK2: (0.60-0.02) = 0.58 ms
Task TK3: (0.81-0.04) = 0.77 ms
Task TK4: (0.96-0.07) = 0.89 ms
Task TK5: (1.27-0.09) = 1.18 ms
Average turnaround time (TAT) = (0.25 + 0.58 + 0.77 + 0.89 + 1.18)/5 = 0.734 ms, by using equation (12).

D. CALCULATION OF PRIORITY SCHEDULING ALGORITHM BY USING SINGLE MACHINE
In the priority scheduling algorithm on a single machine, we begin by calculating the mean service time using equations 1 and 2. Next, we determine the service time for each process based on the Gantt chart depicted in Figure 10. With this information, we can proceed to compute the mean service time and the average service rate. It is worth noting that tasks TK1 to TK5 are expected to complete their execution within a time range of 0 to 1.27 milliseconds, which is relatively high when considering a multi-server machine scenario.

To comprehensively analyze the priority scheduling algorithm's performance, we employ equations 1 to 12 and present the results in Tables 12 to 16. These tables provide valuable insights into the system's behavior and efficiency. Additionally, we utilize equations 7 to 10 to calculate the average waiting time and total turnaround time for each task. These metrics serve as crucial indicators in evaluating the effectiveness of the priority scheduling algorithm in the single-machine setup. The calculations and results are thoroughly discussed, and further insights are gained from equations (31) and (32), allowing for a comprehensive assessment of the average waiting time and total turnaround time for each task in the priority scheduling algorithm.

For TK1: service time is (0.02-0) + (0.96-0.73) = 0.25,
TK2: (0.04-0.02) + (0.58-0.25) = 0.35,
TK3: (0.25-0.04) = 0.21,
TK4: (0.73-0.58) = 0.15,
TK5: (1.27-0.96) = 0.31.
So mean service time = 1.27/5 = 0.254 ms, and the service rate is µ = 1/E(s) = 3.93700787 tasks/ms, by using equations (1) and (2).

Waiting time calculation of priority scheduling:
Task TK1: 0 + (0.73-0.02) = 0.71 ms
Task TK2: 0 + (0.25-0.04) = 0.21 ms
Task TK3: 0 ms
Task TK4: (0.58-0.07) = 0.51 ms
Task TK5: (0.96-0.09) = 0.87 ms
Average waiting time (AWT) = (0.71 + 0.21 + 0 + 0.51 + 0.87)/5 = 0.46 ms, by using equation (11).

Turnaround time calculation of priority scheduling:
Task TK1: 0.96 ms
Task TK2: (0.58-0.02) = 0.56 ms
Task TK3: (0.25-0.04) = 0.21 ms
Task TK4: (0.73-0.07) = 0.66 ms
Task TK5: (1.27-0.09) = 1.18 ms
Average turnaround time (TAT) = (0.96 + 0.56 + 0.21 + 0.66 + 1.18)/5 = 0.714 ms, by using equation (12).

E. CALCULATION OF ROUND ROBIN SCHEDULING ALGORITHM BY TAKING TIME QUANTUM = 0.10 MS USING SINGLE MACHINE
In the Round Robin scheduling algorithm utilizing a single machine, we initiate the analysis by calculating the mean service time using equations 1 and 2. Subsequently, we proceed

FIGURE 10. Gantt chart of priority scheduling using single server.
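The per-task arithmetic shown above follows the usual non-preemptive pattern: waiting time = start time minus arrival time, and turnaround time = completion time minus arrival time. The sketch below is a generic FCFS calculator with illustrative arrival and burst values, not the paper's Table 3 data:

```python
def fcfs_metrics(arrivals, bursts):
    """Non-preemptive FCFS on one machine, jobs served in arrival order.
    Returns (average waiting time, average turnaround time)."""
    t, waits, tats = 0.0, [], []
    for arr, burst in sorted(zip(arrivals, bursts)):
        start = max(t, arr)         # machine may idle until the job arrives
        t = start + burst           # completion time of this job
        waits.append(start - arr)   # time spent waiting in the ready queue
        tats.append(t - arr)        # turnaround = completion - arrival
    return sum(waits) / len(waits), sum(tats) / len(tats)
```

For example, arrivals [0, 1, 2] with bursts [3, 2, 1] give completions 3, 5 and 6, hence AWT = (0 + 2 + 3)/3 and TAT = (3 + 4 + 4)/3.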


FIGURE 11. Gantt chart of round robin scheduling using single machine.

to compute the service time for each process based on the information presented in the Gantt chart shown in Figure 11. By doing so, we can determine the mean service time and the average service rate for the given tasks.

Upon closer examination, we find that tasks TK1 to TK5 are expected to complete their execution within the time range of 0 to 1.27 milliseconds, which may be considered relatively high when compared with a multi-server machine context.

To comprehensively assess the performance of the Round Robin scheduling algorithm, we utilize equations 1 to 12 to calculate various metrics. The obtained results are presented in Tables 12 to 15, providing detailed insights into the system's behavior and performance. Additionally, we employ equations 7 to 10 to derive the average waiting time and total turnaround time for each task, crucial factors in evaluating the efficiency of the Round Robin scheduling algorithm in the context of a single machine. The calculations and outcomes are thoroughly discussed, with further analysis facilitated by equations (31) and (32), allowing for a comprehensive evaluation of the average waiting time and total turnaround time for each task in the Round Robin scheduling algorithm.

TABLE 6. Initial processing time of different tasks using two machines.

F. IMPLEMENTATION OF FCFS USING 2 MACHINES
In the FCFS algorithm utilizing two machines, the burst time of each task is allocated across the two machines. Based on the task sequence presented in Table 6, tasks are executed on the first machine first, followed by those on the second machine. The tasks that are present in the ready queues of both servers are processed based on their IN-OUT times, following the First-Come-First-Serve (FCFS) method, as shown in Table 7.

Taking Task TK1 as an example, it begins execution on machine 1 and completes on machine 2, with an IN-TIME of 0 ms and an OUT-TIME of 0.14 ms, respectively. After Task TK1 finishes its execution on machine 1, it immediately proceeds to machine 2 within a time frame of 0.14 to 0.25 milliseconds. This pattern is observed for all tasks, ensuring a consistent and efficient execution process.

TABLE 7. In-out time of two machines of different tasks in FCFS.

To evaluate the performance of the FCFS scheduling algorithm using two machines, equations 1 and 2 are employed to calculate the mean service time. By determining the service time for each process using the Gantt chart depicted in Figure 12, we can estimate the mean service time and the average service rate.

Upon analysis, it is observed that the execution times for tasks TK1 to TK5 range from 0 to 0.85 ms, which is comparatively less than in the single-server system. The computed results utilizing equations 1 through 12 are summarized in Tables 12 to 14, providing detailed insights into the system's performance.

For further evaluation of the system's efficiency, the average waiting time and total turnaround time of each task are computed using equations (31) and (32), respectively. These calculations contribute to a comprehensive assessment of the FCFS scheduling algorithm's effectiveness when implemented with two machines.

In this section, we proceed with the computation of the service time for each process using the Gantt chart displayed in Figure 12. By analyzing the Gantt chart, we can

FIGURE 12. Gantt chart of FCFS scheduling using 2 machines.


determine the execution time for each task on the respective machines. Subsequently, we will calculate the mean service time and the average service rate, which are important performance metrics in evaluating the efficiency of the scheduling algorithm. These calculations will provide valuable insights into the system's behavior and resource utilization, enabling us to make informed comparisons and assessments of the algorithm's performance.

For TK1: service time is (0.25-0) = 0.25,
TK2: (0.49-0.14) = 0.35,
TK3: (0.63-0.30) = 0.33,
TK4: (0.72-0.37) = 0.35,
TK5: (0.85-0.43) = 0.42.
So mean service time = 1.7/5 = 0.34 ms, and the service rate is 1/E(s) = 2.94117647 tasks/ms, by using equations (7) to (10).

Waiting time calculation of FCFS scheduling using 2 machines:
Task TK1: 0 ms
Task TK2: (0.14-0.02) = 0.12 ms
Task TK3: (0.30-0.04) = 0.26 ms
Task TK4: (0.37-0.07) + (0.49-0.43) = 0.36 ms
Task TK5: (0.43-0.09) + (0.72-0.61) = 0.45 ms
Average waiting time (AWT) = (0 + 0.12 + 0.26 + 0.36 + 0.45)/5 = 0.238 ms, by using equation (11).

Turnaround time calculation of FCFS scheduling:
Task TK1: (0.25-0) = 0.25 ms
Task TK2: (0.49-0.02) = 0.47 ms
Task TK3: (0.63-0.04) = 0.59 ms
Task TK4: (0.72-0.07) = 0.65 ms
Task TK5: (0.85-0.09) = 0.76 ms
Average turnaround time (TAT) = (0.25 + 0.47 + 0.59 + 0.65 + 0.76)/5 = 0.544 ms, by using equation (12).

G. IMPLEMENTATION OF JOHNSON SEQUENCING USING 2 MACHINES
The proposed framework incorporates the Johnson Sequencing algorithm to achieve an optimized task arrangement, followed by the implementation of the M/M/c/K queuing model to analyze waiting queues, as visually represented in the Figure 13 Gantt chart. As elaborated in Table 8, a task is characterized as a set of instructions currently being executed, while a process represents a collection of these same instructions or programs. Processes progress through various phases during their execution cycle, and a single process can encompass multiple threads, each dedicated to performing diverse tasks. In practice, computers commonly employ batch processing to efficiently handle high-volume, repetitive data procedures, as exemplified in Table 8. However, when applied to individual data transactions, several data processing operations, such as backups, filtering, and sorting, may prove computationally expensive and yield suboptimal outcomes.

The combined utilization of the Johnson Sequencing algorithm and the M/M/c/K queuing analysis presents a promising approach to attaining a streamlined task-scheduling arrangement, leading to improved system performance and better resource utilization. By optimizing the task order and considering the dynamics of the task execution cycle, this methodology can enhance the overall efficiency of data processing tasks and support more effective decision-making in cloud computing and other resource-intensive computing environments. Further experimental evaluations and in-depth performance analyses are required to validate the effectiveness and practicality of this approach in real-world scenarios, paving the way for its potential integration into existing cloud computing infrastructures and resource management strategies.

TABLE 8. In-out time of two machines of different tasks in Johnson sequencing.

The mean service time and average service rate can be determined once the service times for each process shown in Figure 13 have been computed. As outlined in Table 8, Johnson recommends modifying his strategy in cases where two machines are involved. In such scenarios, the total burst time of a specific task is distributed between the two machines. The task with the lesser processing time on either of the two machines is placed in the execution queue first. In our case, based on Table 6, Task TK4 has the shortest processing time of 0.06 ms on machine 1. Therefore, TK4 will be executed first, followed by TK3 with a processing time of 0.07 ms on machine 1, and so on. This sequence yields the task execution order TK4-TK3-TK2-TK5-TK1.

For this application of the Johnson sequencing scheduling technique using two machines, the mean service time is computed using equations 1 and 2. Upon determining the service time for each process, as visualized in the Figure 13 Gantt chart, we can then calculate the mean service time and the average service rate. Notably, the execution durations for tasks TK1 to TK5 range from 0 to 0.72 ms, which is considerably shorter than with the FCFS method employing a multi-server machine and with the single-server system. The calculations are performed using equations (1) through (12), and the results are tabulated in Table 12. Subsequently, equations (7) to (10) are employed for the calculation of the average waiting time and total turnaround time of each task, thereby providing valuable insights into the overall performance and efficiency of the Johnson Sequencing algorithm with two machines in a resource-intensive computing environment. Further analysis and experimental evaluations are necessary to validate the superiority of this approach and its potential application in real-world cloud computing scenarios.


FIGURE 13. Gantt chart of Johnson sequencing scheduling using 2 machines.

In this phase of the study, we proceed to determine the service time for each process based on the information presented in the Figure 13 Gantt chart. Once the service times for all processes have been established, we proceed to calculate the mean service time and the average service rate. These metrics are essential in evaluating the performance and efficiency of the scheduling algorithm under consideration. The Gantt chart provides a visual representation of the execution durations of individual tasks, which aids in accurate calculations and comparative analysis. By quantifying the mean service time and average service rate, we gain valuable insights into the system's response time and resource utilization, enabling us to make informed decisions about process scheduling and optimization. The results of these calculations will be pivotal in assessing the effectiveness and suitability of the proposed algorithm in resource-intensive computing environments and may serve as a basis for further refinement and application in real-world cloud computing scenarios.

For TK4: service time is (0.15-0) = 0.15,
TK3: (0.29-0.06) = 0.23,
TK2: (0.48-0.13) = 0.35,
TK5: (0.61-0.29) = 0.32,
TK1: (0.72-0.47) = 0.25.
So mean service time = 1.3/5 = 0.26 ms, and the service rate is 1/E(s) = 3.84615385 tasks/ms, by using equations (7) to (10).

Waiting time calculation of Johnson sequencing using 2 machines:
Task TK1: 0.61 ms
Task TK2: (0.13-0.02) = 0.11 ms
Task TK3: (0.06-0.04) = 0.02 ms
Task TK4: 0 ms
Task TK5: (0.29-0.09) = 0.20 ms
Average waiting time (AWT) = (0.61 + 0.11 + 0.02 + 0 + 0.20)/5 = 0.188 ms, by using equation (11).

Turnaround time calculation of Johnson sequencing:
Task TK1: 0.72 ms
Task TK2: (0.48-0.02) = 0.46 ms
Task TK3: (0.29-0.04) = 0.25 ms
Task TK4: (0.15-0.07) = 0.08 ms
Task TK5: (0.61-0.09) = 0.52 ms
Average turnaround time (TAT) = (0.72 + 0.46 + 0.25 + 0.08 + 0.52)/5 = 0.406 ms, by using equation (12).

H. IMPLEMENTATION OF FCFS SCHEDULING USING 3 MACHINES
To begin the analysis of the First-Come-First-Serve (FCFS) scheduling algorithm using three machines, we refer to Table 9 for the necessary data. Subsequently, we utilize Table 10 to determine the IN and OUT timings of each task across the three machines. For instance, consider the IN-OUT time of task TK1 in this particular case: its execution on Machine 1 spans from 0 to 0.08 milliseconds. Following the completion of TK1's execution on Machine 1, Machine 2 takes over and executes TK1 between 0.08 and 0.16 milliseconds. After Machine 2 finishes processing TK1, Machine 3 executes it between 0.16 and 0.25 milliseconds. Similarly, this sequential processing pattern is observed for all the tasks, from TK2 through TK5. Once the Gantt chart is generated, we can proceed to calculate the average waiting time for each task in the queue. This metric will provide valuable insights into the efficiency of the FCFS algorithm when applied to a multi-machine setup and help in further understanding the system's overall performance and resource utilization. The obtained results will serve as a basis for assessing the strengths and limitations of the FCFS approach and may inform potential areas for optimization and improvement.

In this analysis of the First-Come-First-Serve (FCFS) scheduling method with three machines, we utilize equations 1 and 2 to calculate the mean service time. Once the service times of each process have been determined based on the Figure 14 Gantt chart, we can estimate the mean service time and the average service rate. The Gantt chart provides a visual representation of the execution times for tasks TK1 to TK5, which range from 0 to 0.66 ms. Notably, this duration is shorter than for single-server systems, as well as for the FCFS and Johnson Sequencing approaches with two servers.

The obtained results are computed using equations 1 to 12, and the detailed outcomes are presented in Tables 12 to 16, utilizing equations 7 to 10. These comprehensive tables provide valuable insights into the performance metrics of the FCFS method, helping us understand factors such as average waiting time and total turnaround time for each task.

The calculated values of average waiting time and total turnaround time are essential for evaluating the efficiency and effectiveness of the FCFS scheduling method with three machines. These metrics will provide a quantitative assessment of the system's performance and resource utilization,

TABLE 9. Initial processing time of different tasks using three machines.
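The IN-OUT bookkeeping described for the three machines follows the standard flow-shop recurrence: a task starts on machine m as soon as both the previous task on machine m and its own run on machine m-1 have finished. A sketch with illustrative processing times (not Table 10's values, although the first job reproduces the 0-0.08, 0.08-0.16, 0.16-0.25 pattern described above):

```python
def flow_shop_in_out(order, proc):
    """proc[j][m]: processing time of job j on machine m.
    Returns out[(j, m)], the completion (OUT) time of job j on machine m."""
    machines = len(proc[0])
    out = {}
    free = [0.0] * machines       # when each machine next becomes free
    for j in order:
        prev_out = 0.0            # OUT time on the previous machine
        for m in range(machines):
            start = max(free[m], prev_out)  # wait for machine and for job
            free[m] = start + proc[j][m]
            prev_out = free[m]
            out[(j, m)] = free[m]
    return out
```

For two jobs with proc = [[0.08, 0.08, 0.09], [0.05, 0.06, 0.04]] in order [0, 1], job 0 leaves machines 1-3 at 0.08, 0.16 and 0.25 ms, and job 1 at 0.13, 0.22 and 0.29 ms.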


offering valuable information for further optimization and potential enhancements.

In summary, this comprehensive analysis of the FCFS scheduling method with three machines offers valuable insights into its operational characteristics and highlights its advantages over other scheduling approaches. The presented results contribute to the understanding of how the FCFS algorithm performs in multi-machine scenarios, providing researchers and practitioners with valuable information for making informed decisions in real-world applications.

TABLE 10. Initial processing time of different tasks using three machines.

In the subsequent phase of our investigation, we will determine the service time for each process using the Figure 14 Gantt chart. This detailed chart provides a visual representation of the execution times for each task, enabling us to precisely ascertain the service time for every process involved in the scheduling method under consideration. With the service times accurately established, we can proceed to calculate the mean service time and the average service rate.

By calculating the mean service time, we gain valuable insights into the average duration taken to execute a task, shedding light on the overall efficiency of the scheduling method. Additionally, the average service rate provides information on the rate at which tasks are processed, offering further clarity on the system's performance and resource utilization.

This meticulous analysis will allow us to make well-informed conclusions about the operational characteristics and effectiveness of the scheduling method. Moreover, the obtained results will facilitate a comprehensive comparison with other scheduling approaches, helping researchers and practitioners to select the most suitable scheduling strategy for various real-world scenarios.

The application of the Gantt chart and the subsequent calculations contribute to a robust and scientifically sound evaluation, providing rigorous evidence to support our findings. The accuracy and rigor of the analysis will enhance the credibility of the research and ensure that the outcomes are both reliable and trustworthy. Ultimately, these findings will contribute to the advancement of knowledge in the field of scheduling algorithms and their practical implications in diverse applications, including cloud computing, data processing, and resource management systems.

For TK1: service time is (0.25-0) = 0.25,
TK2: (0.43-0.08) = 0.35,
TK3: (0.50-0.20) = 0.30,
TK4: (0.55-0.28) = 0.27,
TK5: (0.66-0.33) = 0.33.
So mean service time = 1.50/5 = 0.30 ms, and the service rate is 1/E(s) = 3.33333333 tasks/ms, by using equations (7) to (10).

Waiting time calculation of FCFS scheduling using 3 machines:
Task TK1: 0 ms
Task TK2: (0.08-0.02) = 0.06 ms
Task TK3: (0.20-0.04) = 0.16 ms
Task TK4: (0.28-0.07) = 0.21 ms
Task TK5: (0.33-0.09) = 0.24 ms
Average waiting time (AWT) = (0 + 0.06 + 0.16 + 0.21 + 0.24)/5 = 0.134 ms, by using equation (11).

Turnaround time calculation of FCFS scheduling:
Task TK1: 0.25 ms
Task TK2: (0.43-0.02) = 0.41 ms
Task TK3: (0.50-0.04) = 0.46 ms
Task TK4: (0.55-0.07) = 0.48 ms
Task TK5: (0.66-0.09) = 0.57 ms
Average turnaround time (TAT) = (0.25 + 0.41 + 0.46 + 0.48 + 0.57)/5 = 0.434 ms, by using equation (12).

I. IMPLEMENTATION OF DHJS SCHEDULING USING 3 MACHINES
In the subsequent phase of our investigation, we will calculate the service times for each process using the Figure 14 Gantt chart, which provides a visual representation of the execution times for each task. With the service times accurately determined, we can proceed to compute the mean service time and average service rate.

As suggested by Johnson, the scheduling approach is adapted when three machines are utilized instead of two. In this scenario, each task must be completed sequentially by each of the three machines: M1, M2, and M3. The jobs are assigned estimated processing times, and the objective is to find an optimal solution based on two possible assumptions:
1. Jobs within each task group are structured using the RM priority assignment approach, and the task groups themselves are also prioritized.
2. Each task group consists of unique projects.
By employing a Gantt chart, similar to Figure 14, we can calculate the average service rate equivalent to the Johnson

FIGURE 14. Gantt chart of FCFS scheduling using 3 machines.
105604 VOLUME 11, 2023


P. Banerjee et al.: MTD-DHJS: Makespan-Optimized Task Scheduling Algorithm for Cloud Computing

TABLE 11. Initial processing time of different tasks using three machines for DHJS.

sequencing approach. This chart visually illustrates how job activities on each machine evolve over time, displaying horizontal bars for the occupied and idle hours of each schedule. The Gantt chart aids in verifying makespan, machine downtime, and task waiting times; however, it lacks a systematic method for optimizing schedules and relies on the analyst's intuition to improve the timeframe.

Once the service times for each process indicated in the Fig. 14 Gantt chart have been determined, we proceed to calculate the mean service time and average service rate. In the case of three machines, we follow the approach outlined in Table 4: the total burst duration of a given task is distributed among the three machines, and the task with the shortest processing time among the machines is given the highest priority in the execution queue. From Table 11, task TK4 has the shortest processing time on machine 1, at 0.06 ms, so TK4 is executed first. TK3 takes the next-shortest time on machine 1, at 0.07 ms, so TK3 follows TK4, and so on. The order of task execution is therefore TK4-TK3-TK1-TK5-TK2.

By employing equations 1 and 2, we calculate the mean service time in this application of the Johnson sequencing scheduling method with three machines. The execution durations for tasks TK1 to TK5, derived from the Fig. 14 Gantt chart, range from 0 to 0.61 ms, a considerable reduction compared to other scheduling methods, including FCFS, Johnson sequencing with a single server, and the single-server system. The results of the computations are presented in Tables 12 to 16, and equations 31 and 32 are used to determine the average waiting time and overall turnaround time for each task.

Reading the service times from the Gantt chart:
For TK4: service time = (0.15 - 0) = 0.15 ms
TK3: (0.24 - 0.03) = 0.21 ms
TK1: (0.33 - 0.06) = 0.27 ms
TK5: (0.47 - 0.14) = 0.33 ms
TK2: (0.61 - 0.23) = 0.38 ms
So the mean service time E(S) = 1.34/5 = 0.268 ms, and the service rate 1/E(S) = 3.73134328 tasks per ms, by using equations (7) to (10).
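The service-time arithmetic above can be sketched in a few lines (a minimal illustration that reads the start/completion pairs off the Gantt chart; not the authors' implementation):

```python
# Service times read from the Gantt chart: (start, completion) in ms,
# in the execution order TK4-TK3-TK1-TK5-TK2.
gantt = {"TK4": (0.00, 0.15), "TK3": (0.03, 0.24), "TK1": (0.06, 0.33),
         "TK5": (0.14, 0.47), "TK2": (0.23, 0.61)}

# Service time of each task = completion - start.
service = {t: round(c - s, 2) for t, (s, c) in gantt.items()}

mean_service_time = sum(service.values()) / len(service)  # E(S)
service_rate = 1 / mean_service_time                      # 1/E(S)

print(round(mean_service_time, 3))  # 0.268
print(round(service_rate, 8))       # 3.73134328
```

This reproduces the figures in the text: E(S) = 1.34/5 = 0.268 ms and a service rate of about 3.7313 tasks per ms.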
Waiting time calculation of Johnson scheduling using 3 machines:
Task TK1: 0.06 ms
Task TK2: (0.23 - 0.02) = 0.21 ms
Task TK3: 0 ms
Task TK4: 0 ms
Task TK5: (0.14 - 0.09) = 0.05 ms
Average waiting time (AWT) = (0.06 + 0.21 + 0 + 0 + 0.05)/5 = 0.064 ms, by using equation (11).

Turnaround time calculation of Johnson scheduling using 3 machines:
Task TK1: 0.33 ms
Task TK2: (0.61 - 0.02) = 0.59 ms
Task TK3: (0.24 - 0.04) = 0.20 ms
Task TK4: (0.15 - 0.07) = 0.08 ms
Task TK5: (0.47 - 0.09) = 0.38 ms
Average turnaround time (ATT) = (0.33 + 0.59 + 0.20 + 0.08 + 0.38)/5 = 0.316 ms, by using equation (12).
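The waiting and turnaround figures follow the usual definitions (waiting = start of service minus arrival, turnaround = completion minus arrival). A small sketch, with arrival times inferred from the subtractions above (an assumption, since the paper does not tabulate arrivals directly):

```python
# Arrival, first-machine start, and completion times in ms, as read from
# the Fig. 14 Gantt chart; arrivals are inferred, not tabulated in the paper.
tasks = {  # task: (arrival, start, completion)
    "TK1": (0.00, 0.06, 0.33),
    "TK2": (0.02, 0.23, 0.61),
    "TK3": (0.04, 0.03, 0.24),
    "TK4": (0.07, 0.00, 0.15),
    "TK5": (0.09, 0.14, 0.47),
}

# Waiting time = delay between arrival and first service (never negative);
# turnaround time = completion - arrival.
waiting = {t: max(0.0, round(s - a, 2)) for t, (a, s, c) in tasks.items()}
turnaround = {t: round(c - a, 2) for t, (a, s, c) in tasks.items()}

awt = sum(waiting.values()) / len(waiting)        # eq. (11)
att = sum(turnaround.values()) / len(turnaround)  # eq. (12)

print(round(awt, 3))  # 0.064
print(round(att, 3))  # 0.316
```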
1) RESULTS
Tables 12 to 16 present the results obtained from comparing the DHJS algorithm with other scheduling algorithms, namely FCFS, Priority, Round Robin using a single machine,


TABLE 12. Results by using FCFS, Priority, and Round Robin scheduling algorithms using a single machine.
TABLE 13. Results by using FCFS algorithm using 2 machines.
TABLE 14. Results by using Johnson sequencing algorithm using 2 machines.
TABLE 15. Results by using FCFS algorithm using 3 machines.
TABLE 16. Results by using dynamic heuristic Johnson sequencing algorithm using 3 machines.

FIGURE 15. Trial investigation of the typical number of jobs in the queue (Lq) by using (λ = 1).

FIGURE 16. Trial investigation of the typical number of jobs in the queue (Lq) by using (λ = 2).

and Johnson sequencing using a multi-server machine. Notably, the DHJS algorithm exhibits reductions in both service time and average waiting time, as indicated by the outcomes of equations 1-12. The variables Lq, Ls, Wq, and Ws are interconnected, and their values are displayed in Figs. 8 to 14, which represent the average line length and number of jobs per shift.

To determine the service rate, denoted as μ = 1/E(S), we first calculate the average arrival rate and then the average service time E(S), as presented in Tables 12 to 14. Subsequently, we calculate the probability (P0) that the system is not operational, employing formula (6). The value obtained from this calculation is then used to compute the average number of jobs in the queue (Lq) using equation (7). Building upon the Lq result, we calculate the system's average task load (Ls) through formula (8). Additionally, formula (9) is applied to calculate the average wait time for tasks in the system (Ws), while formula (10) gives the average wait time for tasks in the queue (Wq).

Upon examining the results, it becomes evident that the DHJS scheduling algorithm using three server machines provides highly optimal values for Lq, Ls, Wq, and Ws, as depicted in Table 16. In comparison, Table 12 presents the outcomes for FCFS, Priority, and Round Robin scheduling techniques using a single server machine; Table 13 encompasses the results for FCFS using two machines; Table 14, Johnson sequencing using two servers; Table 15, FCFS scheduling using three servers; and finally, Table 16, the DHJS scheduling algorithm employing three servers. The comprehensive analysis presented in this study establishes the superiority of the DHJS approach in multi-server environments, highlighting its efficiency and effectiveness in minimizing both service time and average waiting time. These findings contribute significantly to the understanding

FIGURE 17. Trial investigation of the typical number of jobs in the queue (Lq) by using (λ = 3).
FIGURE 18. Trial investigation of the typical number of jobs in the system (Ls) by using (λ = 1).
FIGURE 19. Trial investigation of the typical number of jobs in the system (Ls) by using (λ = 2).
FIGURE 20. Trial investigation of the typical number of jobs in the system (Ls) by using (λ = 3).

and implementation of optimized scheduling algorithms in practical computing scenarios.

The graph in Figures 6 to 13 depicts the average number of jobs in the queue, using the average arrival rate, for various scheduling algorithms including FCFS, Round Robin, Priority scheduling, Johnson sequencing, FCFS with 2 servers, and FCFS with 3 servers. Notably, the dynamic heuristic Johnson sequencing (DHJS) approach exhibits a significant reduction in task waiting count compared to the other algorithms. The evaluation metrics employed in this investigation include top, bottom, average, and standard deviation values, presenting the best outcomes obtained.

Upon analyzing the results, it is evident that for task sizes of 100, DHJS demonstrated superior performance compared to all other algorithms, while maintaining comparable efficiency for other task sizes. As task sizes increase, DHJS's advantage becomes more pronounced, positioning it as the preferred option for projects exceeding 100 tasks.

To further illustrate the effectiveness of DHJS, Figures 11 to 13 display the mean, best, and worst makespan, providing a comprehensive view of its performance. The outcomes presented in the figures substantiate the superiority of DHJS in terms of reducing task waiting times, making it a highly efficient and effective scheduling algorithm for a wide range of computational tasks. These findings have significant implications for enhancing resource utilization and optimizing task execution in cloud computing and related domains.

Figures 14 to 16 display the average number of tasks in the system. It is noteworthy that Johnson sequencing with FCFS task scheduling exhibits a higher task waiting count compared to the dynamic heuristic Johnson sequencing. The mutation stage of differential evolution uses step sizes determined by a scaling factor F. In the traditional DE approach, F is a fixed positive value that remains constant throughout the optimization process. Consequently,


FIGURE 21. Trial investigation of the typical holding-up season of the positions in the queue (Wq) by using (λ = 1).
FIGURE 22. Trial investigation of the typical holding-up season of the positions in the queue (Wq) by using (λ = 2).
FIGURE 23. Trial investigation of the typical holding-up season of the positions in the queue (Wq) by using (λ = 3).
FIGURE 24. Trial investigation of the typical holding-up season of the positions in the system (Ws) by using (λ = 1).

if the population exhibits considerable diversity and the value of F is large, the step size may cause solutions to diverge significantly from the current optimal solution.

To address this issue, the proposed dynamic heuristic Johnson sequencing (DHJS) introduces a novel approach that modifies F during the optimization. This dynamic adjustment encourages the algorithm to explore the search space extensively, especially in the early stages of the optimization process, seeking regions that may yield improved solutions. As a result, population diversity is effectively maintained, enhancing the algorithm's ability to converge to better solutions.

Through Figures 14 to 16, it becomes evident that as the iteration count increases, the population diversity may decrease when using the traditional DE approach with a fixed F. With the dynamic heuristic Johnson sequencing, however, the dynamic adaptation of F helps sustain the diversity of solutions, resulting in improved performance and convergence to better solutions throughout the optimization process. This adaptive approach highlights the effectiveness of DHJS in maintaining the exploration-exploitation balance, enabling efficient and robust task scheduling in cloud computing environments.

Figures 17 to 19 present the average waiting time for jobs in the queue, revealing that the dynamic heuristic Johnson sequencing (DHJS) outperforms Johnson sequencing and FCFS task scheduling in reducing job waiting times. It is essential to note that the proposed technique cannot be directly compared to existing algorithms, as paired mapping is a novel concept not explored in the current literature. However, to assess the effectiveness of DHJS, we conducted a comparative analysis against the FCFS algorithm.

Notably, the layover duration is linked to the number of clouds in each dataset. However, this correlation is not entirely apparent, and further investigation is needed to fully understand the relationship between layover duration and cloud resources.
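The F-adaptation described above can be sketched as follows. The linear decay schedule and the bounds f_max and f_min are illustrative assumptions; the paper does not give the exact update rule:

```python
import random

def adaptive_f(gen: int, max_gen: int, f_max: float = 0.9, f_min: float = 0.4) -> float:
    """Illustrative schedule: start with a large F for wide exploration,
    then shrink it linearly so later generations exploit the current region."""
    return f_max - (f_max - f_min) * gen / max_gen

def de_mutate(population, gen, max_gen):
    """DE/rand/1 mutation v = r1 + F * (r2 - r3) with the adaptive F."""
    f = adaptive_f(gen, max_gen)
    mutants = []
    for _ in population:
        r1, r2, r3 = random.sample(population, 3)  # three distinct vectors
        mutants.append([a + f * (b - c) for a, b, c in zip(r1, r2, r3)])
    return mutants

pop = [[random.uniform(0, 1) for _ in range(4)] for _ in range(10)]
early = adaptive_f(0, 100)    # large F early: exploration
late = adaptive_f(100, 100)   # small F late: exploitation
```

With a fixed F, large steps late in the run can pull solutions away from a good region; the decaying schedule above preserves early diversity while letting the population settle later.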


FIGURE 25. Trial investigation of the typical holding-up season of the positions in the system (Ws) by using (λ = 2).
FIGURE 26. Trial investigation of the typical holding-up season of the positions in the system (Ws) by using (λ = 3).

The suggested approach, which is based on workload matching to allocate cloud resources, demonstrates its capability to dynamically and rapidly increase cloud resources, as illustrated in the figure. This adaptive resource allocation strategy enables the DHJS algorithm to effectively manage the task scheduling process in cloud computing environments, leading to reduced waiting times and improved overall system efficiency.

The proposed DHJS algorithm represents a significant contribution to the field of cloud computing task scheduling, and its innovative approach of paired mapping opens new avenues for research and exploration. By effectively combining workload matching and dynamic resource allocation, DHJS offers promising results and can serve as a benchmark for future algorithmic developments in cloud task scheduling. Further studies and real-world implementations will be valuable to validate and optimize the DHJS algorithm for a wide range of cloud computing scenarios.

Figures 27 to 28 illustrate the typical wait time for jobs in the system, showcasing the superior performance of dynamic heuristic Johnson sequencing compared to Johnson sequencing with FCFS task scheduling. The dynamic heuristic Johnson sequencing algorithm was developed to address multiprocessor scheduling by integrating Johnson's rule with the genetic algorithm. To enhance the algorithm's convergence, novel crossover and mutation procedures were introduced. The algorithm optimizes each machine's time span during the decoding process using Johnson's rule, aiming to minimize job waiting times and improve overall system efficiency in multiprocessor environments.

To assess its effectiveness, we conducted extensive simulations and compared its performance to two other scheduling approaches: the list scheduling approach and an upgraded list scheduling methodology. The simulation results demonstrated the superiority of dynamic heuristic Johnson sequencing over the other approaches, showcasing its ability to effectively manage task scheduling and reduce job waiting times. The integration of Johnson's rule with the genetic algorithm proves to be a promising strategy for achieving efficient multiprocessor scheduling in cloud computing and related fields, and the novel crossover and mutation procedures further enhance the convergence speed of the algorithm, making it a competitive solution for various real-world scheduling scenarios and a valuable tool for optimizing resource allocation in complex computing environments.

Further research and experimentation will be essential to explore the algorithm's performance under diverse workloads, system configurations, and cloud computing settings. Additionally, comparative studies with other state-of-the-art algorithms and rigorous mathematical analyses will contribute to a comprehensive understanding of the algorithm's capabilities and limitations.

The average waiting times for the various scheduling algorithms are presented in Fig. 27. When employing a single server, the Round Robin algorithm yielded an average waiting time of 33.8 milliseconds, while FCFS and Priority scheduling achieved 20.3 milliseconds and 19.46 milliseconds, respectively. When using two servers, Johnson sequencing resulted in an average waiting time of 7.95 milliseconds, whereas FCFS recorded 5.67 milliseconds. The use of three servers in Johnson sequencing further reduced the average waiting time to 2.71 milliseconds. Notably, the Dynamic Heuristic Johnson Sequencing (DHJS) algorithm outperformed all other approaches with a significantly lower average waiting time (AWT).
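The queueing quantities Lq, Ls, Wq, and Ws examined in Figures 15 to 26 are linked by Little's law. The sketch below uses the textbook M/M/c formulas as a stand-in for equations (6) to (10), which are not reproduced in this section, so treat it as an illustration of the relations rather than the authors' exact derivation:

```python
from math import factorial

def mmc_metrics(lam: float, mu: float, c: int):
    """Standard M/M/c queue metrics: returns (P0, Lq, Ls, Wq, Ws)."""
    a = lam / mu          # offered load
    rho = a / c           # per-server utilization, must be < 1 for stability
    if rho >= 1:
        raise ValueError("unstable queue: lambda >= c * mu")
    # P0: probability of an empty system.
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (factorial(c) * (1 - rho) ** 2)  # jobs in queue
    wq = lq / lam          # wait in queue (Little's law)
    ws = wq + 1 / mu       # wait in system
    ls = lam * ws          # jobs in system
    return p0, lq, ls, wq, ws

# Example: lambda = 1 arrival/ms, mu = 1/E(S) = 3.7313 services/ms, c = 3 servers.
p0, lq, ls, wq, ws = mmc_metrics(lam=1.0, mu=3.7313432835820897, c=3)
```

Note that Ls = Lq + λ/μ and Ws = Wq + 1/μ hold by construction, which is why the four figures track each other across the λ = 1, 2, 3 trials.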


FIGURE 27. Average waiting time of listed algorithms.
FIGURE 28. Average turnaround time of listed algorithms.
TABLE 17. T-test results for LQ in terms of mean, standard deviation, and standard error of mean.
TABLE 18. T-test results for LQ in terms of correlation and P-value.

These findings underscore the effectiveness of DHJS in minimizing job waiting times and optimizing resource allocation in multi-server environments. The DHJS algorithm's ability to intelligently combine Johnson's rule with the genetic algorithm enables efficient task scheduling and enhances system performance. The substantial reduction in average waiting time exhibited by DHJS highlights its potential as a valuable tool for improving the overall efficiency of cloud computing and other resource-intensive applications.

However, to gain a comprehensive understanding of DHJS's capabilities, further investigations are warranted. Additional experimentation under diverse workload scenarios and system configurations will provide valuable insights into the algorithm's adaptability and performance, and comparative analyses with other state-of-the-art scheduling techniques will contribute to benchmarking DHJS against existing approaches and identifying its unique strengths.

As a research community, continued efforts should focus on refining the algorithm, fine-tuning its parameters, and exploring its scalability to larger-scale computing environments. Moreover, conducting real-world implementations and case studies will offer practical validation of DHJS's effectiveness and its potential applicability in real-time cloud computing scenarios.

In conclusion, the remarkable reduction in average waiting time achieved by DHJS positions it as a promising solution for addressing multiprocessor scheduling challenges and enhancing the overall efficiency of complex computing systems. Future research endeavors will build upon these initial findings, paving the way for more sophisticated and effective task scheduling methodologies in the realm of cloud computing and beyond.

The average turnaround times for the different scheduling algorithms are presented in Fig. 28. When employing a single server, the Round Robin algorithm resulted in an average turnaround time of 50.83 milliseconds, whereas FCFS and Priority scheduling achieved 26.29 milliseconds and 6.89 milliseconds, respectively. In the case of two servers, Johnson sequencing recorded an average turnaround time of 3.94 milliseconds, and FCFS yielded 3.70 milliseconds. The use of three servers in Johnson sequencing further reduced the average turnaround time to 3.05 milliseconds. Notably, the Dynamic Heuristic Johnson Sequencing (DHJS) algorithm outperformed all other methods, demonstrating significantly shorter average turnaround times (TAT).

These findings highlight the effectiveness of DHJS in minimizing task turnaround times and optimizing resource allocation in multi-server environments. By intelligently combining Johnson's rule with the genetic algorithm, DHJS enhances task scheduling efficiency, leading to faster job completions and improved system throughput. The substantial reduction in average turnaround time demonstrated by DHJS indicates its potential to be a valuable tool for enhancing the overall performance and responsiveness of cloud computing and other compute-intensive systems.
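For reference, the Johnson sequencing that underlies the two- and three-server comparisons reduces the three-machine problem to Johnson's classical two-machine rule via the auxiliary machines G = M1 + M2 and H = M2 + M3. The sketch below implements the classical rule, not the DHJS-specific variant, and the processing times are hypothetical rather than the paper's Table 11 values:

```python
def johnson_two_machine(jobs):
    """Classical Johnson's rule for 2 machines.
    jobs: {name: (t_m1, t_m2)} -> sequence minimizing makespan."""
    front = sorted((j for j, (a, b) in jobs.items() if a <= b),
                   key=lambda j: jobs[j][0])               # ascending by M1 time
    back = sorted((j for j, (a, b) in jobs.items() if a > b),
                  key=lambda j: jobs[j][1], reverse=True)  # descending by M2 time
    return front + back

def johnson_three_machine(jobs):
    """Classical 3-machine reduction on auxiliary machines G = M1 + M2 and
    H = M2 + M3 (optimal when min M1 >= max M2 or min M3 >= max M2)."""
    aux = {j: (a + b, b + c) for j, (a, b, c) in jobs.items()}
    return johnson_two_machine(aux)

# Hypothetical processing times in ms (not the paper's measured values).
jobs = {"TK1": (0.09, 0.05, 0.13), "TK2": (0.12, 0.04, 0.06),
        "TK3": (0.07, 0.03, 0.10), "TK4": (0.06, 0.02, 0.05),
        "TK5": (0.10, 0.05, 0.11)}
order = johnson_three_machine(jobs)
```

The reduction is valid here because the smallest M1 time (0.06) is at least the largest M2 time (0.05), satisfying the classical dominance condition.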


TABLE 19. T-test results for LQ in terms of paired differences.
TABLE 20. T-test results for LS in terms of mean, standard deviation, and standard error of mean.
TABLE 21. T-test results for LS in terms of correlation and P-value.

To comprehensively evaluate DHJS's capabilities, further investigations are essential. Conducting additional experiments under varying workloads and system configurations will provide valuable insights into the algorithm's adaptability and performance in different scenarios, and comparative analyses with other state-of-the-art scheduling techniques will aid in benchmarking DHJS against existing approaches and understanding its unique advantages.

As a research community, it is crucial to continue refining the DHJS algorithm, fine-tuning its parameters, and exploring its scalability to larger-scale computing environments. Moreover, practical implementations and case studies in real-world


TABLE 22. T-test results for LS in terms of paired differences.
TABLE 23. T-test results for WQ in terms of mean, standard deviation, and standard error of mean.

cloud computing scenarios will offer practical validation of DHJS's effectiveness and applicability.

In conclusion, the substantial reduction in average turnaround time achieved by DHJS underscores its potential as a promising solution for addressing multiprocessor scheduling challenges and enhancing the overall efficiency of complex computing systems. Future research endeavors will build upon these initial findings, paving the way for more advanced and efficient task scheduling methodologies in the domain of cloud computing and beyond.

J. STATISTICAL ANALYSIS USING T-TEST
The statistical analysis of the proposed DHJS algorithm and its comparison with other scheduling algorithms, including Priority, SJF, Round Robin, Johnson sequencing using two servers, FCFS using three servers, and DHJS using three servers, was conducted using a paired t-test. The evaluation metrics, such as mean, standard deviation, standard error of mean, correlation, p-value, and t-test value, were considered to assess the performance of these algorithms on various task instances. The results of the paired t-test are summarized in Tables 17 to 28.

To determine the significance of the differences between the algorithms' performance, the accepted threshold for statistical significance (alpha, α) was set at 0.05. In other words, a p-value smaller than 0.05 indicates a statistically significant improvement of the DHJS method over the other algorithms. Upon analysis of the tables, it is evident that the p-values for nearly all datasets were indeed smaller than the alpha value, confirming the statistical significance of the DHJS method's superior performance compared to the alternative job scheduling algorithms.

The computed mean, standard deviation, standard error of mean, correlation, p-value, and t-test value for DHJS consistently demonstrate its competitive advantage over the other algorithms. These results provide strong evidence in favor of DHJS as a robust and efficient scheduling solution, showcasing its superiority across various performance metrics.
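The paired t-test described above can be reproduced in a few lines. The sketch below computes the t statistic from the paired differences; the sample waiting times are hypothetical, not the paper's measurements (scipy.stats.ttest_rel returns the same statistic together with the p-value):

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(x, y):
    """Return (t statistic, mean difference) for paired samples x and y."""
    d = [a - b for a, b in zip(x, y)]  # per-instance differences
    n = len(d)
    se = stdev(d) / sqrt(n)            # standard error of the mean difference
    return mean(d) / se, mean(d)

# Hypothetical AWT samples in ms for the same task instances.
fcfs_wait = [20.3, 19.8, 21.1, 20.7, 19.9]
dhjs_wait = [2.7, 2.5, 2.9, 2.8, 2.6]

t_stat, mean_diff = paired_t(fcfs_wait, dhjs_wait)
# A large positive t with p < 0.05 (looked up with n - 1 = 4 degrees of
# freedom) indicates a statistically significant improvement for DHJS.
```

Pairing matters here: each scheduler is run on the same task instances, so the test is applied to the per-instance differences rather than to two independent samples.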


TABLE 24. T-test results for WQ in terms of correlation and P-value.
TABLE 25. T-test results for WQ in terms of paired differences.
TABLE 26. T-test results for WS in terms of mean, standard deviation, and standard error of mean.

It is important to note that the paired t-test approach was applied with standardized stopping criteria for all task instances to ensure a fair and reliable comparison. By conducting a rigorous statistical analysis, the findings not only validate the DHJS algorithm's effectiveness but also lend confidence to its applicability and reliability in real-world scenarios.

As a research community, we can further explore the factors that contribute to the success of DHJS in multi-server environments. Investigating the impact of different system configurations, task characteristics, and workload patterns on DHJS's performance will provide valuable insights into its adaptability and scalability.

In conclusion, the results of the paired t-test unequivocally demonstrate that the proposed DHJS algorithm outperforms the other considered scheduling algorithms across various evaluation metrics. This research contributes to the advancement of job scheduling techniques in complex computing


TABLE 27. T-test results for WS in terms of correlation and P-value.
TABLE 28. T-test results for WS in terms of paired differences.

environments and lays the foundation for further studies in this domain. As we continue our exploration, refining the DHJS algorithm and conducting real-world experiments will further strengthen its position as a promising solution for optimizing resource utilization and enhancing task scheduling efficiency.

VIII. CONCLUSION AND FUTURE RESEARCH
This study has explored the efficacy of the Johnson Sequencing Algorithm in scheduling flow shop scenarios with unavailability periods, seeking to provide optimal or near-optimal solutions. Building upon relevant literature, we initially focused on the optimality requirement of the Johnson Sequencing Algorithm to minimize makespan in flow shops with one, two, and three servers, each featuring a single unavailability period. In pursuit of improved scheduling strategies, we introduced a new version of the Dynamic Heuristic Johnson Sequencing algorithm (DHJS). Through meticulous calculations, we implemented DHJS in flow shops with varying server configurations and multiple unavailability intervals, effectively reducing makespan. Notably, we devised a heuristic approach for the three-stage hybrid flow shop, comprising one server in the first stage, two servers in the second stage, and three servers in the third stage, while accommodating a one-time interruption in the first stage. This heuristic integrated the LBM rule, Round Robin scheduling, and Johnson sequencing to minimize makespan. Furthermore, we addressed the challenge of unavailability in a stochastic three-machine flow shop by extending the application of Johnson sequencing, and we adapted the fundamental task model to incorporate task-specific data file requirements, generating results using mathematical formulations.

As cloud computing continues to evolve, task scheduling remains a crucial issue, and this study contributes valuable insights to this domain. However, there is always scope for improvement, and future research could explore upgraded algorithms; a comparative analysis with our most recent results will be undertaken to advance the field. The continued pursuit of optimized task scheduling methodologies holds the potential to further enhance the efficiency and effectiveness of cloud computing services, ultimately benefiting both cloud service providers and end-users. Below are some limitations of this work, together with future research suggestions to address them:

• Limitations
- While our proposed MTD-DHJS algorithm demonstrated significant improvements in makespan reduction


and resource utilization compared to existing scheduling algorithms, the comparison was limited to a specific set of algorithms. Future research could explore a broader range of scheduling algorithms to provide a more comprehensive evaluation of the MTD-DHJS algorithm's performance.
- The scalability of the MTD-DHJS algorithm should be further investigated. While our approach exhibited remarkable scalability in resolving complex job scheduling challenges within three-server cloud computing settings, its performance may vary in larger-scale cloud environments. Investigating the algorithm's scalability in handling a higher number of servers and tasks would be valuable.
- Although comprehensive simulations and testing were conducted, the MTD-DHJS algorithm's performance in real-world cloud computing environments requires further validation. Future research could include practical implementations and experiments on actual cloud platforms to validate the algorithm's efficiency and effectiveness in real-time scenarios.
• Future Suggestions:
- Dynamic computational time prediction is a critical aspect of the MTD-DHJS algorithm. Future research could explore advanced predictive techniques, such as machine learning-based models, to improve the accuracy of computational time predictions for tasks. This could lead to more precise scheduling decisions and further reduction in makespan.
- Investigating hybrid scheduling approaches that combine the strengths of multiple algorithms could be beneficial. Integrating the MTD-DHJS algorithm with other state-of-the-art scheduling methods may result in even better makespan optimization and resource utilization.
- Cloud computing environments often experience dynamically changing workloads. Future research could evaluate the performance of the MTD-DHJS algorithm under varying workloads and consider dynamic task arrivals to assess its adaptability and efficiency in handling real-world fluctuations in demand.
dling real-world fluctuations in demand. [16] S. O. Bukhari, ‘‘Cloud algorithms: A computational paradigm for man-
- As energy consumption is a significant concern in aging big data analytics on clouds,’’ in Intelligent Systems. Singapore:
Springer, 2021, pp. 455–470.
cloud computing, future research could incorporate
[17] Z. Tong, H. Chen, X. Deng, K. Li, and K. Li, ‘‘A novel task scheduling
energy-efficient strategies in the task scheduling pro- scheme in a cloud computing environment using hybrid biogeography-
cess. Considering energy-aware scheduling objectives based optimization,’’ Soft Comput., vol. 23, no. 21, pp. 11035–11054,
alongside makespan optimization would contribute Nov. 2019.
[18] A. Wilczyński and J. Kołodziej, ‘‘Modelling and simulation of security-
to more environmentally friendly cloud computing aware task scheduling in cloud computing based on blockchain
operations. technology,’’ Simul. Model. Pract. Theory, vol. 99, Feb. 2020,
- To maximize the practical applicability of the MTD- Art. no. 102038.
[19] M. Miao, S. Xiong, B. Jiang, H. Jiang, S. W. Cui, and T. Zhang, ‘‘Dual-
DHJS algorithm, future research could explore its enzymatic modification of maize starch for increasing slow digestion
implementation and performance in specific industry property,’’ Food Hydrocolloids, vol. 38, pp. 180–185, Jul. 2014.
settings with unique task characteristics and require- [20] F. Rashidi, E. Abiri, T. Niknam, and M. R. Salehi, ‘‘On-line parameter
ments. Tailoring the algorithm to industry-specific needs identification of power plant characteristics based on phasor measure-
ment unit recorded data using differential evolution and bat inspired
could result in customized and optimized task schedul- algorithm,’’ IET Sci., Meas. Technol., vol. 9, no. 3, pp. 376–392,
ing solutions. May 2015.
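To make the predicted-runtime suggestion above concrete, the following sketch (a hypothetical illustration, not the authors' MTD-DHJS implementation) fits a simple least-squares model to historical (task size, runtime) observations for two servers, predicts runtimes for new tasks, and orders them with the classic two-machine Johnson's rule before computing the resulting makespan. All data, names, and the linear runtime model are assumptions for illustration.

```python
# Hypothetical sketch: predict per-server task runtimes with a 1-D
# least-squares model, then order jobs with the two-machine Johnson's rule.

def fit_linear(xs, ys):
    """Closed-form 1-D least squares: runtime ~ a * task_size + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def johnson_order(jobs):
    """jobs: list of (name, t1, t2). Returns a Johnson-optimal sequence:
    jobs with t1 <= t2 first (ascending t1), the rest last (descending t2)."""
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return front + back

def makespan(seq):
    c1 = c2 = 0.0
    for _, t1, t2 in seq:
        c1 += t1               # server 1 finishes this job
        c2 = max(c2, c1) + t2  # server 2 starts once both are ready
    return c2

# Made-up historical (task_size, runtime) observations for each server.
hist1 = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
hist2 = [(1, 1.2), (2, 2.1), (3, 2.9), (4, 4.1)]
a1, b1 = fit_linear(*zip(*hist1))
a2, b2 = fit_linear(*zip(*hist2))

# Predict runtimes for new tasks, then sequence them.
tasks = {"T1": 2, "T2": 5, "T3": 1, "T4": 3}
jobs = [(name, a1 * s + b1, a2 * s + b2) for name, s in tasks.items()]
seq = johnson_order(jobs)
print([name for name, _, _ in seq], round(makespan(seq), 2))
```

A production system would replace the linear model with a richer learned predictor and extend the sequencing to the three-server setting the paper targets, for example via the standard reduction to a two-machine problem when Johnson's optimality conditions hold.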

VOLUME 11, 2023 105615


P. Banerjee et al.: MTD-DHJS: Makespan-Optimized Task Scheduling Algorithm for Cloud Computing

PALLAB BANERJEE was born in Ranchi, Jharkhand, India, in 1984. He received the B.Tech. degree in computer science and engineering and the M.Tech. degree in CSE from the Biju Patnaik University of Technology (BPUT), in 2008 and 2015, respectively. In 2008, he started his professional career as a Lecturer with the CSE Department, CIT, Ranchi. He is currently an Assistant Professor with the Department of Computer Science and Engineering, Amity University, Ranchi, and has excelled in teaching for the last 11 years. His work focuses on operating systems, real-time operating systems, and cloud computing, and his current research interests include task scheduling in cloud computing, RTOS, and algorithm design. He is a Life Member of the Computer Society of India.

SHARMISTHA ROY is currently an Associate Professor with the Faculty of Computing and Information Technology, Usha Martin University, Ranchi. Her research interests include nanotechnology, machine learning, robotics, software requirements ambiguity detection, system design, big data stream processing, and biomedical deep learning.

ANURAG SINHA received the bachelor's degree from Amity University, Jharkhand. He is currently pursuing the master's degree in computer science. He has published many research papers in IEEE conferences and with Springer and IOP, and has written book chapters and papers for SCI- and Scopus-indexed journals. He was a finalist in the Smart India Hackathon 2022, where he was shortlisted for his research on the enhancement of medical imaging using deep learning. He has worked on industrial projects, patents, and SCI journal papers in the fields of nanotechnology, machine learning, robotics, software requirements ambiguity detection, system design, cloud-based data warehousing, big data stream processing, natural language processing, wireless communications anomaly detection, and biomedical deep learning. He won the Best Paper Award in a research writing competition held at IIT Delhi and received The Young Scientist Award for a research presentation. He has been a reviewer for the IEEE Gucon 2020 conference and a Springer conference at Northeastern Hill University, and is a regular reviewer for the SCI- and Scopus-indexed journal Applied Artificial Intelligence (Taylor & Francis), Wireless Communications and Mobile Computing (Hindawi), the MIS journal (Hindawi), and the IEEE Guconnet conference series.


MD. MEHEDI HASSAN (Member, IEEE) received the B.Sc. degree in computer science and engineering from North Western University, Khulna, Bangladesh, in 2022, where he excelled in his studies and demonstrated a strong aptitude for research. He is currently pursuing the M.Sc. degree in computer science and engineering with Khulna University, Khulna. As the Founder and the CEO of the Virtual BD IT Firm and the VRD Research Laboratory, Bangladesh, he has established himself as a respected leader in the fields of biomedical engineering, data science, and expert systems. His research interests include important human diseases, such as oncology, cancer, and hepatitis, as well as human behavior analysis and mental health. He is highly skilled in association rule mining, predictive analysis, machine learning, and data analysis, with a particular focus on the biomedical sciences. As a young researcher, he has published 33 articles in international journals and conferences, and his work has been well received by the research community. He serves as a reviewer for 22 journals and has filed more than three patents, of which two have been granted.

SHRIKANT BURJE received the M.E. degree from the Shri Sant Gajanan Maharaj College of Engineering (SSGMCE), Shegaon, Maharashtra, India, and the Ph.D. degree from CSVTU, Bhilai, Chhattisgarh, India. He is an Associate Professor with the Department of Electronics and Telecommunication Engineering, Christian College of Engineering and Technology (CCET), Bhilai. He has been engaged in research and teaching for more than 22 years, has presented more than 20 articles in national and international journals, and holds two patents. His main areas of interest include microprocessors, microcontrollers, embedded systems, circuit theory, basic electronics, and biomedical image processing.

ANUPAM AGRAWAL received the B.E. degree in electrical engineering from Rajasthan University, in 2009, and the M.Tech. degree in power systems from the Government College Bikaner, in 2014. He is currently pursuing the Ph.D. degree in electrical engineering with Manipal University Jaipur. He has more than nine years of teaching and research experience and is currently an Assistant Professor with the Electrical and Electronics Department, Bhilai Institute of Technology, Durg, Chhattisgarh, India. His research areas include solar energy, the synthesis of materials for solar cells, the fabrication and characterization of third-generation solar cells, and reliability and nodal pricing in power systems. He was a Project Fellow in the INDO-Russia Project (sanctioned by DST, Government of India), in 2020. He has published 17 research articles in reputed SCI-indexed journals (IF > 5) and five in Scopus-indexed peer-reviewed conferences, and has published two book chapters with Taylor & Francis Group and CRC Press. He has also filed and published two utility patents and has had three design applications accepted by Intellectual Property India, Government of India.

ANUPAM KUMAR BAIRAGI (Senior Member, IEEE) received the B.Sc. and M.Sc. degrees in computer science and engineering from Khulna University (KU), Bangladesh, and the Ph.D. degree in computer engineering from Kyung Hee University, South Korea. He is a Professor with the Computer Science and Engineering Discipline, KU. His research interests include wireless resource management in 5G and beyond, healthcare, the IIoT, cooperative communication, and game theory. He has authored or coauthored around 60 publications, including refereed IEEE/ACM journal and conference papers, and has served as a technical program committee member for several international conferences.

SAMAH ALSHATHRI received the bachelor's degree in computer science and the master's degree in computer engineering from King Saud University, Riyadh, Saudi Arabia, and the Ph.D. degree from the Department of Computer and Mathematics, Plymouth University, Plymouth, U.K. She is currently an Assistant Professor with the Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), Riyadh. Her research interests include wireless networks, cloud computing, fog computing, the IoT, data mining, machine learning, text analytics, image classification, and deep learning. She was the Chair of the Network and Communication Department and has participated in organizing many international conferences. She has authored or coauthored many articles published in well-known journals in her research field.

WALID EL-SHAFAI (Senior Member, IEEE) was born in Alexandria, Egypt. He received the B.Sc. degree (Hons.) in electronics and electrical communication engineering from the Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, Egypt, in 2008, the M.Sc. degree from the Egypt-Japan University of Science and Technology (E-JUST), in 2012, and the Ph.D. degree from the FEE, Menoufia University, in 2019. Since January 2021, he has been a Postdoctoral Research Fellow with the Security Engineering Laboratory (SEL), Prince Sultan University (PSU), Riyadh, Saudi Arabia. He is currently a Lecturer and an Assistant Professor with the Department of Electronics and Communication Engineering (ECE), FEE, Menoufia University. His research interests include wireless mobile and multimedia communication systems, image and video signal processing, efficient 2D video/3D multi-view video coding, multi-view video plus depth coding, 3D multi-view video coding and transmission, quality of service and experience, digital communication techniques, cognitive radio networks, adaptive filter design, 3D video watermarking, steganography, encryption, error-resilience and concealment algorithms for the H.264/AVC, H.264/MVC, and H.265/HEVC video codec standards, cognitive cryptography, medical image processing, speech processing, security algorithms, software-defined networks, the Internet of Things, medical diagnosis applications, FPGA implementations of signal processing algorithms and communication systems, cancellable biometrics and pattern recognition, image and video magnification, artificial intelligence for signal processing algorithms and communication systems, modulation identification and classification, image and video super-resolution and denoising, cybersecurity applications, malware and ransomware detection and analysis, and deep learning in signal processing and communication system applications. He also serves as a reviewer for several international journals.
