Hiremath, T. C., & Rekha, K. S. (2022) - Optimization Enabled Deep Learning Method in Container-Based Architecture of Hybrid Cloud for Portability and Interoperability-Based Application Migration
To cite this article: Tej. C. Hiremath & Rekha K. S. (20 Sep 2022): Optimization enabled
deep learning method in container-based architecture of hybrid cloud for portability and
interoperability-based application migration, Journal of Experimental & Theoretical Artificial
Intelligence, DOI: 10.1080/0952813X.2022.2117421
Introduction
The network is an emerging domain that helps fulfil the needs of present and future services under development. One technique that has acquired extensive attention for delivering services throughout the network is cloud computing (Bhardwaj & Krishna, 2019). Cloud computing incorporates information technology (IT) resources into a large, scalable resource pool using virtualisation techniques (Li & Yuan, 2012). It offers services as Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Software as a Service (SaaS) (Li et al., 2019; Nikhath et al., 2021). The Digital Agenda, the European Commission e-Government Action Plan, and the cloud model have the effective usage of data and communication benefits as their aim. The usage of the cloud model is promoted for creating more responsive administrative services. In this context, the Storm Clouds project is utilised by the European Commission under the CIP-FP7 Programme. The project aims to reveal the shift to the cloud model for deploying services that public authorities offer based on the Information and Communication Technologies (ICT) model. The aim is to describe valuable guiding principles on how to move applications to the cloud. It is
devised based on direct experimentation using a consortium of pilot projects (Panori et al., 2019). The usage of optimisation methods to allocate resources in the cloud model has attracted much interest among researchers (Manvith et al., 2021). Currently, a new cloud-related service, cloud gaming based on the Harmony Search Algorithm (HSA), has received growing attention in the gaming industry. Cloud gaming with HSA reached such popularity due to its rapid expansion in the academic world and in engineering applications (Hassan et al., 2012; Poetra et al., 2020).
Cloud computing has gained enterprise adoption due to several benefits, such as its convenience, simplicity, and low implementation cost. Cloud migration is defined as the migration of applications or services from classical platforms to the cloud. The cloud has several benefits, such as high storage and computing capacity and service diversification. Because of the cloud model (Zhong et al., 2020), more enterprises choose to migrate their applications or services to the cloud, saving internal network resources and costs. Thus, cloud data centres host a huge number of services and applications, and attaining optimum resource allocation in cloud data centres is imperative. In the cloud data centre, the goal of allocating resources is to assign resources rationally to users (Benomar et al., 2020). In addition, the goal of migrating enterprise applications to the cloud is to increase application utility and save local network resources (Li et al., 2019). Resource allocation comes under the class of resource management, as it permits resources to be assigned effectively (Oleghe, 2021). Network bandwidth availability can change over time; therefore, some cloud gaming services utilise adaptation algorithms that alter video codec parameters accordingly. The adaptation methods utilised by the GeForce NOW service are discussed in Suznjevic et al. (2016).
The performance of mobile users on a distributed computing platform can be dynamically optimised through container migration, wherein a cloud process is moved from one computing node to another in response to mobility events, balancing the load across the network. Container migration is performed by moving a VM or container from the edge cloud to other clouds (Ma et al., 2017). Container-based virtualisation permits users to execute an application and its dependencies on an operating system (OS) with reliable resource allocation, simple scaling, and enhanced effectiveness (Maheshwari et al., 2018; Nichols et al., 2006). Containers have acquired momentum because of their lighter running and deployment overhead, shorter start/stop times, and higher network bandwidth compared with VMs (Pahl & Lee, 2015; Tay et al., 2017). Migration of LXD containers (https://www.ubuntu.com/containers/lxd), Docker containers (Docker, 2018), and VMs is executed in Machen et al. (2018) and Nadgowda et al. (2017). In Wang et al. (2015), migration techniques are developed on fewer attributes, such as the distance between the user and the edge cloud. Classical techniques execute container migration without taking into account container-based attributes, such as dynamic resource allocation, bandwidth, processing speed, size, and Random Access Memory (RAM). The migration cost of a heterogeneous model is a complicated combination of network, remote, and local resources. The load of the system (Kumar & Vimala, 2020) and the available processing resources and inter-node bandwidth can alter the total migration time (Ravuri & Vasundra, 2020). In addition, container migration is simulated to validate the reliability of a city-scale scenario (Maheshwari et al., 2018). There are many varieties among user devices in terms of screen resolution, computation capacity, and network bandwidth (Tian et al., 2015).
The main aim of this research is to design a method for portability and interoperability-based application migration on the cloud platform. The purpose is to develop a hybrid optimisation algorithm for portability and interoperability-based application migration. First, the cloud is simulated with PMs, VMs, and containers. At first, the application runs on the container. Then, to provide interoperable application migration, a hybrid optimisation algorithm, namely Lion-SS, is developed. The Lion-SS optimisation algorithm is developed by integrating the Shuffled Shepherd Optimization Algorithm (SSOA) and the Lion Optimization Algorithm (LOA). A new objective function is designed based on predicted load, demand,
transmission cost, and resource capacity. Furthermore, the load is predicted using a deep Long Short-Term Memory (Deep LSTM) network.
The key contribution of the paper is:
● Proposed Lion-SS for application migration in the cloud: The proposed Lion-SS is devised to perform application migration while attaining portability and interoperability. Here, the proposed Lion-SS is devised by combining the LOA and the SSOA.
The paper is organised as follows: Section 2 illustrates the assessment of classical application migration techniques. Section 3 presents the cloud model, and Section 4 offers the proposed application migration framework. Finally, Section 5 presents the results and discussion, and Section 6 provides the conclusion.
Literature review
The eight classical technologies for application migration are described below. Li et al. (2019) devised a multi-objective optimised replica placement strategy for SaaS applications. Here, a genetic algorithm was incorporated to allocate resources and address the optimised replica placement issue. The technique reduced the response time and ensured load balancing. However, the technique failed to consider the additional input/output (I/O) overhead caused by the migration process. Lichtenthäler (2019) devised model-driven migration using a fine-grained cloud model for performing application migration. Here, the goal was that the model could be produced based on the classical model. This model can be semi-automatically transformed into a fine-grained cloud model. At last, the artefacts can be produced from the transformed model to execute a novel model. This method failed to transform applications into fine-grained Cloud-Native Applications (CNAs). Maheshwari et al. (2018) devised a traffic-aware container migration approach for application migration using a pure container hypervisor called LXD (Linux Container Hypervisor). The container migration model was then analysed for real-time applications, such as mobile edge cloud scenarios. The technique offered improved performance with less latency and balanced load. However, this approach failed to lower the average system response time at higher loads. Bellavista et al. (2019) devised an edge computing model that handled service migration considering different options for application migration. The technique offered an application-agnostic or an application-aware approach. The goal was to find an effective placement with less energy consumption. However, this technique failed to identify which portion of the data can be moved to the target node. Bhardwaj and Krishna (2019) designed an LXD container-based migration technique for application migration. The technique utilised the Linux kernel and executed LXD, which utilised libraries for launching the container, and container migration was then ensured with a checkpoint/restore mechanism. However, the technique causes overhead when a high load occurs. Truyen et al. (2016) devised a container-based multi-tenant architecture for SaaS applications to perform application migration. Here, the assessment was done using the technical Strengths, Weaknesses, Opportunities, and Threats that must be considered by the SaaS provider while adopting the container-based model. The major drawback of the system was that the risk of container orchestration for multi-tenant SaaS was not considered, so its true potential was not understood. Panori et al. (2019) introduced a technique for application migration using cloud infrastructure. At first, it abridged the technique along with the steps followed by Thessaloniki (Greece), Agueda (Portugal), and Valladolid (Spain) for implementing the migration process in the storm clouds. However, the stakeholders' participation needed extra actions to sustain and support their engagement in all phases of the migration process. Li et al. (2019) devised a migration strategy for performing application migration on the cloud platform. Here, certain objectives, such as cloud migration time, cloud data centre cost, and cloud migration utility, were utilised for performing cloud migration. Here, the model was split into two phases,
namely bandwidth allocation and physical resource allocation. The first phase reduced cloud migration time, and the second phase offered a gradient-based algorithm that could attain optimum resource allocation. However, this technique failed to attain resource allocation considering inelastic services or even multiclass services.
Challenges
The issues in application migration strategies are as follows:
● The issue in multi-objective optimised replica placement lies in adapting the technique to car navigation models or live video, which are delay-sensitive and require low latency (Li et al., 2019).
● The transformation of applications into fine-grained platforms is a challenging task for designers. Moreover, offering support for this task using structured techniques and tooling is complex (Lichtenthäler, 2019).
● The issue with container-based virtualisation, in contrast to production, is a security constraint: because containers share the host's kernel, suspicious users can put the hardware system's security at risk (Bhardwaj & Krishna, 2019).
● The container-based model builds reliable and trustworthy multi-tenant SaaS applications, but the issue lies in the risks of container orchestration (Truyen et al., 2016).
● The major issue of the resource allocation model is the analysis of resource allocation for an application whose units handle elastic and inelastic services to attain optimum allocation of resources (Li et al., 2019).
System model
Cloud resources are managed by the virtualisation method, which helps provide services to users through VMs. Here, allocating resources based on user requests is a chief concern. Cloud containers assist in managing resources; they can function with minimal system resources and run applications much faster. Resource allocation provides synchronisation between users and the service provider. VM resources are utilised in diverse configurations with different memory, storage, and power. Even a small performance degradation renders the cloud infrastructure unproductive, because the allocation of applications in containers has complete control over cloud functions. Therefore, devising an application migration strategy is essential.
The cloud model devised for assigning VM migration in a cloud platform is depicted in Figure 1. The goal of the cloud model is to determine the optimal resources, which allocate resources to every VM. Assume a cloud platform comprises $n$ PMs represented as $A = \{A_1, A_2, \cdots, A_m, \cdots, A_n\}$, $1 \le m \le n$, and each PM consists of VMs. The VMs contained in the $m$th PM are expressed as $B = \{B_1, B_2, \cdots, B_p, \cdots, B_q\}$, $1 \le p \le q$, where $q$ is the total number of VMs in the $m$th PM. Cloud resources can be modelled on different containers on a demand basis. Each VM comprises different containers, which are formulated as $C = \{C_1, C_2, \cdots, C_r, \cdots, C_s\}$, $1 \le r \le s$. The applications that run on the $r$th container are expressed as $G = \{G_1, G_2, \ldots, G_o, \ldots, G_t\}$, where $t$ signifies the total number of applications such that $1 \le o \le t$.

Every container selected for allocating resources is configured with attributes, like memory, bandwidth, CPU, and frequency scaling factor, which are expressed as

$$G_o = \{D_o, F_o, M_o, P_o\} \qquad (1)$$

where $D_o$ is the bandwidth of the $o$th container, $F_o$ is the frequency of the $o$th container, $M_o$ denotes the memory of the $o$th container, and $P_o$ represents the processor of the $o$th container.
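To make this notation concrete, the following is a minimal Python sketch of the cloud entities; the class and attribute names are illustrative assumptions rather than the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Application:
    """An application G_o with the attribute tuple of Equation (1)."""
    bandwidth: float   # D_o
    frequency: float   # F_o
    memory: float      # M_o
    processor: float   # P_o

@dataclass
class Container:
    """A container C_r hosting the applications assigned to it."""
    applications: List[Application] = field(default_factory=list)

@dataclass
class VirtualMachine:
    """A VM B_p comprising several containers."""
    containers: List[Container] = field(default_factory=list)

@dataclass
class PhysicalMachine:
    """A PM A_m hosting its VMs."""
    vms: List[VirtualMachine] = field(default_factory=list)

# Cloud platform A = {A_1, ..., A_n} with n = 3 PMs (illustrative size)
cloud: List[PhysicalMachine] = [PhysicalMachine() for _ in range(3)]
```

A migration solution then amounts to deciding which Application objects each Container holds.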
Figure 1. The cloud model comprising PMs ($A_1, A_2, \ldots, A_n$), VMs ($B_1, \ldots, B_q$), and containers ($C_1, C_2, \ldots, C_s$).
Each application has a demand, as represented in Figure 2. The maximum demand is considered as the maximal number of containers. The container with the minimal transmission cost is evaluated first.
Here, the transmission cost is given as

$$T = \frac{1}{s}\sum_{r=1}^{s}\left(\frac{\text{No. of transmissions of the application in the container}}{t}\right) \qquad (2)$$

$$C = \frac{1}{s}\sum_{r=1}^{s}\left(\frac{N^{D} + N^{F} + N^{M} + N^{P}}{\max\!\left(N^{D}, N^{F}, N^{M}, N^{P}\right)}\right) \times \frac{1}{\alpha} \qquad (3)$$

where $\alpha$ signifies the normalisation factor, $N^{D}$ symbolises the number of bandwidth units to run an application in a container, $N^{F}$ symbolises the number of frequency scaling units to run an application in a container, $N^{M}$ signifies the number of memory units to run an application in a container, $N^{P}$ refers to the number of CPUs to run an application in a container, and $\max(\cdot)$ symbolises the maximum value. Frequency scaling is employed to meet a better quality of service; the minimal bandwidth, number of processors, and memory units are more necessary for cost-effective application migration.
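A small Python sketch of Equations (2) and (3) under the interpretation above; the per-container inputs (transmission counts and resource-unit tuples) and the example values are assumptions for illustration only.

```python
def transmission_cost(transmissions_per_container, t):
    """Equation (2): average, over the s containers, of the number of
    application transmissions in a container divided by the total number
    of applications t."""
    s = len(transmissions_per_container)
    return sum(n / t for n in transmissions_per_container) / s

def capacity(resource_units, alpha):
    """Equation (3): per-container resource demand (N^D, N^F, N^M, N^P)
    normalised by its largest component and by the factor alpha."""
    s = len(resource_units)
    total = sum((n_d + n_f + n_m + n_p) / max(n_d, n_f, n_m, n_p)
                for n_d, n_f, n_m, n_p in resource_units)
    return total / (s * alpha)

# Illustrative values for s = 3 containers and t = 10 applications
T = transmission_cost([4, 2, 6], t=10)
C = capacity([(2, 1, 4, 2), (1, 1, 2, 1), (3, 2, 4, 4)], alpha=10.0)
```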
The load of the $r$th container is given as

$$L_r = \frac{1}{\alpha}\sum_{o=1}^{t} C_o \times E \qquad (4)$$

where $\alpha$ refers to the normalisation factor, $C_o$ denotes the capacity of the container in running the $o$th application, and $E$ signifies a threshold given by,
operator $(\cdot)$. The Deep LSTM is devised with inputs $\{I_1, \ldots, I_l\}$, hidden states $\{O_1, \ldots, O_l\}$, cell outputs $\{Q_1, \ldots, Q_l\}$, and gates $c_l$, $d_l$, and $g_l$. Figure 3 displays the structural design of the Deep LSTM.
The output from the input gate is modelled as

$$c_l = \lambda\!\left(\omega_{Ic} \ast I_l + \omega_{Oc} \ast O_{l-1} + \omega_{Qc} \odot Q_{l-1} + \gamma_c\right) \qquad (6)$$

where $\lambda$ is the gate activation function, $\omega_{Ic}$ is the weight between the input gate and the input layer, $\ast$ is the convolutional operator, $I_l$ is the input vector, $\omega_{Oc}$ is the weight between the memory output and the input layer, $O_{l-1}$ is the previous cell output, $\omega_{Qc}$ is the weight between the cell output and the input layer, $\odot$ is element-wise multiplication, $Q_{l-1}$ is the previous memory unit output, and $\gamma_c$ denotes the input layer bias. The output obtained from the forget gate is formulated as

$$d_l = \lambda\!\left(\omega_{Id} \ast I_l + \omega_{Od} \ast O_{l-1} + \omega_{Qd} \odot Q_{l-1} + \gamma_d\right) \qquad (7)$$

where $\omega_{Id}$ is the weight between the forget gate and the input layer, $\omega_{Od}$ indicates the weight between the output gate and the memory unit of the previous layer, $\omega_{Qd}$ is the weight between the cell and the output gate, and $\gamma_d$ symbolises the bias corresponding to the forget gate. In addition, the output from the output gate is expressed as

$$g_l = \lambda\!\left(\omega_{Il} \ast I_l + \omega_{Ol} \ast O_{l-1} + \omega_{Ql} \odot Q_{l-1} + \gamma_g\right) \qquad (8)$$

where $\omega_{Il}$ indicates the weight between the input layer and the output gate, $\omega_{Ol}$ indicates the weight between the memory unit and the output gate, $\omega_{Ql}$ represents the weight between the cell and the output gate, and $\gamma_g$ indicates the output gate bias. In view of the activation function, the temporary cell state output is modelled as

$$e_l = \tanh\!\left(\omega_{Iu} \ast I_l + \omega_{Ou} \ast O_{l-1} + \gamma_u\right) \qquad (9)$$

where $\gamma_u$ indicates the cell bias, $\omega_{Iu}$ is the weight between the input layer and the cell, and $\omega_{Ou}$ denotes the weight between the memory unit and the cell. The cell output is evaluated using Equation (10):

$$Q_l = d_l \odot Q_{l-1} + c_l \odot e_l \qquad (10)$$

$$Q_l = d_l \odot Q_{l-1} + c_l \odot \tanh\!\left(\omega_{Iu} \ast I_l + \omega_{Ou} \ast O_{l-1} + \gamma_u\right) \qquad (11)$$

where $T_l$ is the output vector and $\omega_{OT}$ is the weight between the memory unit and the output vector. The output layer bias is modelled as $\gamma_T$. The output is the final predicted load $L_n$.
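The gate recursions of Equations (6)-(11) can be sketched in NumPy as below; plain matrix products stand in for the paper's convolutional operator, the final hidden-output step follows the standard LSTM formulation (the corresponding output equation is not reproduced above), and all weight shapes and values are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(I_l, O_prev, Q_prev, W, b):
    """One Deep LSTM step following Equations (6)-(11); @ approximates the
    convolutional operator and * is element-wise multiplication."""
    c_l = sigmoid(W["Ic"] @ I_l + W["Oc"] @ O_prev + W["Qc"] * Q_prev + b["c"])  # input gate,  Eq. (6)
    d_l = sigmoid(W["Id"] @ I_l + W["Od"] @ O_prev + W["Qd"] * Q_prev + b["d"])  # forget gate, Eq. (7)
    g_l = sigmoid(W["Ig"] @ I_l + W["Og"] @ O_prev + W["Qg"] * Q_prev + b["g"])  # output gate, Eq. (8)
    e_l = np.tanh(W["Iu"] @ I_l + W["Ou"] @ O_prev + b["u"])                     # temporary cell state, Eq. (9)
    Q_l = d_l * Q_prev + c_l * e_l                                               # cell state, Eqs. (10)-(11)
    O_l = g_l * np.tanh(Q_l)                                                     # hidden output (standard LSTM form)
    return O_l, Q_l

# Illustrative shapes: input of size 4, hidden/cell state of size 8
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((8, 4)) if k.startswith("I") else rng.standard_normal((8, 8))
     for k in ["Ic", "Oc", "Id", "Od", "Ig", "Og", "Iu", "Ou"]}
W.update({k: rng.standard_normal(8) for k in ["Qc", "Qd", "Qg"]})  # peephole-style cell weights
b = {k: np.zeros(8) for k in ["c", "d", "g", "u"]}
O, Q = lstm_step(rng.standard_normal(4), np.zeros(8), np.zeros(8), W, b)
```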
Figure 4. Structural design of application migration model using proposed Lion-SS algorithm.
configuration of PMs, VMs, and containers. Initially, the application runs on the container. In order to provide interoperable application migration, a hybrid optimisation algorithm called the Lion-SS algorithm is developed. The Lion-based SS optimisation algorithm integrates the SSOA (Kaveh & Zaerreza, 2020) and the LOA (Yazdani & Jolai, 2016). The objective function is newly designed based on predicted load, demand, transmission cost, and resource capacity. In addition, the load prediction is performed using the Deep LSTM (Zhu et al., 2016). Figure 4 shows the structural design of the application migration model using the proposed Lion-SS algorithm.
Solution encoding
Here, the proposed Lion-SS is employed to select an optimal solution, in which the best containers are chosen for processing the applications. The optimisation discovers the optimal value among the solutions contained in the solution set, initially provided as arbitrary values. For resource allocation, the solution set comprises a set of applications, say $G_1$ to $G_{10}$. The total number of containers is assumed to be $s$, denoted $C_1, C_2, \ldots, C_s$. Here, the solution vector is
Figure 5. Solution representation for optimal application migration using the proposed Lion-SS algorithm.
formulated arbitrarily. By considering the fitness value, the solution set obtains the optimal applications, which are allocated to containers using the devised fitness function. Figure 5 illustrates the solution representation for the optimal allocation of resources using the developed Lion-SS. Here, $t$ indicates the number of applications that run on the container.
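A minimal sketch of this encoding in Python, assuming the solution vector simply stores, for each application $G_o$, the index of the container it is assigned to:

```python
import random

def random_solution(t, s, seed=None):
    """Solution vector: position o holds the index (1..s) of the container
    assigned to application G_o, initialised arbitrarily."""
    rng = random.Random(seed)
    return [rng.randint(1, s) for _ in range(t)]

# Ten applications G_1..G_10 distributed over s = 4 containers
solution = random_solution(t=10, s=4, seed=1)
print(solution)  # e.g. [2, 4, 1, ...] meaning G_1 runs on container 2, and so on
```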
Fitness function
The fitness function considers the parameters demand, transmission cost, capacity, and predicted load. The fitness evaluation is mathematically formulated in Equation (14):

$$\text{Fitness} = \frac{(1 - L_u) + C + (1 - T) + K}{4} \qquad (14)$$

where $L_u$ signifies the predicted load, which is evaluated using Equation (4), $K$ is the demand of the application, $C$ symbolises the application capacity, which is evaluated using Equation (3), and $T$ symbolises the transmission cost, which is evaluated using Equation (2).
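A direct sketch of Equation (14) in Python; the example values are illustrative and assume all four terms are normalised to $[0, 1]$:

```python
def fitness(predicted_load, capacity, transmission_cost, demand):
    """Equation (14): higher fitness favours low predicted load L_u and low
    transmission cost T together with high capacity C and demand K
    (all terms assumed normalised to [0, 1])."""
    return ((1.0 - predicted_load) + capacity + (1.0 - transmission_cost) + demand) / 4.0

# Illustrative evaluation of one candidate solution
score = fitness(predicted_load=0.30, capacity=0.36, transmission_cost=0.20, demand=0.50)
```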
$$S_k^{temple} = S_k^{old} + \text{Step size} \qquad (16)$$

where $S_k^{old}$ signifies the old temple solution vector, and Step size indicates a constant value.
where $\alpha$ is the exploration parameter and $\beta$ signifies the exploitation parameter, $Rand$ symbolises a random number, $Sh_k$ symbolises the solution vector of the shepherd, $Sh_f$ represents the solution vector of the selected horse, and $Sh_e$ symbolises the solution vector of the selected sheep.
Here, $\alpha$ is expressed as

$$\alpha = \alpha_o + \frac{\alpha_{\max} - \alpha_o}{\text{max iteration}} \times \text{iteration} \qquad (18)$$

and $\beta$ is expressed as

$$\beta = \beta_o - \frac{\beta_o}{\text{max iteration}} \times \text{iteration} \qquad (19)$$

The position update based on the selected horse and sheep is then given as

$$Sh_k^{h+1} = Sh_k + \alpha\, Rand\, Sh_f - \alpha\, Rand\, Sh_k + \beta\, Rand\, Sh_e - \beta\, Rand\, Sh_k \qquad (20)$$

$$Sh_k^{h+1} = Sh_k\,(1 - \alpha\, Rand - \beta\, Rand) + \alpha\, Rand\, Sh_f + \beta\, Rand\, Sh_e \qquad (21)$$
The LOA (Yazdani & Jolai, 2016) stores the best solution obtained so far, and its position update is mathematically formulated as

$$Sh_k^{h+1} = Sh_k + 2D\, Rand(0,1)\{N_1\} + H(-1,1)\tan\theta\, J\{N_2\} \qquad (22)$$

where $Sh_k$ signifies the current position of the female lion, $D$ symbolises the distance between the female lion's position and the selected point, $\{N_1\}$ signifies a vector whose start point is the previous location of the female lion, $\{N_2\}$ is perpendicular to $\{N_1\}$, $Rand(0,1)$ symbolises a random number between 0 and 1, and $H(-1,1)$ refers to a random number between −1 and 1.
Rearranging Equation (22) gives

$$Sh_k = Sh_k^{h+1} - 2D\, Rand(0,1)\{N_1\} - H(-1,1)\tan\theta\, J\{N_2\} \qquad (23)$$

Substituting Equation (23) into Equation (21) and expanding yields

$$Sh_k^{h+1} = Sh_k^{h+1}(1 - \alpha Rand - \beta Rand) - \left(2D\, Rand(0,1)\{N_1\} + H(-1,1)\tan\theta\, J\{N_2\}\right)(1 - \alpha Rand - \beta Rand) + \alpha Rand\, Sh_f + \beta Rand\, Sh_e \qquad (25)$$

$$Sh_k^{h+1} - Sh_k^{h+1}(1 - \alpha Rand - \beta Rand) = -\left(2D\, Rand(0,1)\{N_1\} + H(-1,1)\tan\theta\, J\{N_2\}\right)(1 - \alpha Rand - \beta Rand) + \alpha Rand\, Sh_f + \beta Rand\, Sh_e \qquad (26)$$

$$Sh_k^{h+1}(1 - 1 + \alpha Rand + \beta Rand) = -\left(2D\, Rand(0,1)\{N_1\} + H(-1,1)\tan\theta\, J\{N_2\}\right)(1 - \alpha Rand - \beta Rand) + \alpha Rand\, Sh_f + \beta Rand\, Sh_e \qquad (27)$$

$$Sh_k^{h+1}(\alpha Rand + \beta Rand) = -\left(2D\, Rand(0,1)\{N_1\} + H(-1,1)\tan\theta\, J\{N_2\}\right)(1 - \alpha Rand - \beta Rand) + \alpha Rand\, Sh_f + \beta Rand\, Sh_e \qquad (28)$$
Dividing both sides by $(\alpha + \beta)Rand$ gives the final Lion-SS update:

$$Sh_k^{h+1} = \frac{(\alpha\, Sh_f + \beta\, Sh_e)Rand - \left(2D\, Rand(0,1)\{N_1\} + H(-1,1)\tan\theta\, J\{N_2\}\right)(1 - \alpha Rand - \beta Rand)}{(\alpha + \beta)Rand} \qquad (29)$$
Step 4: Re-evaluate fitness for updated solutions: The fitness of the updated solutions is re-evaluated, and the best solution is selected for application migration.
Step 5: Termination: Steps 2 to 4 are repeated until the maximum number of iterations is reached.
Algorithm 1 describes the pseudo-code of the proposed Lion-SS algorithm.
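A condensed Python sketch of one Lion-SS position update, combining the schedules of Equations (18)-(19) with the hybrid update of Equation (29); the per-iteration quantities $D$, $\{N_1\}$, $\{N_2\}$, $\theta$, and $J$ are taken as given inputs, the default schedule bounds are illustrative assumptions, and the selection of the horse ($Sh_f$), sheep ($Sh_e$), and female-lion terms follows the cited SSOA and LOA papers rather than being reproduced here.

```python
import numpy as np

def alpha_schedule(iteration, max_iteration, alpha_o=0.5, alpha_max=1.0):
    """Equation (18): exploration parameter grows linearly over the iterations
    (default bounds are illustrative)."""
    return alpha_o + (alpha_max - alpha_o) / max_iteration * iteration

def beta_schedule(iteration, max_iteration, beta_o=1.0):
    """Equation (19): exploitation parameter decays linearly over the iterations."""
    return beta_o - beta_o / max_iteration * iteration

def lion_ss_update(Sh_f, Sh_e, D, N1, N2, theta, J, alpha, beta, rng):
    """Equation (29): hybrid Lion-SS position update combining the SSOA terms
    (Sh_f: selected horse, Sh_e: selected sheep) with the LOA female-lion move."""
    rand = rng.random()                                   # shared random scalar Rand
    loa_move = (2.0 * D * rng.random() * N1
                + rng.uniform(-1.0, 1.0) * np.tan(theta) * J * N2)
    numerator = ((alpha * Sh_f + beta * Sh_e) * rand
                 - loa_move * (1.0 - alpha * rand - beta * rand))
    return numerator / ((alpha + beta) * rand)

# One illustrative update of a 10-dimensional solution vector
rng = np.random.default_rng(42)
Sh_f, Sh_e, N1, N2 = (rng.random(10) for _ in range(4))
new_position = lion_ss_update(Sh_f, Sh_e, D=0.5, N1=N1, N2=N2, theta=0.3, J=1.0,
                              alpha=alpha_schedule(1, 100), beta=beta_schedule(1, 100), rng=rng)
```

Each updated vector would then be mapped back to a container assignment and re-scored with Equation (14), as in Steps 4 and 5 above.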
Experimental setup
The proposed strategy is implemented in Python. The experimentation is done on a PC with the Windows 10 OS, an Intel Core i3 processor, and 4 GB of RAM.
Performance analysis
Figure 6 displays the assessment of the proposed Lion-SS using the fitness parameter by varying the application size over four set-ups. The assessment of the proposed Lion-SS using fitness for set-up 1 is shown in Figure 6(a). For iteration = 1, the fitness values evaluated by the proposed Lion-SS-based Deep LSTM with application sizes of 500, 1000, 1500, and 2000 are 0.603, 0.612, 0.648, and 0.664. The assessment for set-up 2 is shown in Figure 6(b). For iteration = 1, the fitness values with application sizes of 500, 1000, 1500, and 2000 are 0.584, 0.615, 0.651, and 0.671. The assessment for set-up 3 is shown in Figure 6(c). For iteration = 1, the fitness values with application sizes of 500, 1000, 1500, and 2000 are 0.577, 0.628, 0.646, and 0.673. The assessment for set-up 4 is shown in Figure 6(d). For iteration = 1, the fitness values with application sizes of 500, 1000, 1500, and 2000 are 0.559, 0.615, 0.645, and 0.653.
Figure 6. Assessment of techniques with fitness considering (a) set-up 1, (b) set-up 2, (c) set-up 3, and (d) set-up 4.
Comparative methods
The techniques adapted for the evaluation are: multi-objective optimised replica placement (Li et al., 2019), LXD/CR container-based migration (Bhardwaj & Krishna, 2019), bandwidth allocation (Li et al., 2019), container-based multi-tenant (Truyen et al., 2016), and the proposed Lion-SS-based Deep LSTM.
Comparative analysis
The analysis of the methods is done using load and resource capacity by varying the application size. The analysis considers four set-ups.
a) Assessment with set-up 1
Figure 7 displays the assessment of the techniques using set-up 1 considering load and resource capacity. The assessment of the techniques with load is depicted in Figure 7(a). For application size = 500, the loads evaluated by multi-objective optimised replica placement, LXD/CR container-based migration, bandwidth allocation, and container-based multi-tenant are 0.383, 0.347, 0.346, and 0.321, while the load evaluated by the proposed Lion-SS-based Deep LSTM is 0.300. The assessment of the techniques with resource capacity is depicted in Figure 7(b). For application size = 500, the resource capacity
Figure 7. Assessment of techniques using setup 1 considering (a) Load (b) Resource capacity.
Figure 8. Assessment of techniques using setup 2 considering (a) Load (b) Resource capacity.
Figure 9. Assessment of techniques using setup 3 considering (a) Load (b) Resource capacity.
Figure 10. Assessment of techniques using setup 4 considering (a) Load (b) Resource capacity.
Comparative discussion
Table 1 displays the comparative assessment of the techniques considering load and resource capacity. Using set-up 1, the load evaluated by Lion-SS-based Deep LSTM is 0.055, while the loads evaluated by multi-objective optimised replica placement, LXD/CR container-based migration, bandwidth allocation, and container-based multi-tenant are 0.214, 0.177, 0.093, and 0.068. The resource capacity measured by Lion-SS-based Deep LSTM is 0.361, while the resource capacities measured by multi-objective optimised replica placement, LXD/CR container-based migration, bandwidth allocation, and container-based multi-tenant are 0.354, 0.357, 0.359, and 0.361. Using set-up 2, the load evaluated by the proposed Lion-SS-based Deep LSTM is 0.064 and the resource capacity measured by Lion-SS-based Deep LSTM is 0.356. Using set-up 3, the load evaluated by Lion-SS-based Deep LSTM is 0.016 and the resource capacity is 0.346. Using set-up 4, the load evaluated by Lion-SS-based Deep LSTM is 0.007 and the resource capacity is 0.342.
Conclusion
This paper proposes a hybrid optimisation algorithm for portability and interoperability-based application migration on the cloud platform. The cloud is simulated with PMs, VMs, and containers. Here, interoperable application migration is offered using a newly devised optimisation method, namely the Lion-SS optimisation technique. The Lion-SS algorithm is developed by combining the SSOA and the LOA. Here, the new objective function is modelled considering the predicted load, demand, transmission cost, and resource capacity. In addition, the load prediction is performed using the Deep LSTM. This technique offered improved performance and provides an important benefit to the data centre operator when the migration operation must be performed in low-bandwidth cases. The proposed Lion-SS-based Deep LSTM provided improved performance with a minimal load of 0.007 and a resource capacity of 0.342. Thus, the developed method has higher resource capacity and hence is applicable where excellent accessibility with reduced cost is required. The proposed method requires a huge memory bandwidth for the computational units; hence, in the future, better deep learning mechanisms with other advanced optimisation techniques can be utilised to check the efficiency of the proposed model in attaining optimal application migration. In addition, the joint consideration of optimal resource allocation in network links can be explored.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Prof. Tej C. Hiremath received the B.E. degree from the Department of Computer Science and Engineering, Tontadarya
College of Engineering, Gadag, Karnataka, India, in 2008 and the M.Tech degree from Department of Computer Science
and Engineering, Basaveshwar Engineering College, Bagalkot, Karnataka, India, in 2011. From 2014 to 2021, he was an
Assistant Professor in Department of Computer Science and Engineering, BGMIT, Mudhol, Karnataka, India. He is
currently a Ph.D Research Scholar in Department of Computer Science and Engineering, The National Institute of
Engineering, Mysuru, Karnataka, India. His research interests include cloud computing, fog computing, containerization,
virtualization, machine learning and deep learning.
Dr. Rekha. K. S received the B.E. degree from the Department of Computer Science and Engineering, Sri
Jayachamarajendra College of Engineering, Mysuru, Karnataka, India, in 2004 and the M.Tech degree in Software
Engineering from Sri Jayachamarajendra College of Engineering, Mysuru, Karnataka, India, in 2007. She received Ph.D
degree from Visvesvaraya Technological University, Belagavi, Karnataka, India in 2019. She is currently an Associate
Professor in Department of Computer Science and Engineering, The National Institute of Engineering, Mysuru,
Karnataka, India. She has 15 years of Teaching experience and 6 years of Industry Experience. Her research interests
include wireless sensor networks, cloud computing, machine learning, artificial intelligence, IoT and big-data analytics.
She has involved herself in all the research activities and presented/published her research papers in International
conferences and journals. She is currently supervising the funded project from ARTPARK, IISC, Bengaluru. She has taken
up the responsibility of Reviewer and Session Chair at many international conferences. She has reviewed the McGraw-Hill books Software Project Management and Computer Programming Fundamentals & C Programming.
References
Bellavista, P., Corradi, A., Foschini, L., & Scotece, D. (2019). Differentiated Service/Data migration for edge services
leveraging container characteristics. IEEE Access, 7, 139746–139758. https://doi.org/10.1109/ACCESS.2019.2943848
Benomar, Z., Longo, F., Merlino, G., & Puliafito, A. (2020). Cloud-Based enabling mechanisms for container deployment
and migration at the network edge. ACM Transactions on Internet Technology, 20(3), 1–28. https://doi.org/10.1145/
3380955
Bhardwaj, A., & Krishna, C. R. (2019). A container-based technique to improve virtual machine migration in cloud
computing. IETE Journal of Research, 68(1), 401–416.
Docker. (2018). Docker. [online]. Retrieved 25 July, 2018, from https://www.docker.com/
Hassan, M. M., Hossain, M. S., Sarkar, A. M. J., & Huh, E.-N. (2012). Cooperative game-based distributed resource allocation
in horizontal dynamic cloud federation platform. Information Systems Frontiers, 16(4), 523–542. https://doi.org/10.
1007/s10796-012-9357-x
Kaveh, A., & Zaerreza, A. (2020). Shuffled shepherd optimization method: A new Meta-heuristic algorithm. Engineering
Computations, 37(7), 2357–2389. https://doi.org/10.1108/EC-10-2019-0481
Kumar, C. A., & Vimala, R. (2020). Load balancing in cloud environment exploiting hybridization of chicken swarm and
enhanced raven roosting optimization algorithm. Multimedia Research, 3(1), 45–55.
Lichtenthäler, R. (2019). Model-driven software migration towards fine-grained cloud architectures. In 11th ZEUS Workshop, Bayreuth, Germany, 35–38.
Li, C., Wang, Y., Tang, H., & Luo, Y. (2019). Dynamic multi-objective optimized replica placement and migration strategies
for SaaS applications in edge cloud. Future Generation Computer Systems, 100, 921–937. https://doi.org/10.1016/j.
future.2019.05.003
Li, C., & Yuan, L. (2012). Optimal resource provisioning for cloud computing environment. The Journal of
Supercomputing, 62, 989–1022. https://doi.org/10.1007/s11227-012-0775-9
Li, S., Zhang, Y., & Sun, W. (2019). Optimal resource allocation model and algorithm for elastic enterprise applications
migration to the cloud. Mathematics, 7(10), 909. https://doi.org/10.3390/math7100909
Machen, A., Wang, S., Leung, K. K., Ko, B. J., & Salonidis, T. (2018). Live service migration in mobile edge clouds. IEEE
Wireless Communications, 25(1), 140–147. https://doi.org/10.1109/MWC.2017.1700011
Maheshwari, S., Choudhury, S., Seskar, I., & Raychaudhuri, D. (2018). Traffic-aware dynamic container migration for real-time support in mobile edge clouds. In 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), Indore, India, 1–6.
Maheshwari, S., Deochake, S., De, R., & Grover, A. (2018). Comparative study of virtual machines and containers for DevOps developers.
Manvith, V. S., Saraswathi, R. V., & Vasavi, R. (2021). A performance comparison of machine learning approaches on intrusion detection dataset. In 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 782–788.
Ma, L., Yi, S., & Li, Q. (2017). Efficient service handoff across edge servers via docker container migration, Proceedings of
the Second ACM/IEEE Symposium on Edge Computing San Jose, California, ACM.
Nadgowda, S., Suneja, S., Bila, N., & Isci, C. (2017).Voyager: Complete container state migration, Distributed Computing
Systems (ICDCS), IEEE 37th International Conference on. IEEE Atlanta, GA, USA.
Nichols, V. (2006). New approach to virtualization is a lightweight. Computer, 39(11), 12–14. https://doi.org/10.1109/MC.
2006.393
Nikhath, A. K., Sailaja, N. V., Vasavi, R., & Saraswathi, R. V. (2021). Road traffic counting and analysis using video processing. Intelligent System Design, 1171, 645–651.
Oleghe, O. (2021). Container placement and migration in edge computing: Concept and scheduling models. IEEE Access,
9, 68028–68043. https://doi.org/10.1109/ACCESS.2021.3077550
Pahl, C., & Lee, B. (2015). Containers and clusters for edge cloud architectures-A technology review, Future Internet of
Things and Cloud (FiCloud), 2015 3rd International Conference on. IEEE Rome, Italy.
Panori, A., González-Quel, A., Tavares, M., Simitopoulos, D., & Arroyo, J. (2019). Migration of applications to the Cloud: A
user-driven approach. Journal of Smart Cities, 2(1), 16–27.
Poetra, F. R., Prabowo, S., Karimah, S. A., & Prayogo, R. D. (2020). Performance analysis of video streaming service
migration using container orchestration. IOP Conference Series: Materials Science and Engineering, 830.
Ravuri, V., & Vasundra, S. (2020). Moth-Flame optimization-bat optimization: Map-reduce framework for big data
clustering using the Moth-flame bat optimization and sparse Fuzzy C-means. Big Data, 8(3), 203–217. https://doi.
org/10.1089/big.2019.0125
Suznjevic, M., Slivar, I., & Kapov, L. S. (2016). Analysis and QoE evaluation of cloud gaming service adaptation under
different network conditions: The case of NVIDIA GeForce NOW. 8th International Conference On Quality of Multimedia
Experience (QoMex) Lisbon, Portugal, 1–6.
Tay, Y. C., Gaurav, K., & Karkun, P. (2017). A performance comparison of containers and virtual machines in workload migration context. In Distributed Computing Systems Workshops (ICDCSW), 2017 IEEE 37th International Conference on. IEEE, Atlanta, GA, USA.
The LXD container hypervisor. Retrieved January 20, 2018, from https://www.ubuntu.com/containers/lxd
Tian, H., Wu, D., He, J., Xu, Y., & Chen, M. (2015). On achieving cost-effective adaptive cloud gaming in geo-distributed
data centers. IEEE Transactions on Circuits and Systems for Video Technology, 25(12), 2064–2077. https://doi.org/10.
1109/TCSVT.2015.2416563
Truyen, E., Van Landuyt, D., Reniers, V., Rafique, A., Lagaisse, B., & Joosen, W. (2016). Towards a container-based
architecture for multi-tenant SaaS applications, In Proceedings of the 15th international workshop on adaptive and
reflective middleware Trento, Italy, 1–6.
Wang, S., Urgaonkar, R., Zafer, M., He, T., Chan, K., & Leung, K. K. (2015).Dynamic service migration in mobile edge-clouds,
IFIP Networking Conference (IFIP Networking) Toulouse, France.
Yazdani, M., & Jolai, F. (2016). Lion optimization algorithm (LOA): A nature-inspired metaheuristic algorithm. Journal of
Computational Design and Engineering, 3(1), 24–36. https://doi.org/10.1016/j.jcde.2015.06.003
Zhong, Z., & Buyya, R. (2020). A cost-efficient container orchestration strategy in Kubernetes-Based cloud computing
infrastructures with heterogeneous resources. ACM Transactions on Internet Technology, 20(2), 1–24. https://doi.org/
10.1145/3378447
Zhu, W., Lan, C., Xing, J., Zeng, W., Li, Y., Shen, L., & Xie, X. (2016). Co-Occurrence feature learning for skeleton based
action recognition using regularized deep LSTM networks, In Thirtieth AAAI Conference on Artificial Intelligence
Phoenix, Arizona, USA.