Abstract
In the early days of digital transformation, the automation, scalability, and availability of cloud computing made a significant difference for business. Nonetheless, significant concerns have been raised regarding the security and privacy levels that cloud systems can provide, as enterprises have accelerated their cloud migration journeys in an effort to provide a remote working environment for their employees, primarily in light of the COVID-19 outbreak. The goal of this study is to improve steganography in ad-hoc cloud systems using deep learning. This research implementation is separated into two sections. In Phase 1, the "Ad-hoc Cloud System" concept and deployment plan were set up with the help of V-BOINC. In Phase 2, a modified form of steganography combined with deep learning was used to study the security of data transmission in ad-hoc cloud networks. In the majority of prior studies, attempts to employ deep learning models to augment or replace data-hiding systems did not achieve a high success rate. The implemented model embeds data images within colored cover images in the developed ad-hoc cloud system. The systematic steganography model conceals data while keeping statistical message detection rates low, and it can also embed small images beneath much larger cover images. The implemented ad-hoc system outperformed Amazon EC2 in terms of performance, while the proposed deep steganography approach achieved high evaluation scores for concealing both data and images when evaluated against several attacks in an ad-hoc cloud system environment.
Keywords Ad-hoc system, Cloud computing, Steganography, Cloud security, Deep learning, Encryption
The ad-hoc cloud computing paradigm
The design of ad-hoc cloud computing closely resembles classic grid computing, Condor [4], and volunteer computing systems based on the Berkeley Open Infrastructure for Network Computing, as the main idea is to re-use the accessible computing resources of host machines for operational tasks. Nevertheless, the chosen computational design for resource utilization faces various challenges, which should be highlighted in order to deliver a high level of performance to the end user [5]. Consequently, multiple standards should be implemented to provide the same features as an ordinary cloud computing system. However, several key differences unify the ad-hoc computing system design:

• The system can work within a group of periodically accessible hosts, which may exhibit unexpected behavior from time to time [6].
• The resources provided by a cloud cluster or a grid node can be dedicated to serving a single ad-hoc cloud node.
• The end user assumes a consistent level of trust towards the volunteer resources and grid systems, as no trust relation exists between the infrastructure system and the end user.
• Business continuity can be provided through a group of unreliable nodes: the ad-hoc cloud system preserves availability for both host and guest users whenever a failure occurs [7].
• The host's own processes, regardless of their resource consumption level, were not affected in any way by the ad-hoc cloud system over time.
• The volunteer system offers a wide range of resource options (i.e., disk space, memory, and I/O) [8].

Deep steganography
Steganography is the art of hiding data or images within another image; the term was coined in the 15th century, when communications were physically hidden [9]. Currently, steganography is a form of encryption technique. Steganography poses a challenge since it can alter the appearance and content of the carrier. Two factors affect the degree of variation: first, the amount of material to be suppressed; traditionally, written messages have been disguised within images, with the information concealed behind an image [10]. Bits-per-pixel (bpp) is the unit of measurement for the hidden data rate. Most of the time, this method limits the total amount of data to 0.3 bpp or less. As the message length increases, so does the bpp, and the amount of change depends on the resolution of the original image [11].
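To make these rates concrete, the following back-of-envelope sketch (plain Python, illustrative only, not from the study) computes the payload a cover image can carry at a given hidden-data rate:

```python
# Payload capacity of a cover image at a given hidden-data rate (bpp).
# At 0.3 bpp, a 512 x 512 cover carries at most ~9.6 KiB of hidden data.
def capacity_bytes(width: int, height: int, bpp: float) -> float:
    return width * height * bpp / 8  # hidden bits per pixel -> bytes

print(capacity_bytes(512, 512, 0.3))  # 9830.4
```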
Motivation
As a result of the high complexity of cloud infrastructure operations, such as [12–14], as well as the existence of unreliable resources, there were numerous obstacles that needed to be addressed. These types of difficulties and the essential techniques for overcoming them were discussed in great depth. It is challenging to develop a cloud solution prototype with a high-level data security paradigm, and the performance of LAN security may differ from that of other network types [15]. It is well known that steganography can be used to conceal data for a variety of purposes, including malicious acts that hide data in graphics on websites [16]. Digital watermarks, on the other hand, could be used to add data or images without degrading image quality. As embedding a message alters the appearance and essential properties of the carrier [17], previously proposed steganography systems have faced significant challenges. The most common impediments consist of two points:

• The amount of data that is required to be hidden.
• The level of change introduced by the technique used.

It is also essential to note that the extent of change depends on the image itself. Utilizing high-frequency image parts to conceal data resulted in fewer perceptible interruptions [18] compared with low-frequency image sections. Various common steganography techniques use the images' 'Least Significant Bits' (LSB) for secret data hiding; if this is done with flexibility and consistency, it is statistically difficult to observe the alteration rate of the output files for multimedia data (i.e., images, audio, and video) [19]. Techniques such as HUGO, which construct and match possible varieties of 'Cover Image' clones based upon their first-order attributes, strive to maintain image statistics even when afflicted images differ from their unaffected counterparts. HUGO is often used for communications of less than 0.5 bpp [20]. Hence, neural networks were mostly used to explicitly model the statistics of natural images and to embed whole photos in carrier graphics far more efficiently than in previous studies [21].
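As an illustration of the LSB idea described above, here is a minimal sketch in Python/NumPy (an illustrative reconstruction, not the implementation used in any of the cited works): it overwrites the least significant bit of each color value, changing every sample by at most 1, which is why the alteration is statistically hard to notice:

```python
# A minimal LSB-substitution sketch: hides a byte string in the least
# significant bit of each colour value of the cover image.
import numpy as np

def lsb_embed(cover: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("payload exceeds 1 bpp capacity of the cover")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stego = lsb_embed(cover, b"secret")
assert lsb_extract(stego, 6) == b"secret"
```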
Study contributions
The main target of this study was divided into two phases, with the purpose of introducing a full approach to:
A. Implement an ad-hoc cloud integrated architecture as an end-to-end solution, and then evaluate both the performance and privacy standards of the implemented technique.

… performance measurements, then an illustration of the output limitations. Section 7 presents the discussion and analysis of the output's strengths and weaknesses. Section 8 provides a summary of the overall study.
Table 1 Abbreviations

Acronym     Description
API         Application Programming Interface
BER         Bit Error Rate
Bpp         Bits per pixel
BOINC       Berkeley Open Infrastructure for Network Computing
CA          Cloud Jobs Assigned
CC          Cloud Jobs Completed
CDTF        Camera Display Transfer Function
CNN         Convolutional Neural Networks
DB          Database
DCGAN       Deep Convolutional Generative Adversarial Networks
DCT         Discrete Cosine Transform
DDH         Dependent Deep Hiding
DNN         Deep Neural Networks
ECC         Elliptic Curve Cryptography
Gmond       Ganglia Monitoring Daemon
GUI         Graphical User Interface
gUse        Grid and User Support Environment
HUGO        Highly Undetectable steGO
LAN         Local Area Network
LSB         Least Significant Bit
LFM         Light Field Messaging
MSB         Most Significant Bits
NCC         Normalized Cross Correlation
NF          Number of Failures
OS          Operating System
P2P         Peer to Peer
QoS         Quality of Service
RAM         Random Access Memory
RGB         Red Green Blue
RRD         Round Robin Database
ROC         Receiver Operating Characteristic
SLA         Service Level Agreement
SOAP        Simple Object Access Protocol
SSE         Secret Space Encryptor
SSIM        Structural Similarity Index Measure
V-BOINC     Virtualized Berkeley Open Infrastructure for Network Computing
VDI         Virtual Disk Image
VM          Virtual Machine
WAN         Wide Area Network
WS-PGRADE   Web Services - Parallel Grid Runtime and Developer Environment
UDH         Universal Deep Hiding
XML         Extensible Markup Language

… However, the main issue was to deal with intermittent hosts and to limit the effect on non-cloud operations. A modeler/manager module is implemented on the VM in every cloud component, as shown in Fig. 1 below. This module monitors both the host's resource utilization and execution, while the host-side counterpart checks the cloud element's effect on the host; tasks were assigned to nodes using a 'Broker' and 'Dispatcher' architecture module for the ad-hoc computing system design. The presented systems differ significantly in scheduling strategies, QoS assurances, and the mechanisms by which they incorporate reliability on top of an unstable infrastructure, among other aspects [25].

Chaumont [26] presented a full practical evaluation of the utilization of deep learning in steganography and steganalysis. The study was limited to techniques proposed between 2015 and 2018, in order to provide new future directions by highlighting the limitations of the reviewed techniques. The main components of CNNs were discussed in depth from the perspective of both time and memory, and multiple techniques were analyzed in detail to get at the roots of the recently proposed methods of combining steganography with deep learning. The study concluded that limitations and challenges remain regarding the experimental phase of the proposed studies, as many restrictions may prevent applying them on a large scale. Fig. 2 below shows an example framework representing one of the early deep steganography approaches discussed, 'Automatic Steganographic Distortion Learning', from which the 'Alteration Probability Map' is obtained. Finally, based on a thorough comparative analysis, the authors concluded that future works should specify enhanced algorithms to improve the efficiency of deep learning networks with various types of steganography.

Chandra et al. [27] studied various techniques for implementing clouds using volunteer resources. They describe the difficulties of constructing clouds ('Nebulas') from erratic resources, tackling a similar sort of issues with different methods, as shown in Fig. 3 below; the distinctions between their approaches were therefore minor. Ad-hoc cloud systems, or Nebulas, are dynamic infrastructures that combine features of both 'Volunteer Computing' and 'Cloud Computing'. Various issues arise; for example, a software may be experimental and not require strict system execution assurances. They proposed two solutions for error handling: using replication to run a job on various hosts simultaneously, or checkpointing VMs and restoring those checkpoints on host failures. Checkpointing can, however, be prohibitively costly for the migration process, especially if it depends on massive volumes of dispersed data. For correct software deployment on a set of resources, resource scheduling is required.
Fig. 1 An illustration of the proposed node components between the cloud infrastructure and cloud elements, as proposed in [24]
Fig. 2 The 'Alteration Probability Map' examined in [26], produced with the 'Automatic Steganographic Distortion Learning' method
Larger applications could be distributed on faster servers to decrease the influence of the slowest host on actual system execution; such hosts have to be reliable [28]. Given that ad-hoc clouds often work on infrastructures with a limited host count, 'Task Redundancy' was not used, as it can reduce the number of ad-hoc hosts accessible for new cloud workloads. Additionally, it was suggested to calculate the network performance in order to mitigate the anticipated performance degradation [29].

Weissman et al. [30] described their early practices with a model of the 'Nebula Theory' for scattered data-intensive software working in centralized infrastructures. The authors reiterate the unsuitability of existing clouds through the usage of the global research testbed (PlanetLab+), and they compare their prototype with the 'Data-Intensive Blog-Analysis' prototype. Using the 'Nebula Master', users can join the cloud and admins can monitor and control the system.
Fig. 3 The designed ‘Nebula Service’ system structure and its connections between ‘Data Nodes’ and ‘Compute Nodes’ as proposed in [27]
Compute pool elements control the volunteer storage and computing nodes. With more blogs to analyze, Nebula saved time, with a data transfer rate 53% better than a cloud emulator. When compared with the 'Centralized Cloud' framework, 'Nebula' outperforms it once a few hosts fail. Nebula uses task duplication and re-execution to provide fault tolerance. As stated before, ad-hoc cloud systems face various theoretical and practical obstacles; however, the preliminary results indicate promising directions [30].

Duan et al. [31] proposed an algorithm-based steganography extraction using a DNN; this technique had the ability to combine both DCT and ECC in an image. Firstly, the 'Secret Image' is created by transforming the original image previously written using a steganography approach; through the SegNet DNN paradigm, the classified image is incorporated into the host image. Altering the host image is not difficult, the image quality is not adversely affected, and the anti-detection characteristic is strengthened. In addition, steganography capacity is guaranteed: since the DNN framework was used, all that was needed was the ability to change the variables in both processes, a) the embedding process and b) the extraction process, with no need for additional formulas to be created; the applied models were employed with a higher adaptability level in the system [31].

Yi et al. [32] discussed in their study a) an example of a cloud service provider (i.e., Dropbox) that utilizes resources from non-dedicated servers, and b) basic prediction methods combined with host rating that provide reliable long-term forecasts. Fewer dedicated servers or cloud services mean lower costs for the web service providers; as a result, the authors limited their method to the cloud service field, in which non-dedicated hosts were employed for processing. Non-dedicated hosts have restricted bandwidth and were not always available. Nevertheless, the authors assumed that the web service can provide both fault tolerance and redundancy methods in order to deal with extremely volatile non-dedicated servers. The authors' study was divided into two parts. Firstly, they addressed other research issues: to help forecast long-term fault tolerance for non-dedicated hosts, they evaluate strategies to predict short-term fault tolerance by defining a strategy to recognize ideal mixes of dedicated and non-dedicated servers for both cost reduction and migrations. Their results demonstrate that the average maximum and minimum long-term availability forecast error rates were around 22% and 15%, respectively, in the presence of a high number of non-dedicated servers. Increasing data redundancy also reduces the number of dedicated hosts necessary to achieve availability assurances. The weakness of the proposed ad-hoc cloud is that it lacks redundancy and instead reacts to host failures; it is expected that adding task redundancy to the ad-hoc cloud can improve task completion rates. Secondly, they proposed an optimization strategy to reduce web-service provider charges or migrations. The authors assume dedicated resources can be delivered through a cloud service provider (i.e., Amazon EC2); each dedicated host therefore costs $10/hour, the same as an Amazon EC2 instance. Transferring data from a failing non-dedicated host to another one restores processes. Finally, the authors compare the benefits of employing dedicated versus non-dedicated hosts.
In order to use a non-dedicated host's resources, a cloud service provider must first confirm the host's availability using weekly monitoring data; many machine-learning classifiers were utilized to group the servers based on the projected short-term availability [32].

Wang et al. [33] introduced a unique method of steganography based on stego images created through DCGANs. From another perspective, CNNs were used to implement a functional link between the 'Secret Data' and the 'Stego Images'. Moreover, the CNN models, which can extract secret information from stego pictures, were the main contribution of their study. Image steganography can effectively evade steganalysis approaches because of the proposed improved technique regarding the 'Secret Data'. DCGANs have two obstacles when used for image steganography: not all the created stego images have high quality, and the small size of the stego image does not meet the minimum requirement to conceal data. The study discussed creating a resilient CNN to solve the mentioned obstacles, and error-correction codes were added to the approach in order to improve accuracy. As future work, advanced algorithms could be proposed to enhance the quality of the approach and to overcome the addressed obstacles.

Mori et al. [34] describe SpACCE, an ad-hoc cloud infrastructure dedicated to software sharing. Their goal is to implement a cloud environment through the usage of an ad-hoc system named 'CollaboTray', which can move to another network node at any time (i.e., the Microsoft Office package). The server can relocate in case the node presently hosting it becomes overloaded or the service supplied to clients suffers. If a software needs additional resources beyond the server's capability, other clients may be turned into servers. Because their project implementation is based on the ad-hoc concept, the main targets were similar: how to efficiently coexist with user procedures, communicate over dynamic hosts, and migrate components. Their findings reveal that a server's performance can suffer with only 40% accessibility of the CPU; consequently, resource-heavy apps cannot use 'CollaboTray'. For the migration process, 'CollaboTray' must first be shut down, then its current state is moved to another node, and finally it is restarted. However, 'CollaboTray' was not using virtualization, so the system's security is compromised if the server moves to a suspect node, and if the server moves to an unreliable node, the software performance is affected. The implementation phase of the proposed ad-hoc system addresses all these issues, in addition to providing extra capabilities (i.e., monitoring and scheduling). From their perspective, cloud computing represents a business model that limits its scientific software. A unified cloud concept is proposed as a replacement for data centers, called Cloud@Home since it is comparable to 'Volunteer Resources'. A 'HybridCloud' permits users to subscribe to resources from an 'OpenCloud'; these two cloud frameworks can be utilized separately or linked to a further cloud system, respectively. Data privacy along with secure communication protocols offer security for centrally managed resources and data; these were among the significant challenges identified by the authors in their research.

Zhang et al. [35] proposed a full framework for using DNNs with steganography, and a better understanding of how DNN-based deep hiding operates by contrasting the utilized DDH with the newly suggested UDH. For example, hiding a single image in another can be done with this understanding. The container picture can be utilized to give varied content to various users based on their practical demands, while the capability of retrieving distinct hidden images by different recipients was demonstrated. With the increase of images/videos classified as intellectual property, a challenge has emerged; the "Universal Watermarking" definition was used, as the proposed UDH provides a temporary solution to such a problem. UDH can also be utilized to transmit small messages, as demonstrated in the authors' experiment; the study proved that the results were promising for hiding an entire image, which significantly increases the potential future works in different directions [35].

Wu et al. [36] built a BOINC-based private cloud for distributed replication. With the BOINC system as a dispatcher, the authors' own load-balancing methods were used to schedule tasks for nodes inside the system. They mention scheduling and infrastructure observation as crucial components of private cloud platforms, but do not mention the usage of BOINC or the framework their technique is based upon. In their view, cloud systems represent business models that limit their scientific software.

Girardin et al. [37] designed a software named Legion for generating web portals for various functions, including uploading processes to V-BOINC. This is done by developing a cloud interface which communicates with a Legion cloud service using SOAP.
Therefore, Legion creates and maintains redundant data based on the BOINC database, making it difficult to connect with V-BOINC tasks. 'Legion' also needs more libraries for submitting tasks to V-BOINC; because 'Legion' conducts so many activities, this is analogous to the reason 'WS-PGRADE/gUSE' was not utilized to submit jobs to V-BOINC. Other systems that promise to permit 'Job Submission' to V-BOINC were either considered unsuitable for our requirements or did not provide the basic capabilities we require. Therefore, we built a private 'Job Submission System' that works with BOINC [37].

Regardless of the limited success of ad-hoc cloud systems and their merger with volunteer computing, mobile computing has enjoyed greater success and popularity since 2009 [38]. Mobile devices were considered 'resource-poor' elements (i.e., limited storage size, computation, and memory capability), and they were also constrained by both battery lifetime and network connection [39]. Offloading to another remote computational platform can be advantageous in some instances, such as rendering high-quality images when power is limited. Most research focuses on whether executing apps within ad-hoc mobile cloud systems is viable and whether performance advantages might be attained. Success stories vary, and the apparent benefits depend on the software, because of WAN latencies or because cloud platforms cannot be properly offloaded to [40]. However, some argue that offloading computing to Amazon EC2 can be practical and beneficial for latency-tolerant software. In order to make the best use of locality, a set of autonomous mobile nodes termed cloudlets was proposed, allowing devices to offload duties to extra users [41].

Wengrowski et al. [42] utilized deep learning algorithms for digital steganography in the photographic realm for LFM, in which the coded images were conveyed via light, to enable consumers to examine displays and digital advertising with their webcams without a broadband connection. Concerning the digital steganography, the CDTF has radiometric effects which can alter the image's appearance; the CDTF was trained with a dataset containing about one million images in order to model these impacts. The outcome was a system that formed hidden messages which could be retrieved with extreme precision yet could not be seen with the naked eye. For each evaluated camera-display combination, the LFM approach provided a better BER score than the previously proposed DNN and 'Fixed Filter Steganography' methodologies. Both the camera exposure configurations and the camera-display angles have no effect on their proposed approach, which outperforms all previous studies at a 45-degree angle of screen sight. The dataset contains one million pairs collected using 25 camera and screen pairs for the CameraDisplay. Finally, these potential DNN-based techniques for steganography provide interesting ideas for future works.

Satyanarayanan et al. [43] provide in their research a simulation of a mobile cloud system that strictly resembles our proposed technique, without the usage of mobile devices. In their work, they propose the usage of VirtualBox on mobile devices; nonetheless, some studies found VM-based approaches ineffective. They studied how to shrink VM sizes, in addition to the approach used to move them between devices. Our technique of storing pre-configured virtual machines on devices and sending overlays (checkpoints) via a network equals theirs. However, they did not address some features such as scheduling, task recovery, and mobile churn.

Problem statement
The huge amount of data that is shared between organizations and public cloud services makes it more likely that privacy will be broken by accident or without permission. Commonly, normal users are regarded as the source of cloud platform security flaws, information leakage, viruses, or illegal behaviors, and cybercriminals aim to exploit cloud infrastructure security weaknesses for financial gain or other illegal purposes [44]. Cloud services can monitor IT systems, but they are difficult to secure. Even though cloud computing raises privacy concerns, this has not stopped its growth or the decline of data centers. All organizations need to reevaluate their system security rules to avoid sending data without permission, losing service, and getting bad press [45]. In addition to cloud services, public APIs expose enterprises to new security concerns. Cyberattacks target cloud infrastructures, and the capability to attack a suspect's system using penetration testing tools on a cloud platform is a frequent tactic employed by cybercriminals [46]. As it is common to confuse the concept of cryptography with that of steganography, the autoencoder network (AEN) is the technology used for compressing images [47]. The objective is to safeguard private data that is sent over networks. During the training phase, the network should adjust the compression techniques for secret image data to the lowest levels of the "Cover Image". Several of the previously described experiments utilized Deep Neural Networks (DNN). As a result of the recent positive contributions of deep neural networks to steganalysis, there have been numerous attempts to incorporate neural networks into the actual concealing procedure: in order to choose which LSBs to substitute in an image when representing the text message in binary form, numerous studies have used DNNs to determine which parts of the image data should be retrieved [48].
Neural networks were used to figure out the time of encoding for the categorized data, which was spread across the image's bits. Encryption and decryption networks have been trained together so that the hidden image can be found. Since the networks have only been trained once, the set of images used for the hiding and the secret does not affect how well they work. This work uses a "cover image" with 8 bits per color channel; a "cover image" with N × N × RGB pixels can be used to hide an encrypted image with N × N × RGB pixels [49]. Even though previous studies required that encrypted messages be transmitted with perfect decryption, we relaxed this requirement in our investigation. Regarding both the "carrier" and the "secret image", compromises are made based on the quality of the carrier and the hidden image. We briefly discuss the discoverability of the hidden message's presence as an afterthought [50].

Proposed model
The proposed design of the high-level modules used for implementing an ad-hoc cloud system was built based on V-BOINC; therefore, several V-BOINC features and an initial client-server architecture were inherited [52]. The architecture of the ad-hoc cloud is discussed in detail in this section, followed by a comprehensive design of the implemented prototype. Figure 4 below represents the high-level fundamental components that are part of the ad-hoc cloud computing system structure.

Ad-hoc cloud computing system architecture
Ad-hoc computing systems should be set up according to the following rules so that they do not exhibit the same problems most cloud computing systems do:
Fig. 4 The six main principal features of the adapted 'Ad-hoc Cloud System' model structure, from [34]
… structures were essential to supply data for these decision schedules; advanced control is also necessary to provide monitoring for the cloudlet, which provides …

… These new features transform the V-BOINC infrastructure into an 'Ad-hoc Cloud System', as these two aspects were previously available through two BOINC …
Fig. 5 A diagram focusing on the four main components of the 'Ad-hoc Cloud Server' design, between the ad-hoc user and the VM host
Fig. 6 A description of the (Ad-hoc Cloud Client) through its four main components: VM Operations, BOINC Scheduler, DepDisk and Reliabilities
Furthermore, for the execution of volunteer applications on the VM volunteer host, the ad-hoc cloud client must provide a reliable environment for 'Job Execution'. Unlike a conventional V-BOINC client, it has the ability to:

a) Create regular VM checkpoints.
b) Transmit checkpoints to the optimum ad-hoc hosts.
c) Collect the VM checkpoints from all the ad-hoc hosts and restore them after terminated or unsuccessful runs.
d) Control both ad-hoc and other users.

The structure of an ad-hoc client is more complex than that of the V-BOINC client because of the vast number of functions implemented on top of the V-BOINC client.
The main ad-hoc client modules together represent all of the ad-hoc cloud's user interface, connectivity, listener, and reliability. The GUI (i.e., the BOINC Manager) controls the ad-hoc host's membership in the ad-hoc cloud. Meanwhile, the Listener component awaits any instructions provided through the ad-hoc server; this can involve operating on the VM through the 'Virtual Machines Services Component', which handles all aspects of the VM-VirtualBox connection.

Client and server interaction in the ad-hoc cloud system
Both the server and the host in the ad-hoc cloud system communicate using the BOINC connection methods [62]. However, V-BOINC has been modified to allow customized data flow with the server. For instance, to update the server on the VM status, BOINC enables client-server communication by sending XML messages that are interpreted locally by the receiving object; the host's information is transmitted to the server [63]. The authenticator identifies the host, BOINC version, and platform type. V-BOINC was restricted to consuming no more than 90% of the memory the volunteer users make available while the volunteer host is idle.
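A hedged illustration of this kind of XML status exchange, using Python's standard library; the tag names below are invented for illustration and do not reproduce the actual BOINC/V-BOINC wire format:

```python
# Build a status message on the client, then interpret it on the server.
import xml.etree.ElementTree as ET

msg = ET.Element("client_status")                      # hypothetical root tag
ET.SubElement(msg, "authenticator").text = "3f9c..."   # identifies the host
ET.SubElement(msg, "boinc_version").text = "7.16.20"
ET.SubElement(msg, "platform").text = "x86_64-pc-linux-gnu"
ET.SubElement(msg, "vm_state").text = "running"

wire = ET.tostring(msg, encoding="unicode")            # sent to the ad-hoc server
state = ET.fromstring(wire).findtext("vm_state")       # interpreted locally
print(state)  # running
```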
on the infrastructure. In addition, the workflow-based
Practical evaluation analysis of the (ad‑hoc cloud system) graphical user interface ’WSPGRADE/gUse’ supports
A comparison was made between, a) the operation of the the ’Apache’ server-based Liferay portal; the configu-
(Ad-hoc Cloud System), b) the process of the (V-BOINC ration can be performed on a local or distant host. To
Server), both client and server tasks and communication facilitate job submission, WS-PGRADE leverages ’DCI-
techniques differ in the ad-hoc cloud system; as the ad- Bridge,’ a technology that enables standardized com-
hoc cloud host has to: puting infrastructure accessibility [65]. WSPGRADE/
gUse is compatible with BOINC, but it is not a com-
a) Install the ad-hoc client, after that automatically ponent of BOINC; rather, it is a service that commu-
requests a VM from the ad-hoc server. nicates alongside BOINC. Despite its reputation in
b) The ad-hoc host gets a VM in addition to the decom- the scientific community and the availability of multi-
pression script. ple computing platforms, integrating ’WS-PGRADE/
c) The VM is installed at a time it has work to achieve. gUSE’ using an ad hoc approach may be difficult. For
In contrast, once there was an established connec- instance, ’WS-PGRADE/gUSE’ must be configured
tion with the ad-hoc server, the VM would be quickly on the local host. Create a unique web entry point for
deployed and ready to operate. sending work to the ad hoc cloud system. This obtains
d) Various VM images cannot be used if there were not more libraries and packages (i.e., Liferay portal project
enough cloud tasks to fill them. In this condition, the [66]). In addition, the system ’WS-PGRADE/gUse’ with
user in the ad-hoc cloud can execute a job into the some non-required features, such as ’WS-PGRADE’
ad-hoc cloud server, and then the ad-hoc cloud client and ’DCI-Bridge’, can be used to submit work for future
would be ready for the execution. research on the ad hoc cloud system platform.
e) In case the job includes dependencies, the ad-hoc
server then might download the ‘DepDisk’ sent Ad‑hoc cloud system GUI
through the user, V-BOINC’s actions were followed The graphical user interface of the proposed ad-hoc
by other procedures. system is based on the V-BOINC interface. A user’s
f ) The ‘DepDisk’ is attached in this state and the new online account enables the modification of volunteer
VM disk is established and connected in case one user preferences and the tracking of the status of exist-
could not exist. ing or previous work [67]. To enable using it with the
BOINC basic interface, testing additional software types
… task throughput and might reduce the task completion time by dynamically modifying redundancy levels, based on the recent condition of the 'Volunteer Infrastructure'. Though the ad-hoc cloud computing system might not use 'Job Redundancy' to provide reliability, there were various approaches to calculating system reliability, as previously discussed and compared in other studies [76–79]. Algorithms that accurately reflect volunteer host standing can help confirm that the volunteer host set cannot be combined to upload erroneous results. Some schedulers have been proven to boost job accuracy while reducing task completion time. Moreover, the "Reliability Schedulers" can forecast future volunteer host availability to detect the time at which the volunteer chores should be executed. The 'Volunteer Task' can be broken down into smaller variable-length subtasks and distributed across multiple volunteers; by matching each volunteer host's performance to the subtask's resource requirements, the overall job completion time can be lowered in many circumstances. Volunteer hosts can be graded on the expected download time for data-intensive apps working within the volunteer resources; the 'Volunteer Task' is then assigned to the volunteer host with the fastest download time. Scheduling can alternatively rely on white-box/black-box methods, where a lot or a little about the software is identified before execution. Jobs could be scheduled to the nearest ideal hosts depending on the resource requirements, with the nearest optimum host with the needed resources being chosen. End-user criteria (i.e., budget cost and performance management) can also affect the scheduling decisions, while some other decisions could be based on: a) computation time reduction, b) provider profit enhancement, and c) the required SLA compliance.

Previous papers have studied potential developments of the 'Scheduler' used in the implemented design for the ad-hoc cloud system [76, 80–82]; improving the reliability of the scheduling plan would affect:

• The overall performance of the 'Cloud Jobs'.
• The total time required for completion.
• The increase in task throughput.

Availability
The 'Ad-hoc Cloud Server' keeps a list of accessible hosts, defined by the 'Availability Checker Daemon' introduced into the 'VM Service' project. The 'Availability Checker' occasionally checks the VM Service DB to regularly determine the time at which each ad-hoc cloud client last contacted the server; a client is considered available if it polled the server within the last 3 minutes. Ordinary BOINC clients communicate with the server only to ask for jobs, so in most circumstances a 'V-BOINC Client', despite being accessible to run software, might not communicate with the 'V-BOINC Server' for lengthy periods. A 'Periodic Updater' module has therefore been installed into the client to check server connectivity every minute. The 'Periodic Updater' is built as a 'pthread' created when the client starts; each contact time is saved as a log in the project server DB, which notifies the 'Availability Checker' whether any client in the ad-hoc cloud system has checked in within the last two minutes. Hosts that have not polled are considered inaccessible. Consequently, the 'Ad-hoc Scheduler' scans the 'VM Service' DB for available hosts.
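A minimal sketch (plain Python) of the availability rule just described, with an in-memory dict standing in for the 'VM Service' DB; the 3-minute window follows the text, everything else is illustrative:

```python
# A host counts as available if it polled the server within the window.
import time

last_poll: dict[str, float] = {}          # host id -> last contact timestamp

def record_poll(host_id: str) -> None:    # called on each 'Periodic Updater' ping
    last_poll[host_id] = time.time()

def available_hosts(window_s: float = 180.0) -> list[str]:
    now = time.time()
    return [h for h, t in last_poll.items() if now - t <= window_s]

record_poll("host-14")
print(available_hosts())  # ['host-14']
```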
Host hardware requirements
First, accessible ad-hoc cloud hosts were assessed in order to check whether they could physically run both a cloud task and an ad-hoc cloud guest. Assuming both utilize fair amounts of resources, we cannot identify how many resources will be needed before execution; consequently, we suppose every ad-hoc host has 16 GB of RAM and 80 GB of storage. It is conceivable to compare the time and resources consumed by previously run 'Cloud Jobs' to anticipate the time and resources of a newly submitted 'Cloud Task'.

It is difficult to identify whether a cloud job, before its compilation, shares features with an earlier compilation; this task merits further research. When launched, the V-BOINC client immediately records the volunteer host's resource status. These resources have restrictions and might be configured through the user options in the V-BOINC client and 'Volunteer Software'. The 'Cloud Scheduler' observes the resources which the ad-hoc guest or cloud job may access; ad-hoc hosts which might not be able to meet the resource requirements were eliminated from consideration. The OpenStack nova-scheduler does something similar by calculating acceptable servers for VM placement using its (Core, RAM, Disk) filters.

Host resources
The supplementary ad-hoc cloud hosts' resource loads were retrieved by adding 'Ganglia', "which is a scalable, distributed monitoring tool for high-performance computing systems" [83]. This tool is bundled with the client in the ad-hoc cloud system, so the host user does not need to set up 'Ganglia' independently after the client setup in the ad-hoc cloud.
The 'Ganglia gmond daemon' gathers data from the hosts (CPU, memory, disk, and network usage) in the ad-hoc cloud system. Network utilization can be valuable in identifying which 'Cloud Jobs' particularly fit the 'Ad-hoc Cloud Server'. The 'Ganglia Daemon Tool' collects the ad-hoc hosts' data in RRD files, and RRDTool was utilized to provide the 'Resource Loads' [84], collecting the CPU requirements at 15-second intervals throughout 120 seconds. 'Ganglia' averages monitoring data every 15 seconds in its normal state, but we average the load every two minutes to smooth out real-time oscillations and offer a good sense of current demand. In case the 'OpenStack Scheduler' is incorporated as the 'Ad-hoc Scheduler', the nova-scheduler starts by selecting an accessible 'Ad-hoc Host' that meets the minimum working requirement; 'Ad-hoc Host' processes are preferred not to exceed 65% of the CPU on a host that has only 1 GB of RAM.
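The two-minute smoothing can be sketched as follows (plain Python, illustrative): eight 15-second CPU samples, i.e. 120 seconds of monitoring data, are averaged into the single load figure that the scheduler then consumes:

```python
# Average the last 120 s of 15 s CPU samples into one two-minute load value.
def two_minute_load(samples_15s: list[float]) -> float:
    window = samples_15s[-8:]            # last 8 samples = 120 s
    return sum(window) / len(window)

cpu = [22.0, 35.0, 80.0, 41.0, 38.0, 45.0, 30.0, 33.0]  # hypothetical % values
print(two_minute_load(cpu))  # 40.5 -- the short 80% spike is smoothed out
```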
The 'Ad-hoc Scheduler' uses the above output to assess whether the existing load is suitable for both 'Ad-hoc Guest' and 'Cloud Task' implementation. It also stores prospective 'Ad-hoc Hosts' as DB entries which can be utilized to conduct pending cloud jobs. The ad-hoc hosts should provide the minimum requirements in order to deliver the minimum accepted performance criteria. For instance, an ad-hoc server with 640 MB of RAM (lower than the 1 GB requirement) meets our 512 MB minimum accessible memory requirement, but its low possibility of accessing greater resources could fail to give the 'Cloud Job' the needed performance at a time when more resources are required. Consequently, the 'Scheduler' filters the servers in the ad-hoc cloud system depending on both 'Resource Loading' and 'Hardware'.
Ad-hoc cloud host reliability evaluation
Owing to the unpredictable nature of ad-hoc cloud systems, in which systems can fail or shut down at any time, host reliability must be considered. Reliability was determined by five factors:

a. 'Cloud Jobs' previously assigned to the ad-hoc host.
b. The ad-hoc host's overall completed 'Cloud Jobs'.
c. User errors.
d. Guest failures.
e. Host resource loading.

Furthermore, any type of software/hardware issue that prevents the client from working (i.e., a kernel panic) can cause host termination; the guest errors include configuration, installation, processing, and shutdown failures. The ad-hoc cloud server monitors reliability factors (a)-(c): BOINC automatically saves the number of 'Cloud Jobs' allocated to every 'Ad-hoc Host' in the 'Job Service' DB, from which the 'Ad-hoc Scheduler' can request them. The VM Service has an 'Availability Checker Daemon' that can mark any 'Ad-hoc Host' as terminated or failed after it has been without activity for a couple of minutes. A cloud job's ad-hoc client monitors the reliability factors in points (d) and (e). Timeouts can identify virtual machine configuration errors such as VirtualBox registration failures or DepDisk failures. The 'VBoxManage' API polls the ad-hoc cloud guest every 10 seconds to check that it is operating, ensuring that non-operational ad-hoc cloud guests are identified quickly and efficiently; the running-VMs method returns the set of VMs that are still running.

'Ganglia Monitoring Tools' provide functions for controlling and monitoring the ad-hoc host's existing resource overloads [85]. They monitor non-BOINC software's total CPU consumption and suspend BOINC if the overall CPU utilization exceeds a threshold configured by the volunteer user. Whatever monitoring approach is used, ad-hoc cloud host resource utilization can influence reliability if the host is substantially exploited by the ad-hoc cloud host processes [86]; consequently, the 'Cloud Job' performs poorly and can take a long time to finish. The ad-hoc cloud client notifies the ad-hoc cloud server of any failure that occurs to the ad-hoc cloud guest, and the ad-hoc cloud server can regularly detect any type of poor performance through the host. The reliability level of an ad-hoc host is measured whenever:

a) The 'Ad-hoc Cloud Job' has completed its task.
b) The 'Ad-hoc Cloud Guest' stops working.
c) The ad-hoc cloud host has not polled for 2 minutes.
Decision procedure
The list of prospective execution candidates is created based on the ad-hoc cloud hosts' availability, hardware specifications, and existing resource demand. This list is then ranked by each 'Ad-hoc Cloud Host' reliability, and the 'Ad-hoc Cloud Scheduler' chooses from the list the most dependable host to allocate the 'Cloud Job' to. During the scheduling of n 'Cloud Jobs', the first n hosts are selected; in this way, it is confirmed that reputable ad-hoc cloud hosts with enough resources always have work to do. This is a primary scheduler with room for improvement, an example of which is shown in Table 2; for instance, it assigns a single 'Cloud Job' at a time. Because building a complicated 'Job Scheduler' is beyond the focus of the proposed work, its performance assessment and potential inclusion are left for an upcoming research work.

Table 2 An example of an individual 'Candidate List' in the 'Ad-hoc Scheduler'

AdhocHost ID   Reliability Rate   CPU   Memory   Disk Space
14             96                 37%   591 MB   401 GB
93             77                 81%   1.8 GB   967 GB
22             58                 96%   5.7 GB   150 GB
3              39                 38%   854 MB   13 GB
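A sketch (plain Python, illustrative) of this decision procedure using the Table 2 data: hosts are first filtered on the minimum resource criteria mentioned earlier (at least 512 MB of accessible memory, CPU load at most 65%), then ranked by reliability:

```python
# Filter candidates on minimum resources, then rank by reliability rate.
hosts = [
    {"id": 14, "reliability": 96, "cpu": 37, "mem_mb": 591,  "disk_gb": 401},
    {"id": 93, "reliability": 77, "cpu": 81, "mem_mb": 1800, "disk_gb": 967},
    {"id": 22, "reliability": 58, "cpu": 96, "mem_mb": 5700, "disk_gb": 150},
    {"id": 3,  "reliability": 39, "cpu": 38, "mem_mb": 854,  "disk_gb": 13},
]

def candidates(min_mem_mb: int = 512, max_cpu: int = 65) -> list[dict]:
    ok = [h for h in hosts if h["mem_mb"] >= min_mem_mb and h["cpu"] <= max_cpu]
    return sorted(ok, key=lambda h: h["reliability"], reverse=True)

# the most dependable eligible host receives the next 'Cloud Job'
print([h["id"] for h in candidates()])  # [14, 3]
```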
Cloud job working mechanism
This part illustrates the selection process of the 'Ad-hoc Host', which was configured to perform the necessary procedures to let the 'Cloud Job' run in the VM. Since both conventional BOINC and V-BOINC represent volunteer infrastructures, the host can control both implementations. BOINC clients rarely get notifications from servers unless they ask for them; ad-hoc servers, however, can communicate with users without waiting for the users to begin communication. A job receiver can receive a task from any host. After that, the 'Ad-hoc Server' redirects the clients in the ad-hoc cloud system to the basic V-BOINC server message provided to V-BOINC clients. The 'Listener' then parses the message to identify the action to be taken.

The 'Job Receiver Listener' keeps the parsed data and starts obtaining the 'DepDisk MPI'; along with an ad-hoc cloud system such as V-BOINC, the DepDisk connects to the retrieved VM and starts it. The 'Job Receiver' can then direct the VM's 'V-BOINC Client' to communicate with the 'Job Service' that exists at the URL http://129.205.80.10/Job_Service. The ad-hoc cloud computing client leverages the VirtualBox API's guest-control function. Despite knowing the work units to give every ad-hoc guest, the guest sends the 'Work Unit' ID request to the server.

Fault tolerance was delivered through the ad-hoc computing system specifically in the client-server interaction. For example, instead of periodically checking for updates for every guest account, we use P2P checkpoints assigned to the nearest optimum number of ad-hoc hosts, all in the matching cloudlet, to achieve high reliability. The term cloudlet refers to a group of 'Ad-hoc Guests' which share software requirements and 'DepDisk'; an example of the data analysis interface through the cloudlet within the process is illustrated in Fig. 8 below.

If either the 'Ad-hoc Cloud Host' or the 'Ad-hoc Cloud Guest' fails, the 'Ad-hoc Server' notifies another ad-hoc cloud host for the checkpoint recovery. The peer-to-peer network is represented in Fig. 9 below, and the essentials of the implementation are summarized in the points below (see Figs. 8 and 9):
Fig. 8 An example of the ‘Cloudlet’ controlling system interface while analyzing SQL live data [4].
Fig. 9 A representation of the peer-to-peer network for achieving reliability, where failed VMs, restored VMs, and failure-probability VMs are shown in red, blue, and grey respectively, with their rates in percent
• Each of the ad-hoc hosts 'A' to 'N' includes an executed cloud job, or waits for commands according to the configured mechanism, and sets up the guests based on the configured method.
• Firstly, the ad-hoc cloud guest 'A' obtains and executes the 'Cloud Job'. Secondly, to confirm that checkpoints are transferred to reliable ad-hoc cloud hosts, the guest is periodically checkpointed.
• If the ad-hoc cloud host 'A' terminates prematurely, it disrupts guest A's 'Cloud Job' execution after a period. Because of this failure mode, such a host is not recommended for use in a production environment.

As previously mentioned, the implementation phase of the 'Ad-hoc Cloud System' is mainly based on the Berkeley Open Infrastructure for Network Computing (BOINC) [87], a user-server open-source solution chosen as a framework to harvest the unused resources of unreliable hosts and integrate them into the system execution. A virtualized scheme of BOINC was created, called V-BOINC. It makes the best use of virtualization for compiling BOINC features through VMs, starting with V-BOINC servers and ending with BOINC hosts, including administration of the modified host BOINC, called the 'V-BOINC Host', which is used for installing the BOINC framework's plugins. Figure 10 below illustrates the overall operation cycle, starting from the 'VM Request' and ending with the 'Job Result'.

Firstly, the cloud host imports the data files through the GUI; the imported data is located in a folder 'job' inside the 'Job Service' project. The process then continues through the developed 'Work Creator' tool, which identifies whether the import is an application or data and adds XML information to the files, after which the 'BOINC API' is called to create the 'BOINC Work Unit'. The 'VM Service' is notified of the 'Ad-hoc Cloud Job' creation through the 'Job Service', which allows the 'VM Controller' tool to provision VM-based volunteer resources for the task execution; this operation is represented in Fig. 11 below. Then, the 'job' in the proposed architecture can be executed after a notification is sent; a 'job' schedule is assigned to the host with high reliability through the 'VM Service'. Based on certain specifications, the scheduler works depending on:

a) The previous overall cloud jobs executed.
b) The previous overall cloud jobs completed.
c) The failure rate of the host (hardware or software).
d) The failure rate of the guest VM errors (configuration, installation, processing, and termination).
e) The existing available resources.
Fig. 10 The 'Ad-hoc Cloud Client-Server' workflow design, starting from data retrieval from the 'Dependencies', going through the V-BOINC server, V-BOINC client, and BOINC VMs, and ending with the output from the 'Job Results'
Fig. 11 The 'Ad-hoc Server' diagram of the relation between BOINC and the database through both 'Job Services' and 'VM Services'
The reliability measurement for every host, based on the data transmission between the ad-hoc host and server, is evaluated as explained in Equation (1):

$$\text{Host Reliability} = \begin{cases} 0, & \text{if } NF = CA \\ 100, & \text{if } NF = 0 \\ (CC/CA) \times 100, & \text{otherwise} \end{cases} \tag{1}$$

where:
NF: the overall number of failures of both the ad-hoc client and guest.
CA: the overall number of jobs scheduled to the ad-hoc cloud host.
CC: the overall number of jobs completed by the ad-hoc cloud host.
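Equation (1) translates directly into code (a minimal Python rendering of the formula above):

```python
# Host reliability per Equation (1): NF = number of failures,
# CA = cloud jobs assigned, CC = cloud jobs completed.
def host_reliability(nf: int, ca: int, cc: int) -> float:
    if nf == ca:
        return 0.0
    if nf == 0:
        return 100.0
    return (cc / ca) * 100.0

print(host_reliability(nf=1, ca=10, cc=9))  # 90.0
```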
The VM auto-checkpoints were made using the 'Snapshot' function of the VM API, placing the snapshot files in the auto-assigned destination folder in which the VM images are saved. There are various conditions (i.e., VM settings) under which the auto-checkpoint of a VM happens; those configurations represent the hardware settings (i.e., memory and disk size) for each VM. The recent condition of the existing VDI in each VM is captured through 'differencing images' that save all the operations' logs. As for the recent memory condition, in case the snapshot is saved during VM processing, the resultant file size depends on:

a) The assigned memory size for the VM.
b) The application's memory usage.

Under memory size restrictions, the resultant saved file will be small; however, this can negatively affect application performance.
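The auto-checkpoint can be sketched with VirtualBox's real `VBoxManage snapshot` subcommands; the VM name and snapshot tag below are illustrative:

```python
# Take and restore VM checkpoints via the VirtualBox CLI.
import subprocess
import time

VM = "adhoc-guest-14"  # hypothetical guest name

def take_checkpoint(tag: str) -> None:
    subprocess.run(["VBoxManage", "snapshot", VM, "take", tag], check=True)

def restore_checkpoint(tag: str) -> None:
    # the VM must not be running when a snapshot is restored
    subprocess.run(["VBoxManage", "snapshot", VM, "restore", tag], check=True)

take_checkpoint(f"auto-{int(time.time())}")   # periodic auto-checkpoint
```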
Regarding monitoring of the storage size utilized by the 'Ad-hoc Cloud Host', V-BOINC was assigned to erase the unneeded snapshots, and for snapshot recovery, the 'Differencing Image' was turned on. As previously stated, the adapted steganography method makes the best use of auto-encoding systems; however, it is not used merely as a bottleneck for encoding one image, but for encoding two images, with the purpose of making the raw image (i.e., the container image) as similar as possible to the final image (i.e., the cover image). The main network's mission is to minimize the error rate using Equation (2), shown below:

$$\mathcal{L}(c, c', s, s') = \|c - c'\| + \beta \, \|s - s'\| \tag{2}$$
Steganography based-deep learning approach
The closest previously presented idea to the steganography implemented in this study was image size reduction via auto-encoding networks, although the two terms are commonly used interchangeably [88–91]. The deep learning model is trained on a group of hidden images' data using 'Cover Image' parts.

Firstly, the 'Preparation Network' gradually expands the 'Secret Image' towards the 'Cover Image' size, dispersing the 'Secret Image' bits throughout the full N × N pixels; tests with smaller images were avoided for size considerations, and we focus instead on real photos. It was crucial to convert the color scheme into more useful elements for concise image recording, such as edges, for all concealed image sizes. The 'Preparation Network' trains the hidden network in order to extract the 'Secret Image'.

Regularly, given an M × M 'Secret Image' smaller than the N × N 'Cover Image', the 'Preparation Network' gradually raises the secret image size until it reaches the same size as the 'Cover Image', because of the 'Secret Image' distribution throughout the overall N × N pixels. 'Preparation Networks' convert color-based pixels into more usable characteristics, which defines the deformations done mostly by the 'Preparation Network'. This represents the strength of the function for all concealed images, regardless of their size, as represented in Fig. 12 below, where:

• Left part: full-colored image.
• Center part: the 'Preparation Network' extracts data channels that represent the center network input.
• Right part: edge detectors scaling in.

The 'Preparation Network' alters the three color streams; the second stream was activated for greater resonance frequencies.

The 'Hiding Network' represents the main network, which imports both the 'Cover Image' and the 'Preparation Network' output; the resultant created part is called the 'Container' image. N × N is the input size in pixels for this network, convolving the 'Cover Image' RGB streams together with the transformed 'Secret Image' streams via depth concatenation. More than thirty different network topologies were tested, varying both 1) the number of hidden layers and 2) the convolution sizes; the optimum output was produced by five convolutions of 50 filters. Finally, the image receiver, which represents the decoder, uses the 'Reveal Network'; it gets only the 'Container Image', without either the secret or the 'Cover Image', as the 'Cover Image' is removed by the decoder to expose the 'Secret Image'.
Fig. 12 An example that clarifies the resultant deformations output through the ‘Preparation Network’
together as inputs, the 'Hiding Network' constructs the container image, with these channels depth-concatenated. As stated previously, c = 'cover image', s = 'secret image', and β weights the reconstruction error; both error terms were measured along with each error's impact. Specifically, the error term ||c − c′|| does not reach the reveal network, which imports the 'Container Image' and exports the 'Secret Image'. Conversely, the error signal β ||s − s′|| is propagated through all the networks for rebuilding the hidden image; in this way, the process cycle modeled earlier in the 'Preparation Network' is controlled along with the 'Cover Image' reconstruction. Figure 13 below shows the three networks during the training process.

Fig. 13 The three networks (Prep Network, Hiding Network, and Reveal Network) were trained as one network; the error term ||c − c*|| affected both the Prep Network and the Hiding Network, while the error term ||s − s*|| affected all three networks

The proposed framework designed in Fig. 14 represents three trained phases in one network; nevertheless, it was easier to describe them as three separate main networks. About 20 models of this network, with different numbers of hidden layers and different convolutional sizes, were tested in our research; the optimum had five convolution layers using 35 filters.

Fig. 14 The proposed system is divided into three main parts: a preparing the 'Secret Image', b concealing the image within the cover image, and c using the reveal network to expose the hidden image
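To make the three-phase design concrete, a minimal PyTorch sketch of the preparation, hiding, and reveal stages is given below. Layer counts and filter widths only loosely follow the figures quoted above, and every class and parameter name is ours, not from the authors' code.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 convolutions with padding keep the N x N spatial size
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

class PrepNetwork(nn.Module):
    """Transforms the RGB secret image into feature streams (e.g., edges)."""
    def __init__(self, width=50):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, width), conv_block(width, width))
    def forward(self, secret):
        return self.net(secret)

class HidingNetwork(nn.Module):
    """Depth-concatenates cover + prepared secret and emits the container."""
    def __init__(self, width=50):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3 + width, width),
            *[conv_block(width, width) for _ in range(3)],
            nn.Conv2d(width, 3, 3, padding=1))  # five conv layers in total
    def forward(self, cover, prepped):
        return self.net(torch.cat([cover, prepped], dim=1))

class RevealNetwork(nn.Module):
    """Decoder: recovers the secret image from the container alone."""
    def __init__(self, width=50):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, width),
            *[conv_block(width, width) for _ in range(3)],
            nn.Conv2d(width, 3, 3, padding=1))
    def forward(self, container):
        return self.net(container)

# Usage (shapes assumed): cover, secret are (B, 3, N, N) tensors in [0, 1]:
#   container = hiding(cover, prep(secret))
#   secret_out = reveal(container)
```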
Eventually, the image transmitter utilizes the 'Reveal Network', which can be considered the decoder. Neither the cover nor the hidden image is sent to it; the hidden data is revealed after the decoder removes the 'Cover Image'. During the network training phase, a small amount of noise was injected into the second network's output (i.e., the created container image) to confirm that the hidden picture is not simply encoded in the LSBs. Moreover, to prevent the hidden image's recovery from depending solely on the LSBs, the noise was reverse-engineered so as to flip the LSBs every now and then. A practical exploration of the network's functions follows.
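The noise-injection step can be sketched as below: a small perturbation is added to the container before it reaches the reveal network, so that the secret cannot survive in the least significant bits alone. The noise scale is our assumption, chosen to be on the order of one LSB.

```python
import torch

def perturb_container(container, noise_std=1.0 / 255.0):
    """Add slight noise to the hiding network's output during training.

    Perturbations with magnitude near one LSB push the gradient to
    spread the secret's energy out of the least significant bits.
    """
    noise = torch.randn_like(container) * noise_std
    return torch.clamp(container + noise, 0.0, 1.0)
```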
Experiments
The implementation
The implementation phase was accomplished by utilizing eight ASUS ROG Strix GL702VS machines with an AMD Radeon RX 580 4 GB GPU, 32 GB of DDR4 RAM, and a 512 GB SanDisk SSD, running Windows 10 [92], BOINC 7.16.20 [93], and VMware 15.2 [94]. The optimal checkpoint rate was determined to be 15 per hour, for a minimum of 2.52 GB of transmitted data from each ad-hoc client. If, in a worst-case scenario, eight ad-hoc clients obtain a checkpoint from the transmitter, the transmitter is capable of transmitting 8.2 GB of data every hour. Consequently, a supposition would suggest that the ad-hoc cloud system has multiple hosts for each working guest. Considering the execution increment rate of 'ad-hoc guests' and 'Cloud Jobs', the network failure rate increases when several checkpoints are compiled concurrently. The Cloud Job would be terminated if the 'Ad-hoc Client' marks the 'Ad-hoc Guest' as inactive and sends feedback to the 'Ad-hoc Server'. As indicated previously, in the event of a guest failure, the ad-hoc scheduler would select the optimal client for reintroducing the guest into operation. While gathering requests for restoring the required checkpoint, the ad-hoc client would compile a series of successive events. According to the earlier 'Ad-hoc Client' evaluation of the assigned task in the overall ad-hoc system, the total operation will likely be completed within a minute.

The primary focus was the testing phase of the deep steganography operation's performance on the 'Ad-hoc Server', to determine the server's ability to operate in the ad-hoc cloud system environment. Adam [95] was used to train the three networks, while ImageNet was the primary dataset [96]. The testing portion comprised 1,000 photos from this dataset, used to evaluate the final output of the 'Cover Image' encoding with no 'Secret Image' present (β = 0) across the same network, since this gives the optimal reconstruction of the 'Cover Image' achievable by that network. While it is possible to replace the picture measurements, there is an ongoing recovery process; therefore, it can result in a total reduction of pixel variation. It is necessary to consider the error rate while constructing a container using LSB replacement [97]. Using an expected average, the persistent bits were reset to produce a fresh image in which the noise was fully concealed. Using this method to reconstruct the 'Cover Image', the pixel intensity loss for each channel was 4.43 (on a 0.0–255 scale). Using the median value for the deleted LSB bits would result in a maximum average reconstruction error of 3.81 bits; an error of 3.0 or more was expected when the average value was used to fill in the LSBs. Removing 4 bits from a pixel's encoding reduces the number of representable intensities by a factor of 16; by selecting the average value to replace the missing bits, the highest possible error is eight, while the average error is four, provided that the bits are equally distributed. Consider using the average value for the 'Cover Image' to avoid any confusion. Furthermore, the LSBs of the 'Cover Image' were stored where the 'Secret Image' MSBs were stored; those bits must be used in this encoding scheme, hence the larger error. The reason the reconstructed cover image's error exceeds 3.81 is the usage of the MSBs from the 'Secret Image' instead of the LSBs of the 'Cover Image'. This results in a higher error rate than utilizing the LSBs' average values, while both secret and cover samples were obtained within a similar range, as they were far superior to the ones in our current set-up, as can be seen in Fig. 15 below. If the approximate amount was utilized to change the LSBs, an error rate of 4.0 was anticipated, because 16x fewer intensity levels can be presented after eliminating 4 bits from the pixel's encoding. Considering equally distributed bits, using the arithmetic mean to replace the lost bits results in a maximum deviation of 8 and a median error of 4. However, the cover image's LSBs were saved in the MSBs of the hidden image; for clarification, this encoding strategy requires the usage of those extra bits, which can lead to many errors. As a result, there was a need to demonstrate the drawbacks of this approach.
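The arithmetic behind these error figures can be checked with a short simulation of our own: dropping a pixel's four LSBs and filling them with the expected mid value gives a maximum error of 8 and, for uniformly distributed bits, an average error of 4.

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=1_000_000)   # 8-bit intensities

kept = pixels & 0b11110000                      # drop the 4 LSBs
reconstructed = kept | 0b1000                   # fill with the expected mid value (8)

err = np.abs(pixels - reconstructed)
print(err.max(), err.mean())                    # ~8 maximum, ~4 on average
```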
Several steganographic approaches assume that an attacker does not have a high detection capability against the original 'Cover Image' (the encoded 'Secret Image' not being included) [98, 99]. However, consider the case where the original image is discovered, or where some information might be gleaned about the 'Secret Image' even when the decoding network is unavailable. As shown in Fig. 15, there was no difference between the initial 'Cover–Container' pair before and after enhancement. It is essential to highlight that the networks were trained using images from the ImageNet dataset, which covers a wide range of images. Nevertheless, it is important to analyze the impact when different types of features are employed.
Fig. 15 The results consist of three sets: a the main 'Cover Image' and 'Secret Image', b the reconstructed images, and c the secret/cover error rates after 5x enhancement; the lowest error rates are in the first row (2.8, 4.2), while the highest are in the last row (4.3, 7.8)

To illustrate this, five images were included in Fig. 16. Pure white images were utilized in the first row to monitor the changes when a colorful 'Secret Image' was hidden; with the ImageNet training data, this basic scenario was not really observed. The 'Secret Image' changes to consistent noise in the 2nd and 3rd rows: as observed, the retrieved 'Secret Image' was quite noisy, in spite of the container image (4th column) including only a small noise volume. Circles and consistent noise replaced the cover artwork in the last two rows; recognizable portions of both the 'Cover Images' and the 'Secret Images' could no longer be reconstructed, due to considerable, but predicted, faults in the process.

The left portion of Fig. 17 represents the control, where the statistics in the top-left corner represent the change in bit rate in only the container's red channel; similarly, the noise volume was correlated with the bit's significance. This is
completely predictable, and the same can be said for the other two color channels. Consequently, a single bit flip within the container image affects all color characteristics of the retrieved 'Secret Image': the reconstruction of the 'Secret Image' was affected by a bit flip in any color channel of the container image, which in turn affects all color channels of the secret. In addition, the error disregarded the priority order of the bit positions, and the data of the 'Secret Image' was dispersed across the color channels, which is why it had remained undetected until now. StegExpose did not find any encodings, but such a large volume of data could be easily discovered using other methods. Similar studies have demonstrated that deep neural networks can compete with, and frequently outperform, more established steganalysis techniques based on manually selected image characteristics. Numerous non-blind steganalysis techniques have been reported; as a result, they are believed to exclusively locate photographs that have been concealed using well-documented techniques. It is therefore easier to steganalyze but harder to conceal; the practical implications of this assumption were not examined here. From this, it can be concluded that any change to a bit position in a random channel of the 'Container Image' can affect the creation of the 'Secret Image' across all the color channels; in addition, there is no bit-order positioning standard that the error adheres to.

Fig. 16 The resulting images in three doubled rows for the 'Cover Image' and 'Secret Image' under three conditions: a original images, b reconstructed images, and c residual error, with the pixel error rate on the right side

Results discussion and performance analysis
The proposed ad-hoc cloud model was evaluated in terms of security and reliability. The reliability was tested using 9 nodes. In order to simulate an environment with a reliable level, the 'Nagios Network Analysis Tool' was used for 14 days on 5 hosts. The resultant monitoring data was parsed, and every host's performance was calculated per hour; the hour in which three hosts reached the highest performance represents the optimum hour. To test the security level, the focus was on validating the ability to encode large data volumes in an image with a restricted visually perceptible footprint, hiding the data's presence from machine detection.
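A minimal sketch of the optimum-hour selection described above could look like the following; this is our own illustration, as the actual Nagios parsing scripts were not published.

```python
from collections import defaultdict

def optimum_hour(samples):
    """samples: iterable of (host, hour_of_day, performance_score) tuples.

    Returns the hour whose three best hosts have the highest combined
    performance, mirroring the selection rule described above.
    """
    by_hour = defaultdict(lambda: defaultdict(list))
    for host, hour, perf in samples:
        by_hour[hour][host].append(perf)

    def top3_total(hour):
        host_means = [sum(v) / len(v) for v in by_hour[hour].values()]
        return sum(sorted(host_means, reverse=True)[:3])

    return max(by_hour, key=top3_total)
```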
Fig. 17 Changes in bits in the container image have the same effect on all the colors in the retrieved hidden picture

At first, the location of the 'Secret Image' data should be identified, to know whether the network was hiding the 'Secret Image' data in the LSBs of the 'Cover Image'. Many existing tools were developed for identifying hidden data in the least significant bits; StegExpose, an open-source steganalysis toolkit, was chosen to measure the detection rate of the hidden data across the tested samples. When the threshold was set to 0.15, the results varied over a wider range.
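Sweeping the detection threshold is what produces the ROC points reported in Fig. 18. A generic sketch of that sweep, our own and independent of StegExpose's internals, is shown below; `scores_clean` and `scores_stego` are assumed NumPy arrays of detector scores.

```python
import numpy as np

def roc_points(scores_clean, scores_stego, thresholds):
    """Turn detector scores into (FPR, TPR) pairs, one per threshold."""
    points = []
    for t in thresholds:
        fpr = np.mean(scores_clean >= t)   # clean images flagged as stego
        tpr = np.mean(scores_stego >= t)   # stego images correctly flagged
        points.append((fpr, tpr))
    return points

# e.g., roc_points(clean, stego, thresholds=np.linspace(0.0, 1.0, 101))
```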
As represented in Fig. 18, the StegExpose numbers form the receiver operating characteristic (ROC) comparison between the false positive rate and the true positive rate during the embedded-image detection operation against the proposed steganography approach. In previous studies, it was found that machine learning can compete with, or even outperform, traditional steganalysis approaches that rely on hand-picked image attributes. Different steganalysis algorithms can find most hidden images produced by well-known hiding techniques, and as long as access to the original cover image allocation is still available, the work of steganalysis becomes much easier while the job of hiding becomes even harder.

Virtual machine recovery process
In the event that the ad-hoc server receives a notification, the checkpoints implemented by the 'Ad-hoc Cloud Hosts' must be activated. The intended P2P strategy conclusively determined the overheads associated with periodic checkpoints, as well as the potential for traffic management, generated via a resource network. The VM recovery cost must be considered when establishing a time limit for the Cloud Job. If an ad-hoc client determines that an ad-hoc guest has stopped working, it notifies the 'Ad-hoc Server' about the terminated 'Cloud Job'; the nearest optimal 'Ad-hoc Cloud Host' is then selected by the 'Ad-hoc Scheduler' for the recovery of the 'Ad-hoc Cloud Guest', the checkpoint is decompressed, the VM is recorded, and the recovery then completes.
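The recovery handshake just described can be summarized in outline form. Every function below is a hypothetical placeholder for the corresponding V-BOINC step, not a real API.

```python
def recover_guest(server, scheduler, failed_job):
    """Hypothetical outline of the ad-hoc guest recovery protocol."""
    server.mark_terminated(failed_job)             # client reported the guest inactive
    host = scheduler.select_optimal_host()         # nearest reliable ad-hoc cloud host
    checkpoint = host.fetch_latest_checkpoint(failed_job)
    image = host.decompress(checkpoint)            # checkpoint decompression step
    vm = host.restore_vm(image)                    # VM recording / restore
    server.notify_recovered(failed_job, vm)        # client side confirms success
    return vm
```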
Fig. 18 The comparison between the false positive rate and the true positive rate when using the proposed steganography analysis to identify hidden images

Fig. 19 The recovery process overheads of the ad-hoc cloud system over time, in seconds

Figure 19 illustrates the processing times required to calculate the performance of a cloud-based operation over time, from the moment the 'Ad-hoc Client' detects a non-functioning 'Ad-hoc Cloud Guest' to the moment the 'Ad-hoc Guest' is recovered on another 'Ad-hoc Host'. The overall operation takes around 60 seconds. The operation's overall time was calculated through the reliability measurement; the recovery process takes about 35 seconds on average. If either the 'Ad-hoc Host' or the 'Ad-hoc Server' goes down, the 'Ad-hoc Server' cannot detect the failure; with early detection of a failure, the recovery operation may take longer to complete, about 120 seconds on average. Furthermore, it was assumed that at least one 'Ad-hoc Host' must be available for the ad-hoc scheduler to schedule the recovery. On the other hand, the time it takes an 'Ad-hoc Host' to become accessible can lengthen the complete system recovery time as well; the 'Ad-hoc Client' chosen by the 'Ad-hoc Host' must perform a checkpoint recovery. Lastly, if the recovery operation succeeds, the client side notifies the 'Ad-hoc Server'.

Cloud system performance evaluation
The execution level of 'Cloud Jobs' was compared between the 'Ad-hoc Cloud System' proposed in this study and a general host on the Amazon platform.
Fig. 20 A comparison of the proposed ad-hoc cloud system and Amazon EC2 performance metrics (input/output rate, memory utilization, and disk space)

Specifically, the evaluation covers execution time in addition to the checkpoint and VM recovery operations. The total operation time was measured from job submission until completion. The checkpoint configuration was set to 50/hour in an effort to incorporate the VM recovery time; the transaction took approximately 35.8 seconds to complete. Memory, I/O, and disk resource executions were performed on the ad-hoc cloud system prototype. Figure 20 depicts the comparison of resource use over time between the suggested ad-hoc cloud prototype, a typical public cloud, and Amazon EC2. As anticipated, there was a significant variation in the overall 'Cloud Job' execution time between them. The variance in processor timing was caused by hardware-based virtualization, which results in a lower throughput than Amazon EC2. However, the overall execution time can be reduced if no migrations occur during the procedure; on the other hand, the overall time was projected to increase by 15% to 25% for each migration procedure that occurred within the system.
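The quoted 15–25% penalty per migration suggests a simple back-of-the-envelope model; the sketch below is ours and treats the penalty as compounding, which is one of several plausible readings.

```python
def projected_time(base_seconds: float, migrations: int,
                   penalty: float = 0.20) -> float:
    """Scale a job's base execution time by a 15-25% penalty
    (0.20 used here as a midpoint) for each migration."""
    return base_seconds * (1.0 + penalty) ** migrations
```

Under this model, a 1,000-second job with two migrations would be projected at roughly 1,440 seconds.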
The ad-hoc cloud server performance
Subsequently, the focus was on evaluating the performance of the 'Ad-hoc Server' implemented in our experiment, to measure the operation level the server can reach within the ad-hoc cloud prototype simulation. The 'Ad-hoc Cloud Server' was observed through the Command and Control Message Specification on the CPU for one hour. As illustrated in Fig. 21, the 'Ad-hoc Cloud Server' used its two main processes within the first 12 minutes of the experiment; a huge increase in CPU usage was noticed due to the 'BOINC daemon' utilized in both the 'Ad-hoc Server' and the 'Ad-hoc Clients', with 3.56 GB of the available 4.0 GB of memory utilized. The limitation in this part is not the CPU usage level itself; rather, the work-units hosted on multiple VMs in the 'Ad-hoc Cloud Host' were the main reason for the high CPU usage, which can represent over-capacity CPU usage in large-scale networks, as illustrated in Fig. 22.

Fig. 21 CPU usage rate, in percent, of the ad-hoc cloud server within 60 minutes

Fig. 22 The data input/output in bytes per second for the 'Ad-hoc Cloud Server' over 60 minutes

Fig. 23 The time taken by the ad-hoc cloud system to execute (the 'Execution Time'), in seconds, throughout the daytime hours

As shown in Fig. 23, there can be a major difference in software execution time. In this study,
the evaluation of both the lowest (41 minutes) and the highest (82 minutes) execution times was carried out over 12 continuous hours. This difference in time is still not acceptable, as it could result in high costs for the user for each usage hour. For example, if the user tried to execute software at 8:00 am, it might take up to 82 minutes to finish the task, which is still considered a limitation; for each additional minute, the user might be charged a higher cost.
Data utilization charging method
The Pay-as-You-Go service delivered by Amazon EC2 charges its users for both cloud resources and data packages, in GB, for every passing hour [100]. The charging rates for both 'Storage' and 'Instance' were low compared to the 'Data Transfer' charge, as various complex metrics are involved during transfer between instances and the user (i.e., packet size, number of transferred packets, and data type). The test here evaluates Amazon EC2's transfer-measurement approach for charging cloud users during data transfer. The evaluation was carried out on a CentOS 7.2 i386 server [101], with various types of data transferred from numerous cloud users, and a comparison was made between the actual data transferred and the transferred data as measured by Amazon EC2. As an example, for the installation of a random application, the application's packages were installed on both the local machine of our implementation and the CentOS 7.2 instance on Amazon EC2, as illustrated in Fig. 24. CloudWatch, Amazon's monitoring tool, was used to measure the volume of the transferred data every hour, while the utilization report from our local machine was used on the other side.

Fig. 24 Measuring the data transmission between the implemented ad-hoc cloud system (via the utilization report) and the Amazon EC2 platform (via CloudWatch)

Deep steganography performance results
As shown in Fig. 25, the enhancement level of the residual image can be increased by five, ten, or twenty times.

• First row: the residue closely matching the 'Cover Image' (at 20x).
• Second row: a combination result of both types of images (cover and hidden).
• Third row: aspects of the 'Secret Image' are exposed.

The comparison can only be done against an original image that has not been changed. One standard way to further hide a 'Secret Image' once it has been placed in a 'Cover Image' is to use cryptographic encodings. As an additional option, we show how to change the network's error function: the main concern was the correlation between the cover image's residual and the secret image, corr(Rc, S), where Rc = ||C − C′|| and S denotes the 'Secret Image'. Several distinct definitions of this term were evaluated; in the results given in Fig. 25, the scale was assessed to be (0.15 × number of channels × number of pixels).
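The residual and its correlation penalty can be sketched as follows. This is our reading of corr(Rc, S) with the quoted scale, not the authors' released code; all arrays are assumed to be same-shaped NumPy images in [0, 1].

```python
import numpy as np

def residual(container, cover, gain=5):
    """Residual image |container - cover|, enhanced 5x/10x/20x for display."""
    return np.clip(np.abs(container - cover) * gain, 0.0, 1.0)

def corr_penalty(container, cover, secret, scale=0.15):
    """Penalize correlation between the cover residual Rc and the secret S."""
    rc = (container - cover).ravel()
    s = secret.ravel()
    corr = np.corrcoef(rc, s)[0, 1]
    return scale * secret.size * abs(corr)   # scale * channels * pixels
```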
Fig. 25 A depiction of the residual image computation achievable in case of container image substitution during original image leakage, as shown in columns 3, 4, and 5; the enhancement output can be seen in the 3rd, 4th, and 5th rows, with residual-image enhancement rates of 5x, 10x, and 15x

Even with 20x residual image enhancement, various 'Secret Image' characteristics were eliminated; this was accomplished by minimizing the residual correlation only with encrypted images. Because of their strength and sturdiness, the reconstructions' integrity degraded in a few random places. The objective of this section was to demonstrate that a significant data volume can be encoded in a single image while still leaving several discernible artifacts. Despite this, no effort was made here to conceal the existence of this data from machine detection. Since up to the majority of a document's contents can be hidden messages, numerous measures can be used to make them more difficult to decode. In order for the network to conceal the existence of the secret image, it was necessary to determine the location of the hidden image's data; inconsequential cover image fragments were insufficient to disguise the secret image's existence, and the LSBs may include data that can be revealed using a variety of methods. StegExpose, a free steganography toolbox, was used to check whether the hidden images could be found.

Testing the deep steganography performance towards multiple attacks
Fig. 26 The bit error rate (BER) results for different image rotation degrees

In the case of access by unauthorized users, the geometric attack poses a high risk to watermarked images; the danger of these attacks is their ability to alter both the image's data and its features, outputting a deformed version. In this study, both rotation attacks and cropping attacks were considered. Harmonic transformation was mainly utilized for image rotation, producing pixel displacement in either a clockwise or anti-clockwise motion; a rotation attack can deform an image, leaving edge pixels arranged in a triangular pattern. The effects of rotation attacks on watermarked images can be simulated based on the image's rotation degree, as illustrated in Fig. 26.

From the resultant output, it is possible to state that the watermark's recoverability relies on the rotation range. For instance, the implemented technique has a BER lower than 10% when the rotation degree is 50°. In Fig. 27, the 'Pyramid' image is used to compare the normalized cross-correlation (NCC) of the proposed approach, illustrating the rotation attack outcome at each angle. The proposed methodology can attain high performance against rotation attacks thanks to the arbitrary block and selection parameters; the random selection role ensures robustness in case any type of modification is made to the rotation of the watermarked images. Table 3 below presents the output rotation results based on both the image's attacks and the watermark extraction; where the metric results are presented genuinely, there is a 15° rotation for multiple grayscale images.

A cropping attack is a type of attack that often replaces image portions (e.g., a square, circle, or rectangle) with white/dark pixels [102]; the cropped portion can range from 1% up to 100% of the image. For a simulation test, some random crops were made to the 'Pyramid' image at various parts; this resulted in a high extraction quality from the cropped images. Regarding rotation, as previously stated, the best-case performance of the implemented technique concerning cropping attacks for embedding is due to both the arbitrary block and the selection parameters. The 'Pyramid' image was used to test the implemented technique's efficiency. During the transmission phase of the watermarked images, much multiplicative noise would be collected all over the image; to evaluate the implemented approach's performance, several watermarked images were tested using multiple noise attacks with multiple densities, reporting both the output watermarks and the NCC/BER. The images were evaluated under the case (variance = 0.005, mean = 0, noise density = 0.05).

This study shows a high resistance level towards noise attacks, along with a high robustness level towards histogram attacks; some performance limits appear with lower results (e.g., Gaussian noise). Correspondingly, as illustrated in Fig. 27, the resultant output of the proposed approach when evaluated against several types of attacks was compared with [89, 103, 104].
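The two reported metrics can be computed as below; these are the standard definitions of BER and NCC as used for Figs. 26 and 27, in our own illustrative code.

```python
import numpy as np

def bit_error_rate(original_bits, extracted_bits):
    """Fraction of watermark bits flipped by the attack."""
    a = np.asarray(original_bits, dtype=np.uint8)
    b = np.asarray(extracted_bits, dtype=np.uint8)
    return float(np.mean(a != b))

def ncc(original, extracted):
    """Normalized cross-correlation between two watermarks."""
    a = np.asarray(original, dtype=float).ravel()
    b = np.asarray(extracted, dtype=float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```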
Fig. 27 The evaluation of the 'Pyramid' image: a comparing the bit error rate for grayscale, and b comparing the normalized cross-correlation for color scale, against the studies in [89, 103, 104]

Table 3 The output watermarks of both a) images rotated by 20° and b) various cropped lion images

Compared with other steganography methods, the proposed steganography approach provides high output results, with the ability to hide data or an image through the use of deep learning [105]; it was designed from three main networks (Preparation Network, Hiding Network, and Reveal Network), unlike many of these approaches. However, the following steps should be followed to enhance the system's robustness:

• After concealing the hidden image, the pixels should be permuted (in place) using one of M agreed-on techniques; the system subsequently hides the permuted secret image as well as the key (an index into M) — see the sketch after this list.
• The lack of structural configuration throughout the residuals makes recovery significantly more challenging, unless the original image is accessible.
• Permutation keys are required for this method to work properly (though a key can be sent reliably in only a few bytes).
• There were various complications in transmitting a concatenated 'Secret Image', which increases the probability of reconstruction errors across the system.
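The first step in this list, an in-place keyed permutation of the secret image's pixels, could be sketched as below. This is illustrative only; the integer key stands in for "an index into M".

```python
import numpy as np

def permute_pixels(image, key: int):
    """Shuffle pixel positions with a keyed PRNG before hiding the image."""
    flat = image.reshape(-1, image.shape[-1])           # (H*W, C) pixels
    order = np.random.default_rng(key).permutation(len(flat))
    return flat[order].reshape(image.shape), order

def unpermute_pixels(image, order):
    """Invert the permutation after the secret image is revealed."""
    flat = image.reshape(-1, image.shape[-1])
    restored = np.empty_like(flat)
    restored[order] = flat                              # undo the shuffle
    return restored.reshape(image.shape)
```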
Study conclusions
This paper has discussed data security in ad-hoc cloud systems through an enhanced steganography approach using deep learning. The 'Ad-hoc Cloud System' platform idea, along with its deployment approach, was proposed in this work: an end-user's hardware is leveraged to launch cloud features on an irregular basis. V-BOINC is an open-source tool that allows developers to bypass application-level security checkpoints by solely using the V-BOINC VMs. The ad-hoc cloud approach can help enhance network performance and utilization while lowering costs. Secondly, this research expands the study of steganography and the utilization of pertinent data in images through deep learning. Prior attempts to employ machine-learning models to supplement or replace an image-hiding scheme have largely failed. A fully trainable deep learning system consisting of three networks was created, which seamlessly inserts a color image into another one; the scheme can be designed to insert data or another image. When implementing a steganography technique through a deep learning approach, the message must be hidden from statistical analysis; this would require an additional training target, and possibly embedding tiny images beneath many 'Cover Images'. No image-loss sources would be re-used with the suggested
extracted features. The re-training phase of the already-trained networks is a required part of this strategy; since hidden systems would no longer take advantage of the local structure of the concealed image, the rest of the system should be retained. Using steganography and deep learning techniques for hiding additional data in photographs, as in the proposed solution, has never been more accessible. More than one previously proposed technique has attempted to use neural networks as a replacement for some small component of an image-hiding network. It has been shown that a completely resilient system, producing visually good performance when placing a full-size color image inside another picture, can be created. This was discussed in terms of graphics, but the same system can be trained to embed text or an image as well. The project's potential for growth is large, in both the short and long terms; the following three directions are listed by priority level.

a) In order to create a holistic steganographic scheme, the methodology of concealing the message's existence from the statistical analyzer must be addressed, as it most likely demands a new training objective, in addition to a technique for encoding a small image in a larger 'Cover Image'.
b) The proposed embeddings addressed in this study were not planned for use with lossy image files; if lossy encodings (e.g., JPEG, as opposed to lossless formats such as BMP or PNG) were needed, there is a possibility of working directly with the DCT coefficients rather than the spatial domain.
c) The SSE error unit was utilized for the networks' training; however, error units associated with SSIM could be investigated instead.

Future works
The proposed study has provided highly positive results; there is considerable room for improvement in future studies regarding the previously mentioned limitations in section 6. The reliability issue represents a critical point to be discussed as a separate study, because resource load capabilities were not included in measuring the reliability features [107]. For instance, the ad-hoc cloud system's reliability computations were saved within the VM Service project database; these computations provide the needed estimations of the 'Ad-hoc Cloud Host' behavior. However, there is a possibility that the reliability computations could be modified for host reliability measurement (e.g., the host's weekly/monthly patterns might be utilized for allocating a 'Cloud Job'); it is left for future studies to investigate new approaches for solving such issues [108]. Further future studies could be conducted to see whether the methods discussed in the presented work can be adapted to test the level of impact in the case of a cyberattack incident within a cloud environment. The hidden image's presence (but not its specific composition) could be accurately detected; compared with the 'Cover Image' information, this went significantly beyond a state-of-the-art framework [109]. Unless the cover image's residual has a low enough correlation with the concealed image, it becomes harder to decipher its elements [110]. Hence, future studies could focus on providing an effective two-factor encryption mechanism within a normal public cloud as a start.

Authors' contributions
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Ahmed A. Mawgoud, Amr Abu-Talleb and Mohamed Hamed N. Taha. The first draft of the manuscript was written by Ahmed A. Mawgoud under the supervision of Amira Kotb. All authors read and approved the final manuscript.

Funding
This research received no external funding. Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).

Availability of data and materials
The data that support the findings of this study are available from the corresponding author, Ahmed A. Mawgoud, upon reasonable request.

Declarations

Ethics approval and consent to participate
This material is the authors' own original work, which has not been previously published elsewhere. The paper is not currently being considered for publication elsewhere.

Received: 31 March 2022 Accepted: 1 October 2022

References
1. Iivari N, Sharma S, Ventä-Olkkonen L (2020) Digital transformation of everyday life–how COVID-19 pandemic transformed the basic education of the young generation and why information management research should care? Int J Inf Manag 55:102183
2. Mollah MB, Azad MAK, Vasilakos A (2017) Security and privacy challenges in mobile cloud computing: survey and way ahead. J Netw Comput Appl 84:38–54
3. Grnarov A, Cilku B, Miskovski I, Filiposka S, Trajanov D (2008) Grid computing implementation in ad hoc networks. In: Advances in computer and information sciences and engineering. Springer, Dordrecht, pp 196–201
4. McGilvary GA, Barker A, Atkinson M (2015) Ad hoc cloud computing. In: 2015 IEEE 8th international conference on cloud computing. IEEE, pp 1063–1068
5. Buyya R, Beloglazov A, Abawajy J (2010) Energy-efficient management of data center resources for cloud computing: a vision, architectural elements, and open challenges. arXiv preprint arXiv:1006.0308
6. Tian LQ, Lin C, Ni Y (2010) Evaluation of user behavior trust in cloud computing. In: 2010 international conference on computer application and system modeling (ICCASM 2010), vol 7. IEEE, pp V7–V567
7. Mawgoud AA, Taha MHN, Kotb A (2022) Steganography adaptation model for data security enhancement in ad-hoc cloud based V-BOINC through deep learning. In: International conference on advanced machine learning technologies and applications. Springer, Cham, pp 68–77
8. Mengistu TM, Alahmadi AM, Alsenani Y, Albuali A, Che D (2018) Cucloud: volunteer computing as a service (vcaas) system. In: International conference on cloud computing. Springer, Cham, pp 251–264
9. Kahn D (1996) The history of steganography. In: International workshop on information hiding. Springer, Berlin, Heidelberg, pp 1–5
10. Younes MAB, Jantan A (2008) A new steganography approach for images encryption exchange by using the least significant bit insertion. Int J Comput Sci Network Secur 8(6):247–257
11. Pradhan A, Sahu AK, Swain G, Sekhar KR (2016) Performance evaluation parameters of image steganography techniques. In: 2016 international conference on research advances in integrated navigation systems (RAINS). IEEE, pp 1–8
12. Mawgoud AA (2020) A survey on ad-hoc cloud computing challenges. In: 2020 international conference on innovative trends in communication and computer engineering (ITCE). IEEE, pp 14–19
13. El Karadawy AI, Mawgoud AA, Rady HM (2020) An empirical analysis on load balancing and service broker techniques using cloud analyst simulator. In: 2020 international conference on innovative trends in communication and computer engineering (ITCE). IEEE, pp 27–32
14. Liu Y, Wang L, Wang XV, Xu X, Jiang P (2019) Cloud manufacturing: key issues and future perspectives. Int J Comput Integr Manuf 32(9):858–874
15. El-Rahman SA (2018) A comparative analysis of image steganography based on DCT algorithm and steganography tool to hide nuclear reactors confidential information. Comput Electr Eng 70:380–399
16. Cheddad A, Condell J, Curran K, Mc Kevitt P (2010) Digital image steganography: survey and analysis of current methods. Signal Process 90(3):727–752
17. Fridrich J, Pevný T, Kodovský J (2007) Statistically undetectable jpeg steganography: dead-ends, challenges, and opportunities. In: Proceedings of the 9th workshop on multimedia and security, pp 3–14
18. Thangadurai K, Devi GS (2014) An analysis of LSB based image steganography techniques. In: 2014 international conference on computer communication and informatics. IEEE, pp 1–4
19. Marwaha P, Marwaha P (2010) Visual cryptographic steganography in images. In: 2010 second international conference on computing, communication and networking technologies. IEEE, pp 1–6
20. Luo X, Song X, Li X, Zhang W, Lu J, Yang C, Liu F (2016) Steganalysis of HUGO steganography based on parameter recognition of syndrome-trellis-codes. Multimed Tools Appl 75(21):13557–13583
21. Xiang L, Guo G, Yu J, Sheng VS, Yang P (2020) A convolutional neural network-based linguistic steganalysis for synonym substitution steganography. Math Biosci Eng 17(2):1041–1058
22. Al Mamun MA, Anam K, Onik MFA, Esfar-E-Alam AM (2012) Deployment of cloud computing into vanet to create ad hoc cloud network architecture. In: Proceedings of the world congress on engineering and computer science, vol 1, pp 24–26
23. Alsenani Y, Crosby G, Velasco T (2018) SaRa: a stochastic model to estimate reliability of edge resources in volunteer cloud. In: 2018 IEEE international conference on EDGE computing (EDGE). IEEE, pp 121–124
24. Kirby G, Dearle A, Macdonald A, Fernandes A (2010) An approach to ad hoc cloud computing. arXiv preprint arXiv:1002.4738
25. Shila DM, Shen W, Cheng Y, Tian X, Shen XS (2016) AMCloud: toward a secure autonomic mobile ad hoc cloud computing system. IEEE Wirel Commun 24(2):74–81
26. Chaumont M (2020) Deep learning in steganography and steganalysis. In: Digital media steganography. Academic Press, pp 321–349
27. Chandra A, Weissman J, Heintz B (2013) Decentralized edge clouds. IEEE Internet Comput 17(5):70–73
28. Jonathan A, Ryden M, Oh K, Chandra A, Weissman J (2017) Nebula: distributed edge cloud for data intensive computing. IEEE Transact Parallel Distributed Syst 28(11):3229–3242
29. Oh K, Zhang M, Chandra A, Weissman J (2021) Network cost-aware geo-distributed data analytics system. IEEE Transact Parallel Distributed Syst 33(6):1407–1420
30. Weissman JB, Sundarrajan P, Gupta A, Ryden M, Nair R, Chandra A (2011) Early experience with the distributed nebula cloud. In: Proceedings of the fourth international workshop on data-intensive distributed computing, pp 17–26
31. Duan X, Guo D, Liu N, Li B, Gou M, Qin C (2020) A new high capacity image steganography method combined with image elliptic curve cryptography and deep neural network. IEEE Access 8:25777–25788
32. Yi S, Kondo D, Andrzejak A (2010) Reducing costs of spot instances via checkpointing in the amazon elastic compute cloud. In: 2010 IEEE 3rd international conference on cloud computing. IEEE, pp 236–243
33. Hu D, Wang L, Jiang W, Zheng S, Li B (2018) A novel image steganography method via deep convolutional generative adversarial networks. IEEE Access 6:38303–38314
34. Mori T, Nakashima M, Ito T (2012) SpACCE: a sophisticated ad hoc cloud computing environment built by server migration to facilitate distributed collaboration. Int J Space Based Situated Comput 2(4):230–239
35. Zhang C, Benz P, Karjauv A, Sun G, Kweon IS (2020) Udh: universal deep hiding for steganography, watermarking, and light field messaging. Adv Neural Inf Proces Syst 33:10223–10234
36. Wu Y, Cao J, Li M (2011) Private cloud system based on BOINC with support for parallel and distributed simulation. In: 2011 IEEE 9th international conference on dependable, autonomic and secure computing. IEEE, pp 1172–1178
37. Girardin CAJ et al (2014) Productivity and carbon allocation in a tropical montane cloud forest in the Peruvian Andes. Plant Ecol Diver 7(1–2):107–123
38. Mao Y, You C, Zhang J, Huang K, Letaief KB (2017) Mobile edge computing: survey and research outlook. arXiv preprint arXiv:1701.01090
39. Toh CK (2001) Maximum battery life routing to support ubiquitous mobile computing in wireless ad hoc networks. IEEE Commun Mag 39(6):138–147
40. Wood T, Ramakrishnan KK, Shenoy P, Van der Merwe J (2011) CloudNet: dynamic pooling of cloud resources by live WAN migration of virtual machines. ACM SIGPLAN Not 46(7):121–132
41. Singhal A, Pallav P, Kejriwal N, Choudhury S, Kumar S, Sinha R (2017) Managing a fleet of autonomous mobile robots (AMR) using cloud robotics platform. In: 2017 European conference on mobile robots (ECMR). IEEE, pp 1–6
42. Wengrowski E, Dana K (2019) Light field messaging with deep photographic steganography. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 1515–1524
43. Satyanarayanan M, Schuster R, Ebling M, Fettweis G, Flinck H, Joshi K, Sabnani K (2015) An open ecosystem for mobile-cloud convergence. IEEE Commun Mag 53(3):63–70
44. Aleem A, Sprott CR (2013) Let me in the cloud: analysis of the benefit and risk assessment of cloud platform. J Financial Crime 20(1):6–24. https://doi.org/10.1108/13590791311287337
45. Mawgoud A, Hamed N, Taha M, Eldeen M, Khalifa N (2020) QoS provision for controlling energy consumption in ad-hoc wireless sensor networks. ICIC Express Lett 14(8):761–767
46. Suryateja PS (2018) Threats and vulnerabilities of cloud computing: a review. Int J Comput Sci Eng 6(3):297–302
47. Ge H, Huang M, Wang Q (2011) Steganography and steganalysis based on digital image. In: 2011 4th international congress on image and signal processing, vol 1. IEEE, pp 252–255
48. Canziani A, Paszke A, Culurciello E (2016) An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678
49. Abdullah AM, Aziz RHH (2016) New approaches to encrypt and decrypt data in image using cryptography and steganography algorithm. Int J Comput Appl 143(4):11–17
50. Kini NG, Kini VG (2019) A secured steganography algorithm for hiding an image in an image. In: Integrated intelligent computing, communication and security. Springer, Singapore, pp 539–546
51. Manisha S, Sharmila TS (2019) A two-level secure data hiding algorithm for video steganography. Multidim Syst Sign Process 30(2):529–542
52. Montes D, Añel JA, Pena TF, Uhe P, Wallom DC (2017) Enabling BOINC in infrastructure as a service cloud system. Geosci Model Dev 10(2):811–826
53. Zhang F, Wang MM, Deng R, You X (2021) QoS optimization for mobile ad hoc cloud: a multi-agent independent learning approach. IEEE Transactions on Vehicular Technology
54. Sun D, Zhao H, Cheng S (2016) A novel membership cloud model-based trust evaluation model for vehicular ad hoc network of T-CPS. Secur Commun Networks 9(18):5710–5723
55. Mengistu TM, Che D (2019) Survey and taxonomy of volunteer computing. ACM Comput Surv 52(3):1–35
56. Mbongue JM, Hategekimana F, Kwadjo DT, Bobda C (2018) Fpga virtualization in cloud-based infrastructures over virtio. In: 2018 IEEE 36th international conference on computer design (ICCD). IEEE, pp 242–245
57. Arunarani AR, Manjula D, Sugumaran V (2019) Task scheduling techniques in cloud computing: a literature survey. Futur Gener Comput Syst 91:407–415
58. Liu S, Guo L, Webb H, Ya X, Chang X (2019) Internet of things monitoring system of modern eco-agriculture based on cloud computing. IEEE Access 7:37050–37058
59. Barik RK, Lenka RK, Dubey H, Mankodiya K (2018) Tcloud: cloud SDI model for tourism information infrastructure management. In: GIS applications in the tourism and hospitality industry. IGI Global, pp 116–144
60. Gong S, Yin B, Zheng Z, Cai KY (2019) Adaptive multivariable control for multiple resource allocation of service-based systems in cloud computing. IEEE Access 7:13817–13831
61. Arora R, Redondo C, Joshua G (2018) Scalable software infrastructure for integrating supercomputing with volunteer computing and cloud computing. In: Workshop on software challenges to Exascale computing. Springer, Singapore, pp 105–119
62. El-Moursy A, Abdelsamea A, Kamran R, Saad M (2019) Multi-dimensional regression host utilization algorithm (MDRHU) for host overload detection in cloud computing. J Cloud Comput 8(1):1–17
63. John J, Norman J (2019) Major vulnerabilities and their prevention methods in cloud computing. In: Advances in big data and cloud computing. Springer, Singapore, pp 11–26
64. Kiss T et al (2019) MiCADO—microservice-based cloud application-level dynamic orchestrator. Futur Gener Comput Syst 94:937–946
65. Taylor SJ, Kiss T, Anagnostou A, Terstyanszky G, Kacsuk P, Costes J, Fantini N (2018) The CloudSME simulation platform and its applications: a generic multi-cloud platform for developing and executing commercial cloud-based simulations. Futur Gener Comput Syst 88:524–539
66. Larsen PG et al (2020) A cloud-based collaboration platform for model-based design of cyber-physical systems. arXiv preprint arXiv:2005.02449
67. Anderson DP (2020) BOINC: a platform for volunteer computing. J Grid Comput 18(1):99–122
68. McGilvary GA, Barker A, Lloyd A, Atkinson M (2013) V-boinc: the virtualization of boinc. In: 2013 13th IEEE/ACM international symposium on cluster, cloud, and grid computing. IEEE, pp 285–293
69. Alsenani Y, Crosby GV, Velasco T, Alahmadi A (2018) ReMot reputation and resource-based model to estimate the reliability of the host machines in volunteer cloud environment. In: 2018 IEEE 6th international conference on future internet of things and cloud (FiCloud). IEEE, pp 63–70
70. Pretz JE, Link JA (2008) The creative task Creator: a tool for the generation of customized, web-based creativity tasks. Behav Res Methods 40(4):1129–1133
71. Do Q, Martini B, Choo K-KR (2015) A cloud-focused mobile forensics methodology. IEEE Cloud Comput 2(4):60–65. https://doi.org/10.1109/MCC.2015.71
72. Bharathi PD, Prakash P, Kiran MVK (2017) Energy efficient strategy for task allocation and VM placement in cloud environment. In: 2017 innovations in power and advanced computing technologies (i-PACT). IEEE, pp 1–6
73. Yaqoob I, Ahmed E, Gani A, Mokhtar S, Imran M, Guizani S (2016) Mobile ad hoc cloud: a survey. Wirel Commun Mob Comput 16(16):2572–2589
74. Nie J, Zhang Z, Liu Y, Gao H, Xu F, Shi W (2019) Point cloud ridge-valley feature enhancement based on position and normal guidance. arXiv preprint arXiv:1910.04942
75. Kim N, Cho J, Seo E (2014) Energy-credit scheduler: an energy-aware virtual machine scheduler for cloud systems. Futur Gener Comput Syst 32:128–137
76. Zhu QH, Tang H, Huang JJ, Hou Y (2021) Task scheduling for multi-cloud computing subject to security and reliability constraints. IEEE/CAA J Automat Sin 8(4):848–865
77. Jeyalaksshmi S, Nidhya MS, Suseendran G, Pal S, Akila D (2021) Developing mapping and allotment in volunteer cloud systems using reliability profile algorithms in a virtual machine. In: 2021 2nd international conference on computation, automation and knowledge management (ICCAKM). IEEE, pp 97–101
78. Tang X (2021) Reliability-aware cost-efficient scientific workflows scheduling strategy on multi-cloud systems. IEEE Transactions on Cloud Computing
79. Li XY, Liu Y, Lin YH, Xiao LH, Zio E, Kang R (2021) A generalized petri net-based modeling framework for service reliability evaluation and management of cloud data centers. Reliab Eng Syst Saf 207:107381
80. Wang LC, Chen CC, Liu JL, Chu PC (2021) Framework and deployment of a cloud-based advanced planning and scheduling system. Robot Comput Integr Manuf 70:102088
81. Nanjappan M, Natesan G, Krishnadoss P (2021) An adaptive neuro-fuzzy inference system and black widow optimization approach for optimal resource utilization and task scheduling in a cloud environment. Wirel Pers Commun 121(3):1891–1916
82. Lakhan A, Mastoi QUA, Elhoseny M, Memon MS, Mohammed MA (2021) Deep neural network-based application partitioning and scheduling for hospitals and medical enterprises using IoT assisted mobile fog cloud. Enterprise Information Systems, pp 1–23
83. Kristiani E, Yang CT, Huang CY, Wang YT, Ko PC (2021) The implementation of a cloud-edge computing architecture using OpenStack and Kubernetes for air quality monitoring application. Mobile Networks Appl 26(3):1070–1092
84. Massie M, Li B, Nicholes B, Vuksan V, Alexander R, Buchbinder J et al (2012) Monitoring with ganglia: tracking dynamic host and application metrics at scale. O'Reilly Media, Inc.
85. Fatema K, Emeakaroha VC, Healy PD, Morrison JP, Lynn T (2014) A survey of cloud monitoring tools: taxonomy, capabilities and objectives. J Parallel Distributed Comput 74(10):2918–2933
86. Pippal SK, Kushwaha DS (2013) A simple, adaptable and efficient heterogeneous multi-tenant database architecture for ad hoc cloud. J Cloud Comput Adv Syst Appl 2(1):1–14
87. Jonas E, Schleier-Smith J, Sreekanti V, Tsai CC, Khandelwal A, Pu Q, Patterson DA et al (2019) Cloud programming simplified: a Berkeley view on serverless computing. arXiv preprint arXiv:1902.03383
88. Kich I, El Bachir Ameur Y, Benhfid A (2020) Image steganography by deep CNN auto-encoder networks. Int J 9:4707–4716. https://doi.org/10.30534/ijatcse/2020/75942020
89. Wu P, Yang Y, Li X (2018) Stegnet: mega image steganography capacity with deep convolutional network. Future Internet 10(6):54
90. Wang Z, Gao N, Wang X, Qu X, Li L (2018) SSteGAN: self-learning steganography based on generative adversarial networks. In: International conference on neural information processing. Springer, Cham, pp 253–264
91. Yang ZL, Zhang SY, Hu YT, Hu ZW, Huang YF (2020) VAE-Stega: linguistic steganography based on variational auto-encoder. IEEE Transact Inform Forensics Secur 16:880–895
92. ROG – Republic of Gamers Global (2022) strix gl702. Available at: https://rog.asus.com/tag/strix-gl702/. Accessed 20 June 2021
93. Boinc.berkeley.edu (2022) Windows client 7.16.20 released. Available at: https://boinc.berkeley.edu/forum_thread.php?id=14437. Accessed 31 May 2021
94. Docs.vmware.com (2022) VMware Workstation 15.5.2 Pro Release Notes. Available at: https://docs.vmware.com/en/VMware-Workstation-Pro/15.5/rn/VMware-Workstation-1552-Pro-Release-Notes.html. Accessed 15 Jul 2021
95. Zhang Z (2018) Improved Adam optimizer for deep neural networks. In: 2018 IEEE/ACM 26th international symposium on quality of service (IWQoS). IEEE, pp 1–2
96. Yang K, Qinami K, Fei-Fei L, Deng J, Russakovsky O (2020) Towards fairer datasets: filtering and balancing the distribution of the people subtree in the imagenet hierarchy. In: Proceedings of the 2020 conference on fairness, accountability, and transparency, pp 547–558
97. Hanif MA, Khalid F, Putra RVW, Rehman S, Shafique M (2018) Robust machine learning systems: reliability and security for deep neural networks. In: 2018 IEEE 24th international symposium on on-line testing and robust system design (IOLTS). IEEE, pp 257–260
98. Mehdi H, Mureed H (2013) A survey of image steganography techniques. Int J Adv Sci Technol 54(3):113–124. https://doi.org/10.1109/CICT.2016.34
99. Goel S, Rana A, Kaur M (2013) A review of comparison techniques of image steganography. Glob J Comput Sci Technol
100. Agmon Ben-Yehuda O, Ben-Yehuda M, Schuster A, Tsafrir D (2013) Deconstructing Amazon EC2 spot instance pricing. ACM Transactions on Economics and Computation (TEAC) 1(3):1–20
101. Forums.centos.org (2022) CentOS 7.2 linux-atm rpm i386 – CentOS. Available at: https://forums.centos.org/viewtopic.php?t=57778. Accessed 12 March 2021
102. Jang HU, Choi HY, Son J, Kim D, Hou JU, Choi S, Lee HK (2018) Cropping-resilient 3D mesh watermarking based on consistent segmentation and mesh steganalysis. Multimed Tools Appl 77(5):5685–5712
103. Kim DH, Lee HY (2017) Deep learning-based steganalysis against spatial domain steganography. In: 2017 European conference on electrical engineering and computer science (EECS). IEEE, pp 1–4
104. Ye J, Ni J, Yi Y (2017) Deep learning hierarchical representations for image steganalysis. IEEE Transactions on Information Forensics and Security 12(11):2545–2557
105. Wang C, Zhang Y, Zhou X (2018) Robust image watermarking algorithm based on ASIFT against geometric attacks. Applied Sciences 8(3):410. https://doi.org/10.3390/app8030410
106. Mittal N, Sharma D, Joshi ML (2018) Image sentiment analysis using deep learning. In: 2018 IEEE/WIC/ACM international conference on web intelligence (WI). IEEE, pp 684–687
107. Huang F, Li B, Huang J (2007) Attack LSB matching steganography by counting alteration rate of the number of neighbourhood gray levels. In: 2007 IEEE international conference on image processing, vol 1. IEEE, pp I-401
108. Tianze L, Muqing W, Min Z, Wenxing L (2017) An overhead-optimizing task scheduling strategy for ad-hoc based mobile edge computing. IEEE Access 5:5609–5622
109. Johnson NF, Duric Z, Jajodia S (2001) Information hiding: steganography and watermarking – attacks and countermeasures, vol 1. Springer Science and Business Media
110. Li S, Xue M, Zhao BZH, Zhu H, Zhang X (2020) Invisible backdoor attacks on deep neural networks via steganography and regularization. IEEE Transact Dependable Secure Comput 18(5):2088–2082
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in pub‑
lished maps and institutional affiliations.