
ABSTRACT

Encryption algorithms play a key role in guaranteeing information security in today's constantly
growing internet and network applications. In this thesis, two symmetric key encryption
techniques, AES and BLOWFISH, are considered, and their encryption speed, throughput, and power
consumption are compared in order to assess how well they perform. The simulation findings
show that Blowfish outperforms AES and, since it has no known security weaknesses, it can
be considered a good standard encryption technique.
It was observed that, because the AES algorithm requires more processing resources, it runs slower
than BLOWFISH and achieves worse performance outcomes than the BLOWFISH approach.
However, combining the two methods as a hybrid technique ends up being
advantageous for an encryption system.
Moreover, the symmetric key algorithms AES and BLOWFISH are incredibly quick and powerful.
With the large block size of AES and the use of Blowfish to encrypt keys, AES security becomes
significantly stronger and more difficult to attack.

Keywords: Encryption, AES, BLOWFISH, Simulation, Algorithm


CHAPTER 1
INTRODUCTION

People's lives have been influenced by digital technologies. To meet their memory requirements,
the majority of these digital devices rely on cloud storage. Hundreds of thousands of
photographs, movies, and audio recordings are being stored in the cloud. Thousands of people
access this media every second all across the world.
The cloud has evolved into a data storage platform as a result of widespread acceptance of cloud
computing in our daily lives. Data security is one of the major impediments to cloud
adoption, particularly among business users. One of these subjects is cloud security, which
encompasses aspects such as technology, control, and a set of policies that aid in the protection
of sensitive data (James & Girish Tere, 2018).
Although a single strong cipher such as AES may be considered sufficient for data security, there
is still a theoretical concern about trusting AES's static S-Box components. To deal with this
issue, chaining many ciphers has been suggested to reduce the chances of the secret data
being compromised (James & Girish Tere, 2018).
The main cloud computing components include virtualization, multi-tenancy cloud storage, and a
cloud network, which is why this strategy is recommended (Keijo et al., 2014).
Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service
(IaaS) are the three service models utilized by cloud service providers to deliver cloud services to
their customers. There are four different types of cloud deployment methods (public,
private, hybrid, and community) that can be used to satisfy the needs of cloud users (James &
Girish Tere, 2018).
Cloud security issues are classified into the categories of computing security, data storage,
virtualization security, internet services-related security, network security, access control,
software security, trust management, and legal security and compliance difficulties.
A list of criteria for cloud security is provided by the National Institute of Standards and
Technology. The Cloud Security Alliance (CSA) has stressed the same point in its dissemination
of legislation for cloud security standards i.e., authentication, confidentiality, integrity, and
availability (Keijo et al., 2014).
1.1 Statement of the Problem
Aside from all the services offered by cloud computing, it must also contend with the various threats and
attacks that could have an indirect or direct influence on a system.
Data integrity, source availability, and confidentiality of the cloud infrastructure as a whole, as
well as specific layers and their services, are all covered by cloud security. Another aspect to
consider is that encryption and decryption are resource-intensive activities that consume a
significant amount of CPU, memory, and time.
Another consideration is that not all data is created equal. While certain data must be kept
private, others, such as material in public training manuals, advertising media, and so on, can be
left unencrypted without causing end-users any problems. In terms of the level of protection
required, the remaining data is also not uniform.
It can be split and encrypted further using several levels of encryption, with greater encryption
designated for more sensitive information.
1.2 Aim of the Study
• Combining the AES and Blowfish encryption and decryption techniques to improve cloud
data security.
• In the hybrid approach, AES-256 will be used as the first layer (representing the first group
of encryptions), followed by Blowfish as the second layer (representing the second group of
encryptions).

Each algorithm is combined with AES-256, and the execution times of all the encryption and
decryption operations are calculated. All of these results are then compared against AES-256
alone to see which approach has the quickest execution time.
AES-256 and Blowfish have been allocated to Level 3 in the proposed scheme. AES
encryption and decryption, as well as AES-Blowfish cascade encryption and decryption, are
discussed because AES-256 and Blowfish have the fastest execution times.
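
As a hedged illustration of how such execution times can be measured in Java (the cipher object and input data are assumed to be prepared elsewhere; this helper is illustrative and not part of the thesis code):

// Times one encryption call and reports milliseconds; 'cipher' is an initialized javax.crypto.Cipher.
static long timeEncryptionMillis(javax.crypto.Cipher cipher, byte[] data) throws Exception {
    long start = System.nanoTime();                   // start of the measured interval
    cipher.doFinal(data);                             // the encryption operation being measured
    return (System.nanoTime() - start) / 1_000_000;   // elapsed time in milliseconds
}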

1.3 Research Questions

The goal of this thesis is to address two concerns that users face when using cloud computing
services. The first is users' fears of internal and external hacking threats. The other is that
encrypting all data without considering its level of confidentiality is impractical.
The following questions must be answered in order for our research to be relevant to our goals.
• Despite all of the advantages that cloud storage services provide, what are the restrictions
and how do they affect the users?
• Are the AES-256 and blowfish ciphers suitable for user protection in the hybrid
approach?
• Which of the AES-256 and blowfish ciphers is the more efficient?

1.4 Research Hypothesis

Based on the above-mentioned research aims, the following hypothesis was developed.
H1: The hybrid strategy for AES-256 and Blowfish cipher proves to be more accurate than
anticipated.
H2: Blowfish cipher is more efficient than AES-256 cipher.
H3: The hybrid strategy was not the best option.
1.5 Dissertation Outline

This research is divided into five sections, which are followed by references and an appendix
section.
Chapter 1: The study's background, research topic, objective and goals, research questions, and
limitations were all discussed.
Chapter 2: A literature review is provided, as well as a theoretical basis, an empirical
assessment, a discussion of research gaps, and a description of the methodological approach.
Chapter 3: Research technique was discussed, including research design, sample design, target
population, and sample selection justification, data collection tools, questionnaires, the validity
and reliability of the research instrument, data interpretation, and ethical considerations.
Chapter 4: The study's findings, interpretation of the findings, conclusions, and discussion
were all presented.
Chapter 5: The study's findings and recommendations were summarized.
CHAPTER 2
LITERATURE REVIEW
A crucial distinguishing feature of a thriving information technology (IT) sector is its capacity to
contribute to cyber infrastructure in a truthful, valuable, and cost-effective manner. Cloud
computing is a broad term for a type of network-based computing in which a program or
application runs on a linked server or servers rather than on a local computing device like a PC,
tablet, or smartphone. Cloud computing is a distributed architecture for providing on-demand
computer resources and services by centralizing server resources on a scalable platform.
It is a configurable pool of computing resources that everyone can use. Cloud computing allows customers to
access services on demand (Keijo et al., 2014).
For decades, businesses have tried to preserve and safeguard data in order to protect their clients'
personal information. Cloud computing was created by businesses as a way to give secure data
storage and processing capacity to businesses and consumers. Cloud storage is used by a wide
range of businesses.
Cloud computing, or just computing, is a type of dynamic application and storage that makes use
of Internet technologies. On-demand self-service, broad network access, resource pooling, quick
elasticity, and measured service are the five major characteristics of cloud computing.
Cloud computing also encompasses three key service types: Infrastructure as a Service, Platform
as a Service, and Software as a Service. Furthermore, cloud computing can be used in four
different ways: public cloud, private cloud, community cloud, and hybrid cloud (Purwinarko &
Hardyanto, 2018).
The benefits of cloud computing include increased processing power, storage, flexibility,
scalability, and lower IT infrastructure overhead costs. Startups have been able to benefit from
migrating to the cloud by redirecting capital spending into operational spending, making cloud
computing appealing when IT resources are being cut (Purwinarko & Hardyanto, 2018).
The smallest businesses are the most likely to use cloud computing, while medium-sized
businesses have lower rates, and organizations with fewer than one hundred employees have the lowest
rates (Malik et al., 2016). Larger companies have ample processing power in-house. Cloud
computing, on the other hand, has several drawbacks, such as the need for Internet access, speed,
and direct access to resources. As a result, businesses may find it dangerous to rely solely on
cloud computing service providers. Any disruption in cloud services might be disastrous for
businesses. Clouds are transparent to users and apps, which implies there are no barriers or
obstacles to using the cloud from either side (Malik et al, 2016).
The following are the key benefits: cost savings, security, privacy, and dependability.
Stakeholders believe that in the future, these challenges with cloud computing adoption will be
minimized or removed. The client connects to the server to obtain data, much like in a traditional
client-server network model. The main distinction in cloud computing is that it can run in
parallel or provide data to several users at the same time, thanks to the concept of virtualization
(Mohammed, 2019)
Virtualization is an abstraction of an execution environment that may be made dynamically
available to authorized clients via the use of well-defined protocols, resource quotas (e.g. CPU,
memory share), and software configuration (e.g. operating system, offered services). End users
and operators benefit from "granular" computing resources, which include on-demand self-
service, broad access across various devices, resource pooling, quick flexibility, and service
metering capability (Keijo et al., 2014).
2.1 The technology's strategy
Collaboration between IT personnel and executives is essential for a unified cloud strategy that
leads to good business outcomes. Cloud computing cannot work as a silo within IT, just as IT
cannot function as a silo within the business. The proprietary technology of other cloud providers
requires that you adapt your IT architecture to their capabilities. A long-term plan necessitates an
open infrastructure that can adapt to changing business and IT goals. Dell, for example, focuses
on business and cultural expectations while building and implementing an open cloud
strategy (Dell, 2019).
The Dell Technologies Cloud is a good example because it is meant to make operating hybrid cloud
systems simpler. Dell Technologies Cloud offers a standardized set of tools and a broad variety
of IT and management choices with tight integration and a single-vendor experience, thanks to
the infrastructure expertise of Dell Technologies.
IT organizations can easily manage different forms of cloud computing with Dell Technologies
Cloud by depending on:
• Consistent infrastructure that makes workload migrations easier and prevents application
rework tax.
• Uniform operations that reduce expenses by eradicating silos and using technologies that
provide consistent control over all cloud resources.
• Reliable services with flexible, consumption-based pricing that provide knowledgeable
assistance in developing and implementing a successful cloud strategy.

Dell Cloud Computing includes:


• VMware Cloud Foundation on VxRail (VCF), a fully integrated Dell Technologies HCI
(hyperconverged infrastructure) system co-engineered with VMware and supplied as a turnkey
solution.
• VMware Cloud on Dell Technologies, a Data Center-as-a-Service offering for data centers
and edge locations that is completely managed and subscription-based.
• Dell Technologies Cloud Validated Designs, a set of best-in-class Dell Technologies storage,
computing, and networking resources that have been pre-tested for VMware Cloud Foundation
compatibility.
• Dell Technologies partner clouds support over 4,200 cloud providers, including Amazon
Web Services, Microsoft Azure, and Google Cloud Platform.(Gandhi, 2020)
The technology's strategy is as follows:(Gandhi, 2020)
1. Cloud computing is a strategy, not a technology. Cloud computing is part of a larger
plan to boost revenue, empower your workers, and revolutionize your company. It is
dedicated to creating solutions that align with the company's vision and help it move
forward with maximum flexibility and minimal risk.
2. Cloud computing should adapt to you rather than the other way around. The
majority of firms have already begun their cloud journey, but each has its own set of
requirements and challenges. This means working with you and your team to align the plan
with the best cloud options while minimizing downtime.
3. Cloud computing is most effective when it is seamlessly integrated. Instead of ripping
and replacing, existing investments are used and built upon. Cloud computing is a logical
extension of what companies are already doing, as well as a tool for companies to
capitalize on existing technology and processes.
Cloud Computing's main characteristics are as follows:(Mohammed, 2019)
1. On-demand self-service enables users to operate computing capabilities without
requiring human intervention from the service provider.
2. Access to the network is omnipresent - access is facilitated by the use of a variety of
technological gadgets.
3. Resource pooling based on location - the provider's computing resources are pooled
to serve all customers, with varied resources assigned based on the user's demand.
4. Rapid flexibility - available capabilities may be easily scaled up or down, allowing
the end user to buy any quantity at any moment.
5. Pay per use - monitoring storage, bandwidth, and computing resources used and then
billing for the number of active user accounts per month are examples.

2.2 Types of Cloud Computing


Cloud computing is a broad term that refers to a group of services that provide organizations with
a low-cost way to expand their IT capacity and usefulness.
Businesses may select where, when, and how they employ cloud computing to provide an
effective and dependable IT solution based on their individual needs.
The many forms of cloud computing are discussed here.
2.2.1 Mobile Cloud Computing
Due to the widespread availability of and advancements in smartphones, mobile cloud
computing must be considered for supporting apps and providing the required compute power.
As a result, mobile cloud computing is a hybrid of mobile computing and cloud computing:
it combines cloud computing and mobile devices to deliver
computational power, memory, and storage to mobile devices (Dowling, 2017).
Performance, resources, and methodologies are all issues and obstacles in mobile cloud
computing. A common design would significantly increase mobile devices' cloud
processing and storage-power capabilities.
Mobile cloud computing is now widely used for online social network services such as
gaming, image processing, video processing, and general e-commerce. The importance of
mobile cloud computing has been highlighted in a number of general polls.
2.2.2 Quantum Computing
Quantum computing is one of the trendiest subjects in the cloud industry, as it challenges
the current state of cloud computing and has the potential to completely revolutionize it.
In this environment, service providers are engaged in cut-throat competition, and
Quantum Computing is poised to take over cloud computing in the near future (Dowling,
2017).
2.2.3 Hybrid Cloud Solutions
Hybrid Cloud Solutions are predicted to take their place in the cloud computing arena
very soon, in addition to other anticipated cloud computing trends. Hybrid Cloud
Solutions are also known for being dynamic, cost-effective, and adaptable to changing
market demands. Due to the increased rivalry among large organizations, it is possible to
meet these market expectations using Hybrid Cloud Solutions (Dowling, 2017).
2.2.4 Automation
Cloud adoption is important and rapidly rising, which implies that businesses will have to
cope with more computing, resulting in more data and application resources. More
administrative jobs and time-consuming tasks would be required. Execution automation
will eliminate repetitive tasks, decrease errors, and boost efficiency.
As a result, businesses of all sizes should strive to automate certain procedures. Cloud
administrators' tasks will be made easier by automation, which will save them money and
time (Dowling, 2017).

2.3 Cloud Computing Problems


There are numerous hurdles and issues associated with cloud computing adoption. This section
highlights the main hurdles and issues that may stymie cloud computing adoption and must be
solved in order to persuade businesses to use this new technology.
2.3.1 Cloud Computing Governance
Cloud computing has seen widespread adoption and use in the previous and present decade.
Because cloud computing is so critical for increasing organizational performance, its governance
is crucial for decision-makers. Cloud computing governance can be regarded as a subset of IT
governance in general.
Compliance with general norms and legislation for users and cloud providers is one challenge
that could stymie cloud computing uptake and use.
Another significant challenge with cloud computing governance is a scarcity of competence in
cloud computing-based IT management (Abirami et al, 2007).
Despite the numerous advantages that cloud computing may provide for businesses, many
companies are still hesitant to use the technology (Abirami et al, 2007). The main cause could be
linked to poorly understood reasons that prevent people from accepting and using technology, as
well as the fact that hardware systems have become increasingly sophisticated, which has been
exacerbated by the advent of cloud computing.
2.3.2 Cloud Computing Security
For scientists and practitioners, computer security is an important and vital topic. The issue is
exacerbated by the emergence of cloud computing, because users do not have complete control
over the resources offered by cloud computing service providers. Customers and cloud
computing providers face greater security challenges in cloud computing. IT governance
provides visibility and control over IT; as a result, corporate governance measures can decrease
operational risks, ensure compliance, and protect invested value.
Protecting data from cloud provider tampering is a difficult issue, especially if hackers are
colluding with the cloud provider.
The use of data encryption is one way utilized to address the security and privacy challenges of
cloud computing. The encryption of data, on the other hand, adds to the processing cost and so
slows down data searches and retrieval(Abirami et al, 2007).
A technique for cloud environments can be utilized to alleviate such challenges and achieve the
best results in receiving encrypted data from outside sources while decreasing the computational
cost of cloud servers. The objective is to protect user privacy by using matrix encryption.
A lightweight-secure-conjunctive-keyword-search scheme in hybrid cloud environments (LCKS)
based on a ciphertext-policy ABE algorithm, which supports file owner authorized conjunctive
keyword searches for multiple parties, is another interesting method used to address the problem
of cloud computing security.
The security and performance evaluations of LCKS show that it is safe, efficient, and well-suited
for hybrid cloud use(Sampath et al., 2019).
There are two primary groups of approaches for securing computer data from unwanted access
and alterations. Encryption and decryption are used in the first type. While solutions based on
this approach are very effective at safeguarding computer data from unauthorized access, they
are inefficient, particularly for data with severe storage and retrieval requirements, because
encryption and decryption take a long time. Customers interested in cloud computing should
undertake thorough investigations into their demands and whether efficiency is a top priority for
them before choosing this technology or cloud computing service providers who use it,
according to the authors of this research.
The second class of approaches for protecting data from illegal access and modifications,
particularly internal threats, is based on Blockchains, a novel technology that can be used to
safeguard customers from internal threats perpetrated by cloud computing service providers'
personnel(Sampath et al., 2019).
2.3.3 Data Privacy
The issue of privacy is one of the most difficult aspects of cloud computing. Several studies
have attempted to address such issues and challenges. Devices set to receive material or
resources from different resources of variously interconnected devices at the edge pose the issue
of data privacy in the Internet of Things (IoT). A way to describe the privacy content of
numerous sources by mapping them as resources of sorts of data, information, and knowledge in
the well-known DIKW architecture was developed to solve such issues. The concept provides a
security solution for typed data privacy objectives based on explicit and implicit divisions.
The usage of cloud computing for the storing of patient medical data has increased dramatically
in the healthcare industry. It is critical for patients to protect their medical data stored on the
server from unauthorized personnel(Patil et al., 2016).
When data is kept in a cloud-computing environment, the problem becomes even worse. For
patients, maintaining their privacy becomes a top priority. Patients are apprehensive about their
data being stored on a cloud server being private. As a result, healthcare providers must develop
solutions that ensure patient data security. Researchers have proposed a variety of strategies to
solve medical-data privacy concerns.
To address this issue, medical cyber physical systems could leverage elliptic curve cryptography
as an identity-based and proxy-oriented technique to outsource public audits. Surprisingly, the
suggested solution protects data privacy while also allowing any third-party auditor to efficiently
audit medical data without having to retrieve all of it(Patil et al., 2016).
Another issue that cloud computing consumers confront is choosing a cloud computing service
provider and selecting which cloud computing service provider will best ensure data
confidentiality. The behaviors of cloud computing service providers and clients should be
consistent, as stated in security level agreements.
Ciphertext is one of the most widely used technologies for protecting data privacy in the cloud
and providing structured access control.
Due to the expansion of IoT devices with limited resources, data privacy remains one of the most
pressing concerns in cloud computing. At the same time, there is a compelling need to increase
security to secure consumer private data. The authors of this work propose that efficient
algorithms and protocols be developed that can run under resource constraints while maintaining
the high level of security needed to secure private data(Dell, 2019).
2.4 Cryptographic Algorithms
Cryptography is one such way for safeguarding data, and in this part, we'll look at cryptographic
algorithms.
2.4.1 DES
The Data Encryption Standard (DES) is a block cipher using symmetric keys. The key is 56 bits
long, while the block is 64 bits long. When a weak key is utilized, it is prone to key attack. IBM
developed DES in 1972 as a data encryption technique. It was accepted as a standard encryption
algorithm by the United States government. It started with a 64-bit key, but the NSA restricted its
usage to 56-bit keys; thus DES discards 8 bits of the 64-bit key and then uses the compressed 56-
bit key produced from the 64-bit key to encrypt data in 64-bit blocks.
Triple DES is a block cipher that is also known as the Triple Data Encryption Algorithm in
cryptography. The Triple Data Encryption Standard (3DES) was initially released in 1998, and it
derives its name from the fact that it encrypts, decrypts, and encrypts each block of data three
times using the DES cipher. The key is 112 bits or 168 bits long, while the block is 64 bits long.
Because of the increasing computational power available today and the weakness of the original
DES cipher, it was vulnerable to brute force and other cryptanalytic attacks; Triple DES was
created to provide a relatively simple method of increasing the key size of DES to protect against
such attacks without having to create a completely new block cipher algorithm(Patil et al., 2016).
2.4.2 Blowfish
The first edition of Blowfish was published in 1993. It's a symmetric key block cipher with a
block size of 64 bits and a key length ranging from 32 to 448 bits. It has a Feistel network
structure. Blowfish is a symmetric block cipher that can be used as a stand-in for DES or IDEA
in some situations. It takes a variable-length key, from 32 bits to 448 bits, making it ideal for
both domestic and commercial use. Bruce Schneier created Blowfish as a quick, free alternative
to conventional encryption techniques. It has been extensively studied since then, and it is
steadily gaining acceptance as a reliable encryption scheme. Blowfish is not patented, has a non-
exclusive license, and is free to use for any purpose(Patil et al., 2016).

Figure 4.2 Blowfish Algorithm Flow (Feistel Network)

The data is encrypted using a 16-round Feistel network. Every round includes a key-dependent
permutation as well as a key- and data-dependent substitution. The additional operations are four
indexed array data lookups per round, and all operations are XORs and additions on 32-bit words.
A Feistel network can turn any function into a permutation, and it underlies several block cipher
designs. The Blowfish key schedule and Feistel network operate as follows:
1. Initialize first the P-array and then the four S-boxes, in order, with a fixed string. This string
consists of the hexadecimal digits of pi (less the initial 3); for example, P1 = 0x243f6a88,
P2 = 0x85a308d3, P3 = 0x13198a2e, and P4 = 0x03707344.
2. XOR P1 with the first 32 bits of the key, XOR P2 with the second 32 bits of the key, and
so on for all key bits (possibly up to P14). Cycle over the key bits repeatedly until the entire
P-array has been XORed with key bits. (For every short key there is at least one equivalent
longer key; for example, if A is a 64-bit key, then AA, AAA, and so on are equivalent keys.)
3. Encrypt the all-zero string with the Blowfish algorithm, using the subkeys described in
steps (1) and (2).
4. Replace P1 and P2 with the output of step (3).
5. Encrypt the output of step (3) with the Blowfish algorithm, using the modified subkeys.
6. Replace P3 and P4 with the output of step (5).
7. Continue the process, replacing all elements of the P-array, and then all four S-boxes in
order, with the continuously changing output of the Blowfish algorithm.
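
To make the round structure concrete, the following minimal Java sketch shows the 16-round Feistel encryption of one 64-bit block (split into two 32-bit halves); the P-array and S-box contents are assumed to have been produced by the key schedule above, and the class and method names are illustrative rather than taken from any particular implementation.

public class BlowfishFeistelSketch {
    // Encrypt one 64-bit block, given an initialized 18-entry P-array and four 256-entry S-boxes.
    static int[] encryptBlock(int left, int right, int[] P, int[][] S) {
        for (int i = 0; i < 16; i++) {
            left ^= P[i];                                   // key-dependent permutation
            right ^= f(left, S);                            // key- and data-dependent substitution
            int tmp = left; left = right; right = tmp;      // swap the halves
        }
        int tmp = left; left = right; right = tmp;          // undo the final swap
        right ^= P[16];
        left ^= P[17];
        return new int[] { left, right };
    }

    // Blowfish's F function: the four bytes of the 32-bit half index the four S-boxes.
    static int f(int x, int[][] S) {
        int a = (x >>> 24) & 0xff, b = (x >>> 16) & 0xff, c = (x >>> 8) & 0xff, d = x & 0xff;
        return ((S[0][a] + S[1][b]) ^ S[2][c]) + S[3][d];
    }
}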

2.4.3 RSA
RSA is a public key cryptosystem that was created in 1977. Rivest, Shamir, and Adelman created
the RSA cryptographic algorithm, which is asymmetric. It produces two keys: a public key for
encryption and a private key for message decryption. The RSA algorithm consists of three steps:
step one is key generation, producing the keys used to encrypt and decrypt data; step two is
encryption, where the actual conversion of plaintext to cipher text is carried out; and step three is
decryption, where the encrypted text is converted back into plain text at the other side. The RSA
algorithm is based on the issue of factoring the product of two big prime integers. The key size
ranges from 1024 to 4096 bits(Patil et al., 2016).
2.4.4 AES

The Advanced Encryption Standard (AES) algorithm is a symmetric key block cipher that was
introduced in 1998. Any combination of data and key lengths of 128, 192, or 256 bits is supported
by the AES algorithm. AES operates on a data length of 128 bits, which may be divided into
four main working blocks. These blocks are structured as a matrix of order 4×4, which is
also known as the state, and are subject to rounds in which different transformations are performed.
The number of rounds utilized for complete encryption is variable: N = 10, 12, 14 for key lengths
of 128, 192, and 256 bits, respectively. Each round of AES employs a permutation and substitution
network, and it may be implemented in both hardware and software (Patil et al., 2016).

Figure 2.2 AES Algorithm Flow

2.5 Scientific Researches


Before the advent of cloud computing, a number of widely utilized technologies had already
been researched and developed, including the following:
2.5.1 A Hybrid Security Algorithm AES and Blowfish for Authentication in Mobile
Applications
In this study:
For authentication, Android and the receiving server use a hybrid security technique combining
AES and Blowfish. Implemented in software, it runs swiftly and effectively on any platform, even
smartphones. AES will give higher security in the long run with a larger block size and longer
keys, using 128-bit blocks and 128-, 192-, and 256-bit keys. Man-in-the-Middle (MitM) attacks
may be prevented using the hybrid AES and Blowfish security algorithm for authentication
(Purwinarko & Hardyanto, 2018a).
2.5.2 A Comprehensive Evaluation of Cryptographic Algorithms: DES, 3DES, AES,
RSA and Blowfish
In this study:
Each encryption method has its own set of strengths and weaknesses. In order to apply an
appropriate cryptography algorithm to an application, we must first understand the algorithms'
performance, strengths, and weaknesses. According to the findings of the experiment, the
memory required for implementation is the smallest for Blowfish and the largest for RSA. The RAM
requirements for DES and AES are medium. As a result, if the need of any application is the
smallest memory footprint, Blowfish is the best choice. The results also demonstrate that RSA
takes longer to encrypt and decrypt than the other options, while Blowfish takes the least amount
of time of all the algorithms; Blowfish is well suited to software implementation, at least on some
systems.
After assessing the algorithms based on the avalanche effect parameter, we may conclude that
AES can be utilized in cases where secrecy and integrity are of the utmost importance
(Patil et al., 2016).
• Comparison of different methods from different researches

Studies                            Algorithm   Memory used   Decryption Time   Encryption Time
(Mahmud et al., 2018)              DES         18.2 kb       1000 ms           1300 ms
(Abirami et al., 2007)             3DES        20.7 kb       800 ms            1550 ms
(Patil et al., 2016)               AES         14.7 kb       600 ms            600 ms
(Purwinarko & Hardyanto, 2018a)    Blowfish    9.38 kb       450 ms            500 ms
(Sampath et al., 2019)             RSA         31.5 kb       1800 ms           2200 ms

Table 1. Comparison table
CHAPTER 3
METHODOLOGY OF THE DESIGN OF THE SECURE CLOUD COMPUTING
MODEL

This chapter is devoted to outlining the approach and limitations of this thesis' design.
The data in cloud computing systems is stored on remote servers that may be accessed via the
internet. The growing volume of personal and essential data necessitates greater attention to
data security. Financial transactions, significant paperwork, or, in the case of this thesis, school
data content are all examples of such data. Cloud computing services can help reduce dependency
on local storage while also lowering operational and maintenance costs. However, due to the
possibility of unwanted access within the service providers, users still have substantial security
and privacy concerns regarding their outsourced data (Tawalbeh et al., 2015).
Existing systems encrypt all data with the same key size, regardless of data confidentiality,
which increases the cost and processing time (Dowling, 2010; Ivan et al., 2013). In this study,
a secure cloud computing model based on data classification over the security system
for dynamic groups in the cloud is proposed. The suggested cloud architecture reduces the overhead and
processing time necessary to secure data by utilizing several security techniques with variable
key sizes to give the data the appropriate level of confidentiality. The suggested model will
be tested using several encryption algorithms so that the simulation results can be evaluated for
reliability and efficiency.
The proposed model contains the following features:
1. Data Security
The provider must ensure that data sent to the cloud is secure, and they must take security
precautions to protect their data in the cloud.
2. Privacy
The supplier must ensure that all sensitive information is encrypted and that only authorized
individuals have complete access to the data. Any data that the provider gathers regarding
customer behavior in the cloud, including passwords and digital identities, must be safe.
3. Data confidentiality
Cloud customers want to ensure that their data is not made public or given to unauthorized
individuals. Only authorized users should have access to sensitive data, and others should not
have access to any data in the cloud.
4. Fine-grained access control
Unauthorized users cannot access data that has been outsourced to the cloud. A set of users is
granted varying access privileges to the data by the data owner, while others are not permitted to
view the data without permission. In untrusted cloud settings, the access permission should be
managed solely by the owner.
5. User revocation
Once a user's access has been revoked, that user will no longer be able to access the data after the
time specified. The revocation of a user must not affect the group's other authorized users.
6. Scalable and Efficient
Because the number of Cloud users is so large, and people join and depart at random times, it's
critical that the system maintains efficiency and scalability. In order to share data effectively in a
cloud computing environment, all security standards must be met.
3.1 The system functionality

The information is divided into three groups (level 1, level 2, level 3) per categorization. The
user should be able to intuitively determine which data belongs to which category.
Adoption of a classification system for data saved in the cloud is one of the methods proposed to
address the problem of cloud data security(Dowling, 2010). Depending on the classification, the
data may be stored without client-side encryption or encrypted using one of two encryption
methods (Masud & Huang, 2012). The system is divided into three parts, as Fig. 1 shows.
Figure 1: schematic diagram architecture.
The proposed Architecture is based on three stages: client-side encryption, cloud storage, and
client-side decryption.
3.2 Client-side encryption
Based on the relative significance of the data, this stage can be further divided into three groups, and
then the encryption scheme is applied based on the data category. The end-user chooses the data
category on the client side (i.e., level 1, level 2, level 3).
Any encryption that is applied to data before it is transferred from a user device to a server is
known as client-side encryption. End-to-end encryption may thus be thought of as a particular
use of client-side encryption for the purpose of transmitting communications.
It provides clients peace of mind by ensuring that their data is secure before it leaves their
devices or networks, as well as ensuring that cloud providers (or other third parties) are unable to
access the encrypted data.
Figure 2: The proposed encryption scheme's flowchart, showing the data categories
(Level 1, Level 2, Level 3) within a simple app in the cloud environment.

Three levels are determined by the architect.


Level 1: This is non-sensitive data that can be shared with the public. This includes, among other
things, public CVs, public keys, and previously made public content. On the client side, this data
is not encrypted and is delivered to a cloud provider in an unencrypted state (Purwinarko &
Hardyanto, 2018).
Level 2: This is data that the customer regards to be of medium importance and, as a result, must
be encrypted while stored on the cloud platform. This category may contain profile images,
personal documents for students, and other types of information (Purwinarko & Hardyanto,
2018b).
This data is encrypted on the client side before being sent to the cloud provider using AES-256
encryption. The private key for encryption is retained on the user's computer.
Level 3: This is data that is considered to be of critical importance, to the degree that the
theoretical scenario of a single cipher failing becomes significant, and the user needs the
maximum level of data protection (Ghosh, 2020). This necessitates the employment of a cascade
of two ciphers. The data is first encrypted using AES-256, and then the output is further
encrypted using the Blowfish cipher with a second key. Both keys are retained by
the user.
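
The cascade can be illustrated with the Java cryptography API used later in this thesis; the sketch below is illustrative only, under the assumption that the two keys and the two IVs are generated and retained by the user (AES IVs are 16 bytes, Blowfish IVs 8 bytes), and the class, method, and variable names are not from the thesis implementation.

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class Level3Cascade {
    // First layer: AES-256/CBC; second layer: Blowfish/CBC applied to the AES output.
    public static byte[] encrypt(byte[] plaintext, byte[] aesKey, byte[] aesIv,
                                 byte[] bfKey, byte[] bfIv) throws Exception {
        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey, "AES"), new IvParameterSpec(aesIv));
        byte[] firstLayer = aes.doFinal(plaintext);         // encrypt with AES-256 first

        Cipher blowfish = Cipher.getInstance("Blowfish/CBC/PKCS5Padding");
        blowfish.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(bfKey, "Blowfish"), new IvParameterSpec(bfIv));
        return blowfish.doFinal(firstLayer);                // then encrypt the output with Blowfish
    }
}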
3.3 Cloud storage
Data transmission between the client and the cloud, as well as data storage on the cloud platform,
are both part of this phase. If the cloud provider supports it, an HTTPS connection between the
client and the cloud may be possible. The cloud provider may encrypt the data further once it is
present, but the proposed solution considers it for more security but does not totally rely on this
because relevant data is already encrypted before transmission using one of two ways (Ul Haq et
al, 2021).
3.4 Decryption on the client side
This phase is similar to the encryption side; hence, it can also be divided into three divisions
depending on the type of data in question.
Level 1: This data can be retrieved without the user needing to do any decryption operations
because it was saved without client-side encryption.
Level 2: The user has access to the key, which is encrypted with AES-256.
Level 3: Using the two keys used for encryption in the first stage, this data must be decrypted
first with Blowfish and then with AES-256.
In 1977, the National Bureau of Standards, which later became the National Institute of Standards
and Technology, certified DES. The DES algorithm accepts 64 bits of input text (for example,
plain data) and outputs 64 bits of cipher text data. The technique encrypts each 64-bit block of
input in 16 cycles. The symmetric key size for this technique is 56 bits, and the block size is
64 bits. The 3DES algorithm encrypts twice as much as the DES algorithm. As a result, the quality
of the encryption improves, making it more difficult to crack.

A triple DES is a block cipher that works on 48 rounds (three times DES) in various counts and
employs a key that is 168 bits long. The 3DES method employs a block size of 64 bits during
encryption. The AES uses substitution-permutation networks instead of Feistel networks, as
many other ciphers do. To efficiently disperse plaintext information across the cipher text, the
substitution-permutation strategy uses substitution and permutation boxes. The algorithm is
based on the Blowfish cipher. The major purpose of the method was to produce a completely
open structure with the added benefit of dynamic S-Boxes (Purwinarko & Hardyanto, 2018a).
An S-Box, also known as a substitution box, is a fundamental component of symmetric key
algorithms that performs substitution. S-Boxes are usually employed in block ciphers to hide the
relationship between the key and the ciphertext, assuring Shannon's property of confusion.
These S-Boxes are key-dependent, rather than being defined statically by the Blowfish
implementation provider.
This key dependence protects the S-Boxes, which are the algorithm's fundamental driving force.
Our system's two ciphers are block ciphers, which means they only work on a single block of data
at a time. The block cipher mode of operation used in our method was the cipher block chaining
(CBC) mode. In this mode, the first block of data is XORed with the Initialization Vector (IV)
before encryption. The IV is a random collection of data in the form of a block. After the first
block is XORed with the IV, the output is encrypted with the appropriate cipher. The result of the
encryption is XORed with the next block to be encrypted. Decryption in CBC mode is the inverse
of encryption, with the XOR operation coming after the decryption: each cipher text block is
decrypted and then XORed with the cipher text block preceding it (Mahmud et al., 2018).
Client-side cryptography is the focus of this research because the cascade of AES-256 and
Blowfish ciphers is both safe and fast. There has not been a single practical attack on either of
these algorithms thus far, because they are both very secure and speedy. According to the literature
review, the alternative algorithms are less secure. Although some academics say that AES-256 is
vulnerable when used alone (C. H & Gunalan, 2015; Pocatilu et al., 2010), the bulk of scholars
claim that the algorithm is 99 percent secure.
3.4.1 The proposed algorithm
In this section encryptions and decryptions algorithms with AES 256 and blowfish are explained
in details.
1. AES-256 encryption
Symmetric key encryption, used by AES-256, requires just one secret key to cipher and decipher
data; it works as follows:
1. Generate a key.
2. Pad the input file to make it a multiple of the AES block size.
3. Make an IV for the CBC mode.
4. Put the IV at the start of the output file.
5. Read a block from the input file.
6. Encrypt the block that was read in step 5.
7. Append the encrypted block from step 6 to the result file.
8. Return to step 5 while more blocks are available.
In this study, key generation is conducted using the object of a class that generates a random byte
stream of a specified size and then passes it through SHA-256 to obtain a useable 256-bit key.
Because the cipher uses the CBC (cypher block chaining) mode of operation, the padding of
inputs and generation of the IV are the next phases. So that it can be used during decryption, the
IV is saved as the head of the output file. Then a loop is started, which reads and
encrypts one block of data from the padded input file at a time. The loop will run until there are
no more blocks to read from the input file.
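
The steps above can be sketched with the javax.crypto API; this is a minimal, assumption-laden illustration (the class, method, and variable names are not from the thesis implementation), in which the key is derived by hashing random bytes with SHA-256 and the IV is written as the head of the output file:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class Aes256FileEncrypt {
    // Encrypts inPath to outPath with AES-256/CBC and returns the key, which the user retains.
    public static byte[] encryptFile(String inPath, String outPath) throws Exception {
        byte[] seed = new byte[32];
        new SecureRandom().nextBytes(seed);                              // random byte stream
        byte[] key = MessageDigest.getInstance("SHA-256").digest(seed);  // usable 256-bit key

        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);                                // IV for the CBC mode

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");      // padding handled by the provider
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));

        try (FileInputStream in = new FileInputStream(inPath);
             FileOutputStream out = new FileOutputStream(outPath);
             CipherOutputStream enc = new CipherOutputStream(out, cipher)) {
            out.write(iv);                                               // IV stored as the head of the output file
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {                        // read and encrypt block by block
                enc.write(buffer, 0, n);
            }
        }
        return key;
    }
}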
2. AES-256 decryption
AES decryption uses the same symmetric key, which is pre-loaded in order to decrypt the data:
1. Load the pre-stored key.
2. Read the first block of the encrypted file to obtain the IV.
3. Read a block from the file.
4. Decrypt the block that was read in step 3.
5. Write the output of step 4 to the output file.
6. Return to step 3 while blocks remain in the input file.
7. Unpad the output file to recover the original file.
The key generated during the AES-256 encryption procedure must be loaded during the
decryption step. The IV is then recovered by examining the encrypted file's first block. Then a
loop begins to operate, simulating the functioning of the encryption loop. The loop reads and
decrypts successive blocks of data from the encrypted file before writing them to an output file.
The resulting data is unpadded after the decryption procedure is completed.
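
The decryption steps can be sketched in the same way; again the names are illustrative assumptions, with the pre-stored key passed in and the IV read back from the first 16 bytes of the encrypted file:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class Aes256FileDecrypt {
    // Decrypts inPath to outPath with the pre-stored 256-bit key; PKCS5Padding removes the padding.
    public static void decryptFile(String inPath, String outPath, byte[] key) throws Exception {
        try (FileInputStream in = new FileInputStream(inPath);
             FileOutputStream out = new FileOutputStream(outPath)) {
            byte[] iv = new byte[16];
            if (in.read(iv) != 16) {
                throw new IllegalStateException("encrypted file is missing its IV header");
            }
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
            try (CipherInputStream dec = new CipherInputStream(in, cipher)) {
                byte[] buffer = new byte[4096];
                int n;
                while ((n = dec.read(buffer)) != -1) {   // decrypt successive blocks
                    out.write(buffer, 0, n);
                }
            }
        }
    }
}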
3. Blowfish encryption
Blowfish is a symmetric encryption method like AES 256, which means that it uses the same
secret key for both message encryption and decryption.
Here are the steps:
• Produce a Blowfish key.
• Pad the input data to make it a multiple of the block size.
• Make an initialization vector.
• Generate the S-Boxes.
• Make a subkeys array.
• Encrypt the blocks one by one in CBC mode.
• Save the output to the output file.
Because sharing the key between the two ciphers in the cascade mode of operation would defeat
the entire purpose of utilizing a cascade cipher, the Blowfish key is created independently from
the AES-256 encryption. The CBC mode is being used by Blowfish. It's necessary to make sure
the data being encrypted is a multiple of the blowfish cipher's block size, which is 64 bits. The
next step is to pad the input data to the closest 64 bits after the key is generated. The PKCS#5
padding scheme has been utilized.
The IV is created using the class's object. The S-boxes were created automatically as part of the
blowfish operation process. A subkey array of 18 subkeys is likewise constructed in the same
way. Finally, the data can be encrypted block by block using the following statement:
// Using javax.crypto / javax.crypto.spec classes (the original text used a placeholder class name):
IvParameterSpec iv = new IvParameterSpec(IV.getBytes(StandardCharsets.UTF_8));
Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding");
SecretKeySpec secretKey = new SecretKeySpec(keyData, "Blowfish");
cipher.init(Cipher.ENCRYPT_MODE, secretKey, iv);
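
Once initialized this way, the cipher object produces the encrypted output; as an illustrative sketch only (inputBytes and outputStream are assumed names, not from the thesis):

byte[] ciphertext = cipher.doFinal(inputBytes);   // encrypts the padded input block by block in CBC mode
outputStream.write(ciphertext);                   // save the result to the output file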

4. Blowfish decryption
Blowfish decryption uses the same symmetric key, which is pre-loaded in order to decrypt the data:
• Load the IV and the key.
• Read the input file.
• Perform block-by-block decryption in CBC mode.
• Remove the padding from the output.
• Write the unpadded output to the output file.
The blowfish decryption procedure is the counterpart of the blowfish encryption method.
Because the encryption was done in CBC mode, the decryption must also be done in CBC mode.
The original IV is required for this. Unlike AES, where the IV was stored with the encrypted file,
in the case of blowfish the IV was not stored with the encrypted file; it is kept with the user,
much like the key. The loading of both the IV and the key is the first step in the decryption
process. They are supplied as arguments to the CBC mode decryption command as follows:
IvParameterSpec iv = new IvParameterSpec(IV.getBytes(StandardCharsets.UTF_8));
Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding");
SecretKeySpec secretKey = new SecretKeySpec(keyData, "Blowfish");
cipher.init(Cipher.DECRYPT_MODE, secretKey, iv);   // decryption path, so DECRYPT_MODE

The blowfish decryption process revolves around this statement, which is provided by the
Java cryptography package. In a previous step, the key was loaded into the cipher object. The
padding added during the encryption must still be removed from the output retrieved after
decryption. The PKCS#5 padding that was introduced during the encryption phase is removed in
the next statement. The original data is thereby recovered and subsequently written to a file for
use.
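
The source does not reproduce that statement; as a hedged illustration, with the PKCS5Padding transformation configured above, a single call both decrypts block by block and strips the padding (encryptedBytes is an assumed name):

byte[] plaintext = cipher.doFinal(encryptedBytes);   // decrypts and removes the PKCS#5 padding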
3.4.2 Cipher block chaining
Feedback is used in Cipher Block Chaining (CBC) to feed the result of encryption into the
encryption of the following block. Before being encrypted, the plain-text is XORed with the
preceding cipher-text block. Each block's encryption is based on the previous blocks. This
necessitates the decryption side sequentially processing all encrypted blocks.
This option necessitates the use of a random initialization vector that is XORed with the first data
block before encryption. It is not necessary to keep the initialization vector a secret.
To guarantee that each communication is encrypted uniquely, the initialization vector should be a
random number (or a serial number). When there is a mistake in an encrypted block (because of a
transmission failure, for example), the block containing the fault becomes utterly distorted.
Bit mistakes will appear in the following block at the same locations as the initial erroneous
block.
The mistake will not impact the blocks that follow the second block. As a result, CBC is capable
of self-recovery. While CBC quickly recovers from bit mistakes, it does not recover from
synchronization faults at all.
All succeeding blocks are mangled if a bit is added or removed from the cipher-text stream. As a
result, a CBC system must ensure that the block structure is preserved. Like the ECB mode, it
requires a whole block on its input before encryption can take place.
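
As a minimal sketch of this chaining, with a hypothetical BlockCipher interface standing in for one AES or Blowfish block encryption (an assumption for illustration, not a real library type):

public class CbcSketch {
    interface BlockCipher { byte[] encryptBlock(byte[] block); }

    static byte[][] cbcEncrypt(byte[][] plainBlocks, byte[] iv, BlockCipher cipher) {
        byte[][] cipherBlocks = new byte[plainBlocks.length][];
        byte[] previous = iv;                                  // the first block is XORed with the IV
        for (int i = 0; i < plainBlocks.length; i++) {
            cipherBlocks[i] = cipher.encryptBlock(xor(plainBlocks[i], previous)); // C_i = E_K(P_i XOR C_{i-1})
            previous = cipherBlocks[i];                        // ciphertext feeds into the next block
        }
        return cipherBlocks;
    }

    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }
}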
3.4.3 Electronic code book

The electronic codebook (ECB) mode is the simplest block cipher mode of operation: each block
is encrypted independently with the same key, so identical plaintext blocks always produce
identical cipher-text blocks. When analysts discover that the block "8d226acd" encrypts to
the cipher-text block "1c7ed351", they may decrypt that cipher-text block anytime it comes up in a
transmission. This vulnerability is greatest at the start and conclusion of communications, where
well-defined headers and footers contain information about the sender, receiver, date, and other
pertinent details.
CHAPTER 4
ALGORITHM FLOW AND SIMULATION RESULTS

The AES algorithm stipulates that various key sizes of 128, 192, and 256 bits be supported. The
technique is implemented as a two-dimensional array with four rows of bytes. Each row has Ni
bytes, where Ni is the block length divided by 32; this word-oriented layout makes the algorithm
exceedingly quick when used with 32-bit microprocessors.

The array is represented by the letter A, and each byte has two indices: a row number Ri in the
range 0 <= Ri < 4 and a column number Ci in the range 0 <= Ci < Ni. This allows each individual
byte to be designated as A[Ri, Ci]. Because AES defines Ni = 4, the range for Ci, the State's
column number, is 0 <= Ci < 4. The number of rounds in the AES algorithm is determined by the
key size.
Ng represents the number of rounds, with Ng = 10 when Nk = 4, Ng = 12 when Nk = 6, and
Ng = 14 when Nk = 8.
            Key Length    Block Size    Number of Rounds
            (Nk words)    (Nb words)    (Ng)
AES 128     4             4             10
AES 192     6             4             12
AES 256     8             4             14

Table 4.1 Key Length, Block Size, Number of Rounds.
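
As a small illustration of the relationship in Table 4.1, the number of rounds Ng can be derived from the key length Nk in 32-bit words (a hypothetical helper, not part of the AES specification):

// Ng = 10, 12 or 14 for Nk = 4, 6 or 8 words (AES-128/192/256).
static int numberOfRounds(int nkWords) {
    switch (nkWords) {
        case 4: return 10;   // AES 128
        case 6: return 12;   // AES 192
        case 8: return 14;   // AES 256
        default: throw new IllegalArgumentException("unsupported key length");
    }
}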


The AES algorithm uses 128-bit input and output blocks, expressed as sequences of the bit values
0 and 1. The encryption key can be 128, 192, or 256 bits long. The AES algorithm works on bytes,
each of which is a sequence of eight bits.
In the AES algorithm, a byte value is represented as the concatenation of its individual bits
(0 and 1), and a polynomial expression is used to represent this data:

b7*x^7 + b6*x^6 + b5*x^5 + b4*x^4 + b3*x^3 + b2*x^2 + b1*x + b0 = Σ bi*x^i   (Equation 1)
BLOWFISH is the other algorithm employed in this study. Like the AES algorithm, it
is a good way to keep data safe from attackers. The data is secured using a configurable key
length of 32 to 448 bits. When the key does not change regularly, the BLOWFISH algorithm is
employed on communication links and for file encryption. On 32-bit microprocessors, the
BLOWFISH algorithm is an extremely fast encryption technique.
The data is encrypted using a 16-round Feistel network (explained in Chapter 2). Every round
includes a key-dependent permutation as well as a key- and data-dependent substitution. The
additional operations are four indexed array data lookups per round, and all operations are XORs
and additions on 32-bit words.

4.1 Simulation procedure

Because the primary goal of this study is to assess the algorithms' performance, the algorithms
were written in a standardized programming language: they were written in Java (18.0.1) and
tested on a MacBook Pro laptop running the macOS 12.4 operating system. The throughput of an
encryption technique is calculated from the encryption time and denotes the encryption speed.
The encryption scheme's throughput is estimated by dividing the total plaintext encrypted, in
megabytes, by the overall encryption time for each method.
Table 4.2 reports the comparative encryption and decryption algorithm execution times (in
milliseconds) for different packet sizes.
The power consumption of an encryption technique decreases as its throughput value
increases. The methods were assessed in terms of the time necessary to encrypt and decrypt data
blocks of various sizes (0.5 MB to 20 MB). All of the implementations were done meticulously in
order to ensure that the outcomes would be fair and accurate.
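
A minimal sketch of the throughput calculation described above, with illustrative parameter names:

// Throughput in MB/s: total plaintext encrypted (MB) divided by the encryption time in seconds.
static double throughputMbPerSec(double plaintextMegabytes, long encryptionTimeMillis) {
    return plaintextMegabytes / (encryptionTimeMillis / 1000.0);
}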

4.2 Simulation results

At the encryption and decryption stages, the simulation results for this comparison point are
displayed in Fig. 4.3 & 4.4 and Table 1. In terms of processing time, the results suggest that the
Blowfish method outperforms the AES algorithm.

Figure 4.3 AES and Blowfish Algorithm Throughput (mb/sec)


Figure 4.4 AES and Blowfish time comparison (mb/sec)
Two modes were implemented: electronic code book and cipher block chaining, and the
performance results and a brief description of each are shown below.

4.2.1 Electronic code book mode (ECB)

Figure 4.5 The ECB mode results.

In terms of processing time, the results reveal that the Blowfish method outperforms the AES
algorithms. It also demonstrates that AES uses more resources when data blocks are reasonably
large.
4.2.2 Cipher block chaining mode (CBC)
Figure 4.6 The CBC mode results.
Because of the key-chaining structure of CBC, it takes longer to process than ECB. The findings
shown in Fig.4.5 further show that the additional time is not substantial for many applications,
given that CBC is far superior to ECB in terms of protection.
Although the difference between the two modes is difficult to notice with the human eye, the
findings revealed that the average difference between ECB and CBC is just 0.059896 seconds,
which is a modest difference.

Studies                            Algorithm   Memory used / data size   Decryption time   Encryption time
(Mahmud et al., 2018)              DES         18.2 kb                   1000 ms           1300 ms
(Abirami et al., 2007)             3DES        20.7 kb                   800 ms            1550 ms
(Patil et al., 2016)               AES         14.7 kb                   600 ms            600 ms
(Purwinarko & Hardyanto, 2018a)    Blowfish    9.38 kb                   450 ms            500 ms
(Sampath et al., 2019)             RSA         31.5 kb                   1800 ms           2200 ms
Thesis results                     Blowfish    Up to 20 mb               40000 ms          21000 ms
Thesis results                     AES         Up to 20 mb               140000 ms         150000 ms

Table 4.2. Comparative encryption and decryption execution times from earlier studies and from this thesis.
CHAPTER 5
CONCLUSION
In today's rapidly growing internet and network applications, encryption algorithms play a
critical role in ensuring information security. We looked at two symmetric key encryption
techniques in this thesis: AES and Blowfish. To assess their performance, we examined encryption
speed, throughput, and power consumption.
Because Blowfish has no known security flaws, the simulation results demonstrate that it
outperforms AES. As a consequence, it may be regarded as an excellent standard encryption
method. Because AES demands more processing resources, it runs slower than Blowfish and
shows worse performance results. As a result, the Blowfish algorithm may be better suited for
wireless setups that interchange tiny packets.
However, as compared to AES, the Blowfish method has the drawback that its security guarantees
might be weaker, because it suffers from a weak-key problem.
As a result, the notion of combining them as a hybrid approach ends up being an ideal way to
encrypt and decrypt files, with a large block size and longer keys using 128-bit blocks and 128-,
192-, and 256-bit keys. In the long run, AES will give higher security. The hybrid AES and
Blowfish security method can also protect against Man-in-the-Middle (MitM) attacks.
REFERENCES
Abirami, M., & Chellaganeshavalli, S. (2007). Performance Analysis of AES and Blowfish
Encryption Algorithm. In International Journal of Innovative Research in Science,
Engineering and Technology (An ISO (Vol. 3297, Issue 11). www.ijirset.com
C. H, S., & Gunalan, B. (2015). A Review of Secure Data Sharing in Cloud Computing.
International Journal of Computer Trends and Technology, 30(3), 152–156.
https://doi.org/10.14445/22312803/ijctt-v30p127
Dell, C. (2019). Cloud Computing Services: Driving added IT functionality and cost savings
opportunities Education Services.
Dowling, J. (n.d.). Introduction to Cloud Computing.
Dowling, J. (2010). Introduction to Cloud Computing.
Gandhi, P. (2020). Data visualization techniques: Traditional data to big data. In Data
Visualization: Trends and Challenges Toward Multidisciplinary Perception (pp. 53–74).
Springer Singapore. https://doi.org/10.1007/978-981-15-2282-6_4
Ghosh, A. (2020). Comparison of Encryption Algorithms: AES, Blowfish and Twofish for Security
of Wireless Networks. https://doi.org/10.13140/RG.2.2.31024.38401
Ivan, P., Tommi, M., & Adnan, A. (2013). Introduction to Cloud Computing Technologies. ACM
Papers. https://doi.org/10.13140/2.1.1747.8082
James, D., & Girish Tere. (2018). Cloud-Computing. The University of Mumbai.
Keijo, H., Adnan, A., Johan, L., & Tommi, M. (2014). Introduction to Cloud Computing
Technologies. TUCS General Publication. https://doi.org/10.13140/2.1.1747.8082
Mahmud, A. H., Angga, B. W., Tommy, Marwan, A. E., & Siregar, R. (2018). Performance
analysis of AES-Blowfish hybrid algorithm for security of patient medical record data.
Journal of Physics: Conference Series, 1007(1). https://doi.org/10.1088/1742-
6596/1007/1/012018
Malik, M. I., Wani, S. H., & Rashid, A. (n.d.). CLOUD COMPUTING-TECHNOLOGIES.
International Journal of Advanced Research in Computer Science, 9(2).
https://doi.org/10.26483/ijarcs.v9i2.5760
Masud, M., & Huang, X. (2012). An e-learning system architecture based on cloud computing.
System, 10(11), 255–259.
Mohammed, K. (2019). Data Visualization: Methods, Types, Benefits, and Checklist. Independent
Researcher.
Patil, P., Narayankar, P., Narayan, D. G., & Meena, S. M. (2016). A Comprehensive Evaluation
of Cryptographic Algorithms: DES, 3DES, AES, RSA and Blowfish. Procedia Computer
Science, 78, 617–624. https://doi.org/10.1016/j.procs.2016.02.108
Pocatilu, P., Alecu, F., & Vetrici, M. (2010). Measuring the efficiency of cloud computing for E-
learning systems. WSEAS Transactions on Computers, 9(1), 42–51.
Purwinarko, A., & Hardyanto, W. (2018a). A Hybrid Security Algorithm AES and Blowfish for
Authentication in Mobile Applications. Scientific Journal of Informatics, 5(1), 2407–7658.
http://journal.unnes.ac.id/nju/index.php/sji
Purwinarko, A., & Hardyanto, W. (2018b). A Hybrid Security Algorithm AES and Blowfish for
Authentication in Mobile Applications. Scientific Journal of Informatics, 5(1), 2407–7658.
http://journal.unnes.ac.id/nju/index.php/sji
Sampath, D. M., Ch, U. K., & T, P. (2019). Generating Cipher Text using BLOWFISH Algorithm
for Secured Data Communications. International Journal of Innovative Technology and
Exploring Engineering, 9(2), 117–121. https://doi.org/10.35940/ijitee.a5063.129219
Tawalbeh, L., Darwazeh, N. S., Al-Qassas, R. S., & AlDosari, F. (2015). A secure cloud computing
model based on data classification. Procedia Computer Science, 52(1), 1153–1158.
https://doi.org/10.1016/j.procs.2015.05.150
Ul Haq, M. N., & Kumar, N. (2021). A novel data classification-based scheme for cloud data
security using various cryptographic algorithms. International Review of Applied Sciences
and Engineering, September. https://doi.org/10.1556/1848.2021.00317
