Encryption algorithms play a key role in guaranteeing information security in today's constantly
growing internet and network applications. In this thesis, two symmetric key encryption
techniques, AES and BLOWFISH, are considered; their encryption speed, throughput, and power
consumption were compared in order to see how well they perform. The simulation findings
show that Blowfish outperforms AES, and it has no known security weaknesses. As a result, it can
be considered a good standard encryption technique.
It was observed that the AES algorithm requires more processing resources; it therefore runs slower
than BLOWFISH and achieves worse performance outcomes in the comparison.
However, combining the two methods as a hybrid technique ends up being
advantageous for an encryption system.
Moreover, the symmetric key algorithms AES and BLOWFISH are both incredibly quick and powerful.
With the use of AES's large block size combined with Blowfish to encrypt keys, AES security
becomes significantly stronger and more difficult to attack.
People's lives have been influenced by digital technologies. To meet their storage requirements,
the majority of these digital devices rely on cloud storage. Hundreds of thousands of
photographs, videos, and audio recordings are stored in the cloud, and thousands of people
access this media every second all across the world.
The cloud has evolved into a data storage platform as a result of the widespread acceptance of cloud
computing in our daily lives. Data security is one of the major impediments to cloud
adoption, particularly among business users. Cloud security encompasses aspects such as
technology, control, and a set of policies that help protect sensitive data (James & Girish Tere, 2018).
Although a single strong cipher such as AES may be considered sufficient for data security, there
is still a theoretical concern about trusting AES's static S-Box components. To deal with this
issue, cascading several ciphers in a chain has been suggested to reduce the chances of the secret data
being compromised (James & Girish Tere, 2018).
The main cloud computing components include virtualization, multi-tenancy cloud storage, and a
cloud network, which is why this strategy is recommended (Keijo et al., 2014).
Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service
(IaaS) are the three service models used by cloud service providers to deliver cloud services to
their customers. There are four different cloud deployment models (public,
private, hybrid, and community) that can be used to satisfy the needs of cloud users (James &
Girish Tere, 2018).
Cloud security issues are classified into the following categories: computing security, data storage,
virtualization security, internet services-related security, network security, access control,
software security, trust management, and legal security and compliance issues.
A list of criteria for cloud security is provided by the National Institute of Standards and
Technology. The Cloud Security Alliance (CSA) has stressed the same point in its published
cloud security guidance, i.e., authentication, confidentiality, integrity, and
availability (Keijo et al., 2014).
1.1 Statement of the Problem
Aside from all the services offered by cloud computing, it must also handle the various threats and
attacks that could have a direct or indirect influence on a system.
Data integrity, source availability, and confidentiality of the cloud infrastructure as a whole, as
well as specific layers and their services, are all covered by cloud security. Another aspect to
consider is that encryption and decryption are resource-intensive activities that consume a
significant amount of CPU, memory, and time.
Another consideration is that not all data is created equal. While certain data must be kept
private, others, such as material in public training manuals, advertising media, and so on, can be
left unencrypted without causing end-users any problems. In terms of the level of protection
required, the remaining data is also not uniform.
It can be split and encrypted further using several levels of encryption, with greater encryption
designated for more sensitive information.
1.2 Aim of the Study
• Combining the AES and Blowfish encryption and decryption techniques to improve cloud
data security.
• In the hybrid approach, AES-256 will be used as the first layer (representing the first group
of encryptions), followed by Blowfish as the second layer (representing the second group of
encryptions).
After the execution times of all the approaches are calculated, each algorithm is combined with
AES-256 and the execution times of all the encryption and decryption operations are measured.
All of these results are then compared against AES-256 to see which approach has the
quickest execution time.
AES 256 and Blowfish have been allocated to Level 3 in the proposed scheme. AES
encryption and decryption, as well as AES-Blowfish cascade encryption and decryption, are
discussed because AES 256 and Blowfish have the fastest execution times.
The goal of this thesis is to address two concerns that users face when using cloud computing
services. The first is users' fear of internal and external hacking threats. The other is that
encrypting all data without considering its level of confidentiality is impractical.
The following questions must be answered in order for our research to be relevant to our goals.
• Despite all of the advantages that cloud storage services provide, what are their restrictions
and how do they affect users?
• Are the AES-256 and Blowfish ciphers suitable for user protection in the hybrid
approach?
• Which of the AES-256 and Blowfish ciphers is the more efficient?
Based on the above-mentioned research aims, the following hypothesis was developed.
H1: The hybrid strategy for the AES-256 and Blowfish ciphers proves to be more accurate than
anticipated.
H2: The Blowfish cipher is more efficient than the AES-256 cipher.
H3: The hybrid strategy was not the best option.
1.5 Dissertation Outline
This research is divided into five sections, which are followed by references and an appendix
section.
Chapter 1: The study's background, research topic, objective and goals, research questions, and
limitations were all discussed.
Chapter 2: A literature review is provided, as well as a theoretical basis, an empirical
assessment, a discussion of research gaps, and a description of the methodological approach.
Chapter 3: Research technique was discussed, including research design, sample design, target
population, and sample selection justification, data collection tools, questionnaires, the validity
and reliability of the research instrument, data interpretation, and ethical considerations.
Chapter 4: The study's findings, interpretation of the findings, conclusions, and controversy
were all discussed.
Chapter 5: The study's findings and recommendations were summarized.
CHAPTER 2
LITERATURE REVIEW
A crucial distinguishing feature of a thriving information technology (IT) sector is its capacity to
contribute to cyber infrastructure in a truthful, valuable, and cost-effective manner. Cloud
computing is a broad term for a type of network-based computing in which a program or
application runs on a linked server or servers rather than on a local computing device like a PC,
tablet, or smartphone. Cloud computing is a distributed architecture for providing on-demand
computer resources and services by centralizing server resources on a scalable platform.
It is a pool of configurable computing resources that everyone can use. Cloud computing allows
customers to access services on demand (Keijo et al., 2014).
For decades, businesses have tried to preserve and safeguard data in order to protect their clients'
personal information. Cloud computing was created by businesses as a way to give secure data
storage and processing capacity to businesses and consumers. Cloud storage is used by a wide
range of businesses.
Cloud computing, or just computing, is a type of dynamic application and storage that makes use
of Internet technologies. On-demand self-service, broad network access, resource pooling, quick
elasticity, and measured service are the five major characteristics of cloud computing.
Cloud computing also encompasses three key service types: Infrastructure as a Service, Platform
as a Service, and Software as a Service. Furthermore, cloud computing can be used in four
different ways: public cloud, private cloud, community cloud, and hybrid cloud (Purwinarko &
Hardyanto, 2018).
The benefits of cloud computing include increased processing power, storage, flexibility,
scalability, and lower IT infrastructure overhead costs. Startups have been able to benefit from
migrating to the cloud by redirecting capital spending into operational spending, making cloud
computing appealing when IT resources are being cut (Purwinarko & Hardyanto, 2018).
The smallest businesses are the most likely to use cloud computing, while medium-sized
businesses have lower rates, and organizations with fewer than one hundred employees have the lowest
rates (Malik et al., 2016). Larger companies have ample processing power in-house. Cloud
computing, on the other hand, has several drawbacks, such as the need for Internet access, speed,
and direct access to resources. As a result, businesses may find it dangerous to rely solely on
cloud computing service providers. Any disruption in cloud services might be disastrous for
businesses. Clouds are transparent to users and apps, which implies there are no barriers or
obstacles to using the cloud from either side (Malik et al, 2016).
The following are the key benefits: cost savings, security, privacy, and dependability.
Stakeholders believe that in the future, these challenges with cloud computing adoption will be
minimized or removed. The client connects to the server to obtain data, much like in a traditional
client-server network model. The main distinction in cloud computing is that it can run in
parallel or provide data to several users at the same time, thanks to the concept of virtualization
(Mohammed, 2019).
Virtualization is an abstraction of an execution environment that may be made dynamically
available to authorized clients via the use of well-defined protocols, resource quotas (e.g. CPU,
memory share), and software configuration (e.g. operating system, offered services). End users
and operators benefit from "granular" computing resources, which include on-demand self-
service, broad access across various devices, resource pooling, quick flexibility, and service
metering capability (Keijo et al., 2014).
2.1 The technology's strategy
Collaboration between IT personnel and executives is essential for a unified cloud strategy that
leads to good business outcomes. Cloud computing cannot work as a silo within IT, just as IT
cannot function as a silo within the business. The proprietary technology of other cloud providers
requires that you adapt your IT architecture to their capabilities. A long-term plan necessitates an
open infrastructure that can adapt to changing business and IT goals. Dell, for example, focuses
on business and cultural expectations while building and implementing an open cloud
strategy (Dell, 2019).
The Dell Technologies Cloud is a good example because it is meant to make operating hybrid cloud
systems simpler. Dell Technologies Cloud offers a standardized set of tools and a broad variety
of IT and management choices with tight integration and a single-vendor experience thanks to
the infrastructure expertise of Dell Technologies.
IT organizations can easily manage different forms of cloud computing with Dell Technologies
Cloud by depending on:
• Consistent infrastructure that makes workload migrations easier and prevents application
rework tax.
• Uniform operations that reduce expenses by eradicating silos and using technologies that
provide consistent control over all cloud resources.
• Reliable services with flexible, consumption-based pricing that provide knowledgeable
assistance in developing and implementing a successful cloud strategy.
In Blowfish, the data is encrypted using a 16-round Feistel network. Every round includes a key-dependent
permutation as well as a key- and data-dependent substitution. All operations are XORs and additions
on 32-bit words; the only additional operations are four indexed array data lookups per round.
A Feistel network can turn any function into a permutation and underlies several block cipher
designs. Blowfish's key expansion (subkey generation) proceeds as follows:
1. Initialize the P-array first and then the four S-boxes, in order, with a fixed string. This
string consists of the hexadecimal digits of pi (less the initial 3). For example, P1 = 0x243f6a88,
P2 = 0x85a308d3, P3 = 0x13198a2e, and P4 = 0x03707344.
2. XOR P1 with the first 32 bits of the key, XOR P2 with the second 32 bits of the key, and
so on for all key bits (possibly up to P14). Cycle through the key bits repeatedly until the
entire P-array has been XORed with key bits. (For every short key there is at least one
equivalent longer key; for example, if A is a 64-bit key, then AA, AAA, and so on are equivalent keys.)
3. Encrypt the all-zero string with the Blowfish algorithm, using the subkeys described in
steps (1) and (2).
4. Replace P1 and P2 with the output of step (3).
5. Encrypt the output of step (3) with the Blowfish algorithm using the modified subkeys.
6. Replace P3 and P4 with the output of step (5).
7. Continue the process, replacing all elements of the P-array, and then all four S-boxes in
order, with the output of the continuously changing Blowfish algorithm.
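Purely for illustration, the Java sketch below shows step (2) of this schedule, i.e., XORing the P-array with the key bits while cycling over the key. The class name and example key are assumptions, only the first four pi-derived constants from the text are filled in, and steps (3) to (7) involving the S-boxes and repeated encryption are omitted.

// Minimal sketch of step (2): XOR the P-array with the key, cycling over the key bytes.
public class BlowfishKeyMix {
    static final int[] P = new int[18];
    static {
        P[0] = 0x243f6a88; P[1] = 0x85a308d3;
        P[2] = 0x13198a2e; P[3] = 0x03707344;
        // P[4]..P[17] come from further hexadecimal digits of pi (omitted here)
    }

    static void mixKey(byte[] key) {
        int k = 0;
        for (int i = 0; i < P.length; i++) {
            int word = 0;
            for (int j = 0; j < 4; j++) {            // assemble a 32-bit word from the key
                word = (word << 8) | (key[k] & 0xff);
                k = (k + 1) % key.length;            // wrap around, cycling over short keys
            }
            P[i] ^= word;                            // XOR the word into the P-array entry
        }
    }

    public static void main(String[] args) {
        mixKey("an example key".getBytes());
        System.out.printf("P[0] after key mixing: %08x%n", P[0]);
    }
}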
2.4.3 RSA
RSA is a public key cryptosystem that was created in 1977 by Rivest, Shamir, and Adleman; it is an
asymmetric cryptographic algorithm. It produces two keys: a public key for
encryption and a private key for message decryption. The RSA algorithm consists of three steps: step
one is key generation, which produces the keys used to encrypt and decrypt data; step two is
encryption, where the actual conversion of plaintext to ciphertext is carried out; and the
third step is decryption, where the encrypted text is converted back into plaintext on the other side. The RSA
algorithm is based on the difficulty of factoring the product of two large prime integers. The key size
ranges from 1024 to 4096 bits (Patil et al., 2016).
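As a hedged illustration of these three steps, the following sketch uses the standard Java Cryptography Architecture; the 2048-bit key size, the class name, and the sample message are arbitrary choices for the example, not values taken from this thesis.

import javax.crypto.Cipher;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class RsaDemo {
    public static void main(String[] args) throws Exception {
        // Step 1: key generation (public/private key pair)
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // Step 2: encryption with the public key
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = cipher.doFinal("hello cloud".getBytes());

        // Step 3: decryption with the private key on the other side
        cipher.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext));
    }
}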
2.4.4 AES
The Advanced Encryption Standard (AES) algorithm is a symmetric key block cipher that was
introduced in 1998. The AES algorithm supports key lengths of 128, 192, or 256 bits with a fixed
block length of 128 bits. This 128-bit data block may be divided into
four main working blocks. These blocks are structured as a matrix of order 4x4, which is
also known as the state, and are subject to rounds in which different transformations are performed.
The number of rounds used for complete encryption varies: N = 10, 12, or 14 for key lengths
of 128, 192, and 256 bits, respectively. Each round of AES employs a substitution-permutation
network, and AES may be implemented in both hardware and software (Patil et al., 2016).
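A minimal sketch of AES through the standard javax.crypto API, assuming CBC mode with PKCS5 padding; the sample plaintext and class name are illustrative only, and the 256-bit key corresponds to the 14-round variant mentioned above.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;

public class AesDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);                                  // 256-bit key, i.e. the 14-round variant
        SecretKey key = kg.generateKey();

        byte[] ivBytes = new byte[16];                 // one 128-bit block
        new SecureRandom().nextBytes(ivBytes);
        IvParameterSpec iv = new IvParameterSpec(ivBytes);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, iv);
        byte[] ciphertext = cipher.doFinal("sensitive cloud data".getBytes());

        cipher.init(Cipher.DECRYPT_MODE, key, iv);     // same key and IV recover the plaintext
        System.out.println(new String(cipher.doFinal(ciphertext)));
    }
}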
This chapter is devoted to outlining the approach and limitations of this thesis' design.
The data in cloud computing systems is stored on remote servers that may be accessed via the
internet. The growing volume of personal and essential data necessitates a greater attention on
data security. Financial transactions, significant paperwork, or, in the case of this thesis, school data
content are all examples of such data. Cloud computing services can help reduce dependency on local
storage while also lowering operational and maintenance costs. However, due to the possibility
of unwanted access within the service providers, users still have substantial security and privacy
concerns regarding their outsourced data (Tawalbeh et al., 2015).
Existing systems encrypt all data with the same key size, regardless of data confidentiality,
which increases the cost and processing time (Dowling, 2010; Ivan et al., 2013). In this study,
a secure cloud computing model based on data classification over the security system
for dynamic groups in the cloud is proposed. The suggested cloud architecture reduces the overhead and
processing time necessary to secure data by using several security techniques with variable
key sizes to give the data the appropriate level of confidentiality. The suggested model will
be tested using several encryption algorithms so that the simulation results can demonstrate its reliability and
efficiency.
The proposed model contains the following features:
1. Data Security
The provider must ensure that data sent to the cloud is secure, and they must take security
precautions to protect their data in the cloud.
2. Privacy
The supplier must ensure that all sensitive information is encrypted and that only authorized
individuals have complete access to the data. Any data that the provider gathers regarding
customer behavior in the cloud, including passwords and digital identities, must be safe.
3. Data confidentiality
Cloud customers want to ensure that their data is not made public or given to unauthorized
individuals. Only authorized users should have access to sensitive data, and others should not
have access to any data in the cloud.
4. Fine-grained access control
Unauthorized users cannot access data that has been outsourced to the cloud. A set of users is
granted varying access privileges to the data by the data owner, while others are not permitted to
view the data without permission. In untrusted cloud settings, the access permission should be
managed solely by the owner.
5. User revocation
When a user's access is revoked, that user should no longer be able to access the data after the time
specified. The revocation of a user must not affect the group's other authorized users.
6. Scalable and Efficient
Because the number of Cloud users is so large, and people join and depart at random times, it's
critical that the system maintains efficiency and scalability. In order to share data effectively in a
cloud computing environment, all security standards must be met.
3.1 The system functionality
The information is divided into three groups (level 1, level 2, level 3) per categorization. The
user should be able to intuitively determine which data belongs to which category.
Adoption of a classification system for data saved in the cloud is one of the methods proposed to
address the problem of cloud data security (Dowling, 2010). Depending on the classification, the
data may be stored without client-side encryption or encrypted using one of two encryption
methods (Masud & Huang, 2012). The system is divided into three parts, as Fig. 1 shows.
Figure 1: Schematic diagram of the architecture.
The proposed Architecture is based on three stages: client-side encryption, cloud storage, and
client-side decryption.
3.2 Client-side encryption
Based on the relative significance of the data, it can be further divided into three groups, and
then the encryption scheme is applied based on the data category. The end-user chooses the data
category on the client side (i.e., level 1, level 2, level 3).
Any encryption that is applied to data before it is transferred from a user device to a server is
known as client-side encryption. End-to-end encryption may thus be thought of as a particular
use of client-side encryption for the purpose of transmitting communications.
It provides clients peace of mind by ensuring that their data is secure before it leaves their
devices or networks, as well as ensuring that cloud providers (or other third parties) are unable to
access the encrypted data.
Figure 2: The proposed encryption scheme's flowchart (data categories Level 1, Level 2, and Level 3 within a simple app in the cloud environment).
Triple DES (3DES) is a block cipher that runs 48 rounds (three times DES) and
employs a key that is 168 bits long; it uses a block size of 64 bits during
encryption. AES uses a substitution-permutation network instead of the Feistel networks used by
many other ciphers. To efficiently disperse plaintext information across the ciphertext, the
substitution-permutation strategy uses substitution and permutation boxes. The method in this work is
based on the Blowfish cipher, whose major design purpose was a completely
open structure with the added benefit of dynamic S-Boxes (Purwinarko & Hardyanto, 2018a).
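For reference, a small sketch of Triple DES through the standard Java API, illustrating the 64-bit block size and 168-bit key mentioned above; the class name and sample data are illustrative, and "DESede" is the JCA name for Triple DES.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;

public class TripleDesDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("DESede");
        kg.init(168);                                  // 168-bit Triple DES key
        SecretKey key = kg.generateKey();

        byte[] ivBytes = new byte[8];                  // 64-bit block size, so a 64-bit IV
        new SecureRandom().nextBytes(ivBytes);

        Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(ivBytes));
        byte[] ct = cipher.doFinal("sample data".getBytes());
        System.out.println("ciphertext length: " + ct.length);
    }
}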
An S-Box, also known as a substitution box, is a fundamental component of symmetric key algorithms
that performs substitution. S-Boxes are usually employed in block ciphers to hide the relationship
between the key and the ciphertext, ensuring Shannon's property of confusion.
In Blowfish, these S-Boxes are key-dependent, rather than being defined statically by the
implementation provider.
This protects the S-Box and is the algorithm's fundamental driving force. Our system's two ciphers
are block ciphers, which means they only work on a single block of data at a time. The block
cipher mode of operation used in our method is the cipher block chaining (CBC) mode. In this
mode, the first block of data is XORed with the Initialization Vector (IV) before encryption. The IV is a
block of random data. After the first block is XORed with the IV, the output
is encrypted with the appropriate cipher. The result of the encryption is XORed with the next
block to be encrypted. Decryption in CBC mode is the inverse of encryption, with the XOR
operation coming after the decryption: each ciphertext block is decrypted and then XORed with the
ciphertext block that precedes it (Mahmud et al., 2018).
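To make the chaining concrete, the sketch below builds CBC by hand on top of a raw AES block operation; "AES/ECB/NoPadding" is used here only as the underlying block primitive, and in practice one simply requests the CBC transformation directly, as elsewhere in this chapter. The class name, key size, and the two zero-filled example blocks are illustrative assumptions.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.SecureRandom;
import java.util.HexFormat;

public class CbcByHand {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        Cipher block = Cipher.getInstance("AES/ECB/NoPadding"); // raw block primitive only
        block.init(Cipher.ENCRYPT_MODE, key);

        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);                       // random IV, XORed with the first block

        byte[][] plaintext = { new byte[16], new byte[16] };    // two example 128-bit blocks
        byte[] previous = iv;
        HexFormat hex = HexFormat.of();
        for (byte[] p : plaintext) {
            byte[] x = new byte[16];
            for (int i = 0; i < 16; i++) {
                x[i] = (byte) (p[i] ^ previous[i]);             // XOR with IV or previous ciphertext
            }
            previous = block.doFinal(x);                        // encrypt; result feeds the next block
            System.out.println("ciphertext block: " + hex.formatHex(previous));
        }
    }
}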
Client-side cryptography is the focus of this research because the AES-256 and
Blowfish cascade ciphers are among the safest and fastest available. There has not yet been a single
practical attack on either of these algorithms, because they are both very secure and fast. According to the
literature review, the alternative algorithms are less secure and slower. Although some academics say that
AES-256 is vulnerable when used alone (C. H & Gunalan, 2015; Pocatilu et al., 2010), the bulk
of scholars claim that the algorithm is now 99 percent secure.
3.4.1 The proposed algorithm
In this section, the encryption and decryption algorithms with AES-256 and Blowfish are explained
in detail.
1. AES-256 encryption
Symmetric key encryption, as used by AES-256, requires just one secret key to encipher and decipher
data. It works as follows:
1. Key generation.
2. Pad the input file to make it a multiple of the AES block size.
3. Generate an IV for CBC mode.
4. Write the IV at the start of the output file.
5. Read a block from the file.
6. Encrypt the block that was read in step 5.
7. Append the encrypted block from step 6 to the result file.
8. Return to step 5 while more blocks are available.
In this study, key generation is conducted using the object of a class that generates a random byte
stream of a specified size and then passes it through SHA-256 to obtain a usable 256-bit key.
Because the cipher uses the CBC (cipher block chaining) mode of operation, the padding of the
input and the generation of the IV are the next phases. To use it during decryption, the IV is saved
with the output file as the head of the output file. Then a loop is started, which removes and
encrypts one block of data from the padded input file at a time. The loop will run until there are
no more blocks to read from the input file.
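A compact sketch of this flow using the standard Java crypto and I/O classes; the file names, buffer size, and the delegation of padding to PKCS5Padding (instead of a separate manual padding step) are assumptions made for illustration, not the exact implementation used in this thesis.

import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class Aes256FileEncrypt {
    public static void main(String[] args) throws Exception {
        // Key generation: random bytes hashed with SHA-256 to obtain a 256-bit key.
        byte[] seed = new byte[32];
        new SecureRandom().nextBytes(seed);
        byte[] keyBytes = MessageDigest.getInstance("SHA-256").digest(seed);
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        // IV for CBC mode, written at the head of the output file for later decryption.
        byte[] ivBytes = new byte[16];
        new SecureRandom().nextBytes(ivBytes);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(ivBytes));

        try (FileInputStream in = new FileInputStream("input.dat");
             FileOutputStream out = new FileOutputStream("output.enc");
             CipherOutputStream enc = new CipherOutputStream(out, cipher)) {
            out.write(ivBytes);                       // IV at the start of the output file
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {     // read and encrypt block by block
                enc.write(buffer, 0, n);
            }
        }
    }
}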
2. AES-256 decryption
The same symmetric key used for encryption is used by AES decryption, which preloads it in order to
decrypt data.
1. Load the pre-stored key.
2. Read the first block of the encrypted file to obtain the IV.
3. Read a block from the file.
4. Decrypt the block that was read in step 3.
5. Write the output of step 4 to the output file.
6. Return to step 3 while blocks remain in the input file.
7. Unpad the output file to get the original file.
The key generated during the AES-256 encryption procedure must be loaded during the
decryption step. The IV is then recovered by examining the encrypted file's first block. Then a
loop begins to operate, simulating the functioning of the encryption loop. The loop reads and
decrypts successive blocks of data from the encrypted file before writing them to an output file.
The resulting data is unpadded after the decryption procedure is completed.
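A matching sketch of the decryption loop, with the same caveats as the encryption sketch above: file names are illustrative, loadKey() is a stand-in for however the pre-stored key is retrieved, and unpadding is delegated to PKCS5Padding.

import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.FileInputStream;
import java.io.FileOutputStream;

public class Aes256FileDecrypt {
    public static void main(String[] args) throws Exception {
        byte[] keyBytes = loadKey();                   // pre-stored 256-bit key (placeholder)
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        try (FileInputStream in = new FileInputStream("output.enc");
             FileOutputStream out = new FileOutputStream("restored.dat")) {
            byte[] ivBytes = in.readNBytes(16);        // first block of the file is the IV
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(ivBytes));

            try (CipherInputStream dec = new CipherInputStream(in, cipher)) {
                byte[] buffer = new byte[4096];
                int n;
                while ((n = dec.read(buffer)) != -1) { // decrypt block by block, unpadding at the end
                    out.write(buffer, 0, n);
                }
            }
        }
    }

    static byte[] loadKey() {
        return new byte[32];                           // stand-in for the key stored during encryption
    }
}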
3. Blowfish encryption
Blowfish is a symmetric encryption method like AES 256, which means that it uses the same
secret key for both message encryption and decryption.
Here are the steps:
• Produce a Blowfish key.
• Pad the input data to make it a multiple of the block size.
• Make a vector for the initialization.
• S-Boxes should be generated.
• Make a Subkeys array.
• CBC mode is used to encrypt blocks one by one.
• To save the output, save it to the output file.
Because sharing the key between the two ciphers in the cascade mode of operation would defeat
the entire purpose of utilizing a cascade cipher, the Blowfish key is created independently from
the AES-256 encryption. The CBC mode is being used by Blowfish. It's necessary to make sure
the data being encrypted is a multiple of the blowfish cipher's block size, which is 64 bits. The
next step is to pad the input data to the nearest multiple of 64 bits after the key is generated. The PKCS#5
padding scheme has been utilized.
The IV is created using the class's object. The S-boxes were created automatically as part of the
blowfish operation process. A subkey array of 18 subkeys is likewise constructed in the same
way. Finally, the data can be encrypted block by block using the following statement:
IvParameterSpec iv = new IvParameterSpec(IV.getBytes(StandardCharsets.UTF_8));
Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding");
SecretKeySpec secretKey = new SecretKeySpec(KeyData, "Blowfish");
cipher.init(Cipher.ENCRYPT_MODE, secretKey, iv);
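Once initialized, the cipher processes the padded input block by block and the result is saved to the output file. A brief continuation in the same style, where paddedInput and outputStream are assumed names rather than identifiers from this implementation:

byte[] cipherText = cipher.doFinal(paddedInput);   // encrypt the padded input in CBC mode
outputStream.write(cipherText);                    // save the result to the output file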
4. Blowfish decryption
The same symmetric key encryption is used by Blowfish Decryption, which preloads it in order
to decrypt data.
• Load the IV and the key.
• The input file should be read.
• Under CBC mode, do block-by-block decryption.
• Remove the padding from the output.
• To the output file, write the unpadded output.
The counterpart of the blowfish encryption method is the blowfish decryption procedure.
Because the encryption was done in CBC mode, the decryption must also be done in CBC mode.
The original IV is required for this. Unlike AES, where the IV was stored with the encrypted file,
the IV was not stored with the encrypted file in the case of blowfish. It is kept with the user,
much like the key. The loading of both the IV and the key is the first step in the decryption
process. They're supplied as arguments to the CBC mode decryption command as follows:
IvParameterSpec iv = new IvParameterSpec(IV.getBytes(StandardCharsets.UTF_8));
Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding");
SecretKeySpec secretKey = new SecretKeySpec(KeyData, "Blowfish");
cipher.init(Cipher.DECRYPT_MODE, secretKey, iv);
The Blowfish decryption process revolves around this statement, which is provided by the Java
cryptography package. The key was loaded into the cipher object in the previous step. The padding
added during the encryption must still be removed from the output obtained after decryption.
The PKCS#5 padding that was introduced during the encryption phase is removed by the final
statement, illustrated below, which recovers the original data; the data is subsequently written to a file and
used.
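As a minimal illustration of that final statement (the variable name is assumed), doFinal both decrypts and strips the PKCS#5 padding when the transformation string ends in PKCS5Padding:

byte[] plainText = cipher.doFinal(cipherTextBytes);   // decrypts and removes the padding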
3.4.2 Cipher block chaining (CBC)
Feedback is used in Cipher Block Chaining (CBC) to feed the result of encryption into the
encryption of the following block. Before being encrypted, the plain-text is XORed with the
preceding cipher-text block. Each block's encryption is based on the previous blocks. This
necessitates the decryption side sequentially processing all encrypted blocks.
This option necessitates the use of a random initialization vector that is XORed with the first data
block before encryption. It is not necessary to keep the initialization vector a secret.
To guarantee that each communication is encrypted uniquely, the initialization vector should be a
random number (or a serial number). When there is an error in an encrypted block (due to a
transmission failure, for example), the block containing the error becomes utterly distorted.
Bit mistakes will appear in the following block at the same locations as the initial erroneous
block.
The mistake will not impact the blocks that follow the second block. As a result, CBC is capable
of self-recovery. While CBC quickly recovers from bit mistakes, it does not recover from
synchronization faults at all.
All succeeding blocks are mangled if a bit is added or removed from the cipher-text stream. As a
result, a CBC system must ensure that the block structure is preserved. Like the ECB mode, CBC
requires a whole block on its input before encryption can take place.
3.4.3 Electronic code book
The electronic codebook mode (ECB) is the simplest block cipher mode of operation: each block is
encrypted independently with the same key. When analysts discover that the plaintext block "8d226acd"
encrypts to the ciphertext block "1c7ed351", they can decrypt that ciphertext block whenever it appears
in a transmission. This vulnerability is greatest at the start and conclusion of communications, where
well-defined headers and footers contain information about the sender, receiver, date, and other
pertinent details.
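The codebook weakness can be demonstrated directly: encrypting two identical plaintext blocks under ECB yields two identical ciphertext blocks. A small sketch, with an assumed class name, key, and zero-filled sample data:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.HexFormat;

public class EcbLeakDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        Cipher ecb = Cipher.getInstance("AES/ECB/NoPadding");
        ecb.init(Cipher.ENCRYPT_MODE, key);

        byte[] twoEqualBlocks = new byte[32];          // two identical 16-byte plaintext blocks
        byte[] ct = ecb.doFinal(twoEqualBlocks);

        HexFormat hex = HexFormat.of();
        System.out.println(hex.formatHex(ct, 0, 16));  // both ciphertext blocks are identical,
        System.out.println(hex.formatHex(ct, 16, 32)); // which is exactly the codebook leak described above
    }
}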
CHAPTER 4
ALGORITHM FLOW AND SIMULATION RESULTS
The AES algorithm stipulates that key sizes of 128, 192, and 256 bits be supported. The
state is implemented as a two-dimensional array with four rows of bytes. Each row has Ni
bytes, where Ni is the block length divided by 32; this makes the algorithm exceedingly quick on 32-bit
microprocessors.
The array is represented by the letter A, and each byte has two indices: a row number Ri in the
range 0 <= Ri < 4 and a column number Ci in the range 0 <= Ci < Ni. This allows each individual byte
to be designated as A[Ri, Ci]. Because AES defines Ni = 4, the range for Ci, the state's column
number, is 0 <= Ci < 4. The number of rounds in the AES algorithm is determined by the key
size.
Ng represents the number of rounds, with Ng = 10 when Nk = 4, Ng=12 when Nk = 6, and
Ng=14 when Nk = 8.
            Key Length    Block Size    Number of Rounds
            (Nk words)    (Nb words)         (Ng)
AES 128         4             4              10
AES 192         6             4              12
AES 256         8             4              14
Each byte is interpreted as a polynomial with coefficients in {0, 1}:
b7x^7 + b6x^6 + b5x^5 + b4x^4 + b3x^3 + b2x^2 + b1x + b0 = Σ bi x^i (Equation 1)
For example, the byte {57} (binary 0101 0111) corresponds to the polynomial x^6 + x^4 + x^2 + x + 1.
BLOWFISH is the other algorithm employed in this study. Compared with the AES algorithm, it
is a good way to keep data safe from attackers. The data is secured using a variable key
length of 32 to 448 bits. The BLOWFISH algorithm is employed on communication links and for file
encryption when the key does not change regularly. On 32-bit microprocessors, the
BLOWFISH algorithm is an extremely fast encryption technique.
The data is encrypted using a 16-round Feistel network (explained in Chapter 2). Every round
includes a key-dependent permutation as well as a key- and data-dependent substitution. All operations
are XORs and additions on 32-bit words; the only additional operations are four indexed array data
lookups per round.
Because the primary goal of this study is to assess the algorithms' performance, these algorithms
were written in a standardized programming language. They were written in Java (18.0.1)
and tested on a MacBook Pro laptop running the macOS 12.4 operating system. The
throughput of an encryption technique is calculated from the encryption time and denotes the
encryption speed. The encryption scheme's throughput is estimated by dividing the total
plaintext size (in megabytes) by the total encryption time.
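As an illustration of how such a measurement can be taken, the sketch below times a single AES encryption of an in-memory buffer and derives the throughput; the 5 MB sample size, the AES/ECB choice, and the class name are assumptions, not the exact harness used in this study.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.SecureRandom;

public class ThroughputBench {
    public static void main(String[] args) throws Exception {
        byte[] data = new byte[5 * 1024 * 1024];          // 5 MB sample block
        new SecureRandom().nextBytes(data);

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        long start = System.nanoTime();
        cipher.doFinal(data);                             // encrypt the whole buffer once
        double seconds = (System.nanoTime() - start) / 1e9;

        double megabytes = data.length / (1024.0 * 1024.0);
        System.out.printf("time: %.3f ms, throughput: %.2f MB/s%n",
                seconds * 1000, megabytes / seconds);     // throughput = plaintext size / time
    }
}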
Table 4.2 Comparative Encryption and Decryption Algorithm Execution Times (in
Milliseconds) for Different Packet Sizes based on the overall encryption time for each method in
megabytes encrypted.
The power consumption of this encryption technology decreases as the throughput value
increases. The methods were assessed in terms of the time necessary to encrypt and decrypt data
blocks of various sizes (0.5 MB to 20 MB). All of the implementations were carried out meticulously
in order to ensure that the outcomes would be fair and accurate.
At the encryption and decryption stages, the simulation results for this comparison point are
displayed in Fig. 4.3 & 4.4 and Table 1. In terms of processing time, the results suggest that the
Blowfish method outperforms other algorithms.
In terms of processing time, the results reveal that the Blowfish method outperforms the AES
algorithms. It also demonstrates that AES uses more resources when data blocks are reasonably
large.
3.6.2 Cipher block chaining mode (CBC)
Figure 4.6 The CBC mode results.
Because of the key-chaining structure of CBC, it takes longer to process than ECB. The findings
shown in Fig.4.5 further show that the additional time is not substantial for many applications,
given that CBC is far superior to ECB in terms of protection.
Although the difference between the two modes is difficult to notice with the human eye, the
findings revealed that the average difference between ECB and CBC is just 0.059896 seconds,
which is a modest difference.