Article

SM2-Based Offline/Online Efficient Data Integrity Verification Scheme for Multiple Application Scenarios

by Xiuguang Li, Zhengge Yi, Ruifeng Li, Xu-An Wang, Hui Li and Xiaoyuan Yang
1 State Key Laboratory of Integrated Service Networks, Xidian University, Xi’an 710126, China
2 Cryptographic Engineering College, Chinese People’s Armed Police Force Engineering University, Xi’an 710086, China
3 Key Lab of the Armed Police Force for Network and Information Security, Xi’an 710086, China
* Author to whom correspondence should be addressed.
Submission received: 14 March 2023 / Revised: 17 April 2023 / Accepted: 23 April 2023 / Published: 26 April 2023
(This article belongs to the Special Issue Security in IoT Environments)

Abstract

With the rapid development of cloud storage and cloud computing technology, users tend to store data in the cloud for more convenient services. In order to ensure the integrity of cloud data, scholars have proposed cloud data integrity verification schemes to protect users' data security. Storage environments such as IoT big data and medical big data place an even stronger demand on data integrity verification schemes and, at the same time, require their functionality to be more comprehensive. Existing data integrity verification schemes are mostly designed for general cloud storage and cannot be applied directly to the storage of IoT big data or medical big data. To solve this problem, we combined the characteristics and requirements of IoT data storage and medical data storage and designed an SM2-based offline/online efficient data integrity verification scheme. The scheme uses the SM4 block cipher algorithm to protect the privacy of the data content and uses a dynamic hash table to realize the dynamic updating of data. Based on the SM2 signature algorithm, the scheme can also realize offline tag generation and batch audits, reducing the computational burden on users. The security proof and efficiency analysis show that the scheme is secure and efficient and can be used in a variety of application scenarios.

1. Introduction

Cloud storage technology is convenient and flexible, and its use is growing rapidly both in China and abroad [1]. Big data from Internet of Things (IoT) devices and medical big data also rely on cloud storage technology to provide services. However, after users have stored data in the cloud, although they gain access to convenient storage and management services, they also lose the power to control the data directly. Therefore, ensuring data integrity in the cloud has become a hot research topic for scholars [2]. Data integrity verification technology uses cryptographic techniques to design schemes that convince users that their data, as stored on the cloud server, are secure and complete, by means of a series of interactions between the auditor and the cloud server. Using this technique can effectively deter cloud service providers (CSPs) from deliberately concealing data loss or corruption from users out of fear of damaging their reputations. It also effectively stops users from making unreasonable accusations or claims against CSPs simply out of suspicion, thus protecting the legitimate rights of both users and CSPs [3].
IoT devices have been widely used and have become a convenient and universal access terminal for Internet services. However, IoT devices have limited storage space and weak computing power, which cannot support complex data computing and big data storage [4]. Therefore, the cloud, with its powerful computing potential and storage capacity, is generally used to expand the functions of IoT devices so that they can obtain massive data storage and strong data analysis capabilities. As one of the main service areas of cloud computing, the cloud storage mode for IoT device data allows IoT users to store their data in the cloud to compensate for the lack of storage space on IoT devices [5]. However, with the loss of physical ownership and control of the outsourced data, IoT device users are concerned about the integrity of their data. Therefore, it is necessary to conduct integrity audits on the data of IoT devices that use the cloud storage mode. Compared with general cloud audit schemes, the design of data audit schemes for the IoT has higher requirements [6]. First, public verification is required. IoT devices are often resource-constrained and cannot support complex calculations; therefore, audit schemes require a third-party auditor (TPA) to be able to verify data integrity on behalf of the users. Second, privacy protection is needed. Data privacy protection is the most important problem in the IoT's cloud auditing scheme. Throughout the execution of the scheme, the contents of the challenged file should remain hidden from the TPA. Many IoT-embedded devices generate a large quantity of personal and private data. If this sensitive information is exposed during the integrity verification process, the privacy of IoT users may be disclosed to the integrity verifier or to the public. Third, the scheme is required to be lightweight. The computing capacity, storage capacity, bandwidth, and other resources of IoT devices are often greatly limited; thus, audit schemes with lower computing costs are more suitable for the IoT. Fourth, batch audits are required. There are many types and numbers of IoT devices, so the audit scheme must support batch audits for multiple users to quickly verify the integrity of the massive amount of IoT data.
The sources of healthcare-derived big data mainly include clinical big data generated during patients’ medical treatment, health-related big data generated by wearable human health-monitoring devices, and biological big data generated by life sciences research and medical institutions. However, despite the large amount of data stored in medical databases, it is still not easy to comprehensively record information on all diseases. Since electronic medical records are not fully available, a large amount of data comes from manual records. Biases and incomplete content arising from the recording process, uncertainty in textual expressions, and incomplete data storage are the root causes of incomplete medical and health big data, so it is crucial to audit the integrity of medical data [7]. In addition, the integrity audit scheme of medical data needs to achieve privacy protection. Detailed personal information and the health status of patients are often directly recorded in healthcare big data, and these sensitive forms of data require greater privacy protection. Finally, dynamic update capabilities are also necessary. Patients’ consultation and onset times involve process changes, while the waveform and image data of the medical examination are time series. The patient’s health status is not static but is always in a state of dynamic change.
Motivation. We believe that it is essential and urgent to design a data integrity verification scheme that can be better applied to the cloud storage environment of IoT data and medical data, and the most appropriate scheme must provide the following functions:
(1)
Public auditing: anyone can perform the audit. Generally, experienced and skilled TPAs are entrusted by the users to perform the audit task.
(2)
Dynamic updating of cloud data: users can insert, delete, and modify the data stored in the cloud at any time.
(3)
Privacy protection: the TPA cannot know the contents of the user data. It is also preferable that the CSP should not know the contents of the user data.
(4)
Lightweight computation: the users’ computational overhead should be as small as possible.
(5)
Batch audits for multiple users: the most appropriate scheme is able to implement batch audits for multi-user data.
However, we found that most existing cloud storage schemes do not meet the above five conditions well. Therefore, we designed an efficient offline/online data integrity verification scheme. The proposed scheme is not only applicable to the integrity audit of cloud data but is also applicable to the integrity verification of IoT data and medical data.

2. Related Works

In early remote data integrity verification schemes, the auditor needs to download all the data from the cloud and use the locally stored metadata to confirm their integrity, which incurs high communication and computation costs and takes a long time, resulting in a great waste of computing power. In 2007, Ateniese et al. [8] proposed the first provable data possession (PDP) scheme. Their scheme divided the data files into blocks, and the auditor only needed to download a subset of the data blocks from the CSP to verify, with high probability, the integrity of all the data. For 1,000,000 4 KB blocks, assuming that 1% of the blocks have been deleted or tampered with by the CSP, the auditor only needs to verify the integrity of 460 blocks to judge the integrity of all the data with a greater than 99% confidence probability. In the same year, Juels et al. [9] proposed the first proofs of retrievability (PoR) scheme to audit data. Their scheme used error correction codes and sampling detection technology to recover damaged data after detecting that the integrity of the cloud data was compromised. However, their scheme does not support public auditing, and the number of audits is limited.
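This sampling figure follows from a standard detection-probability calculation (our own back-of-the-envelope check, under the simplifying assumption that the c sampled blocks are drawn independently, which is a close approximation when c is far smaller than the total number of blocks). With a fraction f of corrupted blocks,

\[
P_{\mathrm{detect}} = 1 - (1 - f)^{c}, \qquad 1 - (1 - 0.01)^{c} \ge 0.99 \;\Longrightarrow\; c \ge \frac{\ln 0.01}{\ln 0.99} \approx 459,
\]

i.e., roughly 460 challenged blocks suffice regardless of the total number of stored blocks.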
With the increasing demands of users, scholars have expanded various functions based on the scheme proposed by Ateniese et al. [8]. For example, a dynamic data updating function has been added to cloud audit schemes to enable users to modify the data stored in the cloud more flexibly. If the cloud data are modified directly, the tags and indexes will no longer match, and subsequent verification cannot be completed; therefore, various data structures have been proposed to achieve dynamic data updates. To prevent malicious auditors from colluding with CSPs or stealing users' data privacy, random mask technology and blockchain technology have been combined with cloud audit schemes to achieve these security goals. To enable auditors to audit the data integrity of more than one user at a time, a batch audit function has been added to cloud audit schemes, which improves the efficiency of large-scale audits. In meeting the needs of one user after another, cloud data audit schemes have gradually matured. However, the existing cloud audit schemes are no longer fully applicable to the cloud storage environments of IoT and medical data.
The cloud audit scheme proposed in [10] constructs a multi-leaf authentication method based on the Merkle tree. The scheme can simultaneously authenticate multiple leaf nodes and realize batch data updates. The proposed scheme also supports log auditing. Users can verify whether the auditors perform their audit work honestly by checking the log files generated by auditors. However, the scheme does not mention comprehensive privacy protection, and there is a security problem wherein attackers can forge data tags to pass the audit. Hou et al. [11] designed a public audit protocol supporting blockless verification and batch verification practices; the protocol uses a chameleon certification tree to implement the efficient dynamic operations of outsourcing data, reduces the computational cost caused by data updates, and further improves audit efficiency. Nevertheless, the scheme does not describe how to achieve privacy protection for users and requires the computation of many bilinear pairs during the upload block verification and bulk audit phases. Based on the BLS signature, Mishra et al. [12] used a binomial binary tree and an indexed hash table data structure to construct an efficient and dynamically updated cloud audit scheme. However, the scheme cannot achieve batch audits.
Fan et al. [13] built a flexible auditing scheme that supports efficient dynamic updating based on a consortium blockchain. However, the scheme does not consider batch audits of large-scale users. The ID-based offline/online PDP protocol constructed in [14] is based on an offline/online signature. The scheme supports batch verification and fully dynamic data operations but cannot provide data content privacy protection against the cloud server. The audit scheme introduced in [15] is an ID-based scheme with compressed cloud storage; it audits the cloud data in a self-verifying way using only the encrypted data blocks. Xu et al. [16] introduced the concept of transparent integrity auditing. They proposed a concrete scheme, based on the blockchain, which does not rely on third-party auditors while freeing users from high communication costs in data integrity auditing.
Ji et al. [17] proposed an ID-based data integrity verification scheme with a designated auditor. In their scheme, only the auditor designated by the user can join the audit task, which improves security compared with previous ID-based audit schemes; however, the scheme's functionality is not comprehensive. Li et al. [18] proposed an audit scheme based on a redactable signature. The CSP can transform the signature directly, without an additional sanitizer, while sharing sensitive data, and the signature can also be used to authenticate the source of the shared data. Lin et al. [19] proposed a consortium blockchain-based audit protocol. This protocol can check the abnormal behavior of auditors, but it cannot achieve batch audits. In addition, during the audit process, the above schemes use numerous high-cost operations, such as exponentiation, hash-to-point functions, and bilinear mapping, thereby incurring high computing costs; thus, they cannot be fully applied to the cloud storage environments of IoT data and medical data.
Our Contributions. In this paper, we propose an efficient offline/online data integrity verification scheme for multiple application scenarios. Our contributions can be summarized as follows:
(1)
Based on the SM2 signature algorithm and the SM4 block encryption algorithm, we have constructed an offline/online remote data audit scheme. The scheme supports dynamic data updates, comprehensive privacy protection, and batch audit capability. Based on the advantages of offline tags and scheme design, our scheme has low computational overheads and is suitable for lightweight environments.
(2)
We have carried out a security analysis and proof of the scheme. The scheme is resistant to forgery attacks from the storage side and achieves comprehensive privacy protection; even the storage side cannot obtain the real content of the data.
(3)
We analyzed the scheme’s efficiency and compared the functions and computing costs with the existing schemes, proving the comprehensiveness of the scheme’s functions and its high efficiency.
Organization. We have organized the rest of this paper as follows. Section 3 introduces the system model and the security model. Section 4 introduces the background knowledge used in the construction of the scheme. In Section 5, the concrete scheme is described. We analyze the scheme's performance and compare it with other schemes in Section 6. In Section 7, we conclude our work. The security of the scheme is analyzed in Appendix A.

3. The System Model and Security Model

3.1. System Model

The system model of the scheme is shown in Figure 1. Three interacting entities are included: the CSP provides data storage services to users for payment, but it is not trusted and may delete data from the cloud or pry into the data privacy of its users for profit. The data owner (DO) is the owner of the data, uploading the data to the cloud to save their own storage overhead, but does not want the data privacy to be compromised. The TPA is a semi-honest auditor commissioned by users. They will faithfully perform the task of auditing the integrity of the data in the cloud, on the one hand, but on the other hand, they are curious about the content of the data.
The operation process of the proposed audit scheme includes the following algorithms:
(1)
Setup: the CSP runs the algorithm, which inputs the security parameter, λ , and generates the public parameters { E , G , q , g } .
(2)
KeyGen: the DO runs the algorithm, which outputs the private key, k s , and the public key, k p .
(3)
OffTagGen: the DO runs the algorithm, which inputs k s and the random numbers d i , l , outputting the offline tags, r i , s i .
(4)
OnTagGen: the DO runs the algorithm, which inputs r i , s i and data blocks m i , then outputs the online tags r i , s i .
(5)
ChalGen: the TPA runs the algorithm, which inputs the random number π and outputs the indexes, { i j } ( 1 j c ) .
(6)
ProofGen: the CSP runs the algorithm, which inputs the { m i j , r i j , s i j , i j } ( 1 j c ) and outputs the proof { ρ , s , r } .
(7)
VerifyProof: the TPA runs the algorithm, which inputs the proof { ρ , s , r } and outputs “true” or “false” to indicate the integrity of the data.

3.2. Security Model

In the existing data integrity audit schemes, security analysis often considers the CSP to be unreliable; it will forge tags in an attempt to pass the audit. Therefore, we mainly prove the unforgeability of the current scheme in the security analysis; this means that if the DO’s data are corrupted, this must be detected by the interaction between the CSP and TPA when executing the scheme. That is, the CSP cannot forge integrity evidence and pass the data integrity audit under the condition that the data security is damaged; thus, it must carefully maintain the cloud data. We can define the unforgeability of the scheme with the following game:
Game: Assuming that C is the challenger, C runs the Setup algorithm to generate the system parameters and sends them to an adversary, A. In this security model, we assume that the adversary A has great privileges, although these privileges are unlikely to be possessed in a real situation. In Appendix A, we will show that even if the adversary A has all the privileges assumed herein, he/she is unable to break the auditing scheme proposed in this paper, thus demonstrating that the scheme has high security strength. Except for the target user that adversary A wants to attack, he/she can inquire about any other user's information. Specifically, A can query the following oracles:
(1)
Public key query: When A queries the public key of ID w , C runs the KeyGen algorithm to generate k w p and returns k w p to A .
(2)
Private key query: When A queries the private key of ID_w, C runs the KeyGen algorithm to generate k_ws and returns k_ws to A.
(3)
Tags query: A can obtain the tag of m w i under the public key k w p of ID w .
Based on the above queries, after A is challenged, if A outputs an aggregate tag {ρ_w*, s_w*, r_w*} for ID_w* under k_wp* that meets the following conditions, then A wins the game. Our scheme is forgery-resistant if no adversary can win this game with a non-negligible probability.
Condition 1: The forged aggregation tags { ρ w * , s w * , r w * } meet the verification equations.
Condition 2: There is no interruption of the public key query.
Condition 3: The tags of all the blocks m_wi* of ID_w* have been queried.

4. Preliminaries

4.1. Chinese Commercial Cryptography Algorithm

In 2010, the State Cryptography Administration of China released the elliptic curve-based SM2 cryptographic algorithm. The SM2 algorithm offers high cryptographic strength, fast processing speed, and low machine performance consumption. Its security has been proven by the authors of [20], and SM2 is more secure against generalized key substitution attacks. In 2012, the Security Commercial Code Administration Office of China released the SM4 block cipher standard. It is similar to AES-128, with simplified round key generation, and is mainly used for data encryption. The encryption and decryption algorithms both use 32 rounds of a nonlinear iterative structure, the S-box has a fixed 8-bit input and 8-bit output, the number of calculation rounds is large, and nonlinear transformations are added, which make it more effective in defending against key-leaking Trojans [21]. The SM2 and SM4 algorithms have been incorporated into the ISO/IEC international standards. Given their excellent security and performance, it is believed that they will be recognized or adopted by more and more organizations and individuals in and outside of China.
Our scheme uses the SM2 digital signature algorithm to construct the audit scheme and the specific steps of the SM2 digital signature algorithm are as follows [22]. To facilitate understanding, we define and explain the various notations that appear in this paper in Table 1.
(1)
Key generation: the selected elliptic curve equation is y² = x³ + ax + b. Let g be the base point on the elliptic curve; the integer k_s ∈ Z_q* is randomly selected as the private key, and the public key k_p = k_s·g is calculated.
(2)
Signature: Let the data to be signed be m. The signer first selects a random integer d ∈ Z_q*, sets d·g = (x_1, y_1), and computes r = m + x_1 and s = (1 + k_s)^(-1)·(d - r·k_s); the signature of the message m is {r, s}.
(3)
Verification: After receiving m and {r, s}, the verifier calculates t = r + s, (x_1', y_1') = s·g + t·k_p, and r' = x_1' + m. If the values of r and r' are equal, the signature is correct.
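For illustration, the steps above can be traced with a minimal, self-contained Python sketch over a tiny toy curve. This is an educational model of the simplified signature described here, not an implementation of the SM2 standard: the message hash, the standard curve parameters, and all encoding details are omitted, and every function and variable name is our own.

import random

def inv(x, q):
    # modular inverse (Python 3.8+)
    return pow(x, -1, q)

def ec_add(P, Q, a, p):
    # affine point addition on y^2 = x^3 + ax + b over F_p; None denotes the point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

def sign(m, ks, g, a, p, q):
    # r = m + x1, s = (1 + ks)^(-1) * (d - r*ks), as in the signing step above
    while True:
        d = random.randrange(1, q)
        x1, _ = ec_mul(d, g, a, p)
        r = (m + x1) % q
        s = inv(1 + ks, q) * (d - r * ks) % q
        if r and s and (r + s) % q:          # re-sample degenerate cases
            return r, s

def verify(m, r, s, kp, g, a, p, q):
    # t = r + s, (x1', y1') = s*g + t*kp, accept iff x1' + m == r
    t = (r + s) % q
    x1, _ = ec_add(ec_mul(s, g, a, p), ec_mul(t, kp, a, p), a, p)
    return (x1 + m) % q == r

# toy curve y^2 = x^3 + 2x + 2 over F_17; its group has prime order q = 19, so any point generates it
p_, a_, b_, q_ = 17, 2, 2, 19
g_ = (5, 1)                                  # a point on the curve
ks = 6                                       # private key k_s
kp = ec_mul(ks, g_, a_, p_)                  # public key k_p = k_s * g
m = 11                                       # toy "message", already reduced mod q
r, s = sign(m, ks, g_, a_, p_, q_)
print(verify(m, r, s, kp, g_, a_, p_, q_))   # True

The same helpers are reused in the end-to-end sketch of the full scheme at the end of Section 5.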

4.2. Dynamic Hash Table

Our scheme uses the dynamic hash table data structure proposed in Reference [23] to achieve a dynamic update of the data in the cloud. The dynamic hash table is a two-dimensional data structure, as shown in Figure 2.
The table includes both file elements and data block elements. In a file element, NO. indicates the index value of the corresponding file, ID indicates the identification of the corresponding file, and a pointer points to the first data block of this file. In a data block element, t_i indicates the timestamp of the data block and v_i indicates its version number. The version number is initially set to 1, and its value is incremented by 1 with each change of the data block. The data block elements in the dynamic hash table are connected by a linked list; each data block element is a node in the list, and each node includes the version information of the data block, the timestamp, and a pointer to the next node. Once the dynamic hash table is established, operations such as search, insertion, deletion, and modification can be performed at either the file level or the data block level.
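A minimal sketch of this structure may make the update operations easier to follow (illustrative Python; the class and method names are ours and are not taken from [23], and version numbers and timestamps are plain integers here):

class BlockNode:
    # one data block element: version number v_i, timestamp t_i, and a pointer to the next block
    def __init__(self, v=1, t=0):
        self.v, self.t, self.next = v, t, None

class FileEntry:
    # one file element: index NO., file identification ID, and a pointer to the first block node
    def __init__(self, no, file_id):
        self.no, self.id, self.head = no, file_id, None

    def _walk(self, i):
        # return the i-th block node (1-based)
        node = self.head
        for _ in range(i - 1):
            node = node.next
        return node

    def modify(self, i, v, t):
        # replace the version number and timestamp of the i-th block
        node = self._walk(i)
        node.v, node.t = v, t

    def insert(self, i, v, t):
        # insert a new block node so that it becomes the i-th block
        node = BlockNode(v, t)
        if i == 1:
            node.next, self.head = self.head, node
        else:
            prev = self._walk(i - 1)
            node.next, prev.next = prev.next, node

    def delete(self, i):
        # remove the i-th block node
        if i == 1:
            self.head = self.head.next
        else:
            prev = self._walk(i - 1)
            prev.next = prev.next.next

class DynamicHashTable:
    # maps a file ID to its FileEntry; the TPA only needs these (v_i, t_i) chains, not the data itself
    def __init__(self):
        self.files = {}

    def add_file(self, no, file_id, n_blocks):
        entry = FileEntry(no, file_id)
        for _ in range(n_blocks):
            entry.insert(1, 1, 0)
        self.files[file_id] = entry
        return entry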

4.3. Elliptic Curve Discrete Logarithm Problem

The elliptic curve discrete logarithm problem (ECDLP): Let G be an additive cyclic group of elliptic curve points of large prime order q and let g ∈ G be a generator. The ECDLP means that, given g and a·g ∈ G, an attacker A must calculate a ∈ Z_q*. The probability that the attacker A can solve the ECDLP in polynomial time is negligible:
Pr[ A(a·g, g) = a : a ∈_R Z_q ] ≤ ε    (1)
where ε represents the negligible probability; that is, it is computationally infeasible to solve the ECDLP.

5. SM2-Based Offline/Online Efficient Data Integrity Verification Scheme

In this section, we give a detailed description of the proposed scheme.
(1)
Setup(λ) → (E, G, q, g): the CSP inputs the security parameter λ and generates the public parameters {E, G, q, g}. E: y² = x³ + ax + b mod p is the elliptic curve, p and q are large prime numbers, G is an additive cyclic group of order q defined on E, and g is the generator of the group G.
(2)
KeyGen() → (k_s, k_p): the DO randomly selects k_s ∈ Z_q* as the private key and calculates k_p = k_s·g ∈ G as the public key.
(3)
OffTagGen(k_s, d_i, l) → (r_i', s_i'): we set the number of blocks of the file to n; block processing improves the calculation efficiency and enables sampling verification. The DO randomly selects {d_i, l ∈ Z_q*}_{1≤i≤n}, calculates D_i = d_i·g ∈ G, and sets the coordinates of D_i to (x_i, y_i). For i ∈ [1, n], the DO calculates:
r_i' = x_i + l    (2)
s_i' = (1 + k_s)^(-1)·(d_i - r_i'·k_s)    (3)
and obtains the offline tags {r_i', s_i'}_{1≤i≤n}.
(4)
OnTagGen({r_i', s_i'}, m_i) → {r_i, s_i}: the DO uses the SM4 block cipher algorithm to encrypt the data file M with identity ID and then divides M into n blocks {m_i ∈ Z_q*}_{1≤i≤n}. For each data block m_i, the DO generates the corresponding timestamp t_i and version number v_i and calculates:
r_i = m_i + r_i'    (4)
s_i = s_i' - k_s·(1 + k_s)^(-1)·m_i    (5)
The DO obtains the online tags {r_i, s_i}_{1≤i≤n}, sends {ID, i, m_i, r_i, s_i, t_i, v_i}_{1≤i≤n} to the CSP, sends {ID, i, t_i, v_i, D_i, l}_{1≤i≤n} to the TPA, and finally deletes the local data.
(5)
ChalGen(π) → {i_j}: the TPA selects a random number π ∈ Z_q* and sends it to the cloud server. Both parties take π as input, run the same pseudo-random function per, and obtain c random numbers {i_j} (1 ≤ j ≤ c) in [1, n] as the indexes of the challenged data blocks.
(6)
ProofGen({m_ij, r_ij, s_ij, i_j} (1 ≤ j ≤ c)) → proof: after the CSP receives the audit request and generates the indexes of the challenged data blocks, it calculates ρ = Σ_{j=1}^{c} m_ij, s = Σ_{j=1}^{c} s_ij, and r = Σ_{j=1}^{c} r_ij, and sends {ρ, s, r} to the TPA as the proof of data possession.
(7)
VerifyProof(ρ, s, r, k_p, D_ij, x_ij) → true/false: the TPA receives the proof {ρ, s, r}, calculates t = r + s, D = Σ_{j=1}^{c} D_ij, and x = Σ_{j=1}^{c} x_ij, and verifies whether the following equations hold:
s·g + t·k_p = D    (6)
x + ρ + c·l = r    (7)
If Equations (6) and (7) hold, the DO is informed that the data integrity is not compromised (an end-to-end code sketch of this single-user flow is given at the end of this section). The correctness of the two equations is derived as follows:
s·g + t·k_p = s·g + (r + s)·k_s·g = Σ_{j=1}^{c} s_ij·(1 + k_s)·g + Σ_{j=1}^{c} r_ij·k_s·g = Σ_{j=1}^{c} (d_ij·g - r'_ij·k_s·g - k_s·m_ij·g + (m_ij + r'_ij)·k_s·g) = Σ_{j=1}^{c} d_ij·g = D    (8)
x + ρ + c·l = Σ_{j=1}^{c} x_ij + Σ_{j=1}^{c} m_ij + c·l = Σ_{j=1}^{c} (x_ij + m_ij + l) = Σ_{j=1}^{c} (r'_ij + m_ij) = Σ_{j=1}^{c} r_ij = r    (9)
(8)
DynamicUpdate: our scheme enables dynamic update operations on the cloud data, including insertion, deletion, and modification. Since the number of data blocks involved in a dynamic update is small, offline tags are not required in the dynamic update process. When a data block m_i needs to be modified to m_j, the DO selects a random number d_j, calculates D_j = d_j·g ∈ G, and sets the coordinates of D_j to (x_j, y_j). Then, v_j and t_j are generated for the data block m_j, and the tags r_j = m_j + x_j + l and s_j = (1 + k_s)^(-1)·(d_j - r_j·k_s) are calculated. Finally, {ID, i, m_j, r_j, s_j} and {ID, i, D_j, t_j, v_j} are sent to the CSP and TPA, respectively. After receiving {ID, i, D_j, t_j, v_j}, the TPA finds the i-th node of the linked list corresponding to the file M in the dynamic hash table and replaces v_i and t_i with v_j and t_j. After receiving {ID, i, m_j, r_j, s_j}, the CSP finds the location of m_i and replaces m_i, r_i, s_i with m_j, r_j, s_j.
When the DO needs to insert the data block m_j in front of the data block m_i, they first select a random number d_j, calculate D_j = d_j·g, and set the coordinates of D_j to (x_j, y_j). Then, they generate v_j and t_j for the data block m_j and calculate the tags r_j = m_j + x_j + l and s_j = (1 + k_s)^(-1)·(d_j - r_j·k_s). Finally, the DO sends {ID, i, m_j, r_j, s_j} and {ID, i, D_j, t_j, v_j} to the CSP and TPA, respectively. After receiving {ID, i, D_j, t_j, v_j}, the TPA finds the i-th node of the linked list corresponding to the file M in the dynamic hash table and inserts a new node with the content v_j, t_j in front of it. After receiving {ID, i, m_j, r_j, s_j}, the CSP finds the location of m_i, r_i, and s_i according to i and ID, and inserts m_j, r_j, s_j in front of them.
When the data block m_i needs to be deleted, {ID, i} is sent to the CSP and TPA. After receiving {ID, i}, the TPA deletes the i-th node of the linked list corresponding to the file M in the dynamic hash table. After receiving {ID, i}, the CSP deletes m_i, r_i, and s_i according to i.
(9)
BatchAudit: the scheme can implement a batch audit for multi-user cloud data. Each DO {u_w}_{1≤w≤x} randomly selects the private key k_ws ∈ Z_q* and calculates the public key k_wp = k_ws·g ∈ G. The DO u_w randomly selects {d_wi, l_w ∈ Z_q*}_{1≤i≤n}, calculates D_wi = d_wi·g ∈ G, sets the coordinates of D_wi to (x_wi, y_wi) for i ∈ [1, n], calculates r_wi' = x_wi + l_w and s_wi' = (1 + k_ws)^(-1)·(d_wi - r_wi'·k_ws), and obtains the offline tags {r_wi', s_wi'}_{1≤i≤n}. The DO u_w uses the SM4 block cipher algorithm to encrypt the data file M_w with identity ID_w and then divides M_w into n blocks {m_wi ∈ Z_q*}_{1≤i≤n}. For each data block m_wi, the DO u_w generates the corresponding timestamp t_wi and version number v_wi and calculates r_wi = m_wi + r_wi' and s_wi = s_wi' - k_ws·(1 + k_ws)^(-1)·m_wi as the online tags {r_wi, s_wi}_{1≤i≤n}, then sends {ID_w, i, m_wi, r_wi, s_wi, v_wi, t_wi}_{1≤i≤n} to the CSP, sends {ID_w, i, t_wi, v_wi, D_wi, l_w}_{1≤i≤n} to the TPA, and finally deletes the local data. The TPA selects a random number π as the parameter of per and sends it to the CSP. Both sides run the same pseudo-random function per and obtain the random indexes i_j (1 ≤ j ≤ c) of the challenged data blocks. After the CSP generates the indexes of the challenged data blocks, it calculates ρ = Σ_{w=1}^{x} Σ_{j=1}^{c} m_wij, s = Σ_{w=1}^{x} Σ_{j=1}^{c} s_wij, and r = Σ_{w=1}^{x} Σ_{j=1}^{c} r_wij, and sends {ρ, s, r} to the TPA as the proof. The TPA receives the proof, computes t = r + s, D = Σ_{w=1}^{x} Σ_{j=1}^{c} D_wij, and x = Σ_{w=1}^{x} Σ_{j=1}^{c} x_wij, and verifies the following equations:
s·g + Σ_{w=1}^{x} t·k_wp = D    (10)
x + ρ + c·Σ_{w=1}^{x} l_w = r    (11)
If Equations (10) and (11) hold, the TPA informs the x DOs that their data integrity has not been compromised. The correctness of the two equations is derived as follows:
s·g + Σ_{w=1}^{x} t·k_wp = Σ_{w=1}^{x} Σ_{j=1}^{c} s_wij·g + Σ_{w=1}^{x} k_ws·(Σ_{j=1}^{c} r_wij·g + Σ_{j=1}^{c} s_wij·g) = Σ_{w=1}^{x} (Σ_{j=1}^{c} (1 + k_ws)·s_wij·g + Σ_{j=1}^{c} r_wij·k_ws·g) = Σ_{w=1}^{x} Σ_{j=1}^{c} (d_wij·g - r'_wij·k_ws·g - k_ws·m_wij·g + r_wij·k_ws·g) = Σ_{w=1}^{x} Σ_{j=1}^{c} d_wij·g = D    (12)
x + ρ + c·Σ_{w=1}^{x} l_w = Σ_{w=1}^{x} Σ_{j=1}^{c} x_wij + Σ_{w=1}^{x} Σ_{j=1}^{c} m_wij + c·Σ_{w=1}^{x} l_w = Σ_{w=1}^{x} Σ_{j=1}^{c} (x_wij + m_wij + l_w) = Σ_{w=1}^{x} Σ_{j=1}^{c} (r'_wij + m_wij) = r    (13)
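To make the single-user flow concrete, the following illustrative sketch traces KeyGen, OffTagGen, OnTagGen, ChalGen, ProofGen, and VerifyProof end to end. It reuses the inv, ec_add, and ec_mul helpers and the toy curve parameters (p_, a_, q_, g_) from the SM2 sketch in Section 4.1; the SM4 encryption of the file and the dynamic hash table bookkeeping are omitted, and all variable names are ours.

import random

# KeyGen: the DO picks k_s and computes k_p = k_s * g
ks = random.randrange(1, q_ - 1)
kp = ec_mul(ks, g_, a_, p_)

# OffTagGen: data-independent offline tags (r'_i, s'_i), computable in advance
n, c = 8, 4                                            # toy block counts
l = random.randrange(1, q_)
d = [random.randrange(1, q_) for _ in range(n)]
D = [ec_mul(di, g_, a_, p_) for di in d]               # D_i = d_i * g, with coordinates (x_i, y_i)
r_off = [(D[i][0] + l) % q_ for i in range(n)]         # r'_i = x_i + l
s_off = [inv(1 + ks, q_) * (d[i] - r_off[i] * ks) % q_ for i in range(n)]

# OnTagGen: fold the (already SM4-encrypted) blocks m_i into the offline tags
m = [random.randrange(q_) for _ in range(n)]           # stand-ins for encrypted data blocks
r_tag = [(m[i] + r_off[i]) % q_ for i in range(n)]     # r_i = m_i + r'_i
s_tag = [(s_off[i] - ks * inv(1 + ks, q_) * m[i]) % q_ for i in range(n)]

# ChalGen: the TPA and the CSP derive the same c challenged indexes from the seed pi
pi = random.randrange(1, q_)
idx = random.Random(pi).sample(range(n), c)            # Random(pi) stands in for the PRF per

# ProofGen (CSP): aggregate the challenged blocks and tags into (rho, s, r)
rho = sum(m[i] for i in idx) % q_
s_p = sum(s_tag[i] for i in idx) % q_
r_p = sum(r_tag[i] for i in idx) % q_

# VerifyProof (TPA): check s*g + t*k_p = sum(D_i) and x + rho + c*l = r
t = (r_p + s_p) % q_
D_sum = None
for i in idx:
    D_sum = ec_add(D_sum, D[i], a_, p_)
x_sum = sum(D[i][0] for i in idx) % q_
lhs = ec_add(ec_mul(s_p, g_, a_, p_), ec_mul(t, kp, a_, p_), a_, p_)
print(lhs == D_sum and (x_sum + rho + c * l) % q_ == r_p)   # True while the stored data are intact

Note that the only point multiplications on the user side occur in the offline part, which can be precomputed; the online part consists of a handful of modular additions and multiplications per block, which is exactly what keeps the user-side cost in Table 4 so small. The batch audit repeats the same computation per user and aggregates the results across users.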

6. Performance Analysis

In this section, we first analyze the computational overhead of the scheme and the advantage of the offline/online tags; we then compare the functions of our scheme with the existing schemes [10,11,12,13,14], which shows that our scheme is more suitable for the IoT data storage environment and the medical data storage environment. The schemes in Refs. [10,11,12,13,14] are representative cloud data audit schemes proposed and examined by scholars in recent years. We then compare the computational overhead of our scheme with the schemes in Refs. [10,11,12,13,14] numerically. Finally, we experimentally verify the results of the numerical analysis of the computational overhead to visualize the performance of our scheme.
We set G_1 to be the additive cyclic group on E: y² = x³ + ax + b mod p and G_2 to be the multiplicative cyclic group, where p is a 512-bit prime number and q is a 160-bit prime number. The experiment was run on a 64-bit Windows 10 operating system with an i5 CPU (2.5 GHz main frequency) and 4 GB of memory, using the JPBC library. After selecting a Type A elliptic curve and defining each operation, we ran each operation 10,000 times to obtain the average time overhead. The meaning of each operation and the corresponding time cost are shown in Table 2. To simplify the description, n is used here to denote the total number of data blocks and c the number of challenged data blocks. Because of the large values of n and c, we omit operations that occur only once in our analysis of the calculation overhead.
In the OffTagGen phase, the user needs to compute D_i = d_i·g and r_i' = x_i + l, so the computational overhead is about n|M_G1| + n|A_Z|. In the OnTagGen phase, the user needs to compute r_i = m_i + r_i' and s_i = s_i' - k_s·(1 + k_s)^(-1)·m_i, so the computational overhead is about n|M_Z| + 2n|A_Z|. In the ProofGen phase, the CSP computes ρ = Σ_{j=1}^{c} m_ij, s = Σ_{j=1}^{c} s_ij, and r = Σ_{j=1}^{c} r_ij, and the computational overhead is about 3c|A_Z|. In the VerifyProof phase, after computing t = r + s, D = Σ_{j=1}^{c} D_ij, and x = Σ_{j=1}^{c} x_ij, the auditor also verifies the equations s·g + t·k_p = D and x + ρ + c·l = r, and the computational overhead is about c|A_Z| + c|A_G1|. Using the offline/online tags, the total computational overhead of the user in the scheme is about n|M_G1| + 3n|A_Z| + n|M_Z|. If offline/online tags were not used, the user would need to calculate D_i = d_i·g, r_i = m_i + x_i + l, and s_i = (1 + k_s)^(-1)·(d_i - r_i·k_s), and the computational overhead of the user would be about n|M_G1| + 3n|A_Z| + 2n|M_Z|.
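A short back-of-the-envelope script makes these counts concrete by plugging in the per-operation timings of Table 2 (illustrative only; the function and variable names are ours, and the constants are copied from Table 2):

# estimated user-side tag generation cost in ms, using the Table 2 timings
A_Z, M_Z, M_G1 = 0.0003, 0.0006, 0.7179        # |A_Z|, |M_Z|, |M_G1|

def user_tag_cost_ms(n, include_offline=False):
    online = n * (M_Z + 2 * A_Z)               # OnTagGen: n|M_Z| + 2n|A_Z|
    offline = n * (M_G1 + A_Z)                 # OffTagGen: n|M_G1| + n|A_Z| (precomputable)
    return online + (offline if include_offline else 0)

print(user_tag_cost_ms(10_000))                # ~12 ms of online work for 10,000 blocks
print(user_tag_cost_ms(10_000, True))          # ~7.2 s if the precomputable offline part is included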
We compared our scheme with the existing certificateless schemes; the function comparison is shown in Table 3. As can be seen from Table 3, although other schemes are novel, their functions are not comprehensive. Our proposed scheme is the most comprehensive and the most suitable for the cloud storage environment of IoT data and medical data.
The numerical comparison of the computational overhead of our scheme and the existing schemes is shown in Table 4. In current cloud data audit schemes, the calculation overhead of the ProofGen and VerifyProof stages is borne by the CSP and the TPA, respectively, while the calculation overhead of the TagGen stage is borne by the users themselves. Because of the strong computing capability of the CSP and TPA, the design of cloud data audit schemes should place more emphasis on reducing the computing cost on the user side, that is, the computing cost of the TagGen stage. It can be seen from Table 4 that, in the TagGen stage, the computational overhead of our scheme and the scheme in [14] is the smallest and is significantly smaller than that of the other schemes. Therefore, our scheme and the scheme in [14] are more user-friendly and can be applied to equipment with lower computational power, which makes their design more reasonable and efficient. In the ProofGen stage, the computational overhead of our scheme is also significantly lower than that of the other schemes. As the number of challenged data blocks c increases, the computational overhead of the other schemes grows much faster than that of our scheme, and the advantages of our scheme become more significant.
In order to test the performance of the scheme in terms of practical application and more intuitively compare the computational cost of each scheme, each scheme is run within the experimental environment, and the time costs in the stages of TagGen, ProofGen, and VerifyProof are recorded, as shown in Figure 3, Figure 4 and Figure 5. The number of sectors s is set at 10 [23].
Figure 3 shows the time cost of each scheme in the TagGen phase when the total number of data blocks is set to 2000, 4000, 6000, 8000, and 10,000, respectively. It can be concluded that the time cost of each scheme increases as the number of data blocks increases, but the time costs of the scheme in [14] and of our scheme do not increase significantly as the number of data blocks increases. This is due to the use of exponential operations in Refs. [10,11,12,13], which consume a significant amount of computational capacity. However, in our proposed scheme, the computation of tags is divided into two stages: OffTagGen and OnTagGen. For the users, their computation burden should mainly take into account the online tag computation. In our scheme, the online tag computation only requires simple addition and multiplication operations, resulting in a small computation overhead. Even with a large amount of data, it will not impose a significant computation burden on users. Under the conditions of the same number of data blocks, the time cost of the schemes in Refs. [10,11,12,13] is significantly higher than that of the scheme in Ref. [14] and in this scheme.
The time cost of the GenProof and VerifyProof phases is shown in Figure 4 and Figure 5, when the number of challenged blocks is set to 200, 400, 600, 800, and 1000, respectively. It can be concluded that in the GenProof stage, the time cost of the schemes in Refs. [10,14] and our scheme is relatively low, and ours is the lowest. Scheme [12] has the highest time cost. In the VerifyProof stage, the time cost of our scheme and the schemes in Refs. [10,12,14] are significantly lower than that of the schemes in Refs. [11,13]. With the increase in the number of data blocks, the audit efficiency of our scheme becomes more prominent.
According to the above performance analysis, our scheme has more comprehensive functions and less time cost at each stage, especially in the TagGen stage, so it is more compatible with lightweight devices. Therefore, our scheme is more suitable for the IoT storage environment and medical data storage environment.

7. Conclusions

In this paper, we constructed an efficient SM2-based offline/online data integrity verification scheme for IoT and medical data. In the data preprocessing stage of the scheme, users use the SM4 symmetric encryption algorithm to encrypt the data; the encrypted data are used to generate tags and are then uploaded to the cloud, thus achieving full data privacy protection. In the data uploading stage, users employ the SM2 signature algorithm to construct the data tags. The scheme divides the tags into offline parts and online parts, and users can calculate the offline tags in advance to reduce computing costs. The scheme uses a dynamic hash table to support the dynamic updating of cloud data and realizes batch audits of multi-user data, so it can adapt to the IoT and medical data storage environments. The theoretical security analysis proves the scheme's security, and its high efficiency is demonstrated by comparison with five existing schemes. In future work, we will focus on adding more functions to the existing audit schemes to meet the increasing needs of users in the cloud storage environment.

Author Contributions

X.L. and Z.Y. contributed equally to this work; X.L. was responsible for the writing of the article and the construction of the scheme. Z.Y. was responsible for the derivation of the formulas in the article and gave some significant ideas. R.L. was responsible for the validation and formal analysis. X.-A.W. was responsible for the collecting of resources related to this article. H.L. was responsible for the verification of the security of this article. X.Y. revised the finished manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China [62172436] and [62102452].

Data Availability Statement

All relevant data have been provided in the article. For any other needs, please contact the authors by email.

Conflicts of Interest

The authors declare that they have no conflict of interest to report regarding the present study.

Appendix A

In this section, we provide a provable security analysis of our scheme via the following theorems.
Theorem A1. 
(Unforgeability): Under the random oracle model, assume that an adversary A breaks the proposed scheme with a non-negligible advantage ε within time t, and that the numbers of times A accesses the public key query, private key query, and tags query are q_pk, q_sk, and q_t, respectively. Then there is an algorithm C that can solve the DL problem with advantage ε' ≥ ε·(1 - 1/q_pk)^(q_pk)·[n(n-1)⋯(n-c+1)] / [q_t(q_t-1)⋯(q_t-c+1)] in time t' < t + t_inv + (3q_t + 1)·t_a + 2q_t·t_m + q_t·t_M, where t_a, t_m, t_M, and t_inv denote the time of an addition on Z_q*, a multiplication on Z_q*, a scalar multiplication on G_1, and an inverse operation on Z_q*, respectively.
Proof. 
A is the adversary and C is the DL problem challenger. Given (g, B) with B = b·g ∈ G, the goal of C is to use A to solve the DL problem, that is, to compute b.
C runs the Setup algorithm to generate the system parameters and sends them to A. A can query the following oracles:
Public key query: C holds the list F = {ID_w, k_ws, k_wp, c_w}; the initial list is empty. When A queries the public key of ID_w, if F contains the public key of ID_w, then k_wp is returned. Otherwise, C randomly selects c_w ∈ {0, 1}, where the probability of c_w = 0 is ζ = 1/q_pk; if c_w = 0, the challenge is terminated. If c_w ≠ 0, C selects B_w, returns B_w to A as the public key k_wp, and adds {ID_w, k_ws, k_wp, c_w} to F.
Private key query: C holds the list E = {ID_w, k_ws}; the initial list is empty. When A queries the private key of ID_w, if E contains ID_w, then k_ws is returned. Otherwise, C selects k_ws ∈ Z_q*, returns k_ws to A, and adds {ID_w, k_ws} to E.
Tags query: C holds the list L = {ID_w, m_wi, r_wi, s_wi, d_wi, l_w}; the initial list is empty. When A queries the tag of (ID_w, m_wi, k_wp), if L contains {ID_w, m_wi, r_wi, s_wi, d_wi, l_w}, then r_wi and s_wi are returned. Otherwise, C randomly selects d_wi, l_w ∈ Z_q*, calculates D_wi = d_wi·g, sets the coordinates of D_wi to (x_wi, y_wi), then calculates r_wi = m_wi + x_wi + l_w and s_wi = (1 + k_ws)^(-1)·(d_wi - r_wi·k_ws), returns r_wi, s_wi to A, and adds {ID_w, m_wi, r_wi, s_wi, d_wi, l_w} to L.
Challenge: Let ID_w* be a user identity whose private key has never been queried, for which the tags of all blocks m_wi* have been queried, and for which the public key has been queried. C runs the ChalGen algorithm to select a random number π ∈ Z_q* and sends it to A. C and A take π as input, run the same pseudo-random function per, and obtain c random numbers i_j (1 ≤ j ≤ c) in [1, n] as the indexes of the challenged data blocks.
Forge: A calculates ρ_w* = Σ_{j=1}^{c} m*_wij, s_w* = Σ_{j=1}^{c} s*_wij, and r_w* = Σ_{j=1}^{c} r*_wij, and sends {ρ_w*, s_w*, r_w*} to C; C calculates D_w = Σ_{j=1}^{c} D_wij and x_w = Σ_{j=1}^{c} x_wij. A wins the game if {ρ_w*, s_w*, r_w*} passes Equations (A1) and (A2):
s_w*·g + (s_w* + r_w*)·B_w = D_w    (A1)
x_w + ρ_w* + c·l_w = r_w*    (A2)
Therefore:
s_w*·g + (s_w* + x_w + ρ_w* + c·l_w)·B_w = D_w ⟹ (s_w* + x_w + ρ_w* + c·l_w)·B_w = Σ_{j=1}^{c} d_wij·g - s_w*·g ⟹ B_w = Σ_{j=1}^{c} (d_wij - s*_wij)·(s_w* + x_w + ρ_w* + c·l_w)^(-1)·g
So C can calculate:
b = Σ_{j=1}^{c} (d_wij - s*_wij)·(s_w* + x_w + ρ_w* + c·l_w)^(-1)
and solve the DL problem.
We define the following events. Event E_1 indicates that there is no interruption in the public key query. Event E_2 indicates that the forged aggregation tags {ρ_w*, s_w*, r_w*} are valid. Event E_3 indicates that the tags of all the blocks m_wi* of ID_w* have been queried. Therefore:
Adv_C^DL = Pr[E_1 ∧ E_2 ∧ E_3] ≥ ε·(1 - 1/q_pk)^(q_pk)·[n(n-1)⋯(n-c+1)] / [q_t(q_t-1)⋯(q_t-c+1)]
and C uses time t':
t' < t + t_inv + (3q_t + 1)·t_a + 2q_t·t_m + q_t·t_M
We can reach the following conclusion: under the random oracle model, if A can break our scheme with a non-negligible advantage ε within time t, then there is an algorithm C that can solve the DL problem with advantage ε' ≥ ε·(1 - 1/q_pk)^(q_pk)·[n(n-1)⋯(n-c+1)] / [q_t(q_t-1)⋯(q_t-c+1)] in time t' < t + t_inv + (3q_t + 1)·t_a + 2q_t·t_m + q_t·t_M. □
Theorem A2. 
(Privacy protection): The scheme supports privacy protection for the user’s data and a private key against both the CSP and TPA.
Proof. 
In the OnTagGen stage of the scheme, the user first employs the SM4 block encryption algorithm to encrypt the original data file and obtains the encrypted data blocks, m i . The online tags are calculated using the encrypted data block, m i , and the uploaded data are also the encrypted data. Therefore, even if the cloud stores a large quantity of data and tags, it is impossible to know the original data content. In the VerifyProof stage, TPA is unable to calculate the original data value from the aggregate data obtained and the aggregate tag. As a result, entities in the scenario other than the users cannot know the contents of the users’ data. □
The user’s private key, k s , is only related to { s i } ( 1 i n ) in { ID , i , t i , v i , m i , r i , s i } ( 1 i n ) , stored at the cloud server. Therefore, the following system of equations will be listed when the cloud server tries to obtain the private key:
s_1 = (1 + k_s)^(-1)·(d_1 - r_1·k_s)
s_2 = (1 + k_s)^(-1)·(d_2 - r_2·k_s)
⋮
s_n = (1 + k_s)^(-1)·(d_n - r_n·k_s)
k_s and the d_i are unknown to the CSP. Since the number of unknowns (k_s and d_1, …, d_n) is always greater than the number of equations, the private key k_s cannot be calculated.

References

1. Ji, Y.; Shao, B.; Chang, J.; Bian, G. Flexible identity-based remote data integrity checking for cloud storage with privacy preserving property. Clust. Comput. 2021, 25, 337–349.
2. Gudeme, J.R.; Pasupuleti, S.; Kandukuri, R. Certificateless Privacy Preserving Public Auditing for Dynamic Shared Data with Group User Revocation in Cloud Storage. J. Parallel Distrib. Comput. 2021, 156, 163–175.
3. Li, J.; Yan, H.; Zhang, Y. Certificateless Public Integrity Checking of Group Shared Data on Cloud Storage. IEEE Trans. Serv. Comput. 2021, 14, 71–81.
4. Tian, Y.; Zhang, Z.; Xiong, J.; Chen, L.; Ma, J.; Peng, C. Achieving graph clustering privacy preservation based on structure entropy in social IoT. IEEE Internet Things J. 2022, 9, 2761–2777.
5. Li, Q.; Xia, B.; Huang, H.; Zhang, Y.; Zhang, T. TRAC: Traceable and Revocable Access Control Scheme for mHealth in 5G-enabled IIoT. IEEE Trans. Ind. Inform. 2021, 18, 3437–3448.
6. Xiong, J.; Ma, R.; Chen, L.; Tian, Y.; Li, Q.; Liu, X.; Yao, Z. A personalized privacy protection framework for mobile crowdsensing in IIoT. IEEE Trans. Ind. Inform. 2020, 16, 4231–4241.
7. Zhang, X.; Huang, C.; Zhang, Y.; Zhang, J.; Gong, J. LDVAS: Lattice-Based Designated Verifier Auditing Scheme for Electronic Medical Data in Cloud-Assisted WBANs. IEEE Access 2020, 8, 54402–54414.
8. Ateniese, G.; Burns, R.; Curtmola, R.; Herring, J.; Kissner, L.; Peterson, Z.; Song, D. Provable Data Possession at Untrusted Stores. In Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS ‘07), Alexandria, VA, USA, 29 October–2 November 2007; pp. 598–609.
9. Juels, A.; Kaliski, B.S. Pors: Proofs of retrievability for large files. In Proceedings of the 14th ACM Conference on Computer and Communications Security, Alexandria, VA, USA, 29 October–2 November 2007; pp. 584–597.
10. Guo, W.; Zhang, H.; Qin, S.; Gao, F.; Jin, Z.; Li, W.; Wen, Q. Outsourced Dynamic Provable Data Possession with Batch Update for Secure Cloud Storage. Future Gener. Comput. Syst. 2019, 95, 309–322.
11. Hou, G.; Ma, J.; Liang, C.; Li, J. Efficient Audit Protocol Supporting Virtual Nodes in Cloud Storage. Trans. Emerg. Telecommun. Technol. 2020, 32, e3911.
12. Mishra, R.; Ramesh, D.; Edla, D.R. BB-tree based secure and dynamic public auditing convergence for cloud storage. J. Supercomput. 2020, 77, 4917–4956.
13. Fan, K.; Li, F.; Yu, H.; Yang, Z. A Blockchain-Based Flexible Data Auditing Scheme for the Cloud Service. Chin. J. Electron. 2021, 30, 1159–1166.
14. Rabaninejad, R.; Asaar, M.R.; Attari, M.A.; Aref, M. An identity-based online/offline secure cloud storage auditing scheme. Clust. Comput. 2020, 23, 1455–1468.
15. Yang, Y.; Chen, Y.; Chen, F.; Chen, J. An Efficient Identity-Based Provable Data Possession Protocol with Compressed Cloud Storage. IEEE Trans. Inf. Forensics Secur. 2022, 17, 1359–1371.
16. Li, S.; Xu, C.; Zhang, Y.; Du, Y.; Chen, K. Blockchain-Based Transparent Integrity Auditing and Encrypted Deduplication for Cloud Storage. IEEE Trans. Serv. Comput. 2023, 16, 134–146.
17. Ji, Y.; Shao, B.; Chang, J.; Xu, M.; Xue, R. Identity-based remote data checking with a designated verifier. J. Cloud Comput. 2022, 11, 7.
18. Li, S.; Han, J.; Tong, D.; Cui, J. Redactable Signature-Based Public Auditing Scheme with Sensitive Data Sharing for Cloud Storage. IEEE Syst. J. 2022, 16, 3613–3624.
19. Lin, Y.; Li, J.; Kimura, S.; Yang, Y.; Ji, Y.; Cao, Y. Consortium Blockchain-Based Public Integrity Verification in Cloud Storage for IoT. IEEE Internet Things J. 2022, 9, 3978–3987.
20. Yang, A.; Nam, J.; Kim, M.; Choo, K.K.R. Provably-Secure (Chinese Government) SM2 and Simplified SM2 Key Exchange Protocols. Sci. World J. 2014, 2014, 825984.
21. Wang, D.; Wu, L.; Zhang, X. Key-leakage hardware Trojan with super concealment based on the fault injection for block cipher of SM4. Electron. Lett. 2018, 54, 810–812.
22. Yan, J.; Lu, Y.; Chen, L.; Nie, W. A SM2 Elliptic Curve Threshold Signature Scheme without a Trusted Center. KSII Trans. Internet Inf. Syst. (TIIS) 2016, 10, 897–913.
23. Tian, H.; Chen, Y.; Chang, C.; Jiang, H.; Huang, Y.; Chen, Y.; Liu, J. Dynamic-Hash-Table Based Public Auditing for Secure Cloud Storage. IEEE Trans. Serv. Comput. 2017, 10, 701–714.
Figure 1. System model.
Figure 2. Dynamic hash table.
Figure 3. The time cost of the TagGen phase (Schemes 1–5 correspond to references [10,11,12,13,14], respectively).
Figure 4. The time cost of the GenProof phase (Schemes 1–5 correspond to references [10,11,12,13,14], respectively).
Figure 5. The time cost of the VerifyProof phase (Schemes 1–5 correspond to references [10,11,12,13,14], respectively).
Table 1. Notations used in this paper.

Notation: Description
λ: The system initialization parameter.
E: The elliptic curve.
G: The additive cyclic group.
q: A large prime number.
g: The generator of G.
k_s: The user's secret key.
k_p: The user's public key.
Z_q*: The prime field.
d_i, m_i, l: Random numbers.
D_i, t: Intermediate parameters.
r_i', s_i': Offline tags.
M: The user's data file.
(m_1, …, m_n): n data blocks.
ID: The identity of the file.
r_i, s_i: Online tags.
t_i: The timestamp of m_i.
v_i: The version number of m_i.
n: The number of total data blocks.
c: The number of challenged blocks.
per: The pseudo-random function.
π: The input parameter of per.
(x_i, y_i): The coordinates of D_i.
(ρ, s, r): The proof of data possession.
Table 2. Time cost of each operation.

Symbol: description, time cost (ms)
|A_Z|: computational cost of an addition on Z_q*, 0.0003
|M_Z|: computational cost of a multiplication on Z_q*, 0.0006
|E_Z|: computational cost of an exponentiation on Z_q*, 0.0226
|A_G1|: computational cost of an addition on G_1, 0.0055
|M_G1|: computational cost of a doubling on G_1, 0.7179
|M_G2|: computational cost of a multiplication on G_2, 0.0511
|H_Z|: computational cost of a hash operation to Z_q*, 0.0002
|H_G2|: computational cost of a hash operation to G_2, 1.1268
|E_G2|: computational cost of an exponentiation on G_2, 0.8107
|P|: computational cost of a bilinear pairing operation, 5.8853
Table 3. Function comparison of each scheme.

Scheme: Dynamic Update / Batch Audit / Offline Tags / Privacy Protection Against the Cloud
Scheme [10]: Yes / Yes / No / No
Scheme [11]: Yes / Yes / No / No
Scheme [12]: Yes / Yes / No / Yes
Scheme [13]: Yes / No / No / Yes
Scheme [14]: Yes / Yes / Yes / No
Our scheme: Yes / Yes / Yes / Yes
Table 4. Comparison of the computational overhead.

Scheme [10]: TagGen n(|H_Z| + |M_G2| + 3|E_G2| + s|M_Z| + s|A_Z|) ≈ 2.4924n; GenProof cs|M_Z| + cs|A_Z| ≈ 0.009c; VerifyProof c(|M_Z| + |A_Z|) + s(|E_G2| + |M_G2|) + 2|P| ≈ 0.0009c + 2|P|
Scheme [11]: TagGen n(|H_G2| + 3|E_G2| + |M_G2|) ≈ 3.61n; GenProof c(|M_Z| + |A_Z| + |E_G2| + |M_G2|) ≈ 0.8627c; VerifyProof c(|H_Z| + 2|M_G2| + 2|E_G2|) + 2|P| ≈ 1.7238c + 2|P|
Scheme [12]: TagGen n(|H_G2| + |E_G2| + |E_Z|) ≈ 1.9601n; GenProof c(|M_Z| + |H_G2| + 2|E_G2| + 2|M_G2| + |A_Z|) ≈ 2.0406c; VerifyProof 2|P|
Scheme [13]: TagGen n(s + 1)(|E_G2| + |M_G2|) + n|H_G2| ≈ 10.6066n; GenProof cs(|M_Z| + |A_Z|) + c|M_G2| + c|E_G2| ≈ 0.8708c; VerifyProof (c + s)(|M_G2| + |E_G2|) + 2|P| + c|H_G2| ≈ 1.9886c + 2|P|
Scheme [14]: TagGen n(2|A_Z| + |M_Z|) ≈ 0.0012n; GenProof c(2|M_Z| + |A_Z| + |E_Z|) ≈ 0.0241c; VerifyProof c|A_Z| + c|M_Z| + 3|P| ≈ 0.0009c + 3|P|
Our scheme: TagGen n(2|A_Z| + |M_Z|) ≈ 0.0012n; GenProof 3c|A_Z| ≈ 0.0009c; VerifyProof c|A_Z| + c|A_G1| ≈ 0.0058c
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
