Privacy-Preserving Public Cloud Audit Scheme Supporting Dynamic Data for Unmanned Aerial Vehicles

As one of the most popular applications of recent years, cloud storage has gradually been integrated into all walks of life. In the field of communication techniques for unmanned aerial vehicles (UAVs), UAVs use sensors to upload collected data to cloud servers during various exploration activities, but because of their limited storage and computing capacity, UAVs can only store and process the valid information currently collected. In actual exploration, UAVs need not only to upload complete data to the cloud server but also to support efficient dynamic data updates. Moreover, due to security requirements, the privacy of the data uploaded by UAVs must be protected. However, existing auditing schemes for dynamic data and integrity suffer from high computation cost, inefficient dynamic updates, and weak privacy and security guarantees. For this reason, we propose a public cloud audit scheme supporting dynamic data and privacy protection, based on a distributed string equality check protocol and a Merkle hash tree (MHT) multi-level index structure. First, a third-party server (TPS) is placed between the cloud service provider and the users; it performs digital signing, integrity auditing, and dynamic data operations in place of the users, substantially reducing their local computing cost. Users upload locally encrypted data to the TPS. Second, to further improve the security of the scheme, the TPS signs the encrypted data based on the distributed string equality check protocol, and by designing authorizations with time constraints we guarantee that only a legitimate TPS within the authorized time window can operate with the cloud servers. Finally, we implement efficient dynamic data operations based on the MHT multi-level index structure. The security proof and performance analysis show that our proposed scheme is secure and effective.


I. INTRODUCTION
With the arrival of the era of big data, cloud computing [22], [23] has become a major focus of research and application. It decomposes huge data-processing programs into many small programs, which are analyzed and processed by distributed servers, and ultimately returns the results to users.
The associate editor coordinating the review of this manuscript and approving it for publication was Venanzio Cichella.
As one of the core services of cloud computing, cloud storage is intended to provide users with secure, reliable, high-performance, and low-cost data storage. With the development of technology and society, more and more companies and individuals are choosing to outsource data to cloud service providers to reduce local storage costs.
At the same time, with the development of communication technology and wireless networks [15]-[17], unmanned aerial vehicles are gradually being applied throughout society, and news about UAV exploration has appeared frequently in recent years. UAVs upload real-time data collected by sensors to cloud servers, which provide real-time analysis of the data behind the scenes. Operators can set the initial flight trajectory of an unmanned fleet and modify it instantly based on the real-time data. The data transfer activities of this process are shown in Figure 1. It is important to note that this process must ensure that the data uploaded by the UAV fleet is complete and can be dynamically updated in real time according to actual conditions. Unfortunately, the computing and storage capabilities of UAVs are limited by their small size: they can only store the current real-time data and perform simple encryption of it. Therefore, a public audit scheme supporting dynamic data is needed. In this scenario, in order to reduce the user's computing overhead in the audit process, a third-party auditor (TPA) is usually introduced to complete the tedious audit process on the user's behalf. The TPA can verify the integrity of outsourced data through provable data possession (PDP) and proofs of retrievability (POR).
In order to efficiently and safely realize dynamic data integrity auditing with privacy preservation in cloud storage systems, this paper sets out to realize dynamic public auditing based on the distributed string equality check protocol and a hierarchical index structure. At the same time, to better protect the privacy of outsourced data and to reduce the computing and communication overhead of users, we set up an intermediate node, the TPS, between users and cloud service providers, which is responsible for signing the encrypted data uploaded by users and completing integrity audits on their behalf. In addition, before each operation users must grant the TPS an authorization that is only valid within a specified time interval, so as to assure the security of the TPS.
The scheme not only realizes dynamic operations and meets the security requirements, but also further protects data privacy and reduces the computing overhead of users. In summary, the scheme in this paper achieves the following objectives: 1) Support multi-granularity dynamic operations on outsourced data. 2) Implement public integrity auditing of outsourced data under privacy protection. 3) Reduce the user's computation and communication overhead. 4) Set up a secure node, the TPS, to reduce the computing overhead of the UAV through proxy signatures; security and reliability are guaranteed through the authorization sent to the TPS by the UAVs. The rest of this article is organized as follows. Section II reviews related work. Section III introduces the system model and security goals of our scheme, and Section IV presents our scheme in detail. Section V analyzes the correctness, security, and performance of the scheme. Finally, Section VI concludes the article.
VOLUME 8, 2020

II. RELATED WORK
To evaluate the security of a cloud storage system, the integrity and availability of outsourced data are necessary indicators, and many studies have been carried out on data integrity auditing to ensure them [1]-[12], [24]-[36]. As one of the core features of cloud storage systems, integrity auditing allows users to verify whether the uploaded data is complete and available without downloading the cloud data. Data integrity verification mechanisms can be divided into PDP (Provable Data Possession) and POR (Proofs of Retrievability), depending on whether fault-tolerant preprocessing is applied to data files. PDP can quickly verify the integrity of outsourced data, while POR can additionally recover damaged data. To verify the integrity of data on untrusted cloud servers, Ateniese et al. [1] first proposed a PDP mechanism, completing data integrity verification based on the RSA signature mechanism and a probabilistic strategy. Their scheme supports public auditing, and based on RSA signatures, cloud servers can aggregate proof information to reduce communication overhead. Since then, many PDP schemes have been proposed, some based on basic number theory and some on elliptic curve cryptography. Chen et al. [12] implemented data integrity auditing based on a distributed string equality check protocol. Although the efficiency and security of their scheme are relatively high, public auditing is not supported.
However, the above schemes cannot support dynamic operations on data. Dynamic-data schemes must support the insertion, deletion, and modification of data, and many such data integrity audit schemes for cloud storage services have been proposed. Ateniese et al. [1] proposed a PDP scheme supporting dynamic data based on a symmetric cryptosystem in 2007, but the scheme cannot support data insertion. Since then, based on different authentication structures, various dynamic data auditing schemes have emerged: some are implemented with index hash tables [2], [21], while others are based on the Merkle hash tree (MHT) [18]-[20]. The common goal of these schemes is to improve the efficiency of dynamic data operations while effectively resisting replay, forgery, and deletion attacks. Erway et al. [5] realized dynamic data operations through rank-based authenticated skip lists and RSA signature mechanisms; theirs was the first PDP scheme to support fully dynamic data operations. However, as the file block size increases, the time for node search rises sharply and the efficiency of dynamic operations decreases. Tian et al. [2] proposed a new public audit scheme for secure cloud storage based on a Dynamic Hash Table (DHT), realizing dynamic data integrity auditing by establishing a dynamic two-dimensional data structure at a third-party auditor (TPA). To improve the efficiency of dynamic data operations, Wang et al. [3] proposed a dynamic data integrity auditing scheme supporting public auditing based on the MHT and BLS signature mechanisms; in this model, the MHT structure is used to ensure the positional accuracy of blocks. In more recent studies, Shao et al. [4] and Fu et al. [26] built on Wang et al. [3] to implement dynamic data auditing through hierarchical binary trees (HMBT and MPHT, respectively). Gan et al. [6] constructed a new data structure, the record table (RTable), to operate on dynamic data, implementing integrity auditing based on algebraic signatures and XOR-homomorphic functions. Aujla et al. [25] used a grid method and Bloom filters to verify dynamic data integrity while resisting attacks by quantum computers, and also constructed a dynamic POR scheme using trapdoor commitments.
Note that the above schemes complete the integrity audit by introducing a third-party auditor (TPA) to reduce the user's local computing burden. However, this presents a new security issue, because the TPA may leak the privacy of user data. As far as data privacy protection is concerned, users need to prevent the TPA (and even cloud servers) from obtaining the real contents of highly confidential data. The schemes proposed so far usually use the BLS signature scheme and homomorphic linear authenticators to keep the data private from the TPA, but they do not support dynamic data operations, and their security and efficiency are low.
Wang et al. [7] implemented a public cloud data audit system with privacy protection based on random homomorphic authenticators in 2010. Subsequently, in 2014, Worku et al. [8] pointed out the source of the security defects in Wang et al. [7], further analyzed its inefficiency, and improved it. After that, Yang and Xia [9] and Hong et al. [10] conducted research on privacy protection through elliptic curve cryptography (ECC) and through homomorphic encryption schemes for SMC problems based on random masks, respectively. In recent research, Xu et al. [27] proposed a blockchain-based scheme that combines homomorphic encryption with Ethereum smart contract technology, solving the privacy protection problem of electronic health records. Then, Yang et al. [28] proposed an attestation-based data access identifying scheme for data confidentiality and designed a special log, called an attestation, in which hashed user pseudonyms are used to preserve user privacy.

III. MODELS AND GOALS
A. SYSTEM MODEL
This section describes the structure of the system model and the functions of each entity. As shown in Figure 2, the system model includes three entities: the User (CU), the Cloud Service Provider (CSP), and the Third-Party Server (TPS).
The CU includes UAVs and background staff, which are required to upload the data acquired in real time, perform dynamic data operations, verify the integrity of the data, and authorize the TPS. The CSP provides users with cloud storage services and other operational requirements. The TPS is a third-party server: users send their encrypted data and an authorization to the TPS, and the TPS computes digital signatures, sends them to the CSP within the valid time limit of the authorization, and completes the data integrity audit. In dynamic operations, users likewise send data, operation requests, and an authorization to the TPS, and the TPS completes the dynamic update and the integrity audit of the updated data.
In the system model, the CU first generates its own public-private key pair, the key of the TPS, and the authorization. The CU encrypts the data with its own private key and sends the encrypted data, the public key, the key of the TPS, and the authorization to the TPS. The TPS then signs the data and uploads it to the CSP within the authorized time window. When the CU needs to check the integrity of data or to carry out a dynamic data operation, it sends an operation application and authorization to the TPS. The TPS then sends the operation request and uploads data to the CSP within the effective time of the authorization, and audits the integrity of the data through the proof information returned by the CSP. Finally, the TPS sends the operation result to the CU.

B. SECURITY MODEL
Since cloud service providers and the TPS are untrusted or semi-trusted, we list the following security issues that may occur during integrity auditing and dynamic operations:
1) To improve its storage efficiency, the CSP maliciously deletes part of the user's data and calculates an aggregated data block and tag in advance to pass the integrity audit.
2) The TPS or CSP maliciously sells highly confidential data.
3) The TPS sends a large number of operation requests to the CSP in a short time to consume the communication resources of the CSP.
4) In dynamic data operations, the CSP dishonestly updates the data and deceives the user about data integrity by forging or using expired data and tags.

IV. CONSTRUCTION OF MODEL
A. PRELIMINARIES AND NOTATION
Referring to Chen's scheme [12], this paper combines the distributed string equality checking protocol with bilinear pairing to sign the data after privacy protection. Referring to Qing's scheme, multi-granularity dynamic data operations are realized based on the MHT. The security of the scheme rests on the Diffie-Hellman problem and pseudo-random functions. 1) Bilinear mapping: G and G_T are both cyclic multiplicative groups of prime order p, and g is a generator of group G. A bilinear map e : G × G → G_T satisfies: for ∀m, n ∈ G and random x, y ∈ Z*_p, e(m^x, n^y) = e(m, n)^{xy}; moreover, e(g, g) ≠ 1 (non-degeneracy).
2) Diffie-Hellman problem [13]: G is a cyclic group of prime order p, and g is a generator of group G. Given (g, g^a) for unknown a ∈ Z*_p, it is computationally hard to recover a (the discrete logarithm, DL, problem); given (g, g^a, g^b), it is computationally hard to compute g^{ab} (the computational Diffie-Hellman, CDH, problem).
3) Rank-based Merkle hash tree: each node x of the MHT stores a pair (r_x, h(x)), where r_x represents the number of data blocks that can be reached from the node and h(x) represents the hash value binding its two child nodes. For example, in Figure 3, r_d = r_3 + r_4. If the auxiliary path of a leaf is K = {J, B}, the root node A can be recalculated from the leaf together with K. 4) Classic string equality checking protocol [12]: Alice owns a string x ∈ {0, 1}^n, Bob owns a string y ∈ {0, 1}^n, and there is a public pool of random strings S ⊆ {0, 1}^n. Alice selects a random s ∈ S and sends (s, ⟨x, s⟩ mod 2) to Bob. Bob calculates ⟨y, s⟩ mod 2 and verifies whether ⟨y, s⟩ mod 2 = ⟨x, s⟩ mod 2; the protocol continues if the equation holds, otherwise Bob notifies Alice to terminate it. If the equation holds after 100 repetitions, Bob notifies Alice that the two strings are equal. The probability of a false positive is 1/2^100, which is negligible, and the communication overhead is O(log n), so the protocol is safe and effective.
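As a concrete illustration, the round structure of the equality check can be sketched in Python. Modeling the shared pool S by drawing a fresh shared random string each round is an assumption of this sketch, not part of the protocol definition:

```python
import secrets

def inner_product_mod2(x: str, s: str) -> int:
    # <x, s> mod 2: parity of the positions where both bit strings carry a 1
    return sum(int(a) & int(b) for a, b in zip(x, s)) % 2

def strings_equal(x: str, y: str, rounds: int = 100) -> bool:
    """One party holds x, the other y; each round a shared random string s
    is drawn and the parities <x,s> mod 2 and <y,s> mod 2 are compared.
    A mismatch proves x != y; after `rounds` matching rounds, the
    false-positive probability is 1/2^rounds."""
    for _ in range(rounds):
        s = ''.join(secrets.choice('01') for _ in range(len(x)))
        if inner_product_mod2(x, s) != inner_product_mod2(y, s):
            return False  # definitely unequal
    return True  # equal, except with negligible probability
```

If x ≠ y, each round detects the difference with probability 1/2, which is where the 1/2^100 bound comes from.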

B. OUR CONSTRUCTION
In this section, we give a detailed introduction to the scheme. G_1 and G_2 are both multiplicative cyclic groups of prime order p, g is a generator of G_1, and e : G_1 × G_1 → G_2 is a bilinear map. 1) The CU selects two random numbers x, k_1 ∈ Z*_p as the private key of the CU and calculates pk = g^x as the public key of the user. Then the CU selects ε, r, k_2 ∈ Z*_p as the private key of the TPS, and selects a random number u ∈ Z*_p and k = g^ε as the public key of the TPS. The key pair of the user is thus SK_CU = {x, k_1} with public key pk; the key pair of the TPS is SK_TPS = {ε, r, k_2}, and the public information is PK = {g, k, u, pk}. 2) For each operation, the CU selects a random number r_0 ∈ Z*_p, computes Y_0 = g^{r_0} and β_0 = r_0 + x · H_1(ID_CU||ID_TPS||time_1||time_2, Y_0) mod p, and issues the authorization Au = (ID_CU, ID_TPS, time_1, time_2, Y_0, β_0), valid only within the interval [time_1, time_2].

1) DataBlind
The CU first carries out privacy-protection processing on the outsourced data. The CU uses the private key k_1 to generate the pseudo-random function f_{k_1}(·) and calculates the blinding factor α_i = f_{k_1}(i, name), where name ∈ Z*_p is the unique identifier of file F. After that, the CU encrypts each data block by calculating m'_ij = m_ij + α_i, i ∈ [1, n], j ∈ [1, s]. The CU sends the encrypted file F' = {m'_ij} to the TPS.
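A minimal sketch of the DataBlind step, with the pseudo-random function f_{k_1} modeled by HMAC-SHA256 and an illustrative prime modulus standing in for p (both choices are assumptions of the sketch, not fixed by the scheme):

```python
import hashlib
import hmac

P = 2**255 - 19  # illustrative prime modulus standing in for p

def prf(key: bytes, i: int, name: bytes) -> int:
    # f_k1(i, name), modeled here as HMAC-SHA256 reduced mod P (an assumption)
    digest = hmac.new(key, str(i).encode() + b'||' + name, hashlib.sha256).digest()
    return int.from_bytes(digest, 'big') % P

def blind(k1: bytes, name: bytes, partitions):
    """m'_ij = m_ij + alpha_i mod P, with alpha_i = f_k1(i, name)."""
    return [[(m + prf(k1, i, name)) % P for m in part]
            for i, part in enumerate(partitions, start=1)]

def unblind(k1: bytes, name: bytes, blinded):
    # Only the holder of k1 can remove the blinding factor
    return [[(m - prf(k1, i, name)) % P for m in part]
            for i, part in enumerate(blinded, start=1)]
```

Because the TPS and CSP never see k_1, they only ever handle the blinded values m'_ij.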

2) AuthGen
After receiving the file data and the authorization, for each data partition m'_i the TPS constructs an initial vector v_i = (h(m'_i1), . . . , h(m'_is)), where element v_ij records the hash value of data block m'_ij, and calculates the homomorphic tag σ_ij = (g^{r·m'_ij + f_{k_2}(name||i||j||v_ij)} · u^{H_2(name||i||j||v_ij)})^ε mod p. After that, the TPS constructs the MHT and calculates the root node R and the signature σ_R||name of R. Finally, the TPS sends (F', Φ, σ_R||name) to the CSP, where Φ denotes the set of tags σ_ij. When the CSP receives the data, it first verifies the authorization of the TPS by checking g^{β_0} = Y_0 · pk^{H_1(ID_CU||ID_TPS||time_1||time_2, Y_0)}. If the equation holds, the CSP stores (F', Φ, σ_R||name) and constructs the corresponding MHT; otherwise, the operation request is invalid.
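The MHT bookkeeping performed by both the TPS and the CSP can be sketched as follows; using SHA-256 and a power-of-two leaf count are simplifying assumptions of this sketch:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes):
    """Fold a level of hashes pairwise up to the root
    (the leaf count is assumed to be a power of two)."""
    level = list(leaf_hashes)
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def root_from_path(leaf_hash: bytes, aux_path):
    """Recompute the root R' from one leaf hash and its auxiliary path K;
    each path entry is a (sibling_hash, sibling_is_left) pair."""
    node = leaf_hash
    for sibling, sibling_is_left in aux_path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node
```

The verifier accepts a leaf only if the root recomputed from the auxiliary path matches the signed root R.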

3) ProofGen
When conducting an integrity audit, the CU first authorizes the TPS. After being authorized, the TPS selects a set I of block indices and a random coefficient e_i ∈ Z*_p for each i ∈ I, and sends the challenge message chal = {i, e_i}_{i∈I} to the CSP. The CSP computes the aggregated data proof α and the aggregated tag proof β over the challenged blocks and their tags. If K_l is the auxiliary path of the challenged leaf nodes in the l-th MHT, the auxiliary information is Ω = {v_l, K_l}_{i∈I}. Then the CSP sends Proof = {α, β, Ω, σ_R||name} to the TPS as the proof information.

4) ProofVerify
After receiving the proof information, the TPS reconstructs the MHT using I and Ω, calculates the root node R', and verifies whether σ_R||name is valid. If the signature is valid, the TPS then checks the verification equation e(β, g) = e(η, k) · e(y, k). If the equation holds, the TPS sends Accept to the CU to indicate that the data is complete; otherwise it sends Reject to indicate that the data is damaged.
Furthermore, we depict the interactions among users, the TPS, and the cloud servers in the proposed scheme in Figure 4.

C. SUPPORT FOR DYNAMIC DATA
In dynamic data operations, the CU can perform the following five operations according to the data granularity and operation type: inserting a data partition (SI), deleting a data partition (SD), inserting a data block (BI), modifying a data block (BM), and deleting a data block (BD). The specific process of a dynamic operation on data blocks is shown in Figure 5. When the CU needs to perform a dynamic data operation, it proceeds as follows:
1) The CU generates the operation request information P_S = (update, name, i) and the authorization Au and sends them to the TPS. The TPS transmits {P_S, Au} to the CSP, and the CSP verifies the authorization as in AuthGen.
2) The CSP calculates the auxiliary information Ω_i and sends P_R = {Ω_i, σ_R||name} to the TPS as the response.
3) After receiving P_R, the TPS uses Ω_i to construct the MHT, calculates the root node R', and uses R' to verify whether σ_R||name is valid. If σ_R||name is invalid, it sends fail to the CSP and the CU to indicate that the operation failed. Otherwise, the next step is executed.
4) The TPS modifies v_i to v*_i according to the operation type and calculates the new root node R* of the MHT. The TPS calculates the tag of the data to be updated, then generates and sends the dynamic operation request P_R2 = (BI/BM/BD/SI/SD, name, v_i, σ*_R||name, i, j, m*_ij, σ*_ij) to the CSP.
5) The CSP modifies v_i to v*_i according to P_R2, updates (m_ij, σ_ij) to (m*_ij, σ*_ij), and then replaces σ_R||name with σ*_R||name. Finally, ProofGen and ProofVerify are executed to verify the integrity of the updated data.
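The root-update bookkeeping of step 4 (shown here for a BM operation) can be sketched as follows, under the same simplified hashing assumptions used for the MHT itself; recomputing R* from the updated leaf hashes is what lets the TPS and CSP cross-check the update:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes):
    # Fold hashes pairwise up to the root (leaf count assumed a power of two)
    level = list(leaf_hashes)
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def modify_block(leaf_hashes, index, new_block: bytes):
    """BM sketch: replace the hash of the updated block and recompute the
    new root R*; both TPS and CSP perform this and compare signatures on R*."""
    updated = list(leaf_hashes)
    updated[index] = h(new_block)
    return updated, merkle_root(updated)
```

Insertion and deletion (BI/BD, SI/SD) follow the same pattern, adding or removing a leaf before the root is recomputed.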

V. THEORETICAL AND PERFORMANCE ANALYSIS
A. CORRECTNESS
In this section, we discuss the correctness of the scheme. Owing to the use of the MHT and the vectors v_i, the TPS can track the index positions of data blocks and tags at the CSP, which makes it possible to determine whether the CSP performs data updates honestly. If the CSP and the TPS implement the scheme honestly, the correctness of the integrity audit is established by the verification equation e(β, g) = e(η, k) · e(y, k).

B. SECURITY
In this section, we analyze the security of the scheme from the following aspects: the privacy of the data, the unforgeability of the authorization, and the reliability of the audit. Theorem 1: During the data upload and storage session, the TPS and CSP cannot obtain the real data from the encrypted data.
Proof: The encrypted data m'_ij is generated with the blinding factor α_i, and α_i = f_{k_1}(i, name) is randomly generated by the CU through key k_1. Therefore, after receiving the encrypted data m'_ij, the TPS cannot recover the real data m_ij.
Theorem 2: The TPS cannot forge an authorization without permission and pass the CSP's inspection. Moreover, the TPS cannot operate on the CSP without permission.
Proof: The unforgeability of the authorization is determined by Y_0 and β_0. Y_0 = g^{r_0} hides the random value r_0 chosen by the CU; even knowing (g, g^{r_0}), the TPS cannot calculate r_0 under the DL assumption. Further, β_0 = r_0 + x · H_1(ID_CU||ID_TPS||time_1||time_2, Y_0) mod p is determined by the CU's private key x, by r_0, and by the corresponding Y_0. Therefore, the TPS cannot forge (Y_0, β_0) to pass the CSP's examination in AuthGen. And through (ID_CU, ID_TPS, time_1, time_2), the TPS is bound to complete the operation honestly according to the CU's instructions.
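The authorization pair (Y_0, β_0) has the shape of a Schnorr signature over (ID_CU||ID_TPS||time_1||time_2). A toy sketch with deliberately tiny group parameters (q = 2039, p = 1019, g = 4, chosen only so the algebra is checkable; a real deployment uses a large pairing-friendly group):

```python
import hashlib
import secrets

# Toy group: q = 2039 is a safe prime, p = 1019 = (q-1)/2, and g = 4 = 2^2
# generates the order-p subgroup of quadratic residues mod q. These tiny
# values only exercise the algebra and offer no security whatsoever.
q, p, g = 2039, 1019, 4

def H1(msg: bytes, Y0: int) -> int:
    return int.from_bytes(hashlib.sha256(msg + Y0.to_bytes(2, 'big')).digest(), 'big') % p

def authorize(x: int, msg: bytes):
    """CU side: Y0 = g^r0, beta0 = r0 + x * H1(msg, Y0) mod p."""
    r0 = secrets.randbelow(p - 1) + 1
    Y0 = pow(g, r0, q)
    return Y0, (r0 + x * H1(msg, Y0)) % p

def check_authorization(pk: int, msg: bytes, Y0: int, beta0: int) -> bool:
    """CSP side: accept iff g^beta0 == Y0 * pk^H1(msg, Y0) mod q."""
    return pow(g, beta0, q) == (Y0 * pow(pk, H1(msg, Y0), q)) % q
```

The check works because g^{β_0} = g^{r_0} · g^{x·H_1(·)} = Y_0 · pk^{H_1(·)}, exactly the equation the CSP verifies in AuthGen.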

Theorem 3: In data integrity audits, the CSP cannot cheat the TPS by forging, aggregating, or using expired data and tags.
Proof: Before auditing the integrity of data, the TPS first verifies the index structure and tags at the CSP through the MHT and the vectors v_i, so the CSP cannot use expired data and tags to pass the audit. Under the CDH assumption, the CSP cannot calculate the keys {r, ε} from the data tags, and the CSP does not know the index k_2 of the pseudo-random function f_{k_2}(·) contained in the signature, so the data signatures are unforgeable. Moreover, since the TPS randomly selects |I| data blocks for each audit, if the CSP wants to precompute aggregated data blocks in advance, it needs to calculate 2^{|I|} − 1 combinations. Therefore, it is unrealistic for the CSP to pass the audit by aggregating data and tags.
Theorem 4: In data integrity audits, the CSP cannot cheat the TPS into passing the integrity audit by forging data and tags.
Proof: To prove this theorem, we design the following game (for conciseness, the MHT path and the authorization are not considered here). First, the TPS sends the challenge message chal = {i, e_i}_{i∈I} to the CSP. Case 1: the CSP forges α' ≠ α and sends Proof' = {α', β} to the TPS. To pass the verification, Proof' must satisfy the verification equation; furthermore, since Proof = {α, β} is the correct proof information, we also have e(β, g) = e(η, k) · e(y, k). According to formula 5 in the correctness proof, we then obtain g^{α'} = g^α ⇒ g^{α'−α} = 1 ⇒ α' = α. That is, unless the CSP guesses the true value of α, the CSP cannot deceive the TPS by forging α'. Obviously, the probability of the CSP guessing α is Pr[Unbound] = negl(λ), which is negligible. Therefore, Case 1 does not hold.
Case 2: the CSP forges (α', β') simultaneously and sends Proof' = {α', β'} to the TPS, after which the TPS computes y as in ProofVerify. If the CSP wants to win this game, it must effectively calculate the digital signature corresponding to the damaged data block, namely σ_ij = (g^{r·m'_ij + f_{k_2}(name||i||j||v_ij)} · u^{H_2(name||i||j||v_ij)})^ε mod p. However, (r, k_2, ε) is the private key of the TPS; under the DL and CDH assumptions, it is infeasible for the CSP to forge signatures. Similarly, the CSP cannot forge a β' consistent with α', so Case 2 does not hold.

C. PERFORMANCE
In this section, we analyze the theoretical performance of our scheme in terms of computation, communication, and storage costs. Suppose n is the number of data blocks of the uploaded file, l is the length of an audit query, q is the security level, and d is the number of data blocks involved in a dynamic operation.

1) COMPUTATION COST
For the CU, the participating processes are Setup and DataBlind. In the whole protocol, the Setup and DataBlind steps for a given file are executed only once, and their cost can be amortized over all subsequent data integrity audits, so the computational overhead for the CU is O(1).
For the TPS, the processes involved are AuthGen, ProofGen, ProofVerify and the dynamic data operations. The computational overhead of calculating data tags in AuthGen is O(n).
The computational overhead of verifying the index structure of the outsourced data and auditing the integrity of the data in the ProofVerify and Dynamic Data steps is O(l) and O(d), respectively.
For the CSP, the processes involved are AuthGen, ProofGen and the dynamic data operations. In the AuthGen step, the computational overhead of verifying the TPS authorization is O(1). The computational overhead of calculating the proof information in ProofGen and in dynamic data operations is O(l).
Therefore, the computation cost of each entity in the scheme is shown in Table 3.

2) COMMUNICATION COST
In our scheme, communication overhead arises in the following steps: 1) the CU uploads encrypted files to the TPS; 2) the CU sends operation requests (data integrity verification, dynamic data) to the TPS; 3) the TPS uploads data to the CSP; 4) the TPS sends challenge information or operation requests to the CSP; 5) the CSP returns proof information or operation results to the TPS; 6) the TPS returns the operation result to the CU.
Similarly, for a given file, the communication overhead generated while uploading the encrypted data can be amortized over all subsequent operations. Therefore, over the life cycle of a data file, the communication overhead of the CU is O(1). For the TPS and CSP, the overhead of each data integrity audit depends on the length of the audit query, and the overhead of each dynamic operation depends on the number of data blocks updated. The communication overhead of each entity is shown in Table 4.

3) STORAGE COST
In our scheme, the CU, CSP, and TPS incur little storage overhead. The user stores only its own private key SK_CU = {x, k_1}. The secure node TPS mainly stores its own private key SK_TPS = {ε, r, k_2} and the public information PK = {g, k, u, pk}. The CSP is mainly responsible for storing the data files and the signature of each data block, but thanks to the Merkle hash tree, its storage overhead is relatively reduced.
Next, we compare the performance of our scheme with that of recent similar schemes; the comparison is shown in Table 5. We found that the differences between the schemes lie mainly in their structural design, which in turn determines their functionality. In the SCS protocol, Chen et al. [12] implemented digital signatures based on the distributed string equality checking protocol, which performs well in computational efficiency and security; in our scheme, we therefore mainly refer to the SCS protocol to implement digital signatures. Although the SCS protocol implements simple dynamic data operations based on a hash table, its inefficiency makes it impractical; moreover, it supports neither public auditing nor privacy protection, and it leaves users with a heavy computing burden. In the DAP scheme, Yang and Jia [11] implemented data integrity auditing based on BLS short signatures and bilinear pairing. Although its computational and communication overhead is low, its security is questionable. Based on an index table (ITable), it realizes essentially all dynamic data operations, but the data structure limits operation efficiency. In the ODA scheme, Gan et al. [6] implemented data integrity auditing based on algebraic signatures and XOR-homomorphic functions, and realized dynamic data operations through an index table. However, like the DAP scheme, ODA does not protect users' data privacy well, and the users' computing overhead could be further reduced. In our scheme, we follow the work of Chen et al. and use the distributed string equality checking protocol to implement the data integrity audit, which improves security while maintaining low computation and communication overhead, and we realize dynamic data operations through the Merkle hash tree, improving their efficiency.
At the same time, by setting up a secure node TPS, we ensure the privacy of user data and reduce the local computing overhead of users. We further evaluated the computational cost of our scheme on an Intel Core i7-7700HQ CPU @ 2.80 GHz with 16 GB RAM. Using the Java Pairing-Based Cryptography Library (JPBC), we simulated the computation cost of our scheme in IntelliJ IDEA. We divide each block into 20 sectors. We record the computation time of the digital signature in the AuthGen algorithm with data blocks of 10 KB, and the time of integrity auditing in ProofVerify with a file size of 4 MB. All experimental results are averages over 20 runs. The results for digital signatures are shown in Figure 6, and the results for integrity auditing are shown in Figure 7. The experimental data show that our scheme is efficient for the cloud servers.

VI. CONCLUSION
During exploration, UAVs continuously upload real-time data to the cloud server. However, the storage and computing capabilities of UAVs are limited, so a public audit scheme supporting dynamic data is needed; more importantly, the privacy of the data collected by UAVs must be protected. In this paper, we propose a general and efficient privacy-preserving public cloud audit scheme that supports dynamic data. By designing the third-party server (TPS) and secure authorization, we protect data privacy and greatly reduce the local computing overhead of UAVs. Based on the MHT multi-level index structure, we realize dynamic operations on cloud data while greatly improving the efficiency of dynamic operations and the storage efficiency of the cloud server itself. At the same time, we design the digital signature of the scheme based on the distributed string equality checking protocol and bilinear mapping. We have verified the security and theoretical performance of our scheme through detailed algebraic derivation, and its effectiveness through experiments and performance evaluation. The results show that our scheme effectively realizes outsourced data integrity auditing and, compared with existing schemes, reduces computational and storage costs. However, to further optimize the computational efficiency and security of the scheme, a more optimized signature scheme and system structure should be considered, which is our future work.

XIAOYUAN YANG is currently a Professor with the Engineering University of the People's Armed Police. He has published about 100 articles in the field of information security. His main research interest includes cryptography.