A Generic Cryptographic Deep-Learning Inference Platform for Remote Sensing Scenes

Deep learning plays an essential role in multidisciplinary remote sensing research, yet security problems arise during the data acquisition, processing, and result generation stages. Secure deep-learning inference services are therefore a critical link. Although theoretical progress has been made in cryptographic deep-learning inference, a general platform that can be deployed in practice is still lacking, and constantly modifying models to approximate plaintext results reveals model information to a certain extent. This article proposes a generic post-quantum platform named PyHENet, which combines cryptography with plaintext deep learning libraries. We optimize the convolution, activation, and pooling functions and complete ciphertext operations over floating-point numbers for the first time. Moreover, the computation is accelerated by single instruction multiple data (SIMD) streams and GPU parallel computing. The experimental results show that PyHENet is closer to a plaintext inference platform than any other cryptographic model and has satisfactory robustness. The optimized PyHENet obtained an accuracy of 95.05% on the high-resolution NaSC-TG2 database, which was acquired by the Tiangong-2 space station.


I. INTRODUCTION
With the continuous development of Big Data and deep learning, the combination of remote sensing systems and artificial intelligence is growing ever closer, and the corresponding privacy problems [1], [2], [3] are becoming increasingly prominent. Nature published "Map Opportunities" [4], highlighting the importance of geographic information as one of the three most promising technology areas. However, much feature information and private data is hidden in acquired remote sensing images. Sun [5] proposed a hashing method to explore the characteristics of remote sensing images, and Kang [6] overcame the limitation on class discrimination. How to effectively protect the security of the raw data and of the deep learning models trained in the cloud is therefore critical.
The structure of the high-security deep-learning inference framework is shown in Fig. 1. It can be applied in various application scenarios [7], such as disaster detection [8], pest monitoring [9], [10], target location [11], [12], and personal GPS [13] on different devices [14]. Public key encryption schemes [15], symmetric algorithms [16], orthogonal decomposition [2], crypto-watermarking [17], and other encryption technologies are used for remote sensing image processing. However, their encryption level cannot resist quantum attacks, and they cannot provide a unified platform for different tasks. This is the problem PyHENet is designed to solve.
Cloud computing provides convenient transmission, storage, and sharing for image classification of remote sensing scenes but brings many security problems [18]. 1) Transmitting data and models in plaintext is vulnerable to reverse attacks, poisoning attacks, backdoor attacks [19], and many other security attacks. 2) With the continuous development of quantum computing [20], traditional cryptographic algorithms are no longer absolutely secure. Watermarking, differential privacy, partially homomorphic encryption (HE) [16], [17], [21], and other methods offer poor security, cannot resist quantum attacks, and cannot support deep computation. 3) Encrypted computation remains limited to basic addition and multiplication operations, which cannot be combined with standard deep learning libraries, such as TensorFlow or PyTorch, to complete generic secure deep learning computation.
Many countries and organizations are strengthening their protection of national security and data privacy. The general data protection regulation (GDPR) [22] of the European Union, which came into effect in 2018, has ushered in a new era of privacy protection, and since 2020 Mozilla users have been able to delete their data from Mozilla's servers. As shown in Fig. 2, the RGB histograms of different residential images in the same class carry different information features, which can reveal much private information. Chaudhari [17] combined watermarking and encryption for image copyright protection, but this provides only simple security for the image itself. The more challenging problem of post-quantum encryption for deep learning computation requires further research.
Since deep learning depends on huge computing power and massive datasets, individuals with limited computing power cannot accomplish such tasks on their own. Inspired by platform as a service (PaaS), deep learning as a service (DLaaS) [23], [24] has become a service with great application potential. Local users upload their personal data to the cloud, and the server produces results through the trained model and returns them to the client; this is called deep learning inference. In many business applications of remote sensing systems, securing data and models is more urgent than in ordinary applications [25]. Fig. 3 shows the encryption and decryption process of DLaaS under ciphertext.
Microsoft Research began studying privacy-preserving deep learning inference in 2016 [26], [27], based on a fully HE (FHE) scheme called YASHE. The MiniONN framework [28] was proposed subsequently. Meanwhile, Shokri [29] began research on differential privacy. Follow-up research on privacy-preserving deep learning inference can be divided into three categories: methods based on HE [30], [31], on differential privacy [32], or on secure multiparty computation, and combinations of these. It should be emphasized that differential privacy perturbs the plaintext, which reduces accuracy. The GAZELLE framework [33] is based on garbled circuits but incurs colossal communication overhead, and the BAYHENN framework [34] suffers from the same problem.
According to state-of-the-art research [20], the hard problems on which traditional cryptography is based are no longer hard [35], [36]. Lattice-based fully HE, however, can resist quantum attacks, and our research is therefore based on it.
For the privacy-preserving deep learning inference service required in remote sensing scenes, this article presents the PyHENet platform for the first time, shown in Fig. 6. We address the shortcomings of traditional fully HE in the convolutional layer, max pooling, sigmoid, and other computations.
In summary, the contributions of this article are as follows.
1) We develop a generic deep-learning inference platform, PyHENet, that protects the security of both the raw data and the models with fully HE. It works with the plaintext library, does not require modifying the trained model, and uses single instruction multiple data (SIMD) streams and GPU parallel computation to accelerate the calculation.
2) Compared with the most advanced FHE-based frameworks, PyHENet achieves its complexity and generality without reducing accuracy. We optimize and implement the convolution, nonlinear sigmoid, and max pooling functions in FHE.
3) We optimize the AlexNet model used in the state-of-the-art article on NaSC-TG2, which was collected by the Tiangong-2 remote sensing system. We not only ensure security but also improve the accuracy to 95.03%.
The rest of this article is organized as follows. Section II introduces the basic algorithms of fully HE and deep learning. Section III explains the contributions and optimizations of the PyHENet platform in detail. Section IV presents the experimental comparison and gives a detailed analysis. Finally, Section V concludes this article.

II. PRELIMINARIES AND RELATED WORK
Remote sensing scenes usually involve people's livelihoods, the economy, the military, and many other fields, so their security requirements go far beyond those of simple image classification tasks [37], such as ImageNet. First, deep learning models for remote sensing tasks are trained on many valuable datasets, so the models must be kept secure. In addition, the raw data of the inference service carries the private data of the remote sensing task. Our PyHENet platform can operate under encryption that resists quantum attacks, offering higher security. This section introduces the related work in the following aspects.

A. Privacy-Preserving Deep-Learning Inference
Inspired by platform as a service (PaaS), deep learning as a service (DLaaS) emerged in 2018 [38], [39] with great application potential. The high accuracy of deep learning models depends on large datasets and hardware devices, which makes datasets and trained models increasingly valuable. More and more researchers are exploring privacy-preserving deep-learning inference tasks. In addition, customers who use cloud services can obtain the desired results despite weak local computing power.

1) Data Perturbation-Based Privacy-Preserving:
The basic idea of data perturbation is to add noise to the data so that the raw data is difficult to recover [40], [41]. The goal is to protect each row of records in the database while allowing analysis of the entire database. Differential privacy is frequently used [42], [43] for inference services. However, the accuracy of the results is reduced by the added noise, and the noise-added data remains visible to anyone. Therefore, its security level cannot meet the requirements of remote sensing scene applications.
2) Cryptography-Based Privacy-Preserving: Cryptography-based methods can satisfy both the invisibility of the data and the losslessness of the result accuracy. Commonly used methods include secure multiparty computation (MPC) [44], [45] and HE [34], [46]. However, partial HE supports only one kind of operation, either addition or multiplication. With the deepening of quantum computing research [20], researchers believe it will significantly impact existing cryptography. Therefore, it is necessary to study lattice-based fully HE [47], the only method that can resist quantum attacks while supporting diverse operations.
The lattice-based fully HE matches well with the diversity of deep learning operations and the high security required by remote sensing applications. Section III will explain how we optimize fully HE to fit the generic PyHENet platform. A typical application of encrypted DLaaS is shown in Fig. 3: the client provides encrypted information, and the server computes over the ciphertext and returns the encrypted result to the client.

B. Lattice-Based FHE of Floating Point Computation
The security of cryptography mainly rests on challenging mathematical problems, such as prime factorization. Although Shor's quantum algorithm can solve this problem in polynomial time, other more complex mathematical problems remain to be exploited.
Ajtai proved the reduction of lattice-based hard problems from the worst case to the average case. Learning with errors (LWE) and its variants [48], [49] are lattice-based hard problems in the average case.
Definition 1 (Learning With Errors, LWE): Given a uniformly random matrix A ∈ Z_q^{m×n}, a secret s ∈ Z_q^n, and an error e ∈ Z_q^m drawn from the distribution χ, set b_i = A_i s + e_i (mod q). Given multiple pairs (A_i, b_i), finding s is hard.
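A toy numerical instance of the LWE relation b = As + e (mod q) can be sketched as follows. The parameters here are purely illustrative; real lattice schemes use far larger dimensions and moduli.

```python
import random

# Toy LWE instance: b = A*s + e (mod q). Given only (A, b), recovering
# s is believed hard; the holder of s can strip A*s and see the small error.
q, n, m = 97, 4, 8
random.seed(0)
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s = [random.randrange(q) for _ in range(n)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]   # small error from chi
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# With the secret, subtract A*s and re-center modulo q to expose e:
recovered = [(b[i] - sum(A[i][j] * s[j] for j in range(n))) % q for i in range(m)]
centered = [v if v <= q // 2 else v - q for v in recovered]
assert centered == e
```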
The RLWE problem is a variant of LWE that reduces communication overhead and accelerates encryption and decryption. The FHE algorithm in this article is based on the ring learning with errors problem. The critical property of fully HE [47], [50] is that it supports addition and multiplication under ciphertext. The process of HE is as follows. The public and private keys (pk, sk) are generated by the key generation algorithm Gen(1^n); plaintexts m_1, m_2 from the plaintext space M are then encrypted with the public key pk, yielding two ciphertexts c_1 = Enc_pk(m_1) and c_2 = Enc_pk(m_2) in the ciphertext space C.
CKKS is an approximate fully HE algorithm proposed by Cheon in 2017 that supports floating-point operations through a rescaling technique. As shown in Fig. 4, rescaling saves a large amount of ciphertext space, providing theoretical support for deep learning computation: the scale of the ciphertext can be quickly reduced, avoiding the problems caused by its growth. In previous schemes, such as BFV, GSW [51], or BGV [52], the scale grows monotonically and cannot be decreased, which leads to rapid accumulation beyond the modulus of the plaintext polynomial coefficients and doubles the computational complexity.
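The rescaling idea can be illustrated on plaintext with a simple fixed-point encoding. This is a sketch of the scale management only, with no encryption involved; the base p below is an illustrative choice, not a CKKS parameter from the article.

```python
# CKKS-style scale management in plaintext: values are encoded as integers
# m*p; a product carries scale p^2 and is rescaled (divided by p) so the
# scale factor does not grow multiplicatively across layers.
p = 2**20                              # illustrative scaling base

def encode(x):
    return round(x * p)                # fixed-point encoding at scale p

def decode(m):
    return m / p

def mult_rescale(m1, m2):
    # product sits at scale p^2; rounding division brings it back to p
    return (m1 * m2 + p // 2) // p

a, b = encode(3.14159), encode(2.71828)
prod = mult_rescale(a, b)
assert abs(decode(prod) - 3.14159 * 2.71828) < 1e-4
```

Without the rescale step, k chained multiplications would leave the result at scale p^(k+1), quickly exceeding the coefficient modulus; rescaling keeps every intermediate at scale p.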
The FHE can perform simple algebraic operations on the cloud directly. Although it is still a long way from cryptographic deep learning calculation, it is the security guarantee for the PyHENet platform calculation of this article.

C. Convolutional Neural Network
The convolutional neural network (CNN) is a milestone in the development of deep learning [53]. It has an excellent performance in large-scale image processing and is also the research hotspot in remote sensing [54], significantly improving image classification accuracy.
CNN has significant advantages in various applications because of its shared weights and local perception. As shown in Fig. 5, it performs convolution calculations using convolution kernels. In addition, the sigmoid function is often used as the activation function of neural networks, and the max pooling function, which returns the maximum over each window, realizes the advantage of local perception. These basic neural network functions are optimized and realized in the cryptographic PyHENet platform of this article. Our CNN model in PyHENet is based on AlexNet [55], with some modifications and optimizations to adapt it to our cryptographic applications. Combined with the standard PyTorch library, the inference service can be implemented without modifying code outside the training model.
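The three plaintext functions that PyHENet later re-implements under ciphertext can be sketched in NumPy as follows. This is a minimal illustration of the operations themselves, not the platform's actual code.

```python
import numpy as np

# Plaintext versions of the three building blocks: 2-D convolution,
# sigmoid activation, and 2x2 max pooling.
def conv2d(img, kernel):
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def maxpool2x2(x):
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2   # drop odd edge
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
feat = maxpool2x2(sigmoid(conv2d(img, np.ones((2, 2)) / 4)))
assert feat.shape == (1, 1)
```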

III. PROPOSED APPROACH OF THE PYHENET PLATFORM
Convolution, pooling, and activation functions are widely used as basic operations in plaintext. For fully HE, however, guaranteeing the correctness of the computation, reducing the number of computation steps, and optimizing the computation time all need to be considered. Previous FHE implementations of neural networks tried to avoid these challenges by substituting linear functions with similar outputs, which harms the applicability of the framework and prevents generalized computation for complex application scenarios.
The importance of data and model security in deep learning inference services requires no reemphasis, especially for high-security scenarios such as remote sensing. The theory of fully HE and of convolutional neural networks has been briefly introduced above. Naturally, we explore combining them, but it is not as simple as one plus one equals two. This section focuses on the difficulties that must be solved and the optimizations needed to construct the general deep-learning inference platform, which we name PyHENet.

A. Privacy-Preserving Deep Learning Inference Generic Platform
The privacy preservation of the PyHENet platform is realized by encryption algorithms based on the provable security of lattice-based FHE. PyHENet completes complex convolution calculations under a rescaling strategy and secures the raw data and the trained model without loss of accuracy.
The generality of PyHENet is reflected in its ability to combine with the popular PyTorch library, so developers need no additional knowledge of fully HE. We modify the bottom-level functions of the PyTorch library, including the convolution, sigmoid, and max pooling functions, to support deep learning inference under ciphertext. Combining the advantages of cryptography libraries and artificial intelligence libraries, we integrate them into a neural network platform called PyHENet, an FHE-based neural network platform combined with the PyTorch library.
The overall framework of PyHENet is shown in Fig. 6. Because CNNs are prominent in deep learning, especially for image classification in remote sensing scenes, it is reasonable to adapt the CNN network to ciphertext. The deep learning inference service can be divided into three parts: the client, which provides private data; the server, which provides the trained model; and the communication layer. Owing to the nature of fully HE, the inference service can compute between the data and the model and return the encrypted result to the client, who finally decrypts it with the private key. On the other hand, different applications have different security requirements, and the output after multilayer calculation can become insensitive to a certain extent. PyHENet can therefore freely choose the depth of encrypted calculation, interrupting the cryptographic computation to balance security and efficiency. The high-speed computing performance of the GPU effectively reduces the overall computing time while ensuring security.

1) Convolution Calculation Under Ciphertext:
The convolution calculation differs from simple multiplication: it is a matrix operation. Under fully HE, the steps of the ciphertext matrix dot product are shown in Fig. 7. We use an integer p as the base for scaling in computation and a modulus q_0 at the bottom level, letting q_l = p^l q_0 for 0 < l ≤ L. A ciphertext is tracked as a tuple (c, l, v, B) of the ciphertext itself, its level, its value, and its noise bound, and the homomorphic operations take the forms Add((c_1, l, v_1, B_1), (c_2, l, v_2, B_2)) and Mult((c_1, l, v_1, B_1), (c_2, l, v_2, B_2)). The lattice-based FHE is an algebraic operation on a ring, so Algorithm 1 gives the pseudo-code for floating-point convolutional computation based on these functions. We can pack vectors into ciphertexts and perform parallel computation on the server with the SIMD technique.
2) Proof of Lower Accuracy Loss of Convolutional Calculation: As shown in Algorithm 1, the convolution calculation under ciphertext can be decomposed into additions and multiplications. Therefore, by proving that addition and multiplication under ciphertext each have low accuracy loss, we prove that the convolution calculation does as well.
Proof: There exist polynomials e_1, e_2 ∈ R such that c_1 · sk = m_1 + e_1 (mod q_l) and c_2 · sk = m_2 + e_2 (mod q_l), with ||e_1||_∞ ≤ B_1 and ||e_2||_∞ ≤ B_2. For addition, c_a · sk = c_1 · sk + c_2 · sk = m_1 + m_2 + e_1 + e_2 (mod q_l), and ||e_1 + e_2||_∞ ≤ B_1 + B_2. For multiplication, c_m · sk = m_1 m_2 + m_2 e_1 + m_1 e_2 + e_1 e_2 (mod q_l), and after the rescaling calculation the resulting bound β remains very close to β_1 + β_2, similar to the case of unencrypted floating-point multiplication.
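The reduction of one convolution output pixel to homomorphic Add and Mult can be mimicked with scaled-integer stand-ins for ciphertexts. This is a plaintext sketch only: `enc`/`dec` here are illustrative encodings, not real encryption, and the base P is an assumed parameter.

```python
# Each convolution output pixel is a dot product of a kernel with an image
# patch, so it decomposes into homomorphic Mult followed by Add.
P = 2**16                                  # illustrative scaling base

def enc(x):                                # stand-in for Enc_pk
    return round(x * P)

def dec(c):                                # stand-in for Dec_sk
    return c / P

def he_add(c1, c2):                        # Add keeps the scale at P
    return c1 + c2

def he_mult(c1, c2):                       # Mult, then rescale p^2 -> p
    return (c1 * c2) // P

patch = [0.5, -1.0, 0.25, 2.0]
kernel = [1.0, 0.5, -0.5, 0.25]
acc = 0
for x, k in zip(patch, kernel):            # one output pixel of the conv
    acc = he_add(acc, he_mult(enc(x), enc(k)))

expected = sum(x * k for x, k in zip(patch, kernel))
assert abs(dec(acc) - expected) < 1e-3     # small rescaling error only
```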

B. Aided Parallel Computing Based on GPU and SIMD
As shown in Fig. 6, to balance security and computing performance, PyHENet can freely select the number of network layers for GPU computing. We also use SIMD technology to pack ciphertexts, which makes homomorphic computing faster and more accessible for the current scene. SIMD is a parallel computing technology that executes one instruction on multiple data elements at once. Table I shows the advantage of SIMD: only one encryption and one homomorphic computation are needed to operate on an entire plaintext vector.
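The SIMD packing advantage summarized in Table I can be illustrated in plaintext: one operation on a packed vector replaces n slot-wise operations. This is a conceptual analogue of ciphertext slot packing, not packing itself.

```python
import numpy as np

# SIMD batching idea: pack a whole vector into the "slots" of one object so
# a single operation acts on all slots at once, instead of n scalar operations.
slots = np.array([1.5, -2.0, 3.25, 0.5])      # packed plaintext slots
weights = np.array([2.0, 2.0, 2.0, 2.0])

packed_result = slots * weights               # ONE packed operation
loop_result = np.array([s * w for s, w in zip(slots, weights)])  # n operations
assert np.allclose(packed_result, loop_result)
```

Under FHE the saving is larger still, since each avoided operation is an expensive homomorphic multiplication rather than a scalar one.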

1) Linear Convolution Function:
In the encrypted convolution calculation, the computational cost should be reduced as much as possible. Besides SIMD parallel computation under ciphertext, the image-to-column (im2col) [56] optimization used in this article accelerates the computation relative to the traditional sliding-window matrix computation and supports homomorphic computation. The optimized schematic is shown in Fig. 8.
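A minimal im2col sketch (our own illustration, not the article's implementation) shows how the sliding window collapses into a single matrix multiplication:

```python
import numpy as np

# im2col rewrites convolution as one matrix multiplication: every kxk patch
# becomes a column, so the whole layer reduces to a single matmul.
def im2col(img, k):
    h, w = img.shape
    cols = [img[i:i + k, j:j + k].ravel()
            for i in range(h - k + 1) for j in range(w - k + 1)]
    return np.stack(cols, axis=1)          # shape (k*k, out_h*out_w)

img = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
out = kernel.ravel() @ im2col(img, 2)      # one matmul replaces the window

# Reference: naive sliding-window convolution over the same image.
ref = np.array([np.sum(img[i:i + 2, j:j + 2] * kernel)
                for i in range(3) for j in range(3)])
assert np.allclose(out, ref)
```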
2) Nonlinear Sigmoid Activation Function: Unlike other HE-based deep learning methods, this article uses the Taylor expansion to approximate the sigmoid function instead of replacing it with another function.
The sigmoid function f(x) = 1/(1 + exp(−x)) is the most basic activation function in neural networks and is widely used. We cannot abandon the original open-source ecosystem merely because realizing it under full homomorphism is difficult. Since HE does not support nonlinear operations, we use the Taylor expansion to approximate the sigmoid function. Fig. 9 compares Taylor expansions of different orders with the sigmoid function; a close polynomial approximation is obtained. Naturally, the higher the order of the Taylor expansion, the closer the result is to the actual value. Experiments show that the sixth-order expansion meets our requirements.
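A low-order sketch of the polynomial approximation, using the standard Maclaurin coefficients of sigmoid (the article reports that a sixth-order expansion suffices; the truncation below is our illustrative choice):

```python
import math

# Polynomial (Maclaurin) approximation of sigmoid -- only additions and
# multiplications, hence evaluable under HE. Standard expansion around 0:
#   sigmoid(x) ~ 1/2 + x/4 - x^3/48 + x^5/480 - ...
def sigmoid_poly(x):
    return 0.5 + x / 4 - x**3 / 48 + x**5 / 480

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Near the origin the truncation is already tight:
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert abs(sigmoid_poly(x) - sigmoid(x)) < 0.01
```

As with any truncated Taylor series, the approximation degrades away from the expansion point, which is why the required order is chosen experimentally.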
3) Max Pooling Function: Unlike other homomorphic schemes, which substitute average pooling for max pooling, we keep the max pooling function, which is better suited to preserving the trained model. More importantly, for privacy-preserving deep-learning inference services, changing the model structure already damages the privacy of the model even if it does not alter the parameters, and the trained model is precious because of the training data it embodies. The average pooling function is often used in fully HE because it requires only homomorphic additions to complete the pooling, as shown in the following equation. The max pooling function, however, needs our optimization.
Under encryption, comparing number magnitudes is difficult, so we transform the conditional operations as shown in the following. Correspondingly, the max pooling function under ciphertext can be transformed into the following equation:
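Since the exact transformed equations are not reproduced here, the following sketch uses a common HE-friendly identity, max(a, b) = ((a + b) + |a − b|)/2. This identity is an assumption on our part, not necessarily PyHENet's exact transformation; under ciphertext the absolute value would itself be replaced by a polynomial approximation so that only additions and multiplications remain.

```python
# Comparison-free max: max(a, b) = ((a + b) + |a - b|) / 2.
# Under HE, abs(t) = sqrt(t^2) is in turn approximated by a polynomial.
def max_via_abs(a, b):
    return ((a + b) + abs(a - b)) / 2

def maxpool4(xs):
    # 2x2 pooling window realized as pairwise comparison-free maxes
    return max_via_abs(max_via_abs(xs[0], xs[1]),
                       max_via_abs(xs[2], xs[3]))

assert max_via_abs(3.0, 7.0) == 7.0
assert maxpool4([0.2, 0.9, 0.4, 0.1]) == 0.9
```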

IV. EXPERIMENTS AND EVALUATION
In this section, we implement the PyHENet platform with fully HE on top of the standard PyTorch library, based on the re-implementation of the functions required for deep learning inference services described in the previous section.
The experimental analysis in this section is completed along three dimensions: 1) comparison with state-of-the-art ciphertext-based deep learning prediction models; 2) comparison with the corresponding plaintext accuracy; and 3) comparison in more complex remote sensing scenes.

A. Dataset and Experimental Settings
We deploy the generic cryptographic deep-learning inference platform with two NVIDIA A100 GPUs to satisfy the larger memory requirements for more complex security services. It not only improves the speed of the training process but also assists the PyHENet platform more effectively in the inference tasks mentioned in Section II, balancing security with computational speed.
Since previous FHE-based deep learning inference experiments were conducted and compared on the MNIST dataset, we experiment on this dataset first. MNIST is a classification task for gray images and the introductory dataset for deep learning. As shown in Fig. 10, it contains ten classes of digit images from 0 to 9, with 70 000 images of 28×28 pixels, and the inference service is to classify them precisely.
In the MNIST-based experiments, we focus on the differences between PyHENet and other cryptographic frameworks, the complexity of the model, and the similarity with the plaintext model rather than the accuracy itself. Since this dataset is elementary, distinguishing models by accuracy is meaningless. We deliberately keep the model's parameters suboptimal: an inference process whose accuracy is still changing better reveals the rate of change.
After demonstrating the generality of the PyHENet platform, we apply it to the more complex remote sensing image classification task. The NaSC-TG2 dataset is collected from the Tiangong-2 space lab [57], [58] in China; it has higher image quality and enables richer remote sensing scenes than map-based experiments. It contains ten natural scene classes, each with 2000 color images of 128×128 pixels, as shown in Fig. 11. PyHENet achieves the first fully homomorphically encrypted deep-learning inference service on remote sensing data, while gaining better accuracy and providing a general platform for high-security application services.

B. Experiment and Analysis of the PyHENet Platform
The main contribution of this article is to provide a practical general security platform for complex scenarios with multiple data sources or multiple network models. PyHENet realizes the convolution, activation, and pooling functions that are essential in standard neural networks. Of course, to obtain higher accuracy, PyHENet requires scenario-oriented personalization, such as data preprocessing and parameter tuning, just like standard deep learning libraries such as PyTorch. Meanwhile, the PyHENet platform brings the secure inference service in line with standard deep learning libraries, with no need to modify external code after model training. This contribution is original and has a wide range of practical applications. The experiments were applied in remote sensing scenarios and obtained better accuracy.

1) Comparison With State-of-the-Art Fully Homomorphic Encryption Models:
This subsection focuses on comparing with state-of-the-art fully homomorphic neural network models.
CryptoNets [26] is one of the few frameworks that exposes its model parameters and code, and it has been widely recognized. Therefore, comparing it with PyHENet is more convincing.
As PyHENet is a general platform, it can be freely combined to generate different models. Table II gives two different models of it. Both the models in the PyHENet platform have more general and complex sigmoid and full connectivity functions than the CryptoNets. This undoubtedly requires more optimization methods. Moreover, the PyHENet can be used together with the PyTorch library.
In addition, PyHENet 2 adds a max pooling function compared with PyHENet 1, which reduces the matrix calculation under ciphertext. There is no doubt about the importance of pooling in traditional convolutional networks; it additionally increases the accuracy of the results, so implementing max pooling under ciphertext brings the platform closer to standard deep-learning libraries.
We also compare PyHENet with other state-of-the-art methods in Table III. Since our platform supports encrypted computation over floating-point data, it allows deeper computation. More importantly, PyHENet is much closer to actual requirements and can be used like a general platform such as PyTorch. Moreover, PyHENet has three advantages in optimizing the neural network: the optimized linear convolution function, the nonlinear sigmoid activation function, and the max pooling function, which were described in detail in the previous section.
Based on the model parameters in Table II, the following subsection compares the experimental performance of the Py-HENet.

2) Comparison With Deep Learning Inference Models Under Plaintext:
The experiments in this subsection aim to compare the performance of PyHENet with the plaintext convolutional neural network model rather than to obtain higher accuracy. Therefore, the experiment was conducted over different numbers of model iterations to obtain the relative performance of different models at different accuracy levels. Figs. 12 and 13 compare the accuracy of the plaintext benchmark with PyHENet 1 and PyHENet 2 at different security levels (numbers of network layers under encrypted computation). We modify the parameters so that the model cannot quickly reach optimal accuracy in the early iterations. PyHENet 1 and 2 show good robustness over 60 iterations. The figures also show that the encrypted neural network has almost the same relative inference performance as the benchmark, and that models at different security levels obtain almost the same accuracy.
Figs. 14 and 15, respectively, compare the accuracy of PyHENet 1 and PyHENet 2 with that of the plaintext network, giving the relative accuracy percentage. As the figures show, when the number of iterations is low, the relative accuracy fluctuates greatly. This is consistent with practice: a neural network needs many iterations to reach high accuracy. As the iterations increase, every ciphertext network reaches the same accuracy as the plaintext network and stabilizes.
In this part of the experiment, the relative accuracy under the impact condition can be obtained by changing the performance of the network, which can better reflect the robustness of the PyHENet platform.

3) Comparison in Remote Sensing Scenes:
The experiments in the previous subsections have demonstrated the security, robustness, and high accuracy of the PyHENet platform. This has laid a foundation for exploring more complex remote sensing applications.
We conduct cryptographic deep-learning inference experiments on the NaSC-TG2 dataset provided by the Tiangong-2 space laboratory. In their latest research, Zhou et al. [57] found that AlexNet could achieve 89.39% accuracy on this high-resolution remote sensing classification task, higher than deeper neural networks such as VGG.
Therefore, we adjust the model structure of PyHENet 3 to resemble AlexNet and replicate the experiment with our FHE-based inference model. Through several experimental comparisons on the NaSC-TG2 task, we found that the traditional AlexNet network, computed with 22 functions, contains redundant functions, so we reduce the network to a model with nine layers of functions; the specific parameters of each layer are given in Table IV. Since Zhou's experiments were based on 20% of the training data, this article performs inference experiments on models trained with 20% and 70% of the data, respectively. As shown in Table VI, we obtain higher accuracy even though the PyHENet model has fewer functions and is based on the complex computation of fully HE. High security thus does not trigger a decrease in accuracy. In addition, the generic model can be appropriately tuned to obtain higher accuracy.
From the graph in Fig. 16, we can see the accuracy trend of the inference model. The accuracy increases sharply in the 1st to 20th iterations, and the model accuracy stabilizes through the 20th to 40th iterations. In addition, the accuracy trend is the same for different amounts of training data.
Under the encrypted scenario, we neither lose inference accuracy nor affect the final results. The generic PyHENet platform proposed in this article can indeed provide a secure inference service for remote sensing scenes: security is ensured, and the platform is generic and easy to operate.

V. CONCLUSION
Remote sensing scenes have an increasing demand for security, especially for deep learning inference services, and privacy-preserving deep learning is a challenging but essential research topic in which balancing security and performance is critical. We design a general platform for deep learning inference called PyHENet. It secures both the client's data and the trained model based on post-quantum encryption theory. By implementing and optimizing the neural network under ciphertext, security is further improved while development difficulty is reduced. In the future, we can further optimize the platform to support more deep learning models, such as LSTM or GAN. The exploration of distributed secure remote sensing applications is another valuable research direction.

Qian Chen (Student Member, IEEE) received the master's degree in computer science from the Harbin Institute of Technology, Shenzhen, China, in 2018, where she is currently working toward the Ph.D. degree in computer science.
She was an Experimentalist with the Experimental and Practical Research Center, Harbin Institute of Technology, from 2018 to 2019, teaching experimental courses on compilation principles, operating systems, and high-level language programming. Her research interests include homomorphic encryption, remote sensing, and game theory.

She is currently an Associate Professor with the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China. Her research interests include secure multiparty computation, homomorphic encryption, and cloud security.

He is currently a Professor with the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China, and Director of the Department of New Networks, Pengcheng Laboratory, Shenzhen. He has authored more than 130 academic papers in journals, books, and conference proceedings. His research interests include cyberspace security, cloud computing, and high-performance computing.
Dr. Zhang is a Lifetime Member of ACM.