High Precision Cross-Range Scaling and 3D Geometry Reconstruction of ISAR Targets Based on Geometrical Analysis

Cross-range scaling of sequential inverse synthetic aperture radar (ISAR) images is a vital problem for the three-dimensional (3D) reconstruction of spatial targets. Specifically, since the angular velocity of spatial targets cannot be accurately estimated due to their non-cooperative nature, the obtained cross-range coordinates of the scattering points often have non-negligible errors. These errors may result in a serious distortion of the reconstructed geometry of the targets, which makes consequent target recognition and classification unreliable. To tackle this problem, via geometrical analysis, the relationship between the cross-range projection vector and the range projection vector is revealed for the first time, based on which a novel algorithm is proposed to iteratively scale the cross-range coordinates with no need for estimating the angular velocity of the spatial targets. Experimental simulations show that the proposed algorithm not only provides a higher accuracy of 3D reconstruction, but also has a lower computational complexity compared with a conventional algorithm.


I. INTRODUCTION
High-resolution inverse synthetic aperture radar (ISAR) imaging has been applied in many fields because of its all-day, all-time, and all-weather operation capability [1]-[6]. Particularly, the need for identification and classification of spatial targets has been promoting the development of three-dimensional (3D) structure reconstruction technology based on ISAR images. The rationale is that one ISAR image reflects the projection information of the spatial target on the imaging plane, which contains part of the 3D structure information of the target. Therefore, by combining the information in several ISAR images, the 3D structure information can be recovered.
The existing 3D reconstruction technologies based on ISAR images mainly include interferometric ISAR [7]-[9] and monostatic ISAR [10]-[14]. The former has higher requirements on hardware and is thus very difficult to implement. In comparison, the latter is cheaper and much easier to implement, which makes it more pervasive. In [10], [11], monostatic ISAR was applied to reconstruct the 3D structure of targets based on their motion data. Nevertheless, the motion data may be unavailable in practice, which makes the applicability of these methods limited. To tackle this problem, some other methods with no need for motion data were proposed, including the modified factorization method [12] and the segmentation method [13]. However, all the aforementioned works may be ineffective for non-cooperative targets. This is because there is often a non-negligible error in the speed estimation of non-cooperative targets, which results in a low accuracy of the cross-range coordinates in the obtained ISAR images. This problem is crucial when the changes of the 3D coordinates of the scattering points are small. In this case, the unreliable cross-range coordinates may result in a serious distortion of the reconstructed geometry of the target. To address this problem, the author in [14] proposed to model the change of the equivalent rotational angular velocity as a linear polynomial and then estimate the polynomial coefficients so as to fulfill the cross-range scaling. However, in practice, the motion modes of non-cooperative targets may be volatile. As a result, the applicability of the polynomial approximation used in [14] is limited. (The associate editor coordinating the review of this manuscript and approving it for publication was Cheng Hu.)
Additionally, the method in [14] does not make full use of the relationship between the range and the cross-range projection vectors, which leads to a high computational complexity and meanwhile leaves a considerable margin for performance improvement. On the other hand, the work in [14] assumed a large-angle observation, which is indeed unreliable, since the scattering characteristics of scattering points may change dramatically under a large rotational angle. In this case, the scattering points can hardly be tracked. To address these problems, by revealing and then making full use of the relationship between the cross-range projection vector and the range projection vector, this paper proposes a novel algorithm to improve the accuracy and meanwhile to reduce the computational complexity of the cross-range calibration of ISAR images under a small rotational angle, with no need for estimating the angular velocity of the spatial targets. The contributions can be summarized as follows:
1) Via geometrical analysis, the relationship between the cross-range projection vector and the range projection vector is revealed for the first time (cf. equation (4)).
2) The conventional factorization method for acquiring the projection vectors is modified so as to increase the success rate of the 3D reconstruction of the spatial targets (cf. equation (16)).
3) Based on the two contributions mentioned above, a novel algorithm is proposed to iteratively carry out the cross-range scaling with no need for estimating the angular velocity of the spatial targets, which can improve the accuracy of the 3D reconstruction and meanwhile reduce the computational complexity, compared with the method in [14].
Two assumptions are used throughout the paper:
1) The ISAR can obtain stable echoes of the spatial target. As a result, during the observation time, there are always scattering points on the target that can be extracted, which can be readily realized by using the methods in [16], [17].
2) Under a small rotational angle, the translational motion can be well compensated, which can be achieved by using the methods in [18], [19].
The remainder of this paper is organized as follows. Section II reveals the relationship between the cross-range projection vector and the range projection vector. The conventional factorization method and our modifications are also presented in this section. Section III formulates the proposed algorithm, which is summarized in Section IV. Simulation results are given in Section V. Finally, Section VI concludes the paper.

II. ISAR IMAGING GEOMETRY AND FACTORIZATION METHOD
In this section, the relationship between the cross-range projection vector and the range projection vector is revealed for the first time. Then, we briefly introduce the procedures for deriving the two projection vectors from sequential ISAR images via the conventional factorization method. A small but crucial modification of the factorization method is made in order to increase the success rate of the 3D reconstruction of the spatial targets.

A. ISAR IMAGING GEOMETRY
Fig.1 shows the ISAR imaging geometry, where the origin O is the center of the coordinate system and is taken as the equivalent center of the rotation of the target. The green solid arrow denotes the radar line-of-sight (LOS). Its azimuth angle and pitching angle are denoted by γ and ϕ, respectively. Clearly, the imaging plane is the XOY plane when the LOS is the dotted green arrow; in this case, the projection vectors of the imaging plane are (0, 1, 0) and (−1, 0, 0). When the LOS changes from the dotted green arrow to the solid green arrow, the imaging plane experiences a rotation. As a result, the projection vectors become

i = (sin γ, cos γ, 0)
j = (−cos ϕ cos γ, cos ϕ sin γ, −sin ϕ)   (3)

where i and j are known as the cross-range projection vector and the range projection vector, respectively. It is worthwhile to notice that we can map the range projection vector j to the cross-range projection vector i based on equation (3). Specifically, since j(1)² + j(2)² = cos²ϕ, it follows from (3) that

i = ( j(2), −j(1), 0 ) / √( j(1)² + j(2)² )   (4)

where j(1) and j(2) denote the first and second entries of j. Note that equation (4) holds regardless of the values of the azimuth angle or the pitching angle. As stated before, we will use the factorization method to derive the projection vectors, which are however not consistent with (4) in general due to the errors in the estimation of the speed information of the target. In Section III, we will show how to utilize the derived relationship between the two projection vectors (i.e., equation (4)) to carry out cross-range scaling iteratively.
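The mapping between the two projection vectors can be checked numerically. The following minimal Python sketch (not part of the original paper; the angle values are arbitrary test inputs) builds i and j as in (3) and verifies that the mapping of (4) recovers i from j:

```python
import numpy as np

gamma, phi = 0.7, 0.3  # azimuth and pitching angles (rad), arbitrary choices

# Projection vectors of equation (3)
i = np.array([np.sin(gamma), np.cos(gamma), 0.0])
j = np.array([-np.cos(phi) * np.cos(gamma),
              np.cos(phi) * np.sin(gamma),
              -np.sin(phi)])

# Equation (4): i = (j(2), -j(1), 0) / sqrt(j(1)^2 + j(2)^2)
i_mapped = np.array([j[1], -j[0], 0.0]) / np.sqrt(j[0]**2 + j[1]**2)

assert np.allclose(i_mapped, i)   # the mapping holds for any gamma, phi
assert abs(np.dot(i, j)) < 1e-12  # i and j are orthogonal, as required
```

Repeating the check for other (γ, ϕ) pairs confirms that the mapping does not depend on the particular angles, which is the property exploited by the proposed algorithm.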

B. THE MODIFIED FACTORIZATION METHOD
To introduce the conventional factorization method and to present our modifications of it, several definitions have to be clarified as follows [15]. Let F₁, F₂, F₃, ..., F_N denote the N frames of images obtained after processing the echoes. By matching the scattering points of the N images, K scattering points are tracked. Their coordinates are represented by (x_nk, y_nk), n = 1, 2, ..., N, k = 1, 2, ..., K. Then, the registered points are defined by removing the centroid of each image [15]:

x̃_nk = x_nk − (1/K) Σ_{k=1}^{K} x_nk,  ỹ_nk = y_nk − (1/K) Σ_{k=1}^{K} y_nk   (5)

Furthermore, the trajectory matrix W is defined as

W = [x̃_nk ; ỹ_nk] ∈ R^{2N×K}   (6)

whose first N rows collect the registered cross-range coordinates and whose last N rows collect the registered range coordinates. Ideally, the projection relation between the trajectory matrix and the positions of the scattering points can be written as

W = M P   (7)

where M = [i₁^T, ..., i_N^T, j₁^T, ..., j_N^T]^T ∈ R^{2N×3} represents the projection matrix, i_n ∈ R^{1×3} denotes the cross-range projection vector corresponding to the n-th frame, j_n ∈ R^{1×3} represents the range projection vector corresponding to the n-th frame, whereas P = [q₁^T, ..., q_K^T] ∈ R^{3×K} denotes the actual 3D geometry matrix of the K scattering points, with q_k ∈ R^{1×3} representing the 3D coordinates of the k-th scattering point.
To obtain the projection matrix as well as the 3D geometry matrix, based on the singular value decomposition (SVD) theory, the trajectory matrix W can be decomposed as

W = U Σ V^T   (8)

where U ∈ R^{2N×K} denotes a column orthogonal matrix, Σ ∈ R^{K×K} represents a diagonal matrix, and V ∈ R^{K×K} stands for a column orthogonal matrix. In general, the rank of the trajectory matrix W is larger than 3 (if there is no noise or error, the rank of the trajectory matrix is 3), but W still has three dominant singular values that reflect its main characteristics [15]. Let Ū ∈ R^{2N×3} denote the first three columns of U, Σ̄ ∈ R^{3×3} represent the first three rows and three columns of Σ, and V̄ ∈ R^{K×3} stand for the first three columns of V. Then, we have W ≈ M̂P̂, where we define M̂ = Ū Σ̄^{1/2} and P̂ = Σ̄^{1/2} V̄^T. Generally, M̂ is not the projection matrix, because each row of the projection matrix has to be a unit vector. Meanwhile, the cross-range projection vector corresponding to one image has to be orthogonal to the range projection vector corresponding to the same image. To meet these two constraints, by using a 3 × 3 invertible matrix, which we denote by A, the projection matrix and the 3D geometry matrix can be determined by

M_rec = M̂ A,  P_rec = A^{−1} P̂   (9)

where M_rec denotes the projection matrix that we recover by using the factorization method, and P_rec denotes the 3D geometry matrix that we recover by using the factorization method. For conciseness, we simply call them the projection matrix and the 3D geometry matrix, respectively, in the rest of the paper.
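The rank-3 truncation at the heart of the factorization method can be sketched as follows. This is a minimal, hypothetical Python illustration (sizes and data are arbitrary): it builds a noise-free trajectory matrix as a product of a 2N×3 matrix and a 3×K matrix, truncates the SVD to rank 3, and checks that the truncated factors reproduce W, up to the invertible 3×3 ambiguity matrix A discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 20  # number of frames and tracked scattering points (arbitrary)

# Synthetic ground truth: unit-norm projection rows and random 3D points
M_true = rng.standard_normal((2 * N, 3))
M_true /= np.linalg.norm(M_true, axis=1, keepdims=True)
P_true = rng.standard_normal((3, K))
W = M_true @ P_true  # noise-free trajectory matrix, rank 3 by construction

# SVD and rank-3 truncation, cf. equation (8)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U3, S3, V3t = U[:, :3], np.diag(s[:3]), Vt[:3, :]
M_hat = U3 @ np.sqrt(S3)   # candidate projection factor
P_hat = np.sqrt(S3) @ V3t  # candidate geometry factor

assert np.allclose(M_hat @ P_hat, W)       # W is recovered exactly (no noise)
assert np.linalg.matrix_rank(W) == 3       # the trajectory matrix has rank 3
```

With noise, the rank of W exceeds 3 and the truncation keeps only the three dominant singular values, as described above; M_hat still differs from the true projection matrix by an invertible 3×3 matrix.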
In what follows, we explain how to obtain matrix A. Firstly, the two constraints mentioned above can be written as

î_n L î_n^T = 1,  ĵ_n L ĵ_n^T = 1,  î_n L ĵ_n^T = 0,  n = 1, 2, ..., N   (10)

where L ≜ AA^T ∈ R^{3×3} is a real symmetric matrix, whereas î_n and ĵ_n represent the n-th row and the (N + n)-th row of M̂, respectively. Note that in order to obtain A, we only need to determine L. For such, we rewrite L as

L = [ l₁ l₂ l₃ ; l₂ l₄ l₅ ; l₃ l₅ l₆ ]   (11)

where l_i denotes the i-th distinct element of the symmetric matrix L. Then, constraint (10) can be rewritten as

G l = C   (12)

where G ∈ R^{3N×6} collects the quadratic terms of î_n and ĵ_n (for instance, the row corresponding to î_n L î_n^T = 1 is (î_{n,1}², 2î_{n,1}î_{n,2}, 2î_{n,1}î_{n,3}, î_{n,2}², 2î_{n,2}î_{n,3}, î_{n,3}²)), l = (l₁, l₂, l₃, l₄, l₅, l₆)^T ∈ R^{6×1}, and C ∈ R^{3N×1} stacks the right-hand sides of (10). Vector l is obtained by using the pseudo-inverse solution method, which can be given as

l = (G^T G)^{−1} G^T C   (14)

By rearranging the elements of l, we obtain matrix L. Then, the Cholesky decomposition is carried out and matrix A is obtained. So far, we have introduced the conventional factorization method. However, in a practical matrix decomposition process, the decomposition of matrix L may fail due to negative eigenvalues resulting from errors and noise. To tackle this problem, we first carry out the eigenvalue decomposition of matrix L, which can be written as

L = H Λ H^T   (15)

where H ∈ R^{3×3} denotes a column orthogonal matrix, whereas Λ ∈ R^{3×3} represents a diagonal matrix. Then, the value of matrix L is updated as

L ← H |Λ| H^T   (16)

where |Λ| takes the absolute value of each diagonal entry of Λ. Note that when the eigenvalues of matrix L are all positive, equation (16) does not change the values of matrix L.
On the contrary, when at least one negative eigenvalue exists, it is replaced by its absolute value, while the positive eigenvalues of matrix L, which dominate the characteristics of matrix L, remain unchanged. To recapitulate, with the help of (16), we can always obtain matrix A by carrying out the Cholesky decomposition of matrix L, which means that we can always obtain the projection matrix as well as the 3D geometry matrix from sequential ISAR images by using (9).
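The eigenvalue repair of (15)-(16) can be illustrated with a short Python sketch (the indefinite matrix below is a made-up example, not taken from the paper): after replacing negative eigenvalues by their absolute values, the Cholesky factorization L = AA^T is guaranteed to succeed.

```python
import numpy as np

# A symmetric but indefinite L, as may result from noisy data
L = np.array([[ 2.0, 0.3, -0.1],
              [ 0.3, -0.5, 0.2],   # negative leading 2x2 minor => indefinite
              [-0.1, 0.2,  1.0]])

eigvals, H = np.linalg.eigh(L)                 # equation (15): L = H diag(eigvals) H^T
L_fixed = H @ np.diag(np.abs(eigvals)) @ H.T   # equation (16): flip negative eigenvalues

A = np.linalg.cholesky(L_fixed)                # now succeeds: L_fixed is positive definite
assert np.allclose(A @ A.T, L_fixed)
assert np.all(np.linalg.eigvalsh(L_fixed) > 0)
```

Attempting `np.linalg.cholesky(L)` on the original indefinite matrix would raise an error, which is exactly the failure mode that (16) is designed to avoid.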

III. CROSS-RANGE SCALING AND 3D RECONSTRUCTION
Based on the modified factorization method presented in Section II-B, we can calculate the cross-range projection vector and the range projection vector corresponding to each frame of sequential ISAR images. Hereafter, for the n-th frame, we denote them as i_n and j_n, respectively. Ideally, the range coordinates of the scattering points are accurate thanks to the assumption that the translational motion is well compensated, which means that the obtained range projection vector j_n is accurate. Recall that we can map the range projection vector to the cross-range projection vector by using equation (4). Therefore, based on the accurate range projection vector of the n-th frame (i.e., j_n), we can indeed obtain the accurate cross-range projection vector of the n-th frame (by using equation (4)), which we denote by i_tg,n in the rest of the paper. To recapitulate, Fig.2 summarizes the formation process of i_n, j_n, and i_tg,n corresponding to the n-th frame.
If there is no noise, the errors of the cross-range coordinates only come from the inaccurate estimation of the speed of the target. In this case, since the error in the estimated speed scales all cross-range coordinates of an image by a common factor, the relative size of the cross-range coordinates of different scattering points in each image is deemed accurate. This motivates us to find a proper scaling factor for each image, so as to correct the cross-range coordinates. In what follows, by revealing and utilizing the geometrical relationship between i_n and i_tg,n, the scaling factor is determined to correct the cross-range coordinates iteratively.

A. PRELIMINARIES
Before presenting the proposed cross-range scaling algorithm, we first introduce some relevant variables and notations.
Without loss of generality, consider the n-th frame and a scattering point Q in space, whose 3D coordinates are represented by q = (x_k, y_k, z_k). We denote the projection lengths of q on i_n and on i_tg,n as r_n = q · i_n^T and r_tg,n = q · i_tg,n^T, respectively. Note that r_n is known, since it is essentially the absolute value of the cross-range coordinate of the scattering point obtained from the ISAR image, whereas r_tg,n stands for the accurate absolute value of the cross-range coordinate but is unknown, since we do not know the 3D coordinates of the scattering point (i.e., q). In general, due to the errors in the estimation of target speed information, i_n is not identical to i_tg,n, and thus r_n is not identical to r_tg,n.
On the other hand, an intermediate vector is introduced as

i_d,n = ( q − (q · j_n^T) j_n ) / ‖ q − (q · j_n^T) j_n ‖   (17)

i.e., the unit vector along the projection of q onto the imaging plane of the n-th frame. Note that vector i_d,n lies in the plane determined by i_tg,n and i_n. This is because i_tg,n, i_n, and i_d,n are all orthogonal to j_n. Meanwhile, it is worthwhile to note that i_d,n is also unknown, since the 3D coordinates q of the scattering point are unknown. Then, according to the definitions of r_tg,n and r_n, one can readily show that

r_tg,n = (q · i_d,n^T)(i_d,n · i_tg,n^T),  r_n = (q · i_d,n^T)(i_d,n · i_n^T)   (18)

By defining the projection length of q on i_d,n as l ≜ q · i_d,n^T, equation (18) can be rewritten as

r_tg,n = l · cos α_n,  r_n = l · cos β_n   (19)

where α_n denotes the angle between i_d,n and i_tg,n, and β_n stands for the angle between i_d,n and i_n. Finally, we denote θ_n as the angle between i_tg,n and i_n. As a result, we have

cos θ_n = i_tg,n · i_n^T   (20)

B. THE PROPOSED SCALING ALGORITHM
According to the relative size of r_tg,n and r_n, the cross-range scaling is conducted in different ways. Note that we do not know the relative size of r_tg,n and r_n at first, since r_tg,n is unknown. We will illustrate how to determine the relative size of r_tg,n and r_n at the end of Section III-C.
Case 1: r_n > r_tg,n. In this case, the estimated cross-range coordinates are larger than the real ones (in terms of their absolute values). We determine the scaling factor for the n-th frame as

s_n = | cos θ_n |   (21)

Case 2: r_n < r_tg,n. In this case, the estimated cross-range coordinates are smaller than the real ones (in terms of their absolute values). We determine the scaling factor for the n-th frame as

s_n = 1 / | cos θ_n |   (22)

After determining the scaling factor for each frame (by using (21) and (22)), we arrive at a modified trajectory matrix W̃, which can be given by

W̃ = S ∘ W   (23)

where '∘' represents the Hadamard product operation and S ∈ R^{2N×K} is the scaling matrix whose entries in the n-th row equal s_n for n = 1, ..., N (the cross-range rows) and equal 1 for n = N + 1, ..., 2N (the range rows). This completes one iteration. Next, based on the modified trajectory matrix W̃, the factorization method is performed again and another iteration is completed by using equations (20)∼(22) once more. The iteration stops when (1 − | cos θ_n |) < ε for each frame n, where ε = 10^{−8} is a predetermined threshold.
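The Hadamard update of equation (23) amounts to scaling only the cross-range rows of the trajectory matrix. The sketch below (a hypothetical Python illustration with made-up sizes and factors; the row layout, cross-range rows first, follows the factorization convention described in Section II-B) shows the update for one iteration:

```python
import numpy as np

N, K = 4, 5                                  # frames and scattering points (arbitrary)
W = np.ones((2 * N, K))                      # stand-in trajectory matrix
s = np.array([0.9, 0.8, 0.95, 0.7])          # per-frame scaling factors, e.g. |cos theta_n|

# Scaling matrix of (23): s_n on the cross-range rows, 1 on the range rows
S = np.vstack([np.tile(s[:, None], (1, K)),  # rows 1..N: cross-range, scaled
               np.ones((N, K))])             # rows N+1..2N: range, unchanged
W_mod = S * W                                # Hadamard product of equation (23)

assert np.allclose(W_mod[:N], s[:, None])    # cross-range rows scaled per frame
assert np.allclose(W_mod[N:], 1.0)           # range rows preserved
```

The range rows are deliberately left untouched, consistent with the earlier observation that the range coordinates are already accurate.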

C. EXPLANATIONS
In what follows, we consider one iteration to explain why equations (21) and (23) work in Case 1, from a geometrical viewpoint. After that, we will explain why equations (22) and (23) work in Case 2. Fig.3 shows the plane determined by i_tg,n and i_n in Case 1, where Q' denotes the projection point of the spatial scattering point Q on the plane. Note that herein r_tg,n is the length of OB (i.e., the accurate cross-range coordinate), r_n is the length of OA (i.e., the inaccurate cross-range coordinate obtained from the ISAR image), and l is indeed the length of OQ'. Meanwhile, it is worth noting that, due to the orthogonal projection relationship of the factorization method, Q'A is orthogonal to OA, whereas Q'B is orthogonal to OB. As shown in Fig.4(a), by using equations (20), (21), as well as (23), and then by conducting the factorization method on the modified trajectory matrix W̃, we obtain a new cross-range projection vector, which we denote by i′_n. The projection of q on i′_n is OE, whose length is the same as that of OC. This is because OE = OA × s_n = OA × | cos θ_n | = OC. In other words, after one iteration, the length of the projection of q on the cross-range projection vector (i.e., the absolute value of the obtained cross-range coordinate) is scaled from the length of OA to the length of OE, which is closer to the length of OB than that of OA.
Again, due to the orthogonal projection relationship of the factorization method, Q'E is orthogonal to OE. By taking OE as the new OA and taking i′_n as the new i_n, we can use equations (20), (21), (23), and the factorization method again, so as to further compress the length of the projection of q on the cross-range projection vector to the length of OD, which is also plotted in Fig.4(a). As can be observed, by repeating the procedures above, the length of the projection of q on the cross-range projection vector (i.e., the absolute value of the obtained cross-range coordinate) will get closer and closer to the length of OB. These procedures are illustrated in Fig.4(b) (i.e., OA→(OC=OE)→(OD=OF)→...→OB).

FIGURE 5. Imaging plane defined by i_tg,n and i_n when r_n < r_tg,n.

Now we turn to Case 2. Similar to Fig.3, Fig.5 shows the plane determined by i_tg,n and i_n in Case 2. Note that herein r_tg,n is the length of OB (i.e., the accurate cross-range coordinate), r_n is the length of OA (i.e., the inaccurate cross-range coordinate obtained from the ISAR image), and l is indeed the length of OQ'. Meanwhile, it is worth noting that, due to the orthogonal projection relationship of the factorization method, Q'A is orthogonal to OA, whereas Q'B is orthogonal to OB.
As shown in Fig.6(a), by using equations (20), (22), as well as (23), and then by conducting the factorization method on the modified trajectory matrix W̃, we obtain a new cross-range projection vector, which we denote by i′_n. The projection of q on i′_n is OE, whose length is the same as that of OC. This is because OE = OA × s_n = OA × (1/| cos θ_n |) = OC. In other words, after one iteration, the length of the projection of q on the cross-range projection vector (i.e., the absolute value of the obtained cross-range coordinate) is scaled from the length of OA to the length of OE, which is closer to the length of OB than that of OA.
Again, due to the orthogonal projection relationship of the factorization method, Q'E is orthogonal to OE. By taking OE as the new OA and taking i′_n as the new i_n, we can use equations (20), (22), (23), and the factorization method again, so as to further stretch the length of the projection of q on the cross-range projection vector to the length of OD, which is also plotted in Fig.6(a). As can be observed, by repeating the procedures above, the length of the projection of q on the cross-range projection vector (i.e., the absolute value of the obtained cross-range coordinate) will get closer and closer to the length of OB. These procedures are illustrated in Fig.6(b) (i.e., OA→(OC=OE)→(OD=OF)→...→OB).
Finally, we explain how we determine the relative size of r_tg,n and r_n. Note that when the scaling process operates in the correct direction, the absolute value of cos θ_n gets closer and closer to 1. On the contrary, when the scaling direction is wrong, the absolute value of cos θ_n gets closer and closer to 0. Therefore, the change of | cos θ_n | after one iteration tells us whether the scaling process operates in the correct direction, and we can deduce the relative size of r_tg,n and r_n, since we know which form of the scaling factor was used in the last iteration ((21) or (22)).

IV. ALGORITHM SUMMARY
So far, we have explicitly introduced all the details of the proposed algorithm for cross-range scaling. To clarify the whole 3D reconstruction process based on the proposed algorithm, a flowchart is shown in Fig.7 and the key procedures are summarized as follows.
Step1) Obtain N images via processing echoes, track K scattering points, and obtain the initial trajectory matrix W .
Step2) Perform the factorization on W, and use the updated matrix L in (16) to obtain the projection matrix M_rec in (9).
Step3) For each image, calculate i tg,n by using (4), then calculate cos θ n by using (20).
Step4) For each image, take s_n = | cos θ_n | as the scaling factor and use (23) to obtain the modified matrix W̃.
Step5) Take the modified matrix W̃ as the new trajectory matrix W. Repeat Step 2 and Step 3. If the inequality (1 − | cos θ_n |) < ε holds for every frame, go to Step 6. Otherwise, for each image, compare the value of | cos θ_n | with the one in the previous iteration. If its value is closer to 1 than in the previous iteration, which means that the scaling process operates in the correct direction, use the same form of the scaling factor s_n ((21) or (22)) as that of the previous iteration and conduct (23) again to obtain the modified matrix W̃. Otherwise, change the form of the scaling factor s_n (from (21) to (22) or vice versa) and conduct (23) again to obtain the modified matrix W̃. After that, repeat Step 5.
Step6) Obtain the 3D geometry of the target P rec by using (9).
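The direction test embedded in Step 5 can be isolated as a small decision rule. The sketch below (hypothetical function and labels, not from the paper; it captures only the bookkeeping of Step 5, not the factorization itself) keeps the current form of the scaling factor when | cos θ_n | has moved closer to 1 and switches to the other form otherwise:

```python
def next_factor_form(cos_now, cos_prev, form_prev):
    """Decide which form of the scaling factor to use next.

    'shrink'  -> s_n = |cos theta_n|       (equation (21), Case 1)
    'stretch' -> s_n = 1 / |cos theta_n|   (equation (22), Case 2)

    Step 5: keep the previous form when |cos theta_n| moved closer to 1
    (scaling went in the correct direction), otherwise switch forms.
    """
    if abs(cos_now) >= abs(cos_prev):
        return form_prev
    return 'stretch' if form_prev == 'shrink' else 'shrink'

# |cos theta| improved (0.90 -> 0.97): keep shrinking
assert next_factor_form(0.97, 0.90, 'shrink') == 'shrink'
# |cos theta| worsened (0.90 -> 0.80): switch from shrink to stretch
assert next_factor_form(0.80, 0.90, 'shrink') == 'stretch'
```

Because the wrong direction drives | cos θ_n | toward 0 rather than 1 (Section III-C), at most one wasted iteration is needed to discover the correct form for each frame.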

V. EXPERIMENTS
In this section, based on the point-scattering model, simulation results are given to verify the effectiveness of the proposed algorithm and to compare its performance with two other methods: the modified factorization method proposed in [12], which does not involve cross-range scaling, and the state-of-the-art method proposed in [14], which involves cross-range scaling.

A. SIMULATION MODEL AND PARAMETERS SETUP
Without loss of generality, a satellite model with 253 scattering points is used in the simulation, as shown in Fig.9(a) and (b). The bandwidth of the transmitted signal is B = 2 GHz, the carrier frequency is f_c = 10 GHz, and the coordinates of the spatial target relative to the radar are (100 km, 100 km, 100 km). The angular velocity of the target with respect to the Z axis is ω = 0.02 rad/s. The total observation time is 78.72 s and the corresponding total rotation angle is 90.2 degrees. The duration of each frame is T = 2.56 s, and the time step between adjacent frames is 0.64 s. Based on the conventional Range-Doppler algorithm, a total of 120 frames of images are generated during the whole observation time. As shown in Fig.8, the 59th to the 64th frames are used for the 3D reconstruction, since we need to ensure a small rotational angle so that the scattering characteristics of the scattering points remain unchanged. In the experiments, the errors in the estimation of the angular velocity for different frames are independently set and modeled as Gaussian random variables. For presentation, we call the mean of the absolute values of the errors in the estimation of the angular velocity, normalized by the true angular velocity, the mean absolute error (MAE) in the rest of the paper, which can be written as

MAE = (1/N) Σ_{i=1}^{N} | ω − ω̂_i | / ω   (24)

where ω̂_i denotes the estimated angular velocity corresponding to the i-th frame. Correspondingly, | ω − ω̂_i | represents the error in the estimation of the angular velocity of the target for the i-th frame.
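As a concrete illustration of the MAE metric, the following Python sketch computes it for a handful of made-up per-frame estimates; the normalization by the true angular velocity ω is an assumption consistent with MAE being quoted as a percentage (e.g., 3%) in the experiments:

```python
import numpy as np

omega = 0.02  # true angular velocity (rad/s), as in the simulation setup
# Made-up estimated angular velocities for 6 frames (illustrative only)
omega_hat = np.array([0.0205, 0.0197, 0.0202, 0.0194, 0.0206, 0.0199])

# MAE as a fraction of the true angular velocity; multiply by 100 for percent
mae = np.mean(np.abs(omega - omega_hat)) / omega

assert 0.018 < mae < 0.020  # about 1.9% for these illustrative estimates
```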
Apart from the errors in the estimation of the angular velocity, we also consider the noises in the echoes, which are simulated by adding Gaussian noise to each element of the original trajectory matrix W that we obtain by processing echoes and extracting scattering points. 1 Specifically, the strength of the Gaussian noise is measured by RanReso [14], which is defined as the ratio of the standard deviation of the Gaussian noise to the range resolution.
The metric used to evaluate the performance of the proposed algorithm is the root mean square error (RMSE), which is defined as [14]

E(P_rec) = √( (1/(3K)) trace[ (P_rec − P)(P_rec − P)^T ] )   (25)

where trace[·] stands for the trace of a matrix. In the experiments, the RMSE of all the involved methods is obtained by only considering the results of successful trials.
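The RMSE metric is simply the root mean square of the coordinate errors over all 3K entries of the geometry matrix. A minimal Python sketch (toy geometry, illustrative values only):

```python
import numpy as np

K = 4
P = np.zeros((3, K))    # stand-in for the true 3D geometry matrix
P_rec = P + 0.1         # reconstruction with a uniform 0.1 error per coordinate

err = P_rec - P
# trace[err err^T] sums the squared errors over all 3K coordinates
rmse = np.sqrt(np.trace(err @ err.T) / (3 * K))

assert abs(rmse - 0.1) < 1e-12  # uniform 0.1 error gives RMSE exactly 0.1
```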

B. RECONSTRUCTION RESULTS
In this part, we demonstrate the effectiveness of the cross-range scaling of the proposed algorithm by comparing its reconstruction results with those of the modified factorization method proposed in [12], which does not involve cross-range scaling, where RanReso is set to 0.5 and MAE is set to 3%. Fig.9 presents the reconstruction results of the method in [12] and those of the proposed algorithm, respectively. As can be observed, the method in [12] fails to recover the 3D structure of the model due to the errors in the estimation of the angular velocity as well as the introduced noises. In comparison, the proposed algorithm is able to reconstruct the 3D structure of the model, which demonstrates the effectiveness of the cross-range scaling of the proposed algorithm.

Footnote 1: After conducting translational compensation, the remaining influence of the noise in the echoes on the coordinates of the scattering points in the obtained images is highly dependent on the translational compensation method. What is more, as the signal-to-noise ratio (SNR) of the echoes increases, the strength of the remaining influence of the noise on the coordinates may increase or decrease. In other words, a higher SNR does not ensure that the obtained coordinates have fewer errors. Therefore, to eliminate the unexpected influences of translational compensation on the evaluation of the performance of the proposed algorithm and that of the method in [14], we add noise to the trajectory matrix directly, as in [14].

C. PERFORMANCE COMPARISON
To further illustrate the advantages of the proposed algorithm, we compare the proposed algorithm with the methods in [12] and [14] in terms of three metrics: the percentage of successful trials out of 1000 Monte-Carlo experiments, the computational complexity in terms of the number of iterations of the factorization, and the RMSE of the 3D reconstruction results. As can be seen from Fig.10(a) and Fig.10(b), for both low and high noise levels, the proposed algorithm considerably improves the percentage of successful trials compared with the other methods, under both low and high MAE conditions. The reason is that in equation (16) we update the value of matrix L and thus overcome the problem that L may not be positive definite.
Note that the proposed algorithm, the method in [12], and the method in [14] are all based on factorization.
Meanwhile, the factorization process dominates the computational complexity of all of them. Therefore, we compare their computational complexity by counting the iterations of the factorization method. It can be seen from Fig.10(c) and Fig.10(d) that the proposed algorithm remarkably reduces the computational complexity compared with the method in [14]. This is due to the fact that the method in [14] utilizes particle swarm optimization (PSO) [20], [21] to search for the rotation matrix in three dimensions, which leads to a high computational complexity. In comparison, the proposed algorithm utilizes the geometrical relationship between the projection vectors and is thus able to quickly determine the direction of iteration. Although the method in [12] has the lowest complexity, it cannot reconstruct the 3D structure of the spatial target under high MAE.
From Fig.10(e) and Fig.10(f), we can observe that the proposed algorithm has the highest precision. Specifically, the proposed algorithm significantly improves the RMSE compared with the method in [12], especially in the high MAE region. This is intuitive, since the method in [12] does not involve cross-range scaling. On the other hand, the proposed algorithm also outperforms the method in [14] in terms of RMSE over a wide range of MAE, for both high and low RanReso setups. This benefits from the fact that the proposed algorithm makes full use of the range coordinate information by using equation (4). Fig.11 further illustrates the influence of noise on the RMSE performance of the three methods, where RanReso ranges from 0 to 1.8. As can be observed, although the RMSE performance of the proposed algorithm degrades under high RanReso, it still provides a higher accuracy than the two other methods, which again demonstrates the effectiveness of the proposed algorithm.

VI. CONCLUSIONS
In this paper, the relationship between range and cross-range projection vectors of ISAR images was revealed for the first time, based on which an algorithm was proposed to calibrate the cross-range coordinates of ISAR images so as to obtain an accurate reconstructed 3D geometry of spatial targets, with no need for estimating the angular velocity of the spatial targets. Numerical experiments showed that the proposed algorithm not only provides a higher accuracy of 3D reconstruction, but also has a lower computational complexity compared with conventional algorithms.
CHAOXIANG ZHOU received the B.E. degree in communication engineering from the University of Science and Technology, Nanjing, China, in 2015. He is currently pursuing the master's degree with the National University of Defense Technology, Changsha, China, majoring in information and communication engineering. His research interests include spatial information acquisition and processing technology.

He is currently a Professor with the Science and Technology on Automatic Target Recognition Laboratory, National University of Defense Technology, Changsha. His research interests include array signal processing, passive radar system design, MIMO radar systems, and microwave imaging.