A Novel Approach to Processing Very-High-Resolution Spaceborne SAR Data With Severe Spatial Dependence

An innovative approach to processing very-high-resolution (VHR) spaceborne synthetic aperture radar (SAR) data, whose main feature is a progressive iteration of partitioning and focusing, is presented in this article. Because of the long integration time endured during data acquisition, as well as the large scene extensions common to advanced SAR systems, the spatial dependence of range histories may be severe and must be properly dealt with. In this article, after deriving the exact two-dimensional spectrum for a curved satellite orbit, we focus the entire echo data through several iterations of partitioning and focusing. By partitioning, each partitioned data block can be better focused by the following focusing step; in turn, focusing with better quality enables the following partitioning with both higher granularity and a smaller overlapped data region. Besides, a new technique aimed at eliminating the spatial dependence inside a data block is also presented and deployed in the processing procedure. Owing to the high-granularity partitioning, the spatial dependence of many factors can be accommodated more finely and easily than in conventional algorithms. What is more, the artifacts caused by subaperture recombination are fundamentally avoided by the fact that the partitioned blocks are not recombined until they have been processed into fully focused image blocks. Finally, the precision and the efficiency of the proposed methodology are validated by results on simulated and real SAR data.

Digital Object Identifier 10.1109/JSTARS.2022.3202932

Conventional imaging algorithms, e.g., the algorithm of [9], the chirp scaling algorithm [10], and the ω-κ algorithm [11], fail to precisely process VHR SAR data because of the long integration time on a curved orbit, which leads to more complex range history curvature. Even worse, the parameters of the range history curvature vary in both the range and the azimuth directions. That is, the signals received from targets located at different geographic positions cannot be properly focused in bulk by a unified match filter [12].
In order to properly solve this problem, various advanced approaches have been proposed in recent years. In [13], He et al. divided the data into many subapertures, and the hyperbolic range history model was adopted to process each subaperture individually. Another popular solution is to express VHR range histories by high-order (usually fourth-order) polynomials. This literature includes [14] and [15], where the spectrum was derived through the principle of stationary phase (POSP) [8], and [16] and [17], where, alternatively, the spectrum was derived through the method of series reversion. In [18], [19], [20], and [21], another approach was proposed in which the precision of the traditional hyperbolic model was enhanced by adding both higher-order terms, which are spatially independent, and an equivalent acceleration parameter, which is employed to adapt the azimuth-dependent equivalent velocity. In addition, the range dependence of the equivalent velocity was handled by adopting the advanced Stolt kernel proposed in [22]. Meanwhile, motion compensation methods [12], [23], [24], which are very common in airborne SAR processing, and the keystone transformation method [25] can also be found dealing with the issues arising in the processing of VHR spaceborne SAR data.
Despite the complexity of the methods mentioned above, brought about by their complex range history models, various unavoidable assumptions are made during their derivation. These assumptions, unfortunately, may become invalid at higher resolution, larger scene dimension, or larger squint angle, and eventually result in a degradation of the focusing quality, as reflected by many simulation results in that literature.
For subaperture methods, since each subaperture is processed with different parameters and then recombined, paired artifacts may emerge in the successive processing of the recombined data.
Another fatal deficiency of these methods is that, since a large block of data has to be processed in bulk with common parameters, many drastically changing factors cannot be properly dealt with, such as severe terrain undulation of the observed scene, or a varying Doppler centroid/bandwidth resulting from poor antenna steering.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
In this article, however, we avoid the complicated two-dimensional (2-D) spectrum derivation by evaluating the spectrum numerically and accurately with the help of the POSP [8]. This will be expounded in Section II.
On the other hand, we deal with the spatial dependence of range histories by an all-new methodology, featured by several progressive iterations of data block partitioning and focusing, which, as far as we know, is very innovative. During the first iteration, the range compressed data is partitioned (in the range direction, with some overlapped regions in which the signals required for the full processing of each block dwell) into many blocks, and each block is then partially focused with reference to the geometry of its individual block center. That is, for each block, the block center is well focused, while the focusing quality becomes worse for regions farther from the center.
The next iteration, likewise, conducts a 2-D partitioning of each previously partially focused data block (with the previous overlapped regions excluded and new overlapped regions included for each new subblock) and focuses each new subblock with reference to the geometry of its individual block center (with the previous match filter cancelled). In this iteration, compared with the previous one, the overlapped regions are much smaller because the input blocks have already been partially focused, and the focusing quality of each processed subblock is much higher owing to the smaller subblock size.
We can continue the iterations described above: as they go on, by partitioning, each partitioned data block can be better focused by the following focusing step, and, in turn, focusing with better quality enables the following partitioning with higher granularity while a low data overlap ratio persists. Finally, we obtain all the subimages with a focusing quality, possibly good enough, governed by the final subblock size.
After the last iteration, in order to compensate the residual phase errors in each final subimage, which may exist for VHR SAR data, we also propose an innovative in-block compensation method featured by two resampling operations in the 2-D frequency domain, in the range and the azimuth directions, respectively. After applying this method, the focusing quality of each final subimage is greatly improved.
Finally, we gather all the final well focused subimages to form the final entire well focused image result.
In our proposed methodology, by means of several progressive iterations of partitioning and focusing, the various parts of the observed scene are eventually precisely processed, mainly owing to the excellent accommodation of spatially dependent range histories. Furthermore, during the iterations, since all the signals needed to process a data block are included, the partitioned subblocks are not combined until they have finally been processed into well focused subimages. In this way, paired artifacts resulting from aperture recombination, which are very common for other subaperture-based methods, are intrinsically avoided.
With respect to processing efficiency, since the extra included regions of the previous iteration are discarded in the following iteration, and the extra included regions also shrink with the block size, no data volume expansion occurs as the block size keeps reducing while the iterations go on.
The prominent advantage of the proposed approach lies in its excellent adaptability to the spatial dependence of range histories. For this reason, the proposed approach is qualified to process SAR data undergoing a much longer aperture and covering a much larger observed region.
Furthermore, owing to the small dimension of the final subblocks, the external digital elevation model (DEM), Doppler centroid variation, Doppler bandwidth variation, and antenna pattern can be compensated for each data block individually; thus, their spatial dependence can be easily and finely considered. This feature is especially appealing for the processing of mountainous areas.
Finally, because of the block processing style, the proposed approach can be conveniently deployed on parallel computing platforms.
In the main body of the article, for the sake of simplicity, we will discuss our approach mainly under the condition of the standard broadside stripmap SAR mode. Nevertheless, as discussed later, by adopting some techniques already published in the literature [26], [27], the approach is also applicable to the processing of SAR data acquired in the sliding spotlight and staring spotlight modes, even in the presence of a large squint angle.
This article is organized as follows: Section II establishes the basic signal model. The progressive iteration methodology is detailed in Section III. The application of the methodology to other SAR modes is discussed in Section IV. Section V provides the computational cost analysis of our approach. In Section VI, many results on the processing of the simulated spaceborne sliding spotlight SAR data with the resolution of 0.07 m are provided. Then, a real spaceborne SAR result is also provided. Finally, a conclusion is stated in Section VII.

II. SIGNAL MODEL AND SPECTRUM
For a point target, we assume that both the transmitted chirp signal and the azimuth antenna pattern have flat amplitudes. After ignoring a constant amplitude, the demodulated and range compressed SAR signal is

s(τ, η) = p_r(τ − 2R(η)/c) · exp(−j4πf0R(η)/c)  (1)

where j = (−1)^(1/2), f0 is the center frequency of the radar system, c is the speed of light, η ∈ (η₋, η₊) is the azimuth time, η₋ and η₊ being the start and the end instants of the aperture, respectively, τ ∈ (τ₋, τ₊) is the delay time in slant range, τ₋ and τ₊ being the start and the end of the delay time, p_r(·) is the range compressed pulse (of sinc shape), and R(η) is the range history between the antenna and the observed target.
In view of the fact that satellite orbits are always very smooth curves, in the World Geodetic System 1984 (WGS-84) ellipsoid model, with sufficient precision, we can express each dimension of the orbit during the observation period by a K-order polynomial

x_s(ζ) = Σ_{i=0}^{K} s_xi ζ^i,  y_s(ζ) = Σ_{i=0}^{K} s_yi ζ^i,  z_s(ζ) = Σ_{i=0}^{K} s_zi ζ^i  (2)

where ζ ∈ (ζ₋, ζ₊), ζ₋ and ζ₊ being the start and the end instants of the period in which the currently processed data is observed, respectively, and s_xi, s_yi, and s_zi are the coefficients of the polynomials in the x-, y-, and z-directions, respectively. The order K can be easily decided by limiting the fitting error below an accepted threshold (1/100 of a wavelength, for example), and a higher order is inevitable for the fitting of a longer orbit period.
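The order-selection rule just described can be prototyped numerically. Below is a minimal sketch (assuming numpy; the function name `fit_orbit_axis` and the toy orbit coordinate are our own, hypothetical choices) that raises the polynomial order until the maximum fitting error drops below 1/100 of a wavelength:

```python
import numpy as np

def fit_orbit_axis(zeta, coord, wavelength, max_order=10):
    """Fit one orbit coordinate by a polynomial, raising the order K
    until the maximum fitting error falls below 1/100 of a wavelength.
    zeta  : sample instants of the orbit state vectors (s)
    coord : the corresponding x-, y-, or z-coordinates (m)"""
    tol = wavelength / 100.0
    for K in range(1, max_order + 1):
        c = np.polynomial.polynomial.polyfit(zeta, coord, K)
        err = np.max(np.abs(np.polynomial.polynomial.polyval(zeta, c) - coord))
        if err < tol:
            return c, K
    raise ValueError("orbit segment too long for max_order")

# Toy usage: a smooth, nearly quadratic "orbit" coordinate over 2 s
zeta = np.linspace(0.0, 2.0, 101)
coord = 7.0e6 + 7600.0 * zeta - 4.2 * zeta**2 + 1e-4 * zeta**3
coeffs, K = fit_orbit_axis(zeta, coord, wavelength=0.03)
```

As the text notes, a longer orbit period forces a higher K; the loop simply makes that trade-off explicit.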
Supposing that the position of the point target is (X, Y, Z) in WGS-84, the range history is

R(η) = sqrt( (x_s(η) − X)² + (y_s(η) − Y)² + (z_s(η) − Z)² ).  (3)

After the range Fourier transform (FT), the range spectrum of (1) is

S(fτ, η) = P_r(fτ) exp( −j4π(f0 + fτ)R(η)/c )  (4)

where fτ is the range frequency and P_r(·) is the spectrum of p_r(·). Then, the 2-D spectrum can be derived by applying the POSP to (4)

S₂(fτ, fη) = P_r(fτ) (dfη/dη)^(−1/2) exp( −j4π(f0 + fτ)R(f⁻¹(fη))/c − j2πfη f⁻¹(fη) )  (5)

where fη is the azimuth frequency, the mapping relationship between η and fη is given by [8]

fη = f(η) = −(2(f0 + fτ)/c) · dR(η)/dη  (6)

and f⁻¹(fη) means the inverse function of f(η).
In practice, the expression of f⁻¹(fη) can be evaluated by an L-order polynomial fitting on a series of (dR(η)/dη)-η pairs, i.e.,

η = f⁻¹(fη) ≈ Σ_{i=0}^{L} a_i (dR(η)/dη)^i  (7)

where a_i are the polynomial coefficients of the fitted (dR(η)/dη)-η pairs, and, from (6),

dR(η)/dη = −c·fη / (2(f0 + fτ)).  (8)

The order L can be decided by limiting the fitting error below an accepted threshold (η_T, chosen so that the induced range history error stays below, e.g., 1/100 of a wavelength), and a higher order is inevitable in the situation of either a longer aperture experienced by a single target or a larger squint angle.
Thus, we can get the 2-D spectrum for a certain point target with position (X, Y, Z) in WGS-84 as follows.
1) Calculate the value of dR(η)/dη corresponding to each (fτ, fη).
2) Determine the start (η₋) and the end (η₊) instants of the aperture, respectively, from the antenna attitude information.
3) Calculate the range history R(η) on a series of η from η₋ to η₊ by (2) and (3).
4) Conduct polynomial fitting on the (dR(η)/dη)-η pairs and get the coefficients a_i, with the order L determined so that the fitting error is less than the preset threshold η_T.
5) Calculate the corresponding η by (7).
6) Calculate the 2-D spectrum by (5), where (dfη/dη)^(−1/2) can also benefit from the polynomial fitting in (7).
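The numerical procedure above can be prototyped directly. The following is a minimal sketch (numpy assumed; the function name `spectrum_phase`, the fixed fitting order L = 5, and the toy hyperbolic range history are our own choices) that evaluates the phase of (5) on a 2-D frequency grid:

```python
import numpy as np

c0 = 2.99792458e8  # speed of light (m/s)

def spectrum_phase(eta, R, f0, f_tau, f_eta):
    """Numerically evaluate the 2-D spectrum phase of (5) for one target.
    eta, R         : dense samples of azimuth time and range history R(eta)
    f_tau, f_eta   : 1-D range/azimuth frequency axes (broadcast to a grid)
    Returns the phase in radians (the amplitude term is omitted here)."""
    dRdeta = np.gradient(R, eta)
    # Fit eta as an L-order polynomial of dR/deta: the inverse map f^-1 of (7)
    a = np.polynomial.polynomial.polyfit(dRdeta, eta, 5)
    ft = f_tau[:, None]
    fe = f_eta[None, :]
    # dR/deta implied by each (f_tau, f_eta) pair through the POSP mapping
    v = -c0 * fe / (2.0 * (f0 + ft))
    # Stationary azimuth time eta* = f^-1(f_eta), evaluated via the fit
    eta_s = np.polynomial.polynomial.polyval(v, a)
    R_s = np.interp(eta_s, eta, R)  # R(eta*) by interpolation
    # Phase of the 2-D spectrum (5)
    return -4.0 * np.pi * (f0 + ft) * R_s / c0 - 2.0 * np.pi * fe * eta_s

# Toy usage: a hyperbolic range history over a 2 s aperture at X band
eta = np.linspace(-1.0, 1.0, 2001)
R = np.sqrt(700e3**2 + (7600.0 * eta)**2)
phi = spectrum_phase(eta, R, 9.65e9,
                     np.linspace(-5e7, 5e7, 8), np.linspace(-2000.0, 2000.0, 16))
```

The same fitted polynomial also supplies the (dfη/dη)^(−1/2) amplitude term of step 6, which the sketch omits for brevity.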

III. PROGRESSIVE PARTITIONING AND FOCUSING
The progressive partitioning and focusing (PPF) algorithm, imposed on the range compressed result of the standard broadside stripmap SAR data (SAR data acquired in the sliding spotlight and staring spotlight modes, even in the presence of a large squint angle, can easily be transformed into broadside stripmap data by some techniques in the literature, as discussed in the following), consists of five steps: partitioning; focusing; several iterations of partitioning and focusing; in-block compensation; and subimage recombination.
The flowchart of the complete procedure of PPF is shown in Fig. 1, where the iteration number is three, as an example, and each step is described in detail in the following.

A. Partitioning
First, the range compressed SAR data is partitioned in the range direction.
For each partitioned data block, since we intend to process it into a fully and precisely focused image block, without any interaction with other blocks, through the following successive steps imposed on the block, all the required signals have to be included and processed together with the partitioned block. That is, each partitioned block has to be enlarged to a certain extent so that all the required signals are available to be processed.
For the first partitioning step, conducted only in the range direction, the enlargement of a certain partitioned block means including some extra range bins beyond the far-range edge of the block, in which the aperture signals of the far-range edge targets dwell.
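The partitioning with one-sided enlargement can be sketched as follows (a toy helper with hypothetical names and sizes; only the far-range side of each block is padded, as described above):

```python
def partition_1d(n_total, block, extra):
    """Split n_total range bins into blocks of nominal size `block`,
    each enlarged by `extra` bins beyond its far-range edge so that the
    full apertures of its edge targets are included.
    Returns (core_start, core_stop, read_start, read_stop) per block."""
    blocks = []
    for start in range(0, n_total, block):
        stop = min(start + block, n_total)
        # core region plus the one-sided far-range enlargement
        blocks.append((start, stop, start, min(stop + extra, n_total)))
    return blocks

# Toy usage: 1000 range bins, blocks of 256 bins, 32 extra far-range bins
blocks = partition_1d(1000, 256, 32)
```

The core regions tile the swath without gaps, while the read windows overlap by the extra size; only the core is kept after the block is focused.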

B. Focusing
After partitioning, each partitioned block is focused, maybe only partially, by the match filter expressed as the reciprocal of (5), calculated with reference to the geometry of the block center. The block center here means the location on the earth surface, considering topographic undulation if needed, whose zero-Doppler position appears at the center pixel of the block. The focusing step is implemented by the successive operations of a 2-D FT, match filtering, and a 2-D inverse FT (IFT).
The match filter of a data block can be expressed according to the reciprocal of (5) as

H(fτ, fη) = (dfη/dη)^(1/2) exp( j4π(f0 + fτ)R_H(f⁻¹(fη))/c + j2πfη f⁻¹(fη) )  (9)

where R_H(η) is the range history of the block center (X_H, Y_H, Z_H). After the focusing step, each data block is partially focused, but only the block center is fully focused, and the focusing quality gets worse for targets farther from the block center.
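The FT → multiply → IFT chain of one focusing step can be sketched as follows (numpy; the `phase` array is assumed to be the block-center spectrum phase of (5) evaluated on the block's frequency grid, as in the numerical procedure of Section II):

```python
import numpy as np

def focus_block(block, phase):
    """One focusing step of PPF on a single block: 2-D FFT, multiplication
    by the block-center match filter, then 2-D IFFT.  `phase` holds the
    spectrum phase of (5) for the block center, so exp(-1j * phase) is the
    conjugate-phase (reciprocal) match filter applied in the 2-D spectrum."""
    spec = np.fft.fft2(block)
    spec *= np.exp(-1j * phase)   # cancel the block-center spectrum phase
    return np.fft.ifft2(spec)
```

Targets sharing the block-center geometry are compressed exactly; for all others the cancellation is only approximate, which is what the subsequent iterations progressively repair.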

C. Iteration
After the first iteration of partitioning and focusing, the original range compressed data is partitioned into many data blocks, and each data block is partially focused by a match filter designed with reference to the center of the block.
Then, we can process each data block by one or more further rounds of iteration as above, until sufficient focusing quality is achieved for each final subblock.
In these iterations, some important points should be noted as follows.
1) For each block processed by the previous iteration, the previously included extra region is discarded after the focusing step of the previous iteration. This means that no data volume expansion occurs along with the iterations.
2) Since each block is further partitioned and focused by a further iteration, the extra data to be included in the following partitioning is also much smaller.
3) In the following iterations, the extra data beyond the two edges in the azimuth direction, in which the signals after the first iteration dwell, should also be included.
4) In the focusing step, for each further partitioned subblock, besides applying the proper match filter designed with reference to the subblock center, the match filter imposed in the previous iteration should be removed. Therefore, the focusing quality is updated and determined only by the accuracy of the new match filter.
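The bookkeeping in points 1)-4), i.e., drop the previous pad, split the core, and include a smaller new pad, can be illustrated by a 1-D toy (all sizes hypothetical; the focusing itself is a placeholder here):

```python
import numpy as np

def iterate_blocks(blocks, split, extra):
    """One PPF iteration, as a 1-D toy of the bookkeeping in points 1)-4).
    Each block is a (core, pad) pair: `core` is the block's own data and
    `pad` the extra samples included from beyond its edge.  The previous
    pad is discarded after the (placeholder) focusing, then the core is
    split into `split` new subblocks, each taking `extra` new pad samples."""
    out = []
    for core, pad in blocks:
        whole = np.concatenate([core, pad])  # the focusing step acts on this
        focused = whole                      # placeholder for the match filter
        n = len(core)
        core = focused[:n]                   # point 1): previous pad dropped
        step = n // split
        for k in range(split):
            lo, hi = k * step, (k + 1) * step
            out.append((core[lo:hi], core[hi:min(hi + extra, n)]))
    return out

# Two toy iterations on 1024 samples: blocks shrink and so does the pad
blocks = [(np.arange(1024.0), np.empty(0))]
blocks = iterate_blocks(blocks, split=4, extra=32)
blocks = iterate_blocks(blocks, split=4, extra=8)
```

Note how the pad-to-core ratio stays bounded across iterations, which is exactly why no data volume expansion occurs.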

D. In-Block Compensation
When the spatial dependence of range histories is severe, perhaps due to a long integration time or a large final subblock size, in order to further focus the final subimages, the residual phase error in each final subimage can be compensated by the in-block resampling method proposed here. This method is featured by two resampling operations in the 2-D frequency domain, described as follows.
In a subimage processed by the final focusing step of PPF, the 2-D spectrum of an arbitrary target located at (X, Y, Z) is, combining (5) and (9) and dropping the amplitude terms,

S_sub(fτ, fη) = exp( −j4π(f0 + fτ)[R(f⁻¹(fη)) − R_H(f_H⁻¹(fη))]/c − j2πfη[f⁻¹(fη) − f_H⁻¹(fη)] )  (10)

where (X_H, Y_H, Z_H) is the center of the subimage, R_H(η) is its range history, and f_H⁻¹(·) is its azimuth frequency mapping.
In the light of the Stolt mapping in the traditional ω-κ algorithm, in order to fully focus the signal expressed by (10), and after ignoring the amplitude factor, which is trivial when the subimage size is small enough, we should arrive, through resampling the 2-D spectrum, at

S̄(f̄τ, f̄η) = exp( −j4π(f0 + f̄τ)Rc/c − j2πf̄η ηc )  (11)

where f̄τ and f̄η are the resampled range and azimuth frequencies, respectively, and R(η) reaches its minimum Rc when η = ηc, that is, min{R(η)} = R(ηc) = Rc. Let us assume that an entire subimage will be precisely focused by the resampling of (11), which is valid when the subimage size is small enough, that is, R_Hc − L_R/2 < Rc < R_Hc + L_R/2 and η_Hc − L_A/2 < ηc < η_Hc + L_A/2, where L_R and L_A are the slant range limit and the azimuth time limit of the current subimage, respectively. Then, we can carry out the resampling of (11) in the following two steps.
First, with respect to the range frequency resampling, we place two targets A(X_A, Y_A, Z_A) and B(X_B, Y_B, Z_B) at the two range edges of the observed subblock with the same azimuth position η_Hc, that is, η_Ac = η_Bc = η_Hc, where η_Ac and η_Bc are the azimuth instants η_A and η_B at which the antenna is closest to target A and target B, respectively. For target A and target B, the range histories are represented by R_A(η) and R_B(η), respectively; that is, min{R_A(η)} = R_A(η_Ac) = R_Ac and min{R_B(η)} = R_B(η_Bc) = R_Bc. Then, according to (10), after computing the difference between the phases of target A and target B, the resampling from the new range frequency f̄τ to the old one fτ can be expressed by

(f0 + f̄τ)(R_Ac − R_Bc) = (f0 + fτ)[R_A(f_A⁻¹(fη)) − R_B(f_B⁻¹(fη))] + (c·fη/2)[f_A⁻¹(fη) − f_B⁻¹(fη)]  (12)

solved for fτ. Note that, in the resampling operation of (12), the mappings f_A⁻¹(·) and f_B⁻¹(·) themselves depend on the range frequency, and all the fτ occurring in (12) have to be substituted by f̄τ. The impact caused by this substitution is tiny even for data with a resolution better than 0.1 m, and this can be examined simply through the point target simulation introduced in Section III-G.
For the azimuth frequency resampling, on the other hand, we place target A and target B at the two azimuth edges of the subblock with the same range position R_Hc, that is, R_Ac = R_Bc = R_Hc. Then, according to (10), after computing the difference between the phases of the two targets, the resampling from the new azimuth frequency f̄η to the old one fη can be expressed by

(c·f̄η/2)(η_Ac − η_Bc) = (f0 + fτ)[R_A(f_A⁻¹(fη)) − R_B(f_B⁻¹(fη))] + (c·fη/2)[f_A⁻¹(fη) − f_B⁻¹(fη)]  (13)

solved for fη. Note that, in the resampling operation of (13), the mappings f_A⁻¹(·) and f_B⁻¹(·) depend on the azimuth frequency, and all the fη occurring in (13) have to be substituted by f̄η. The impact caused by this substitution is tiny even for data with a resolution better than 0.1 m, and this can be examined simply through the point target simulation introduced in Section III-G.
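Each of the two resamplings reduces to a 1-D complex interpolation along one frequency axis, column by column. A minimal sketch (numpy; linear interpolation stands in for the sinc kernel that would be used in practice, and `f_old_of_new` is a hypothetical array encoding the frequency mapping of (12) or (13)):

```python
import numpy as np

def resample_axis0(spec, f_old_of_new, f_axis):
    """Resample a 2-D spectrum along its first (range-frequency) axis.
    f_old_of_new[i, k] gives, for new frequency bin i and azimuth bin k,
    the old range frequency at which the spectrum must be read.  The real
    and imaginary parts are interpolated separately for simplicity."""
    out = np.empty_like(spec)
    for k in range(spec.shape[1]):
        out[:, k] = (np.interp(f_old_of_new[:, k], f_axis, spec[:, k].real)
                     + 1j * np.interp(f_old_of_new[:, k], f_axis, spec[:, k].imag))
    return out

# Toy usage: an identity mapping must reproduce the spectrum exactly
f_axis = np.linspace(-1.0, 1.0, 64)
spec = np.exp(1j * np.outer(f_axis, np.arange(8.0)))
out = resample_axis0(spec, np.tile(f_axis[:, None], (1, 8)), f_axis)
```

The azimuth-frequency resampling is the same operation applied along the other axis with the mapping of (13).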

E. Subimage Recombination
After discarding the extra included regions of each final focused subimage and putting the trimmed subimages together according to their original relative positions, the entire well focused image emerges.
Since each subimage is well focused, owing to the individually tailored processing parameters, the inclusion of sufficient extra data, and the in-block compensation of each final subimage, the recombination results in no visible artifacts in the final recombined image product.

F. Partitioning Strategy Discussion
Generally speaking, in each iteration, a smaller partitioning size helps increase the focusing quality of the following focusing step, and may further lessen the total iteration number, but it is harmful to the processing efficiency. This is because the size of the extra included data is determined by the focusing situation after the previous focusing step, and a smaller subblock size implies a lower ratio of subblock volume to extra data volume, which means a lower processing efficiency of the current iteration.
In practice, for a certain SAR system, the proper subblock size, the related extra data size, and, further, the iteration number can be chosen by processing data simulated as backscattered by several point targets placed at various positions in the observed scene. After each round of iterations, we can observe the focusing situation of the apertures and choose an extra data size no smaller than the unfocused region of each aperture.
In the absence of in-block compensation, the iteration is not terminated until the focusing quality is acceptable. In our experience, without the in-block compensation, four iterations are adequate for the processing of sliding spotlight SAR data with a resolution of 0.25 m (without window weighting) and an azimuth region of 20 km, the final subimage size being about 20 pixels in each direction.

G. In-Block Compensation Discussion
The performance of PPF is greatly boosted by the introduction of the in-block compensation method presented earlier. The advantages brought by the method lie in two aspects. First, for SAR systems with moderate resolution and moderate azimuth region width, a larger final subblock size becomes acceptable, which may further lead to a smaller iteration number. Second, and more appealing, SAR data with higher resolution and a wider azimuth region can be precisely processed by PPF with adequate focusing quality. In our experience, SAR data with an azimuth region of 10 km at a resolution of 0.07 m can be precisely processed with only three iterations, the final subblock size being 64 and 256 pixels in the range and the azimuth directions, respectively, as will be demonstrated by the experimental results in the following section.
In practice, if we decide to introduce the in-block compensation method, either to lessen the iteration number or to improve the processing precision, we can inspect its effect by processing the simulated data above and examining the focusing results. The processing configuration turns out to be appropriate if the focusing quality of all the simulated targets is acceptable. Otherwise, we must either reduce the subblock size of the final iteration while keeping the extra data size, or add more iteration(s).

H. Compensation of Other Spatially Dependent Factors
Owing to the partitioning feature of PPF, many critical factors with a spatial dependence property can be compensated individually for each final subblock, which is inherently impossible for the other processing algorithms mentioned in Section I. These spatially dependent factors mainly include, but are not limited to, terrain undulation, Doppler centroid variation, Doppler bandwidth variation, and the antenna pattern.
The terrain undulation adaption, which is very appealing for the processing of mountainous area data and hard to achieve in conventional algorithms, can easily be accounted for when determining each subblock center [(X_H, Y_H, Z_H) in (9)] in each processing iteration. It is noteworthy that the high-frequency fluctuation of the raw DEM profile, which may cause discontinuities between adjacent final subimages, must be suppressed by low-pass filtering before the DEM is imported into PPF.
In addition, the variations of both the Doppler centroid and the Doppler bandwidth, which may result from primitive platform attitude control or crude antenna steering (in the spotlight case), are harmful to the azimuth window weighting and the radiometric measurement, and can also be compensated individually with reference to the center of each final subblock.
Furthermore, the antenna pattern can be compensated in a 2-D style, which is very appealing in large-squint acquisition modes, individually for each final subblock. By (7), we can map each 2-D frequency pair (fτ, fη) to the corresponding azimuth instant η. Then, at time η and in WGS-84, we compute the vector from the antenna center position to the block center position. By computing the 3-D angle between this vector and the antenna attitude, we can find the corresponding compensation coefficient in the antenna pattern data.
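The angle lookup described here can be sketched as follows (a hypothetical helper; `pattern_fn` stands for an interpolator over the measured antenna pattern, and all positions are assumed to be already expressed in WGS-84 at the mapped instant η):

```python
import numpy as np

def pattern_gain(sat_pos, boresight, block_center, pattern_fn):
    """Antenna-pattern compensation geometry for one mapped instant.
    sat_pos      : antenna phase-center position in WGS-84 (m)
    boresight    : unit boresight vector of the antenna in WGS-84
    block_center : subblock center position in WGS-84 (m)
    pattern_fn   : gain as a function of the off-boresight angle (rad)"""
    los = np.asarray(block_center, dtype=float) - np.asarray(sat_pos, dtype=float)
    los /= np.linalg.norm(los)                       # unit line-of-sight vector
    cosang = np.clip(np.dot(los, boresight), -1.0, 1.0)
    return pattern_fn(np.arccos(cosang))             # look up the 2-D pattern

# Toy usage: boresight aimed exactly at the block center, cosine-shaped pattern
g = pattern_gain([0.0, 0.0, 0.0], np.array([1.0, 0.0, 0.0]),
                 [2.0, 0.0, 0.0], lambda a: np.cos(a))
```

Repeating this per (fτ, fη) pair yields the 2-D compensation map for one subblock.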

I. Low/Moderate Resolution Case
For the processing of SAR data with low/moderate resolution, small scene dimensions, and mild terrain undulation, where no care about spatial dependence must be taken, the entire data can be efficiently processed by PPF with only one iteration. No partitioning is required, and the in-block compensation (whose azimuth frequency resampling can be omitted) should be employed. In this case, the procedure and efficiency of PPF are very similar to those of the standard ω-κ algorithm.
Nevertheless, this must be verified by processing point-target simulated data with identical system parameters and obtaining results with acceptable focusing quality.

IV. EXTENSION OF PPF
In the discussion of PPF above, we supposed that the SAR data is acquired in the standard broadside stripmap mode. Actually, however, by introducing two methods existing in the literature [26], [27], without any precision loss, PPF is also capable of processing the data acquired in both the sliding spotlight and staring spotlight modes, even in the presence of a large squint angle.

A. Azimuth Dechirp Processing
In [26], the problem of pulse repetition frequency (PRF) insufficiency, which exists in sliding spotlight SAR data, is circumvented by an azimuth dechirp method (ADM) implemented in the range frequency and azimuth time domain. First, the rotation center, which is pointed at by the antenna beam center throughout the data acquisition time, is evaluated from the orbit and antenna attitude parameters; then, the virtual range history is defined as the range history between the antenna and the rotation center, its azimuth span being the whole data acquisition period. ADM is imposed on the range compressed data in the range frequency domain, expressed by (4), by multiplying the data with

H_ADM(fτ, η) = exp( j4π(f0 + fτ)[R_d(η) − R̄_d]/c )  (14)

where R_d(η) is the virtual range history and R̄_d is the mean value of R_d(η). After this, as illustrated in [26], the Doppler centroids of the received signals from all the observed targets are aligned to almost zero, even in the presence of a large squint angle, and the Doppler bandwidth of the entire data is reduced to the one obtained as if the antenna were not steered, which, being less than one PRF, is defined by the azimuth antenna size (≈2v/d [8], where v is the satellite velocity, and d is the azimuth antenna size). That is, after the implementation of ADM, the range compressed data can be seen as standard broadside stripmap data with updated range histories. Then, the data can be processed by PPF as presented above, where the range history of each target is modified by the virtual range history as

R̃(η) = R(η) − R_d(η) + R̄_d,  η ∈ (η₋, η₊).  (15)

It must be pointed out that, after all the processing steps of PPF, each dechirped aperture in the SAR data is eventually focused to the zero-Doppler position of the data grid, which is different from the one not impacted by ADM. This must be kept in mind when we geocode the focused image onto a longitude-latitude grid.
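The dechirp multiplication described above can be sketched as follows (numpy; the multiplier form is our reconstruction, so the sign convention should be checked against the data at hand before use):

```python
import numpy as np

c0 = 2.99792458e8  # speed of light (m/s)

def adm_filter(f_tau, R_d, f0):
    """Azimuth dechirp multiplier in the range-frequency / azimuth-time
    domain.  R_d samples the virtual range history between the antenna and
    the rotation center over the whole acquisition; its mean is removed
    before forming the dechirping phase, so only the varying part of the
    virtual range history is cancelled from the data."""
    dR = R_d - R_d.mean()
    return np.exp(1j * 4.0 * np.pi * (f0 + f_tau[:, None]) * dR[None, :] / c0)

# Toy usage: 4 range-frequency bins, 8 azimuth samples, X-band carrier
H = adm_filter(np.linspace(-5e7, 5e7, 4), np.linspace(700e3, 701e3, 8), 9.65e9)
```

Multiplying the range spectrum by this array replaces each range history R(η) with R(η) − R_d(η) + R̄_d, i.e., the updated broadside-equivalent history.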

B. Squint Consideration
In the sliding spotlight case, as mentioned above, the Doppler centroid of the entire data is aligned to zero by ADM. Therefore, data acquired in the sliding spotlight case with a large squint angle can be directly processed by PPF with ADM employed.
Furthermore, for spaceborne SAR systems with a curved orbit, SAR data acquired in the stripmap mode can be treated as data acquired in the sliding spotlight mode with the rotation center very far away. Therefore, with ADM adopted, PPF is also capable of processing squinted stripmap SAR data.
Finally, ADM is also useful for the removal of the range walk of squinted staring spotlight SAR data if a linear virtual range history, defined by the distances between the rotation center and the first/last antenna positions, respectively, is adopted.

C. Two-Step Processing
PPF can also be used to process the staring spotlight SAR data by importing the well-known two-step approach in [27].
According to [27], through the successive steps of the two-step method, the range compressed SAR data is equivalently match filtered by

H_TS(fη) = exp( jπ fη² / K_TS )  (16)

where K_TS is the Doppler rate used in the two-step processing. Moreover, supposing that N_a is the azimuth pixel number of the data grid and η_c the azimuth time center of the data grid, after the two-step processing, the PRF is substituted by K_TS·N_a/PRF, which must be larger than the total Doppler bandwidth, and the azimuth time η by η_c + (η − η_c)·PRF²/(K_TS·N_a).
It should be noted that the range frequency fτ is not involved in the match filter of the two-step method, as expressed by (16). This will result in azimuth dispersion and increase the difficulty of the following application of PPF.
Therefore, before the PPF procedure, we should substitute the match filter (16) with a new one in which fτ is taken into account. In the 2-D spectrum, we match filter the two-step method result by

H_0(fτ, fη) · [H_TS(fη)]*,  with  H_0(fτ, fη) = exp( j(4πR_ref/c)·sqrt( (f0 + fτ)² − c²fη²/(4v_ref²) ) )  (17)

where R_ref is the range of the middle swath line, v_ref is the equivalent velocity, and (·)* is the conjugating operation. After this, we obtain the SAR data match filtered by H_0(fτ, fη) in (17), which leads to the best focusing quality for the targets along the middle swath line, while the focusing quality becomes worse for targets farther away from this line.
Then, PPF can be applied to the SAR data treated above, except that H_0(fτ, fη) must be offset in the first focusing step.

V. COMPUTATIONAL COST ANALYSIS
In contrast to range compression, which is also inevitable in the traditional ω-κ algorithm [8], the computational cost of PPF is dominantly spent on the iterations, including the in-block compensation. This is mainly because the total data volume is enlarged by the inclusion of extra data in each subblock, and because the iteration is repeated several times. Concerning the in-block compensation method, which is only applied in the final iteration, its computational cost is nearly double that of the Stolt interpolation in the standard ω-κ algorithm [8].
We adopt the methodology introduced in [8] to investigate the computational cost of PPF. According to [8], without considering memory operations, match filter setup, raw data loading, image dumping, interpolator indexing, and so on, the floating point operations (FLOPS) of the basic operations are given in Table I, where FFT means the fast FT, IFFT means the inverse FFT, N_r is the range point number of each received pulse, N_a is the pulse number, and M is the interpolator length. As shown in [8], we can express the FLOPS of the standard ω-κ algorithm as

C_ωκ = 10·N_r·N_a·log₂(N_r·N_a) + 6·N_r·N_a + (8M − 2)·N_r·N_a  (18)

accounting for the forward and inverse 2-D FFTs, the reference function multiplication, and the Stolt interpolation, respectively. Concerning PPF, the FLOPS of range compression can be expressed as

C_rc = 10·N_r·N_a·log₂(N_r) + 6·N_r·N_a.  (19)

For the ith processing iteration, where i = 1 to 3, as an example, denoting the data sizes of the partitioned data blocks as n_ir and n_ia in the range and azimuth directions, respectively, the subblock numbers are

F_ir = N_r/n_ir  (20.a)
F_ia = N_a/n_ia  (20.b)

in the range and the azimuth directions, respectively.
For the ith focusing step, after including the extra data in each subblock, the FLOPS are

C_i = F_ir·F_ia·[ 10·n_iR·n_iA·log₂(n_iR·n_iA) + 6·n_iR·n_iA ]  (21)

where n_iR = c_ir·n_ir and n_iA = c_ia·n_ia are the enlarged data dimensions in the range and the azimuth directions, respectively, and c_ir and c_ia are the subblock enlargement ratios.
It can be seen from (21) that smaller c_ir, c_ia, n_iR, and n_iA are helpful in reducing the computational cost of the ith iteration of PPF. Therefore, the extra data size must be as small as possible, but larger than the defocused region left after the previous iteration.
The in-block compensation method, applied immediately after the final focusing step for each final subblock when needed, contributes additional FLOPs, and the total FLOPs of PPF is the sum of the costs above. In the following section, we will investigate these results by comparing the time spent on the processing of simulated data by PPF and by the traditional ω-κ algorithm.
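The structure of the total-cost comparison can be sketched as follows. Table I is not reproduced here; the sketch uses the conventional per-operation counts from [8] (5·N·log2 N FLOPs per length-N complex FFT, 6 per complex multiplication, 2(2M − 1) per interpolated point) purely as an illustrative cost model, and all function names are hypothetical:

```python
from math import log2

def fft_flops(n_lines, n_len):
    """FLOPs of n_lines complex FFTs of length n_len (5*N*log2(N) each)."""
    return 5 * n_lines * n_len * log2(n_len)

def omega_k_flops(N_r, N_a, M):
    """Illustrative FLOP count of a standard omega-k style kernel:
    range FFT + reference-function multiply + Stolt interpolation + 2-D IFFT."""
    flops = fft_flops(N_a, N_r)                           # range FFT
    flops += 6 * N_r * N_a                                # reference multiply
    flops += 2 * (2 * M - 1) * N_r * N_a                  # Stolt interpolation
    flops += fft_flops(N_a, N_r) + fft_flops(N_r, N_a)    # 2-D IFFT
    return flops

def ppf_iteration_flops(F_ir, F_ia, n_iR, n_iA, M):
    """Illustrative FLOPs of one PPF iteration: every enlarged subblock
    of size n_iR x n_iA is run through the focusing kernel, so the
    enlargement ratios directly inflate the per-iteration cost."""
    return F_ir * F_ia * omega_k_flops(n_iR, n_iA, M)
```

Summing `ppf_iteration_flops` over the iterations (plus the in-block compensation) and dividing by the single-pass ω-κ cost reproduces, in structure, the roughly seven-fold runtime ratio reported in the next section.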

VI. EXPERIMENTAL RESULTS
In this section, we validate the performance of PPF by processing simulated sliding spotlight spaceborne SAR data with a resolution of about 0.07 m. The processing result of real Gaofen-3 SAR data with a resolution of 1 m is also provided.

A. Simulation Results
For a spaceborne sliding spotlight SAR system with a resolution of about 0.07 m (without windowing weighting), we simulated the SAR data backscattered by four point targets, labeled A, B, C, and D, placed at the four corners of an observed scene of 10 km × 10 km. In order to assess the effects of the recombination and the in-block compensation method, we placed each target at a corner of a final subblock. A piece of real curved-orbit data was adopted in the simulation, and the orbit and other main system parameters are given in Table II. For simplicity, all four simulated targets were on the WGS-84 ellipsoid surface with zero height. The stop-and-go assumption was adopted, and no antenna pattern weighting was applied. Fourth-order and fifth-order polynomials are employed to express the orbit segment and the (dR(η)/dη)-η mapping, respectively. During the PPF processing, three iterations were adopted, and the dimensions of the subblocks and the extra data (single side) in each iteration are given in Table III. Taking point target A as an example, the results after each round of iterations are shown in Fig. 2. As the data was processed by more iterations, the geographic center of the subblock in which the signal of target A dwells got closer and closer to the location of target A, which resulted in a gradual improvement of the focusing quality of target A, as shown in Fig. 2(a)-(c).
The effect of the in-block compensation method is shown in Fig. 2(d). We can see that, after the successive processing steps above, target A is nearly perfectly focused.
The 2-D spectrum of target A is also provided in Fig. 2(e); it shows an obvious trapezoidal shape because of the large bandwidth of the transmitted pulses. Since the signal of target A is precisely processed by the successive steps of PPF, the amplitude of the 2-D spectrum is nearly flat, without visible undulation. The focusing results of all four simulated targets are shown in Fig. 3, and their measured indexes are given in Table IV, where the range resolution values are given in terms of ground range. We can see that, owing to the systematic cooperation of each step of PPF, we have obtained nearly perfect contours and measurement results for all the simulated targets, which proves the effectiveness of the proposed approach. The azimuth resolution differences among the four targets (about 30%) result from the nonideal antenna steering, which led to the variation (mainly in the range direction) of both the Doppler center and the Doppler bandwidth. Furthermore, we note that the trapezoidal spectrum results in much better peak-sidelobe-ratio (PSLR) indexes in azimuth than the theoretical value (−13.26 dB). In addition, no visible artifact resulting from the recombination has been found for any of the simulated point targets.
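The PSLR figures in Table IV can, in principle, be reproduced by locating the main lobe and the strongest sidelobe of an impulse-response cut. The following sketch (a hypothetical helper, not the authors' measurement tool) estimates the PSLR of an unweighted sinc response, which should land near the theoretical −13.26 dB quoted above:

```python
import numpy as np

def pslr_db(cut):
    """Peak-sidelobe ratio (dB) of a 1-D impulse-response magnitude cut.

    The main lobe is delimited by the first nulls (local minima) on either
    side of the peak; the PSLR is the strongest sample outside it,
    relative to the peak.
    """
    p = np.abs(np.asarray(cut, dtype=float))
    k = int(np.argmax(p))
    # walk outwards from the peak to the first local minima (the nulls)
    left = k
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = k
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1
    side = np.concatenate([p[:left], p[right + 1:]])
    return 20 * np.log10(side.max() / p[k])

# Finely sampled unweighted sinc response: the PSLR evaluates to
# approximately -13.26 dB, matching the theoretical value.
x = np.linspace(-8, 8, 4001)
pslr = pslr_db(np.sinc(x))
```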
The simulated data was processed by PPF in parallel on a server equipped with one P100 graphics processing unit. The processing time of the three iterations and the following in-block compensation was about 370 s, excluding the time spent on memory allocation and deallocation, block information calculation, image geocoding, and image dumping.
As a comparison, despite the totally unfocused results, we also processed the simulated data with the traditional ω-κ algorithm; its processing time (range processing and ADM not taken into account) was about 53 s, about seven times faster than PPF.

B. Real Data Results
In fact, PPF has been tested and verified on a great deal of real data from various SAR systems. One patch of the processing result of Gaofen-3 spaceborne SAR data acquired in sliding spotlight mode with a resolution of 1 m is shown in Fig. 4. Three iterations were made, and the in-block compensation method was adopted. The subblock size configuration is given in Table V, and the stop-and-go approximation, terrain undulation, window weighting, troposphere, and antenna pattern were all considered and compensated for each final subblock. The image patch, with dimensions of 1.3 km (azimuth) × 2.0 km (ground range), is well focused. No visible resolution loss or artifacts caused by block recombination can be found.

VII. CONCLUSION
In this article, we proposed a progressive approach to efficiently and precisely processing VHR SAR data. First, we expressed the orbit segment in earth-fixed coordinates by three polynomials. Then, after evaluating the mapping relationship between azimuth time and Doppler frequency, we expressed the 2-D spectrum of the received SAR data backscattered by a point target in the form of the original POSP result, in which the azimuth time variable is retained. After this, we proposed the image formation approach based on several progressive iterations consisting of partitioning, which enables better focusing, and focusing, which enables more granular partitioning. We also proposed an in-block compensation method aimed at eliminating the residual spatial dependence in the final subimages. Then, we discussed the possibility of applying the proposed approach to the processing of SAR data acquired in modes other than stripmap, such as sliding spotlight and staring spotlight, even in the presence of a large squint angle.
Next, after the discussion of computational cost, simulation results with a resolution of up to 0.07 m were provided to visualize and verify the performance of the proposed approach. Finally, the performance of PPF was also verified by the result of Gaofen-3 spaceborne SAR data with a resolution of 1 m.