A Quadtree Beam-Segmenting Based Wide-Swath SAR Polar Format Algorithm

The Polar Format Algorithm (PFA) is the most suitable imaging algorithm for high-resolution and highly-squinted spotlight Synthetic Aperture Radar (SAR), but the planar-wavefront approximation in this algorithm limits the effective scene size of PFA. To meet the wide-swath requirement of modern SAR systems, a quadtree beam-segmenting based PFA is proposed in this paper. The original full-beam echo signal is filtered recursively as a quadtree, generating multiple sub-beams, each of which illuminates only a small part of the total swath. As long as a sub-beam is narrow enough, standard PFA can be used to process its data, and each sub-beam yields a fully focused sub-image. Finally, all fully focused sub-images are mosaicked into a large, perfectly focused image. This divide-and-conquer approach breaks the image size limit of traditional PFA and greatly enlarges the effective scene. The processing flow is derived in detail, and the algorithm is validated with simulated and measured data. The experiments show that when the scene size exceeds the PFA limit, the image obtained by traditional PFA suffers serious defocus, and this defocus is eliminated by the proposed approach.


I. INTRODUCTION
Synthetic aperture radar (SAR) [1]-[3] is an active ground-imaging system based on coherent processing of multiple radar echoes acquired along the path of a moving platform, operating under all weather conditions. Although much of the literature treats non-broadside imaging as atypical, highly-squinted imagery is becoming more widely available. Many modern spotlight SAR systems are specifically designed for non-broadside imaging, with squint angles up to or beyond ±45°. Generally, a squint angle greater than 40° is considered highly squinted in SAR [4].
In highly squinted SAR, the echo signal is heavily coupled between range and cross-range, which complicates imaging considerably. To guarantee imaging quality, many high-accuracy algorithms have been investigated to accommodate large squint angles [4]-[12]. However, their complex mathematical expressions and the resulting processing complexity limit their future applications.
Meanwhile, the polar format algorithm (PFA) is well suited to highly-squinted spotlight SAR [13], [14], since PFA has the following advantages: (1) PFA compensates the data to the line of sight (LOS) direction, so it can easily correct the range migration in highly squinted SAR and has the same focusing accuracy at any squint angle. In general, it is applicable to any squint angle as long as the radar is not illuminating the blind area [2], i.e., any squint angle smaller than 85°. (2) The dechirp process of PFA in the slow-time domain reduces the azimuth bandwidth to one related only to the azimuth scene size.
However, the planar-wavefront approximation limits the effective imaging size of PFA, giving it an effective focus radius that depends on the resolution. For present high-resolution SAR applications with 0.1 m resolution at X-band, the actual imaging requirement spans at least 1 km × 1 km, while the effective focusing radius of PFA is only about 100 m, far from meeting that requirement. In recent years, methods addressing the small effective radius of PFA have mainly fallen into two kinds: one based on overlapping subapertures [15] and the other on space-variant post-processing [16], [17]. These methods not only have low computational efficiency and high processing complexity, but are also not accurate enough, since they only derive the residual phase error to the second order. As resolution requirements keep increasing, the imaging scene these methods can support is still too small and needs to be enlarged further.
Based on the idea of digital spotlighting preprocessing [18]-[20], an improved quadtree beam-segmenting based PFA is proposed. The algorithm comprises two parts of recursive segmentation: one segments the image recursively, and the other segments the echo data recursively. The procedure is equivalent to generating multiple sets of sub-beam raw data. As long as each sub-beam is filtered narrow enough, standard PFA can be applied to its data, producing a number of fully focused sub-images. Finally, all fully focused sub-images are stitched together into a large, perfectly focused image. This divide-and-conquer approach breaks the image size limit of traditional PFA and extensively enlarges the effective focused scene. Since the method is not based on a series expansion of the residual error, it is not restricted by the expansion order and can in theory be extended to scenes of arbitrary size.
The idea of using a beam-segmentation scheme to overcome the PFA radius limitation sounds fairly simple; the key points are to keep the reflected energy from each point target lossless and to prevent the desired resolution from degrading during sub-beam segmentation. Moreover, the computational burden and processing complexity must not increase too much.
For these aspects, this paper is organized as follows. In Section II, the imaging geometry of squinted SAR is established and the signal model of PFA is presented; the effect of the planar-wavefront approximation in PFA and the resulting size limitation are illustrated. In Section III, the quadtree beam-segmenting based PFA is derived and presented in detail. In Section IV, the effectiveness of the proposed imaging method is demonstrated via simulation and real-data experiments. Conclusions and further discussion are given in Section V.

II. GEOMETRY AND ACCURATE SLANT RANGE MODEL

A. GEOMETRY OF SQUINTED SPOTLIGHT SAR
The classical squinted spotlight SAR imaging geometry is illustrated in Fig. 1, where the SAR sensor, traveling at a fixed velocity v, transmits a series of chirps to illuminate the ground target area. The scene center point O is defined as the origin, and the O-xyz coordinate frame of the ground imaging area is shown in the figure, with y along the flight path, x perpendicular to the flight path, and z pointing upward. The synthetic aperture extends from A_start to A_end, with aperture center A_c.
The projection of the line of sight (LOS) direction at aperture center onto the ground is designated the x′-axis. The squint angle θ₀ is the rotation angle from the (x, y) to the (x′, y′) coordinate system. At instantaneous slow time t (in azimuth), φ = φ(t) and θ = θ(t) represent the instantaneous grazing angle and squint angle (the angle between the instantaneous LOS projection on the ground and the x-axis), respectively, and the angle between the instantaneous LOS projection on the ground and the x′-axis is θ̂(t) = θ(t) − θ₀. A_m represents an arbitrary radar position at time t. The instantaneous slant range between the radar and the scene center is r_a = r_a(t), and the instantaneous slant range between the radar and a point target P(x_p, y_p) is r_p = r_p(t).

B. ACCURATE SIGNAL MODEL AND PFA SIZE LIMITATION DUE TO PLANAR WAVEFRONT APPROXIMATION
According to the general geometry in Fig. 1, assume that a linear frequency modulated signal is transmitted; then the demodulated echoes can be expressed as

$$ s(\tau,t)=\iint_{I} g(x_p,y_p)\,\mathrm{rect}\!\left(\frac{t-t_c}{T_a}\right)\mathrm{rect}\!\left(\frac{\tau-2r_p/c}{T_p}\right)\exp\!\left\{j\pi\gamma\!\left(\tau-\frac{2r_p}{c}\right)^{2}\right\}\exp\!\left\{-j\frac{4\pi f_c}{c}r_p(t)\right\}dx_p\,dy_p \tag{1} $$

where I represents the ground illuminated area, rect(·) is a rectangular window, t is the azimuth time centered at t_c = x_c/v, T_a is the aperture time, τ is the range time, T_p is the pulse duration, c is the speed of light, f_c is the center frequency, γ is the chirp rate, and g(x_p, y_p) represents the reflectivity of point target P(x_p, y_p). In the case of large squint angle and high resolution, the aperture time may be very long and the range migration may be very severe, sometimes extending over more than 10,000 m.
In the pre-processing of PFA, it is common practice to apply a Fast Fourier Transform (FFT) to transform the signal into the range frequency domain:

$$ S(f_\tau,t)=\iint_{I} g(x_p,y_p)\,\mathrm{rect}\!\left(\frac{t-t_c}{T_a}\right)\mathrm{rect}\!\left(\frac{f_\tau}{\gamma T_p}\right)\exp\!\left\{-j\frac{\pi f_\tau^{2}}{\gamma}\right\}\exp\!\left\{-j\frac{4\pi}{c}(f_c+f_\tau)\,r_p(t)\right\}dx_p\,dy_p \tag{2} $$

in which f_τ is the fast-time frequency, i.e. the range frequency. The next step is frequency-domain matched filtering and first-order motion compensation pulse by pulse. The scene center point is commonly chosen as the reference point of motion compensation, so this step is implemented by multiplying by the conjugate of a hypothetical scene-center echo in the range frequency domain as reference:

$$ H(f_\tau,t)=\exp\!\left\{j\frac{\pi f_\tau^{2}}{\gamma}\right\}\exp\!\left\{j\frac{4\pi}{c}(f_c+f_\tau)\,r_a(t)\right\} \tag{3} $$

After this first-order motion compensation, the signal prepared for polar format storage is

$$ S_1(f_\tau,t)=\iint_{I} g(x_p,y_p)\,\mathrm{rect}\!\left(\frac{t-t_c}{T_a}\right)\mathrm{rect}\!\left(\frac{f_\tau}{\gamma T_p}\right)\exp\!\left\{j\frac{4\pi}{c}(f_c+f_\tau)\left(r_a(t)-r_p(t)\right)\right\}dx_p\,dy_p \tag{4} $$

Based on a Taylor series expansion, the differential range r_a(t) − r_p(t) can be expressed as

$$ r_a(t)-r_p(t)=\frac{\langle\mathbf{r}_a,\mathbf{r}_p\rangle}{r_a}+O\!\left(\frac{|\mathbf{r}_p|^{2}}{r_a}\right) \tag{5} $$

where 𝐫_p = (x_p, y_p, z_p) is the target vector from the scene center to point P, and 𝐫_a = (x_a, y_a, z_a) is the slant range vector from the scene center to the antenna phase center (APC). When the scene is small enough, the higher-order terms of the differential range can be ignored and only the first term retained; this amounts to assuming a planar wavefront. Under this approximation, the simplified differential distance expands as

$$ r_a(t)-r_p(t)\approx \hat{x}_p\cos\varphi(t)\cos\hat{\theta}(t)+\hat{y}_p\cos\varphi(t)\sin\hat{\theta}(t) \tag{6} $$

Azimuth time t is implicit in the angles φ = φ(t), θ = θ(t), and (x̂_p, ŷ_p) are the coordinates of point P in the (x′, y′) coordinate system. Clearly, the effective PFA imaging scene cannot be too large. The allowable scene radius r_0 is determined by the carrier wavelength λ_c, the resolution requirement ρ_a and the imaging range R_ac; the relation can be expressed as

$$ r_0 \le \frac{2\rho_a}{K_a}\sqrt{\frac{R_{ac}}{\lambda_c}} \tag{7} $$

where K_a is the main-lobe widening factor (generally taken as 1.3) [2].
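To make the scene-radius limit concrete, the sketch below evaluates a commonly used form of this bound, r₀ = (2ρ_a/K_a)√(R_ac/λ_c), for illustrative X-band numbers. The constant in the formula, the parameter values, and the `pfa_scene_radius` helper name are assumptions for illustration, not taken from the paper.

```python
import math

def pfa_scene_radius(wavelength, rho_a, r_ac, k_a=1.3):
    """Effective PFA scene radius under the planar-wavefront limit.

    Assumes the bound r0 = (2 * rho_a / k_a) * sqrt(r_ac / wavelength);
    the exact constant may differ from the paper's formula.
    """
    return (2.0 * rho_a / k_a) * math.sqrt(r_ac / wavelength)

# Illustrative X-band numbers (hypothetical scenario):
# 0.03 m wavelength, 0.15 m resolution, 12 km stand-off range.
r0 = pfa_scene_radius(wavelength=0.03, rho_a=0.15, r_ac=12e3)
print(f"effective scene radius ~ {r0:.0f} m")  # ~146 m for these numbers
```

Radii on the order of 100-150 m for decimeter-class X-band resolution are consistent with the magnitudes quoted later in the experiments.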

III. A QUADTREE BEAM-SEGMENTING BASED PFA DESIGNED FOR WIDE-SWATH
As the required imaging scene grows larger for present SAR systems, whether in spotlight or sliding spotlight mode, the PFA residual error introduced by the planar-wavefront approximation can no longer be ignored. For a large scene, after PFA imaging the central part (within the radius r_0) is well focused, while the outer part beyond r_0 is defocused. Our technique breaks the wide beam into many sub-beams, which means breaking the large scene into many small pieces of limited size. If each sub-scene lies within the effective imaging scene radius r_0, no defocus will appear.
To make full use of the signal support area, traditional PFA commonly employs line-of-sight polar interpolation (LOSPI) [2] rather than stabilized scene polar interpolation (SSPI), yielding an image whose scene orientation matches the squint angle; i.e., the PFA image is oriented along (x′, y′), with azimuth along y′ and range along x′. Since the scene center point is chosen as the motion compensation point (MCP), the scene center becomes the center of the resulting PFA image. In fact, all that is needed is a hypothetical reference point, so to change the image center we can simply choose another reference point and perform motion re-compensation. With this digital processing, the beam can be steered to point in any direction artificially, which can be regarded as digital beam-rotating.
Based on this idea, a quadtree beam-segmenting based PFA is proposed in this paper. In the first step, the original whole-beam data (corresponding to the whole imaging scene) is decomposed into 4 narrow sub-beam data sets, each corresponding to one quadrant; this is the level-1 segmentation shown in Fig. 2. Recursively, each of the 4 sub-beam data sets is again decomposed into 4 narrower sub-beam data sets by level-2 segmentation, giving 16 narrower sub-beam data sets, as shown in Fig. 2. In this way, the beam is segmented level by level, like a quadtree. The recursive segmentation can stop as soon as every sub-scene lies within the effective imaging scene radius r_0.
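The recursive quadrant bookkeeping described above can be sketched as follows. This is a minimal illustration assuming a scene represented as (center_x, center_y, width, height); the function names and scene representation are illustrative, not from the paper.

```python
def quadrant_of(scene, q):
    """Center and size of quadrant q (0..3, counterclockwise from the
    quadrant where both coordinates are positive) of a rectangular scene
    given as (cx, cy, w, h)."""
    cx, cy, w, h = scene
    dx = (1, -1, -1, 1)[q] * w / 4   # quadrant centers sit at +/- w/4
    dy = (1, 1, -1, -1)[q] * h / 4   # and +/- h/4 from the scene center
    return (cx + dx, cy + dy, w / 2, h / 2)

def quadtree_segment(scene, level, max_level, leaves):
    """Recursively split the scene until max_level; each leaf corresponds
    to one narrow sub-beam whose data standard PFA can focus."""
    if level == max_level:
        leaves.append(scene)
        return
    for q in range(4):
        quadtree_segment(quadrant_of(scene, q), level + 1, max_level, leaves)

leaves = []
quadtree_segment((0.0, 0.0, 1120.0, 1120.0), 0, 2, leaves)
print(len(leaves))  # 4**2 = 16 sub-scenes after two levels
```

Each level quarters the scene area, so after S levels there are 4^S leaf sub-scenes, matching the quadtree structure of Fig. 2.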
We first detail the first level of segmentation below.

A. LEVEL-1 SEGMENTATION
The total imaged scene can be divided into 4 sub-scenes by the reference axes of a Cartesian coordinate system, designated the first, second, third, and fourth quadrants, counting counterclockwise from the quadrant in which both coordinates are positive. In this step, we decompose the whole beam into 4 sub-beams accordingly. One level of segmentation comprises the following 4 steps:

1) CALCULATE THE CENTER COORDINATES OF THE 4 QUADRANTS
To rotate the beam direction toward each quadrant center, 4 new reference points must be chosen. Since I represents the ground illuminated area, let I.n (n = 1, 2, 3, 4) and C_{I.n}(x_{cI.n}, y_{cI.n}, z_{cI.n}) represent each quadrant sub-scene and its sub-scene center point, respectively.
As long as the original azimuth size W_a (along y′) and range size W_r (along x′) of the illuminated scene can be calculated, the coordinates of each quadrant center follow easily.

2) MOTION RE-COMPENSATION TO EACH QUADRANT CENTER
To avoid energy leakage caused by defocusing, motion re-compensation is first applied to the original data. For each quadrant, the motion re-compensation function is constructed based on the quadrant's central point, and the echo data is compensated pulse by pulse.
For the nth (n = 1, 2, 3, 4) quadrant, the re-compensation phase is

$$ \Phi_{I.n}(f_\tau,t)=\frac{4\pi}{c}(f_c+f_\tau)\left(r_{sI.n}(t)-r_a(t)\right) \tag{10} $$

where r_{sI.n} = r_{sI.n}(t) is the instantaneous distance from the APC to C_{I.n}. The re-compensated data is obtained by multiplying the scene-center-compensated signal S_1(f_τ, t) by this phase factor:

$$ S_{I.n}(f_\tau,t)=S_1(f_\tau,t)\cdot\exp\left\{j\Phi_{I.n}(f_\tau,t)\right\} \tag{11} $$

This echo still contains the whole-scene information, but the beam is digitally rotated to a new direction pointing at the nth quadrant center.

Taking the 1st quadrant as an example, i.e. sub-image I.1 in Fig. 2, the re-compensation phase is

$$ \Phi_{I.1}(f_\tau,t)=\frac{4\pi}{c}(f_c+f_\tau)\left(r_{sI.1}(t)-r_a(t)\right) \tag{12} $$

where r_{sI.1} = r_{sI.1}(t) is the instantaneous distance from the APC to C_{I.1}(x_{cI.1}, y_{cI.1}, z_{cI.1}), described in coordinates as

$$ r_{sI.1}(t)=\sqrt{\left(x_a-x_{cI.1}\right)^{2}+\left(y_a-y_{cI.1}\right)^{2}+\left(z_a-z_{cI.1}\right)^{2}} \tag{13} $$

The new phase history after re-compensation to C_{I.1}(x_{cI.1}, y_{cI.1}, z_{cI.1}) is

$$ \iint_{I} g(x_p,y_p)\,\mathrm{rect}\!\left(\frac{t-t_c}{T_a}\right)\mathrm{rect}\!\left(\frac{f_\tau}{\gamma T_p}\right)\exp\!\left\{j\frac{4\pi}{c}(f_c+f_\tau)\left(r_{sI.1}(t)-r_p(t)\right)\right\}dx_p\,dy_p \tag{14} $$
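In implementation terms, the pulse-by-pulse re-compensation is a single phase multiply in the range-frequency domain. The sketch below assumes a scene-center-compensated phase history with the sign convention used above; the array shapes and the `recompensate` name are illustrative assumptions.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def recompensate(S, f_tau, f_c, r_a, r_s):
    """Digitally rotate the beam to a new reference point.

    S     : (num_freq, num_pulses) scene-center-compensated data S(f_tau, t)
    f_tau : (num_freq,) range-frequency axis (Hz)
    f_c   : carrier frequency (Hz)
    r_a   : (num_pulses,) APC-to-original-scene-center range per pulse (m)
    r_s   : (num_pulses,) APC-to-new-reference range per pulse (m)
    """
    # exp{ j*4*pi/c * (f_c + f_tau) * (r_s - r_a) }, applied pulse by pulse
    phase = 4.0 * np.pi / C * np.outer(f_c + f_tau, r_s - r_a)
    return S * np.exp(1j * phase)

# Sanity check: re-compensating to the same reference changes nothing.
S = np.ones((8, 4), dtype=complex)
out = recompensate(S, np.linspace(-5e6, 5e6, 8), 1e10,
                   np.full(4, 1e4), np.full(4, 1e4))
print(np.allclose(out, S))  # True
```

Because the operation is a pointwise multiply, its cost is negligible compared with the FFT stages, as noted in the computational analysis below.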

3) LINEAR RANGE DOPPLER (LRD) COARSE IMAGING AND CENTER SUB-IMAGE EXTRACTION
Notice that the motion re-compensation step above is equivalent to performing a two-dimensional dechirp on the original echo, eliminating the second-order phase term of the signal. The azimuth and range frequencies of each scatterer's two-dimensional signal are then proportional to the scatterer's distance from the sub-scene center.
Suppose the re-compensated data array has size N × N; a coarse focused image of size N × N can then be obtained via a two-dimensional (2-D) FFT. This is just the basic Linear Range Doppler (LRD [21]) algorithm. However, the center of the coarse focused image has been shifted to the quadrant center, so the unwanted information of the other quadrants is folded relative to the original image. It is therefore convenient to extract the central part of the full image by its array indices. The array size after extraction is N/2 × N/2.
Taking the 1st quadrant as an example, after motion re-compensation to C_{I.1} the coarse focused LRD image is centered at C_{I.1}. We require that sub-beam 1 contain only the sub-scene I.1 information, and discarding the unwanted information is easy in this coarse focused image.
After the 4 quadrants are processed one by one, 4 coarse focused LRD sub-images are obtained.

4) RETURNING TO THE PHASE HISTORY DOMAIN
Thanks to the dechirp, when the scene shrinks, the azimuth and range bandwidths of the corresponding phase history shrink as well. Accordingly, the sampling rate can be reduced in both azimuth and range, reducing the computation.
The sub-image scene size is halved in both range and azimuth, so the sampling rate can also be halved in both dimensions. After returning the image to the spatial-frequency domain via an Inverse Fast Fourier Transform (IFFT), the amount of data is reduced to 1/4 of the original and contains only the information of sub-scene I.1. It is as if the radar had illuminated the sub-scene I.1 area with a narrow beam.
The sub-data for the 1st quadrant after level-1 beam segmentation is denoted S_{I.1}, i.e. the sub-beam signal

$$ S_{I.1}(f_\tau,t)=\iint_{I.1} g(x_p,y_p)\,\mathrm{rect}\!\left(\frac{t-t_c}{T_a}\right)\mathrm{rect}\!\left(\frac{f_\tau}{\gamma T_p}\right)\exp\!\left\{j\frac{4\pi}{c}(f_c+f_\tau)\left(r_{sI.1}(t)-r_p(t)\right)\right\}dx_p\,dy_p \tag{15} $$

Here the sampling interval is doubled compared with the original. In summary, this process can be regarded as a fast filtering that accomplishes information filtering and down-sampling at the same time. This operation is the key to keeping the processing computation from growing too much while keeping the reflected energy from each point target lossless.
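The fast-filtering chain (2-D FFT to the coarse LRD image, central crop, 2-D IFFT back to the phase history) can be sketched compactly as below. The fftshift-based centering and the `fast_filter` name are implementation assumptions, not the paper's exact code.

```python
import numpy as np

def fast_filter(S):
    """Fast-filtering step of one segmentation level (a sketch): after motion
    re-compensation the wanted quadrant sits at the center of the coarse LRD
    image, so a 2-D FFT, a central N/2 x N/2 crop, and a 2-D IFFT keep only
    that quadrant's energy while halving the sampling rate in both dimensions."""
    n = S.shape[0]
    img = np.fft.fftshift(np.fft.fft2(S))          # coarse LRD image
    lo, hi = n // 4, n - n // 4                    # central N/2 x N/2 block
    sub = img[lo:hi, lo:hi]
    return np.fft.ifft2(np.fft.ifftshift(sub))     # back to phase history

data = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
print(fast_filter(data).shape)  # (32, 32)
```

Both the filtering (quadrant selection) and the down-sampling (array halving) happen in the crop, which is why the two effects come for free together.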
After the 4 quadrants are processed one by one, 4 sub-data sets are obtained. As Fig. 2 shows, after level-1 segmentation the original whole-beam echo data S_I is segmented into 4 sets of narrow sub-beam data. The nth (n = 1, 2, 3, 4) output sub-beam data set is named S_{I.n} and contains only the nth quadrant (sub-image I.n) information. The flowchart of the level-1 beam segmentation is illustrated in Fig. 3. The key part that discards the unwanted information and implements the beam segmentation is called Fast Filtering; its schematic diagram is shown in Fig. 4.

B. RECURSIVE BEAM SEGMENTATION AS A QUADTREE
After level-1 segmentation, the output sub-beams may still be too wide, since the corresponding sub-scenes may still exceed the effective focusing radius r_0. It is therefore necessary to segment the beam further, to ensure that the sub-scene illuminated by each segmented sub-beam is small enough, as shown in Fig. 2, like a quadtree.
The naming of sub-scenes and sub-beam data follows the quadtree principle shown in Fig. 2: beginning with I, each node's name is its parent ID (identification) followed by the quadrant number.
Suppose the decomposition includes S levels of quadtree segmentation. Then level-s (s = 1, 2, ..., S) segmentation has 2^{2(s−1)} sets of input data and 2^{2s} sets of output data. The flow diagram of the proposed PFA is shown in Fig. 5, where S denotes the recursive decomposition level.
For each set of input echo data, the illuminated scene size and scene center need to be calculated first. Then the positions of the 4 quadrant centers can be computed easily, to be used for motion re-compensation in the fast filtering.
After motion re-compensation, the echo data can be segmented recursively, accomplishing filtering and down-sampling and outputting filtered data containing only the information of each sub-image.
In summary, the algorithm includes two key parts of recursive segmentation: one segments the scene recursively, and the other segments the beam recursively to obtain sets of narrow-beam filtered data. The first part serves the second, and the recursive sub-beam segmentation can stop as soon as each sub-scene is within the limit of the effective focusing scene size.

C. PRECISE FOCUSING VIA STANDARD PFA
As long as the sub-beams are segmented narrow enough that each sub-scene lies within the effective imaging scene radius of PFA, classical PFA can be used. Via PFA, 2^{2S} precisely focused sub-images are obtained for the 2^{2S} sub-beams; the residual error of non-center points is completely negligible and each sub-image is well focused.
One point should be noticed: each sub-beam has a direction offset from the original scene center and its own sub-scene center, so the LOS direction at aperture center differs from sub-beam to sub-beam. For efficiency, a modified LOSPI method is adopted for the polar-coordinate interpolation of the sub-beam data, such that the scene orientation of the output sub-image does not vary with the sub-beam direction but is fixed at the original squint angle. All the orientation-stabilized sub-images can then be mosaicked easily, finally yielding a large full-scene image free of defocus.
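Once the sub-images are orientation-stabilized on a common sampling grid, the mosaic itself is simple array assembly. This sketch assumes a row-major layout of equally sized sub-images; the `mosaic` helper and tile layout are illustrative assumptions.

```python
import numpy as np

def mosaic(sub_images):
    """Stitch a nested list (rows of columns) of equally sized,
    orientation-stabilized sub-images into one full-scene image."""
    return np.block(sub_images)

# Four 4x4 tiles in a 2x2 layout, each filled with its own index.
tiles = [[np.full((4, 4), r * 2 + c) for c in range(2)] for r in range(2)]
full = mosaic(tiles)
print(full.shape)  # (8, 8)
```

With the modified LOSPI fixing every sub-image at the original squint orientation, no rotation or resampling is needed at this stage.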

D. QUANTITATIVE ANALYSIS OF COMPUTATION
The cost of pointwise multiplications, such as motion re-compensation, is small. The FFTs in the recursive segmentation need some extra computation, but not much. The largest operation is the range and azimuth resampling in the final standard-PFA step applied to each sub-beam data set. However, compared with the original data array of size N × N, the beam segmentation yields 2^{2S} output sub-data arrays, each of the small size N/2^S × N/2^S, so the final classical PFA processing of each sub-beam does not increase the computational burden. Overall, the computation is almost the same as classical PFA.
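The bookkeeping behind this claim can be checked directly: after S levels there are 2^{2S} sub-arrays of size (N/2^S) × (N/2^S), so the total sample count entering the final PFA stage equals the original N². The numbers below are illustrative, not the paper's.

```python
N, S = 4096, 4                      # hypothetical array size and level count
num_sub = 4 ** S                    # 2^(2S) sub-beam data sets
sub_samples = (N // 2 ** S) ** 2    # samples per sub-array after S halvings
print(num_sub * sub_samples == N * N)  # True: total data volume is unchanged
```

Since FFT-dominated processing scales near-linearly (up to log factors) in sample count, splitting the work across many small arrays leaves the total cost close to that of classical PFA on the full array.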

IV. EXPERIMENTAL RESULTS

A. SIMULATION RESULTS
The highly squinted, high-resolution simulation parameters are shown in Table 1, with a high squint angle of 60°. The distribution of the 9 × 9 simulated point targets is given in Fig. 6 along the (x, y) directions, with adjacent targets spaced 140 m apart along both the x-axis and the y-axis. Under these parameters, the effective scene radius limit of traditional PFA is only about 133 m. The imaging area is a 1120 m × 1120 m square, so the maximum scene radius is about 1120 × 1.4/2 (half of the diagonal), i.e. almost 800 m, far beyond the PFA effective imaging scene radius of 133 m. According to the relationship between the PFA effective imaging scene radius and the full scene size, 4 levels of segmentation are required.
The whole image obtained by normal PFA and certain sub-images obtained during the beam-segmenting process are shown in Fig. 7, together with the two-dimensional target response of 2 selected points: point M (at the edge of the scene) and point N.
It can be seen that the whole image processed by normal PFA has obvious geometric distortion, and points M and N are severely defocused.
If normal PFA is used to process the sub-beam data S_{I.1}, i.e. if the beam segmentation stops after level 1, the resulting sub-image of S_{I.1} is as shown in Fig. 7. The obvious geometric distortion is largely corrected and the point targets are better focused than before, but defocus still exists, so one level of segmentation is clearly insufficient and further segmentation is indispensable.
Only when 4 levels of beam segmentation are implemented is each point in the sub-images focused perfectly. With our method, via 4 levels of beam segmentation, 64 precisely focused sub-images are obtained, which are mosaicked together into the final well-focused whole-scene image shown in Fig. 8, in which every point target is focused perfectly.

B. REAL DATA EXPERIMENTAL RESULTS
We validate the proposed PFA with real data from an X-band airborne radar with 0.15 m resolution and a squint angle of 53°. Under these parameters, the effective scene radius limit of traditional PFA is only about 150 m, while the whole scene is much larger than 2 km × 1 km (azimuth × range). The image therefore suffers serious defocus, since the whole scene size exceeds the PFA effective scene radius limit.
According to the relationship between the PFA effective imaging scene radius and the full scene size, 4 levels of segmentation keep each sub-image within the effective imaging scene radius. Fig. 9(b) shows the final image obtained by the new PFA: all regions are well focused and show no stitching traces. This proves that the proposed method breaks the PFA size limit and achieves a well-focused wide-swath image.

V. CONCLUSION AND FURTHER DISCUSSION
In this paper, a new quadtree beam-segmenting based PFA has been proposed. The algorithm comprises two parts of recursive segmentation: one segments the image recursively, and the other segments the echo data recursively.
This procedure is equivalent to generating multiple sets of narrow sub-beam raw data by filtering the original echo data recursively until each sub-beam is narrow enough for standard PFA. The desired resolution is retained for each sub-beam, since the synthetic aperture length for each sub-beam is unchanged: it is as long as the original aperture, which is long enough to achieve the desired resolution. Moreover, each sub-beam's data is not divided into fragmented sub-apertures; the new approach uses the whole aperture as a whole, so the resolution is maintained very well.
The simulation and measured-data results show that even at high squint angles, the proposed method avoids the residual error at each target and achieves a precisely focused wide-swath image.
This divide-and-conquer approach breaks the image size limit of traditional PFA and extensively enlarges the effective focused scene. Since the method is not based on a series derivation of the residual error, it is not restricted by the derivation order and can in theory be extended to scenes of arbitrary size, giving it a unique advantage in highly-squinted SAR. Since normal PFA applies to any squint angle smaller than 85°, the presented approach can likewise handle squint angles up to 85°.
XIN NIE was born in Shanxi, China, in December 1983. She received the Ph.D. degree from the Nanjing University of Aeronautics and Astronautics, Nanjing, in 2010.
She is currently a Researcher with the Nanjing Research Institute of Electronics Technology. She has authored or coauthored over 20 articles. Her major research interests include system design, radar imaging, synthetic aperture radar (SAR) processing for high-resolution, and highly-squinted and wide-swath SAR systems.
LONG ZHUANG was born in Jiangsu, China, in 1979. He received the master's degree in signal and information processing from the Nanjing Research Institute of Electronics Technology, Nanjing, China, in 2005, and the Ph.D. degree in signal and information processing from Shanghai Jiaotong University, Shanghai, China, in 2009.
He is currently with the Nanjing Research Institute of Electronics Technology. His major research interests include MIMO radar signal processing and SAR/ISAR imaging.
SHIJIAN SHEN was born in Jiangsu, China, in 1983. He received the master's degree in signal and information processing from the Nanjing Research Institute of Electronics Technology, Nanjing, China, in 2008.
He is currently with the Nanjing Research Institute of Electronics Technology. His major research interests include radar signal processing and target detection.