Radar Band Fusion Using Frame-Based Compressed Sensing

Existing compressed sensing algorithms fail when applied to radar target detection in the presence of a large gap in the frequency band, i.e., in the presence of signals in separate, discontinuous bands. A new algorithm based on a subdivision-fusion scheme is proposed to solve this problem. The main goal is to exploit the structured sensing matrix arising from radar signals and to obtain a good range resolution in spite of high coherence. Parameters influencing the performance of the algorithm are discussed. Simulated examples and results based on real measurement data are presented. The results show the superior performance of the proposed method in the presence of band gaps.


I. INTRODUCTION
THERE is a rapid increase in the need for improved radar range resolution for various applications such as tracking and surveillance, environmental monitoring, and disaster management [1], [2]. Such high range resolutions are possible only when a scene is detected using a wide-band radar. However, an exponential increase in spectrum congestion [3] hinders the availability of wide frequency bands. Due to such limitations on the frequency spectrum, sometimes only narrow-band radars may be available for target detection. From another perspective, these narrow frequency bands can be viewed as a wide band having a continuous 'gap', or a block of missing data. The aim of this work is to obtain well-resolved targets in spite of this gapped band.
There is a large amount of research from different communities tackling the problem of low resolution arising from missing or 'gapped' measurements. The earliest work dealing with the estimation of missing data comes under the topic of 'spectral estimation'. However, most of these methods work on clusters of missing data samples present in a particular signal. They do not consider completely missing signals in different frequency bands [4], [5], [6]. More recent research under the umbrella of Compressed Sensing (CS) includes a large amount of work on obtaining good sparse estimates in the presence of missing data. Most of these methods solve an ℓ_1-minimization problem, under the assumption that the sensing matrix satisfies the necessary coherence bounds. In this paper, due to the unavailability of a wide, contiguous frequency band, such coherence bounds are not met, thus causing a failure of the conventional CS approaches. A sub-branch of CS deals with the problem of missing data using 'group sparsity' [7], [8]. However, due to the unique structure of the sensing matrix dictated by the radar system, the minimization used in such problems cannot be directly applied to the gapped-band problem. The problem of 'missing information' has also been addressed from the perspective of noisy sensor networks in a series of papers based on the mathematical model of fusion frames [9], [10]. Their underlying assumption of distributed sparsity, involving a subdivision of the scene itself, does not align with the problem defined in this paper. Recently, there have been a number of deep-learning based hybrid approaches for SAR super-resolution [11], [12], but many of these methods rely heavily on existing high-resolution scenes for training and suffer from a lack of reproducibility of results [13].
Successful target estimation in the presence of a large band gap is not guaranteed by any of the existing methods. The aim of this work is to use CS techniques on measurements from available narrow-band radars to improve the sparse estimate, such that it is comparable to the estimate from a wide band radar. The focus is placed on gapped-radar systems consisting of multiple co-located narrow-band radars. Since a radar scene is typically sparse in terms of target reflectivity, the sparsity assumption is well-aligned with the ground truth. It must be noted that the proposed work does not directly estimate the gapped-band. However, in some cases, it could be obtained as a by-product of the CS algorithms [14].
The proposed algorithm exploits the uniquely structured radar sensing matrix and follows a 'subdivision-fusion' scheme to obtain fairly accurate target estimates in spite of the gapped-band problem. The 'subdivision' scheme overcomes the coherence problem and therefore allows the use of well-established CS algorithms to obtain multiple target estimates. The 'fusion' scheme is inspired by concepts used in noisy sensor networks that combine noisy measurements from each sensor to give a final reconstruction result.
This problem of spectral gaps or the (spectral) fusion of several individual measurements is quite significant in the field of joint communication and sensing. For the upcoming 6G standard and existing communication standards, the idea of using communication signals for sensing is at the forefront of research. Given that communication systems operate with several sub-carriers (e.g., orthogonal frequency division multiplexing), which individually do not cover the bandwidth needed for a high resolution, the fusion over several sub-carriers based on the considerations made in this paper also shows a way to deal with this problem.
The paper is structured as follows. Section II discusses the general radar operation, the signal structure and provides the CS formulation for the general gapped-band case. Section III describes two formulations of the CS sensing matrix for radar systems. Section IV introduces the main algorithm and discusses its links to CS. Section V provides results demonstrating the effectiveness of the proposed method. Section VI discusses the conclusions drawn and the scope for future work.
II. SIGNAL MODEL AND CS FRAMEWORK

This section first discusses the general operation of a radar system and gives a mathematical formulation of the received signal spectrum. Then, the motivation for using CS to solve the gapped-band problem is discussed and, finally, the CS framework for gapped-band systems is presented.

A. General Radar Operation
The sequence of steps for a general radar operation is shown in Fig. 1. The working principle of a radar system can be divided into a transmit path and a receive path. In the transmit path, a signal generator first generates a complex-valued waveform. This is followed by a digital-to-analog conversion. The resulting baseband signal, y_Tx^b(t), is then mixed with a stable carrier frequency, f_Tx, to obtain a radio-frequency (RF) signal. This is known as the 'mixing' or the 'modulation' step. The RF signal is sent to the scene through the transmitter Tx. In the receive path, the receiver Rx detects the back-scattered signal from the scene. This signal is 'down-mixed' or 'demodulated' to obtain y_Rx^b(t) and converted back to a digital signal y, which is transformed into the frequency domain by the DFT block and sent to the matched filter (MF) to identify peaks corresponding to targets in the scene. In practice, there exist several variations of the individual blocks described. For instance, the signal may be generated completely digitally, thereby making the 'mixing' step unnecessary. End-to-end analog signal generation is also possible for high-bandwidth LFM signals. Another possibility is to perform the matched filtering of the received signal in the time domain, which also makes the DFT block optional.
Earlier radar systems were limited by ADCs having low sampling rates, but current hardware can support the direct sampling of up to 8 GHz [15]. So, this setup can be used in practice for radar signals having considerably large frequency bands.
Throughout this paper, a fundamental assumption is that the scene detected by the radar system is sparse, i.e., there are only a few targets in the scene. This sparse scene may be modelled as a reflectivity distribution given by

ρ(t) = Σ_{i=1}^{s} ρ_i δ(t − τ_i),    (1)

where ρ_i ∈ C denotes the complex-valued reflectivity of each target, s denotes the number of targets (and consequently the sparsity in the CS problem described later) and the impulse responses δ(t − τ_i) correspond to the peaks on the delay grid. After convolution of the transmitted signal with the reflectivity distribution of the scene, the received signal y_Rx is given by

y_Rx(t) = Σ_{i=1}^{s} ρ_i (U_{τ_i} y_Tx)(t),    (2)

where U_{τ_i} is the time-shift operator, and y_Tx is the signal transmitted with the carrier frequency f_Tx. Down-mixing of y_Rx with the reference signal leaves only one additional phase term in the baseband signal due to f_Tx, i.e.,

y_Rx^b(t) = Σ_{i=1}^{s} ρ_i e^{−j2π f_Tx τ_i} y_Tx^b(t − τ_i).

A more exhaustive discussion of this convolution-based interpretation of radar operation can be found in [16, pp. 90–99].
The DFT block now performs a Fourier transform on the baseband signal and the resulting frequency-domain signal is given by

Y_Rx^b(f) = Σ_{i=1}^{s} ρ_i e^{−j2π f_Tx τ_i} e^{−j2π f τ_i} Y_Tx^b(f).    (3)

The general idea can be easily extended to pulsed or LFM systems. Consequently, the transmitted radar signal in time may be expressed as

y_Tx(t) = rect(t/T) e^{jπ k t²},    (4)

where k is the slope, and T is the pulse duration.
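As an illustration, the baseband model above (delayed, scaled chirp copies carrying a residual carrier-phase term, followed by the DFT block) can be simulated in a few lines. All parameter values below (sample rate, slope, carrier, delays, reflectivities) are arbitrary toy choices, not taken from the paper's measurements:

```python
import numpy as np

# Toy simulation of the baseband signal chain (all parameters assumed):
# an LFM chirp, a 2-target sparse scene, delayed/scaled echoes, DFT block.
fs = 1e6                 # sample rate [Hz]
T = 1e-3                 # pulse duration [s]
B = 2e5                  # sweep bandwidth [Hz]
k = B / T                # chirp slope [Hz/s]
f_tx = 10e9              # carrier frequency [Hz]
t = np.arange(-T / 2, T / 2, 1 / fs)
y_tx = np.exp(1j * np.pi * k * t**2)          # rect(t/T) * exp(j*pi*k*t^2)

taus = np.array([20e-6, 45e-6])               # target delays tau_i
rhos = np.array([1.0, 0.5 + 0.2j])            # complex reflectivities rho_i

y_rx = np.zeros(t.size, dtype=complex)
for rho, tau in zip(rhos, taus):
    shift = int(round(tau * fs))
    delayed = np.roll(y_tx, shift)
    delayed[:shift] = 0                       # suppress circular wrap-around
    # down-mixing leaves one phase term per target due to f_tx
    y_rx += rho * np.exp(-2j * np.pi * f_tx * tau) * delayed

Y_rx = np.fft.fft(y_rx)                       # DFT block of Fig. 1
# frequency-domain matched filter; peaks mark the target delay bins
mf = np.fft.ifft(Y_rx * np.conj(np.fft.fft(y_tx)))
```

The matched-filter output peaks at the delay bins of the two targets, with the strongest peak at the first (stronger) target.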

B. Motivation for Using CS
In order to combat the spectrum congestion problem discussed in Section I and effectively use the limited spectrum, a lot of research has been done on the topic of communication and radar spectrum sharing (CRSS). Broadly, CRSS can be divided into two main areas: 1) Dual-Functional Radar Communication and 2) Radar-Communication Coexistence [3]. Focusing on the latter, it is necessary for radar and communication systems to coordinate effectively in order to use the same frequency band. If a communication system is utilizing a certain part of the spectrum, the frequencies in that part become unavailable to the radar system, thus creating a continuous 'gap' in the band. Resolving targets in the presence of such a gap becomes difficult, while super-resolution proves to be an even bigger challenge.
Such a general communication system, with a gap in the frequency spectrum, can be expressed by the equation

y = Ax + e,

where y is a vector of dimension M containing the available frequency measurements, A is a sensing matrix of dimension M × N, weighted by the spectrum of the signal at hand, and x is an s-sparse reflectivity vector of dimension N, constructed from the scene reflectivities ρ_j. e is white Gaussian noise in the measurement y. Since the exact values of the sparsity (s) and the delays (τ_j) corresponding to the peaks in the reflectivity distribution are unknown, x, and in turn A, have to be constructed such that they set up a range grid with N possible range positions. In an ideal scenario, a wide (non-gapped) band would correspond to a sufficiently large number of measurements M. If the number of range grid points N were considered to be equal to M, this would construct a finely-spaced range grid, leading to a high-resolution reflectivity vector x. Consequently, A would be a square matrix, allowing a classical inversion to find the near-exact values of the reflectivity vector x. However, due to the gap in the frequency band, a set of contiguous rows corresponding to the gap is now missing and the number of measurements m < M. This causes A to be a horizontal matrix, leading to an under-determined system of linear equations. A pseudo-inversion in this case would give infinitely many solutions. Therefore, a meaningful option is to interpret it as a CS problem that leverages the assumed sparsity of the scene to give a unique solution, given that the lower bound on the number of measurements is met, i.e., m ≥ 2s ln(N/s) (see [17, p. 271]).
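A toy numerical sketch of this measurement model (sizes, gap position, and target placement all assumed) shows how removing a block of rows turns the square Fourier-type matrix into an under-determined horizontal one, while the remaining measurement count still satisfies the stated lower bound:

```python
import numpy as np

# Toy gapped measurement model y = A x + e (sizes and gap position assumed).
N = 64                                        # range-grid cells
f = np.arange(N)                              # normalized frequency grid
tau = np.arange(N) / N                        # normalized delay grid
A_full = np.exp(-2j * np.pi * np.outer(f, tau))   # square Fourier-type matrix

# a contiguous band gap removes a block of rows (frequencies 20..43)
keep = np.r_[0:20, 44:64]
A = A_full[keep]                              # horizontal m x N matrix
m = A.shape[0]

x = np.zeros(N)
x[5], x[37] = 1.0, 0.7                        # s = 2 sparse scene
y = A @ x                                     # gapped, noiseless measurements

s = 2
bound = 2 * s * np.log(N / s)                 # m >= 2 s ln(N/s) lower bound
```

Here m = 40 measurements remain, comfortably above the bound of roughly 14, yet the 40 × 64 system has infinitely many unconstrained solutions; sparsity is what makes the inversion meaningful.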

C. General CS Framework for Gapped Band Systems
In general, the CS formulation is given by

y = PSBx + e = Ax + e,    (5)

where B represents a Fourier-like orthogonal sparsifying basis and S represents the synthesis matrix corresponding to a complete set of measurements, as described in [18]. P represents the projection matrix which introduces the gap in SB, making A a horizontal matrix with M ≪ N. y is the vector of measurements in frequency given by (3) and x is the sparse vector of scene reflectivities. Based on this framework, the solution for x is given by

min_x ‖x‖_1  subject to  ‖y − Ax‖_2 ≤ ε.    (6)

III. CONSTRUCTION OF SENSING MATRICES FOR GAPPED RADAR SYSTEMS
In the previous section, a general CS framework for a gapped system was discussed. Here onwards, the focus is placed only on gapped-radar systems. In this section, first the mathematical model of the spectrum of an LFM radar signal is discussed. Then, two CS-based gapped-radar models are explored. First, construction of the CS sensing matrix is discussed based on gapped chirps in the frequency domain for a general radar system. Then, a time-domain based sensing matrix is considered for the special case of Frequency Modulated Continuous Wave (FMCW) radar systems.

A. Considerations Regarding the Spectrum Modelling of LFM Radar Signals
The Fourier transform of an LFM signal y_Tx(t) = rect(t/T) e^{jπkt²} is given by [19], [20]

Y_Tx(f) = ∫_{−T/2}^{T/2} e^{jπkt²} e^{−j2πft} dt.    (7)

Fresnel integrals [19], [20] are used to compute this term as follows:

Y_Tx(f) = (1/√(2k)) e^{−jπf²/k} [Z(u_2) − Z(u_1)],    (8)

where Z denotes the complex Fresnel integral, β = kT the frequency bandwidth, and u_{1,2} = √(2k)(∓T/2 − f/k). The phase term can also be approximated by the principle of stationary phase (PSP) [16], which defines a stationary point in time at which the first time derivative of the integral phase is zero. The approximation of the integral in (7) is then given by

Y_Tx(f) ≈ (1/√k) e^{jπ/4} e^{−jπf²/k} rect(f/β).    (9)

Evaluation of the Fourier transform using Fresnel integrals causes ripples in the passband spectrum. This stems from the rectangular windowing of the signal in the time domain, which corresponds to a convolution with a sinc function in the frequency domain. Such artifacts are seen in practice, and therefore Fresnel integrals provide a spectrum that can be considered the 'truth'. The PSP approximation, on the other hand, works only at the points having stationary phase. Since the phase fluctuations are largest near the band edges, the PSP does not find a stationary point there. So, this approximation does not contain the ripple artifacts. With increasing time-bandwidth product, the ripple artifact reduces, and the error between the Fresnel- and the PSP-based frequency spectrum also reduces. Thus, a PSP-based Fourier transform offers a good approximation to the one based on Fresnel integrals.
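The ripple behaviour described above can be checked numerically: a direct DFT of a chirp (a discrete approximation of (7)) fluctuates around the flat PSP magnitude 1/√k inside the passband, and the fluctuation is small for a large time-bandwidth product. The parameter values below are illustrative only:

```python
import numpy as np

# Numerical check (illustrative parameters): the chirp spectrum from a
# direct DFT ripples around the flat PSP magnitude 1/sqrt(k) in-band.
fs, T, B = 4e6, 1e-3, 1e6
k = B / T                                     # slope; time-bandwidth BT = 1000
t = np.arange(-T / 2, T / 2, 1 / fs)
y = np.exp(1j * np.pi * k * t**2)

Y = np.fft.fftshift(np.fft.fft(y)) / fs       # Riemann-sum approximation of (7)
f = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))

band = np.abs(f) < 0.35 * B                   # stay away from the band edges
# PSP magnitude is flat at 1/sqrt(k) inside the passband (the rect(f/B) term)
ripple = np.abs(np.abs(Y[band]) * np.sqrt(k) - 1.0)
```

For BT = 1000 the relative Fresnel ripple stays within a few percent of the PSP level, consistent with the convergence of the two spectra for large time-bandwidth products.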

B. Frequency Domain CS Model 1) Gapped Chirp:
In the case of a gapped chirp, the individual chirps are present in separate, non-adjacent frequency bands and exist in different time intervals. Therefore, the gap exists both in time and in frequency. The gapped chirp in the time domain is given by

y_Tx(t) = Σ_{i=1}^{l} rect((t − τ_i)/T_i) e^{jπkt²},    (10)

where the rect(·) function represents the parts of the available signal, obtained at different time windows, and l represents the total number of individual chirps. The Fourier transform of the gapped chirp is given by

Y_Tx(f) = Σ_{i=1}^{l} ∫_{τ_i − T_i/2}^{τ_i + T_i/2} e^{jπkt²} e^{−j2πft} dt
        = Σ_{i=1}^{l} e^{j2π(kτ_i²/2 − fτ_i)} ∫_{−T_i/2}^{T_i/2} e^{jπkt²} e^{−j2π(f − kτ_i)t} dt.    (11)

The second equation of (11) involves a change of the integration variable from −T_i/2 + τ_i to −T_i/2 in order to align the given integral with the standard Fourier transform of a chirp signal as given by (7). This represents a re-centering of the sub-chirps around the zero of the time axis, which in turn yields the coefficient e^{j2π(kτ_i²/2 − fτ_i)}. The final formulation of the Fourier transform of the gapped chirp signal of (10) is thus

Y_Tx(f) = Σ_{i=1}^{l} e^{j2π(kτ_i²/2 − fτ_i)} ∫_{−T_i/2}^{T_i/2} e^{jπkt²} e^{−j2π(f − kτ_i)t} dt.    (12)

Comparing with (7), the term 2ft in the integral is replaced by the term 2(f − kτ_i)t. This frequency-shift term represents the center frequency of each sub-chirp.
Applying the Fresnel integrals defined in (8) to (12) yields the final description of the gapped chirp in the frequency domain as

Y_Tx(f) = Σ_{i=1}^{l} e^{j2π(kτ_i²/2 − fτ_i)} (1/√(2k)) e^{−jπ(f − kτ_i)²/k} [Z(u_{2,i}) − Z(u_{1,i})].    (13)

Similarly, applying the PSP, as given in (9), to (12) gives

Y_Tx(f) ≈ Σ_{i=1}^{l} e^{j2π(kτ_i²/2 − fτ_i)} (1/√k) e^{jπ/4} e^{−jπ(f − kτ_i)²/k} rect((f − kτ_i)/β_i).    (14)

In order to validate the frequency-domain representation obtained using Fresnel integrals, it is compared to a direct FFT of the gapped signal. Fig. 2 shows that both methods give identical amplitude and phase spectra in the relevant frequency band. Fig. 3 shows the spectra of two LFM signals. The first spectrum (in blue) is based on a signal that is gapped in both time and frequency, as described by (12). For the second spectrum (in red), the signal has a gap in frequency but the gap in time is removed. This is done by time-shifting each individual chirp from (10) such that they are continuous in time. As a result, the frequency jumps at the end of each sub-chirp to the initial frequency of the next sub-chirp. This time-continuous spectrum can be derived by incorporating the time shifts into the leading coefficients of (13) such that

Y_Tx^c(f) = e^{−j2πf t_c} Σ_{i=1}^{l} e^{−j2πfτ_{d_i}} e^{j2π(kτ_i²/2 − fτ_i)} (1/√(2k)) e^{−jπ(f − kτ_i)²/k} [Z(u_{2,i}) − Z(u_{1,i})],    (15)

where the τ_{d_i}, i = 1, . . ., l, are given by

τ_{d_1} = 0,  τ_{d_i} = τ_1 + Σ_{n=2}^{i} (T_{n−1} + T_n)/2 − τ_i,  i = 2, . . ., l.

The phase term e^{−j2πf t_c} represents a shift in time, t_c, that re-centers the whole time-continuous chirp around the zero of the time axis. Note that the first sub-chirp is the reference for the time shifting of the other sub-chirps and the τ_{d_i} are all negative since the shift is always to the left. The time periods (T_i) and the original time shifts (τ_i) of the center of each chirp from zero allow us to determine the additional shifts needed to make the entire signal time-continuous. Finally, this time-continuous signal is re-centered around the zero of the time axis, as shown in Fig. 3. The top plot in Fig. 3 shows the change in frequency over time for both the time-continuous and the gapped-time chirps. The middle plot shows the amplitude spectra of both chirps.
Since they have the exact same energy at identical frequency sub-bands, there is a complete overlap of the spectra. The bottom figure shows the phase spectra from both chirps. The difference between the spectra is due to the extra phase term introduced by the time shift given in (15).
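The validation against a direct FFT described above can be reproduced in miniature: a two-sub-chirp signal, gapped in both time and frequency, concentrates its spectral energy in the two sub-bands centred at kτ_i and leaves the gap between them nearly empty. All parameters are assumed toy values:

```python
import numpy as np

# Toy gapped chirp (l = 2 sub-chirps; all parameters assumed): the FFT
# spectrum concentrates in two sub-bands around k*tau_i, leaving a gap.
fs, k = 4e6, 1e9                              # sample rate, chirp slope
t = np.arange(-1.5e-3, 1.5e-3, 1 / fs)        # global time axis
y = np.zeros(t.size, dtype=complex)
for tau_i, T_i in [(-1e-3, 0.8e-3), (1e-3, 0.8e-3)]:
    w = np.abs(t - tau_i) < T_i / 2           # rect((t - tau_i)/T_i)
    y[w] = np.exp(1j * np.pi * k * t[w]**2)

Y = np.fft.fftshift(np.fft.fft(y))
f = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))

# each sub-chirp occupies a band of width k*T_i centred at f = k*tau_i
in_band = np.abs(np.abs(f) - k * 1e-3) < 0.3 * k * 0.8e-3
in_gap = np.abs(f) < 0.2e6                    # centre of the spectral gap
band_level = np.abs(Y[in_band]).mean()
gap_level = np.abs(Y[in_gap]).mean()
```

The mean spectral magnitude inside the sub-bands dominates the residual level inside the gap, mirroring the band occupancy visible in Fig. 2.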
2) Construction of Sensing Matrix in Frequency Domain: Based on (5), the frequency-domain CS model can be expressed as

[Y_Rx^b(f_1), Y_Rx^b(f_2), . . ., Y_Rx^b(f_M)]^T = Ax + e,    (16)

where

A(f_i, τ_j) = e^{−j2π(f_Tx + f_i)τ_j} Y_Tx^b(f_i).

As described in Section II-B, the Fourier transform of the back-scattered signal forms the measurement vector y, each row in the sensing matrix corresponds to a frequency f_i in the frequency grid, and the columns of the sensing matrix correspond to the cells τ_j of the delay grid.

C. Time Domain CS Model
The discussion of the previous section was made against the background of a general radar system. That is, possible advantages of certain system designs were not taken into account. In fact, the direct construction of the sensing matrix based on frequency-domain signals still carries the problem of sampling possibly high-bandwidth signals. This problem also remains for LFM signals if the idea of stretch processing, i.e., de-ramping with the transmitted signal, is not taken into consideration. If a de-ramp system design is assumed, a different design of the sensing matrix follows. It now becomes a dictionary for the beat signals of the de-ramping process. Furthermore, this design allows for an indirect definition of the band gap by attributing the sub-chirps to individual systems, creating a group of independent but spatially co-located systems in the process. This idea is shown in Fig. 4. The receive path shown in Fig. 1 is sub-divided in Fig. 4 to represent the receive paths of multiple, co-located radar systems. The matched filter block from Fig. 1 is replaced by the CS block, which performs the range processing using information from the individual systems. This idea is used for the following description of the sensing matrix for a special case of LFM systems, i.e., FMCW systems.

1) FMCW Signal Model:
The description of the signals of the sensing matrix is based on a reference LFM signal

y_Tx(t) = rect(t/T) e^{j2π(f_Tx t + (k/2)t²)}.

As opposed to the previous discussion, the transmit frequency f_Tx of the LFM signal, i.e., the center frequency, is explicitly taken into account. Similar to (10), the reference signal is assumed to be partitioned into multiple sub-LFM signals, each defined by

y_Tx,i(t) = rect((t − τ_i)/T_i) e^{j2π(f_Tx t + (k/2)t²)}.    (17)

Since each individual system operates within its own time basis t̃_i, the sub-LFM signals in the global time basis t are to be transferred into the individual time bases by t̃_i = t − τ_i. This yields the descriptions of the sub-LFM signals as

y_Tx,i(t̃_i) = rect(t̃_i/T_i) e^{j2π((f_Tx + kτ_i)t̃_i + (k/2)t̃_i²)} e^{j(2πf_Tx τ_i + πkτ_i²)}.    (18)

The related beat frequency signals of these sub-LFM signals for a target with a delay of τ_j become

ỹ_IF,i^{τ_j}(t̃_i) = rect(t̃_i/T_i) e^{j2π(f_Tx τ_j + kτ_i τ_j − (k/2)τ_j²)} e^{j2πkτ_j t̃_i},    (19)

which shows that the term 2πf_Tx τ_i + πkτ_i² of the signal in (18) can be ignored. It only serves the purpose of aligning the phase function of the sub-LFM signals with the reference LFM signal, but does not influence the eventual beat frequency signal ỹ_IF,i^{τ_j}. The structure of (18) and (19) shows that f_Tx + kτ_i defines the center frequencies f_Tx,i of the sub-LFM signals. That is, it suffices to know the respective center frequencies of a series of LFM signals in order to establish the individual relationship between these LFM signals, i.e., the temporal gap between them (for a given slope k).
In this context, the temporal relation between the beat signal of (19) and the overall beat frequency signal of the reference LFM signal is directly related to the temporal/frequency relation between the respective sub-LFM signal and the reference LFM signal. That is, (19) can also be retrieved by applying the same sequence of rect(·) signals as in (10) to y_IF^{τ_j}(t)³ and performing again the substitution t̃_i = t − τ_i as before. Hence, the term kτ_iτ_j is the initial phase due to the temporal shift of a sub-LFM signal with respect to the reference LFM signal. In other words, the gap in the frequency spectrum is reflected as a temporal gap.
2) Construction of Sensing Matrix in Time Domain: For the construction of the sensing matrix, it is necessary to consider the maximal delay, since this defines the common window rect(·)⁴ for all beat signals of the individual systems. The sub-signals ỹ_IF,i^{τ_j}(t̃_i) of the multiple individual systems i and individual beat frequencies j can be arranged to yield

⎡ ỹ_IF,1 ⎤   ⎡ ỹ_IF,1^{τ_1} · · · ỹ_IF,1^{τ_N} ⎤
⎢ ỹ_IF,2 ⎥ = ⎢ ỹ_IF,2^{τ_1} · · · ỹ_IF,2^{τ_N} ⎥ x + e,    (20)
⎢    ⋮    ⎥   ⎢       ⋮                  ⋮       ⎥
⎣ ỹ_IF,M ⎦   ⎣ ỹ_IF,M^{τ_1} · · · ỹ_IF,M^{τ_N} ⎦

where the τ_1, . . ., τ_N are assumed to be equidistantly separated, τ_N = τ_{j,max}, and M is the number of sub-LFM signals or individual radar systems. (20) follows the same CS problem structure as described in (5). The matrix of (20) is given in a block (row) form. That is, each block row consists again of rows of the individual measurement time instances. The number of columns, i.e., the spacing between the τ_j, can be arbitrarily set so as to meet the required resolution. The subdivision along the block rows follows from the available bands, available systems, etc.
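A minimal sketch of this block sensing matrix, under assumed FMCW parameters, stacks the sampled beat signals of (19) for M = 2 co-located sub-systems over an equidistant delay grid; every value below (sample rate, slope, carrier, grid extents) is an illustrative assumption:

```python
import numpy as np

# Sketch of the block sensing matrix of beat signals (FMCW parameters
# assumed): M = 2 co-located sub-systems, N = 32 delay cells, cf. (19).
fs, k, f_tx = 1e6, 1e12, 77e9                 # sample rate, slope, carrier
n_t = 50                                      # samples per sub-chirp
t_i = np.arange(n_t) / fs                     # local time basis of each system
tau_sub = np.array([0.0, 100e-6])             # sub-chirp offsets tau_i
tau_grid = np.linspace(0, 0.4e-6, 32)         # equidistant delay grid tau_j

blocks = []
for tau_i in tau_sub:
    # beat signal for delay tau_j:
    #   exp(j*2*pi*(f_tx*tau_j + k*tau_i*tau_j - k*tau_j**2/2 + k*tau_j*t_i))
    phase0 = f_tx * tau_grid + k * tau_i * tau_grid - 0.5 * k * tau_grid**2
    blocks.append(np.exp(2j * np.pi * (phase0[None, :]
                                       + k * np.outer(t_i, tau_grid))))
A = np.vstack(blocks)                         # one block row per sub-system
```

Each block row is the dictionary of beat signals of one sub-system; the delay spacing of `tau_grid` can be set freely, as stated above, to trade resolution against coherence.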

D. Synchronization
This section takes into account the effect of time and frequency synchronization errors [21] on the beat frequency signal defined in (18). For a system of radars as discussed in Section III-C, it is necessary that the individual systems are synchronized so that their center frequencies, or rather the band gaps, agree with the assumptions that the sensing matrix is based upon. Or, vice versa, the actual individual center frequencies are known and the sensing matrix is constructed accordingly. Three main synchronization error terms are considered: error in transmitter frequency (f_Tx_e), error in receiver frequency (f_Rx_e), and error in time, since the receiver does not know exactly when the pulse was sent by the transmitter (t_R). Based on these errors, the modified times and frequencies are expressed as

f̃_Tx = f_Tx + f_Tx_e,  f̃_Rx = f_Rx + f_Rx_e,  t̃_R,i = t̃_i + t_R,

where f̃_Tx and f̃_Rx are the modified transmitter and receiver frequencies, and t̃_R,i is the modified time basis for the individual sub-LFM signals. If the system is co-located, f̃_Tx = f̃_Rx, or f_Tx_e = f_Rx_e. The sub-LFM signal remains as defined by (17). However, the down-mixing is affected by the synchronization errors and (18) is replaced by

ỹ_IF,i^{τ_j}(t̃_i) = rect(t̃_i/T_i) e^{j2π(f_Tx τ_j + kτ_i τ_j − (k/2)τ_j²)} e^{j2πkτ_j t̃_i} e^{2πj((f_Rx_e − f_Tx_e)(t̃_i − τ_i))} e^{−2πj(f_Rx + kτ_i)t_e} e^{2πj(k/2)(t_e² − 2t̃_i t_e)},

where t_e denotes the timing error in the local time basis resulting from t_R. Therefore, the synchronization errors appear as errors in phase. The term e^{2πj((f_Rx_e − f_Tx_e)(t̃_i − τ_i))} describes the error arising from the mismatch of the transmitter and receiver frequencies. This term disappears if a co-located system is considered. The term e^{−2πj(f_Rx + kτ_i)t_e} signifies an additional shift in the center frequency of the individual sub-LFM signals. The term e^{2πj(k/2)(t_e² − 2t̃_i t_e)} signifies an error in the delay from a target, which will lead to an error in the detected range. Since synchronization on the order of nanoseconds is required for the problem at hand, the synchronization errors can be overcome by using the same oscillator for the individual, co-located radar systems.

³ This has to be done while taking into account that the windowing needs to be done for each rect(·) of this sequence individually.
⁴ Or the design of the respective systems, which might differ from this straightforward mathematical description.

IV. SUBDIVISION-FUSION ALGORITHM
While the previous sections concentrated on the structure of the sensing matrix, the scope of this section is the estimation based on these sensing matrices. Although the structure of a signal having a spectral gap is fundamentally similar to a CS problem, this gap will limit the feasibility of the estimation. This observation is detailed in the first subsection. However, taking the radar application into consideration, there is a possibility to deal with this problem by designing an intermediate stage for the estimation. These observations are discussed in the second subsection and become the foundation for the derivation of an algorithm for this application in the third subsection. The fourth subsection is dedicated to the discussion of the proposed algorithm.

A. Problem Description
Be it in the frequency domain or the time domain, a gap essentially subdivides the sensing matrix into blocks. In order to get a good sparse estimate from the CS formulation, the columns of the sensing matrix should be as dissimilar as possible. In other words, their inner product should be very small. Now, due to the gap in rows, a part of the frequency content responsible for making adjacent columns dissimilar is lost. As a result, the degree of similarity of adjacent columns, measured by their inner product, increases. The larger the width of the gap, the more information is lost and the closer this inner product comes to one (considering normalized columns). This increases the coherence of the sensing matrix; as a result, the CS problem is ill-posed and conventional CS reconstruction methods fail to provide a correct estimate. Table I shows that, for a particular frequency band, the coherence of the sensing matrix increases with the width of the gap. It also demonstrates that the coherence values are not affected by the width of the whole band: the coherence depends solely on the ratio of the width of the gap to that of the whole band.
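The trend summarized in Table I can be reproduced with a toy Fourier-type sensing matrix (sizes and gap positions assumed): removing a centred block of rows drives the mutual coherence from (numerically) zero towards one as the gap widens.

```python
import numpy as np

def mutual_coherence(A):
    # largest inner product between distinct normalized columns
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.conj().T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Fourier-type sensing matrix on an N-point delay grid (toy size assumed)
N = 128
A_full = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)

mus = []
for gap in (0, 16, 32, 64):                   # rows removed around the band centre
    keep = np.r_[0:(N - gap) // 2, (N + gap) // 2:N]
    mus.append(mutual_coherence(A_full[keep]))
# mus grows monotonically with the gap width
```

For the full matrix the columns are orthogonal (coherence numerically zero); for a gap of half the band the coherence already exceeds 0.6 in this toy setting.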
In order to tie this argument to the working of specific CS algorithms, a greedy method known as Orthogonal Matching Pursuit (OMP) and a variation of the basis pursuit (BP) algorithm called the Least Absolute Shrinkage and Selection Operator (LASSO) are briefly discussed. OMP consists of two main steps: support update and residual calculation [22]. If the coherence of A is large, it implies that the columns are highly correlated and the neighboring elements of the sparse vector have very similar contributions to the measurement vector y. Therefore, the indices added to the support by OMP may not correspond to the true sparse estimate. In the case of LASSO or BP, high coherence means that the algorithm arbitrarily selects one of the similar elements of the sparse estimate and shrinks the others. The selected position may be completely different from the true sparse estimate. A detailed explanation of the effects of coherence on LASSO can be found under the topics of the 'neighbourhood stability condition' and the 'irrepresentable condition' [23], [24].
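For reference, the two OMP steps named above can be sketched as follows; the random low-coherence matrix and the planted 2-sparse vector are illustrative assumptions, chosen so that recovery succeeds:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal Matching Pursuit: s-sparse estimate of x from y = A x."""
    r, support = y.copy(), []
    for _ in range(s):
        # support update: pick the column most correlated with the residual
        j = int(np.argmax(np.abs(A.conj().T @ r)))
        support.append(j)
        # residual calculation: project y onto the selected columns
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# low-coherence example: a random matrix and a planted 2-sparse vector
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128)) / np.sqrt(40)
x_true = np.zeros(128)
x_true[[7, 90]] = [1.0, -0.8]
x_hat = omp(A, A @ x_true, s=2)
```

With a low-coherence matrix, the support-update step picks the true indices; with the highly coherent gapped matrices discussed above, the same correlation step picks among near-identical columns and fails, which is exactly the failure mode described in the text.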

B. Aspects From Radar and Compressed Sensing
Although the above outlined problem invalidates a straightforward application of CS to this class of problems, there are two aspects from radar and CS which can be beneficially used to get around this issue.
1) Radar: With respect to radar operation, a discrete separation of range grid points, i.e., assumed object positions, which appears as a natural consequence of the digital evaluation of the received signal, does not match the actual, possibly continuously-distributed positions of the objects. Thus, the predefined object positions τ_j define range cells in which the objects may reside. Hence, the entries of the sensing matrix define the center positions of these range cells. Any object which is in a range of ±cΔτ/2, with Δτ := τ_j − τ_{j−1}, around one of the equidistantly separated τ_j will be attributed to the same range cell, i.e., to the same τ_j. This also means that the width of a range cell can be arbitrarily set without losing the detection of the targets; only the resolution will be decreased or increased. It is to be noted that the detection of targets mentioned here does not imply identifying separate targets; indeed, one range cell may contain multiple targets, making them indistinguishable. Detection here refers merely to identifying the presence or absence of target(s) in a range cell.
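A small sketch of this range-cell attribution (cell width and target delays are assumed toy values): nearest-cell rounding assigns each continuous delay to a grid cell, and doubling the cell width merges two close targets into the same cell, i.e., detection is kept while resolution is traded away:

```python
import numpy as np

# Sketch of range-cell attribution (cell width and target delays assumed):
# nearest-cell rounding maps continuous delays onto the discrete grid.
dtau = 1e-7                                   # range-cell width in delay [s]
targets = np.array([1.23e-6, 1.26e-6, 3.04e-6])   # continuous target delays

cells = np.rint(targets / dtau).astype(int)   # nearest cell index per target
# doubling the cell width merges the two close targets into one cell:
cells_wide = np.rint(targets / (2 * dtau)).astype(int)
```

On the fine grid the first two targets fall into adjacent cells; on the widened grid they share a cell and become indistinguishable, though their presence is still detected.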
2) Compressed Sensing: A significant aspect from Compressed Sensing is the coherence of the derived sensing matrix. It is, in fact, defined by the normalized distance between the columns of the sensing matrix A, i.e., the inner product of the normalized columns, where each column corresponds to a delay τ j on the delay grid, as shown in (5).
3) Matching Both Aspects: Against this interpretation of coherence, it is clear that coherence and range cell width are essentially the same concepts-just seen from different perspectives. That is, the increase of the range cell width increases the gap between consecutive grid points. Intuitively, this pushes the adjacent columns to be more 'dissimilar'. In other words, the mutual coherence between the columns of the sensing matrix is reduced, resulting in a better-conditioned matrix.
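This equivalence of range-cell width and coherence can be checked numerically: for a gapped Fourier-type matrix (toy sizes assumed), keeping only every kth column, i.e., widening the range cells, visibly lowers the mutual coherence:

```python
import numpy as np

def mutual_coherence(A):
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.conj().T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

# gapped Fourier-type matrix (toy sizes assumed): 48 centre rows removed
N = 128
A_full = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
A_gap = A_full[np.r_[0:40, 88:128]]

mu_fine = mutual_coherence(A_gap)             # fine delay grid: high coherence
mu_coarse = mutual_coherence(A_gap[:, ::4])   # every 4th column: wider cells
```

In this toy setting the coherence drops from roughly 0.47 on the fine grid to about 0.13 on the coarser grid, which is the effect the subdivision step exploits.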

C. Algorithm Description
Based on the discussion in Section IV-B, this section presents details of the 'Subdivision-Fusion Algorithm'. Algorithm 1 denotes the subdivision step by S and the fusion step by B. The core idea is inspired by the work on fusion frames in [9], [10]. A direct application of [9], [10] to the current problem would result in a non-coherent estimation. Nevertheless, the concept of subdividing an ill-posed CS problem, followed by fusion of the results, can be utilized.
1) Subdivision: In order to reduce the coherence for better performance of the CS problems, the specific structure of the sensing matrix due to the radar framework is used to an advantage. Every element of the sensing matrix A corresponds to a specific frequency and delay grid point, A(f_i, τ_j). Based on the discussion in Section IV-B, if sub-matrices are constructed by taking columns at a specific distance from each other in range, the coherence is lowered. Thus, A is divided into k sub-matrices A_n, each containing the columns j_n = n, n + k, n + 2k, . . ., i.e., N/k columns in total, s.t. N/k ≫ m. The value of k is chosen such that the sub-matrices are horizontal matrices. Now, the original measurement vector y is represented as y = A_n x_n, n = 1, . . ., k. These k sub-problems are then directly solved using greedy methods or basis pursuit minimization. For this work, the Beurling LASSO (BLASSO) is used to obtain the k 'coarse-grid' estimates of the target scene, since it takes into account the possible basis mismatch of each sub-problem [25].
2) Fusion (Support Estimation): Let each element of these sub-matrices be known as a 'supercell'. Every supercell spans k fine-grid positions and is centered at a particular fine-grid position τ_{j_c}, as shown in Fig. 5. In the original problem, every element of the measurement vector y was represented by

y(i) = Σ_{j=1}^{N} A(f_i, τ_j) x(j),  i = 1, . . ., m.

Since the same measurement vector y is represented using the coarse-grid sub-matrices, different estimates are obtained,

y(i) = Σ_{j_n} A_n(f_i, τ_{j_n}) x_n(j_n),

where n = 1, . . ., k, i = 1, . . ., m, and j_n = n, n + k, n + 2k, . . ., with N/k ≫ m.
If a target lies exactly on τ_{j_c}, the energy is maximum at x(j_n). On the other hand, if a target lies at the border of two coarse grid cells, the energy is distributed almost equally between them, e.g., between x(j_n) and x(j_{n+1}). The value of x(j_n) is highest when the target is exactly on-grid and lower if the target is off-grid. In order to transfer the coarse-grid estimates from each sub-problem to a common fine grid, a k-element Correction Factor (CF) array is constructed as follows. First, an arbitrary on-grid target is considered on the fine grid and the subdivision step is performed. This gives k coarse-grid estimates for the said on-grid target. The inverse of the estimate at the supercell where the target lies exactly on τ_{j_c} will have a value of one and will become the central element of the CF array. The inverses of the values at the ±(k−1)/2 neighbouring cells fill the remaining elements of the array. Multiplication of the results from each sub-problem with this CF array mimics an auto-correlation operation.
The individual fine-grid estimates from each sub-problem are then added, and the whole target estimate becomes x̂ = Σ_{n=1}^{k} x̂_n, where x̂_n denotes the CF-corrected estimate from the n-th sub-problem. CS algorithms, such as BP or LASSO, do not give a binary estimate of the sparse vector, but a set of values representing the strength of each element x(j) of the vector. Therefore, once the target estimate on the fine grid is obtained, a variance-based thresholding algorithm such as Otsu's method [26] is used to filter the estimate before obtaining the final support. After thresholding, the corresponding support positions are stored in a support set S.
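The fusion and thresholding steps can be sketched as follows. The Otsu implementation is a minimal stand-in for the method cited as [26], and the two-element toy input is an assumption for illustration.

```python
import numpy as np

def otsu_threshold(x, bins=64):
    """Variance-based (Otsu) threshold on the magnitudes of a CS estimate.
    Minimal re-implementation: pick the bin edge that maximizes the
    between-class variance of the magnitude histogram."""
    hist, edges = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

def fuse_and_support(corrected_ests):
    """Sum the CF-corrected fine-grid sub-estimates and keep the indices
    that survive Otsu thresholding (illustrative sketch)."""
    total = np.sum(corrected_ests, axis=0)
    t = otsu_threshold(np.abs(total))
    return np.flatnonzero(np.abs(total) >= t)

ests = [np.array([0.0, 0.9, 0.05, 0.0]), np.array([0.02, 0.8, 0.0, 0.03])]
S = fuse_and_support(ests)  # support set of the fused estimate
```

The returned index set plays the role of the support set S used by the final estimation stage.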

3) Fusion (Target Estimation): The final estimate is given by solving a standard BLASSO problem restricted to the columns of A indexed by the support set S. Due to the use of BLASSO in this final estimation step, the 'subdivision-fusion' algorithm can be viewed as a nested-CS algorithm.
D. Discussion 1) Comparison Against Group Sparsity: In many cases, CS can be viewed as a regression problem, where the aim is to
find important explanatory elements that predict the response variable y. As opposed to individual elements, group-sparsity-based CS methods focus on selecting groups of input variables x_i that best represent the response variable. The strength of each group then depends on the basis A_i. The extension of the popular LASSO problem to group sparsity gives [7]

min_x || y − Σ_{i=1}^{N} A_i x_i ||_2^2 + λ Σ_{i=1}^{N} (x_i^T K_i x_i)^{1/2},

where N is the number of groups and the K_i denote positive definite matrices.
The above equation represents y as the direct sum of A i x i . In the Subdivision-Fusion algorithm, the individual CS problems may be viewed as different groups. However, y is not the direct sum of these groups. Due to the radar-based structure of the sensing matrix, it is important to perform the summation at the correct positions based on the correction factor described in Section IV-C.
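For concreteness, the group-sparse objective discussed above can be evaluated as below. The function name, the toy inputs, and the exact grouping are assumptions based on the group LASSO of [7], not the paper's code.

```python
import numpy as np

def group_lasso_objective(y, As, xs, Ks, lam):
    """Objective of the group-sparse LASSO:
    ||y - sum_i A_i x_i||_2^2 + lam * sum_i sqrt(x_i^T K_i x_i).

    Note that y is modelled as the direct sum of the group contributions,
    which is exactly what the Subdivision-Fusion setting does NOT assume.
    """
    residual = y - sum(A @ x for A, x in zip(As, xs))
    penalty = sum(np.sqrt(x @ K @ x) for x, K in zip(xs, Ks))
    return float(residual @ residual + lam * penalty)

# toy check: one group, zero residual, penalty sqrt(2)
y = np.ones(2)
val = group_lasso_objective(y, [np.eye(2)], [np.ones(2)], [np.eye(2)], lam=1.0)
```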
2) Comparison Against Fusion Frames: A method to reconstruct a distributed sparse signal from unavailable or noisy measurements is discussed in [9]. There, a mathematical model of fusion frames is used in combination with modified CS concepts to divide the problem into smaller sub-problems. The individual CS results are then fused to obtain the final sparse estimate x̂ = S(x̂_1, ..., x̂_N), where N is the number of groups, S denotes the fusion operator, and x̂_i denotes the individual CS estimates from the local recovery problems. Aceska et al. [9] assume that the scene has distributed sparsity. Therefore, each subspace of the fusion frame system deals with a different part of the scene and captures a different portion of the sparse estimate. However, in this paper, the same scene is detected using different parts of the frequency spectrum. The work presented here involves the subdivision of the sensing matrix corresponding to different parts of the frequency spectrum; it does not involve subdivision of the scene.
3) Limitations Based on Sparsity: The design of a radar system limits the number of measurements, m. Additionally, the requirement of low coherence of the sensing matrix A limits the super-resolution factor, i.e., the number of fine-grid points N that can be supported. Therefore, given a specific m and N, an upper bound on the sparsity s of the scene can be determined from the relation [17]

m ≥ 2s ln(N/s). (28)

Since N ≫ s, (28) can be approximated by

m ≥ 2s ln(N), (29)

which makes m / (2 ln N) the upper bound on the sparsity for a given N and a limited number of measurements m.
Another limitation of this approach, which also follows from the above inequalities, defines the maximal factor for the coarse grid. If it is assumed that, for a sparsity s, all objects fall into different coarse cells and all k fine-grid positions of these coarse cells are selected as possible support positions, a total of sk possible support positions are defined for the second stage of the algorithm. Since this stage performs a CS estimation based on a selection of sk columns of A, sk should not surpass m, i.e., m ≥ sk, which has the same form as the inequality in (29) if the tighter bound based on (28) is used.
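The two bounds above can be evaluated numerically. The helper names are illustrative; the numbers plug in the simulation parameters of Section V (m = 2500, N = 10000, 70 targets).

```python
import math

def max_sparsity(m, N):
    """Approximate sparsity upper bound from m >= 2 s ln(N), i.e.
    s <= m / (2 ln N), the N >> s approximation of m >= 2 s ln(N/s)."""
    return m / (2.0 * math.log(N))

def max_coarse_factor(m, s):
    """Largest subdivision factor k allowed by m >= s*k in the fusion stage."""
    return m // s

m, N = 2500, 10000
s_max = max_sparsity(m, N)        # ~135, so the 70-target scene is admissible
k_max = max_coarse_factor(m, 70)  # largest k for the 70-target scene
```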

4) Relation to Chirp Sensing Matrices:
A similar concept using a structured sensing matrix is explored in [27]. There, chirps define the columns of the sensing matrix, with each chirp having a different base frequency and chirp rate. The measured signal is thus a weighted combination of a number of different chirps. In the present work, however, a single gapped chirp is considered as the measured signal, in accordance with the radar framework. Applebaum et al. [27] do not mention any links to radar applications.
5) Relation to Sequential Estimation Methods: Donoho et al. [28] define the problem of missing measurements as a case of having a well-determined but noisy system. Their work uses the StOMP algorithm to sequentially generate sparse estimates and perform stage-wise fusion. Although the problem addressed is similar, the method differs significantly from the current work, since here the sparse estimates are generated independently and combined in a single fusion step.

V. RESULTS
This section presents results demonstrating the performance of the proposed algorithm. First, simulation results from the Subdivision-Fusion algorithm are compared to a direct CS method for a particular scene. In all the following simulations, BLASSO is used as the CS method. The Subdivision-Fusion and direct BLASSO algorithms are tested for different levels of noise and further analysed using phase transition diagrams. The algorithm is also tested on real measurement data.

A. Simulation Results
As demonstrated in Section IV-A, the mutual coherence problem depends only on the ratio of the missing data to the available data, and not on the bandwidth itself. Therefore, in the interest of computational load, a frequency band of 5 kHz has been used for the simulations. The gap is half of the whole bandwidth, i.e., 2.5 kHz. Based on the sensing matrix derivation discussed in Section III-B, a matrix A is constructed using the frequency-domain representation of the measurement vector y. Considering the width of the missing band to be half the total bandwidth and a pulse duration of 0.5 seconds, y consists of 2500 measurement samples. The range grid is constructed with 10000 grid points, with the grid spacing depending on the range resolution corresponding to the bandwidth. Fig. 6 shows the estimated support and the reconstruction result from the Subdivision-Fusion algorithm and from BLASSO. The target scene consists of 70 targets (in red), placed at randomly selected range-grid points; this sparsity is within the theoretical bounds given by (28) and (29). The frequency-domain sensing matrix described in (16) is used and f_Tx is set to zero for convenience. The mean-squared error from the Subdivision-Fusion algorithm is found to be 8.5449 × 10^-6, while that from the BLASSO method is 1.3920 × 10^-3. This shows that the proposed method is much more effective than direct CS in the gapped-band scenario. It is also evident from Fig. 6 that BLASSO (in blue) gives a large number of false positives near the actual targets, which results from the coherence issues described in the previous sections. The Subdivision-Fusion algorithm (in green) gives a much better reconstruction in this respect, with far fewer false positives.
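A gapped-band sensing matrix of the kind used in this simulation can be sketched as below. The element model A(f_i, τ_j) = exp(−j 2π f_i τ_j) is an assumption standing in for the paper's Eq. (16) with f_Tx = 0, and the small grid sizes are chosen only so the example runs quickly.

```python
import numpy as np

def gapped_sensing_matrix(freqs, delays, gap):
    """Frequency-domain sensing matrix with the rows inside the band gap
    removed (illustrative model).

    gap is a (f_lo, f_hi) tuple marking the missing sub-band; only the
    frequencies outside this interval contribute measurement rows.
    """
    keep = (freqs < gap[0]) | (freqs > gap[1])
    F = freqs[keep]
    return np.exp(-2j * np.pi * np.outer(F, delays)), keep

# 5 kHz band with the central half missing, as in the simulation setup
freqs = np.linspace(0.0, 5e3, 50)
delays = np.linspace(0.0, 0.2, 200)
A, keep = gapped_sensing_matrix(freqs, delays, (1.25e3, 3.75e3))
```

Removing the central half of the rows is what drives up the coherence of A and motivates the subdivision step.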
Next, both algorithms are tested for gaps of different widths in the presence of noise. White Gaussian noise is added to the received signal y_Rx^b in the time domain. Fig. 7 demonstrates the performance of both algorithms for gaps of widths corresponding to 0.2, 0.3, 0.4, and 0.5 of the total bandwidth. In each case, the signal-to-noise ratio (SNR) is varied from 0 to 20 dB. It is observed that the Subdivision-Fusion algorithm performs better than direct BLASSO for lower values of SNR, especially in the range of 0-10 dB.
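The noise injection at a prescribed SNR can be sketched as follows; the paper does not give its exact noise-generation code, so the scaling convention here (SNR defined on average signal power, real-valued signal) is an assumption.

```python
import numpy as np

def add_awgn(signal, snr_db, seed=None):
    """Add white Gaussian noise to a real time-domain signal so that the
    resulting SNR equals snr_db (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(np.abs(signal) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))  # target noise power
    noise = rng.normal(scale=np.sqrt(p_noise), size=signal.shape)
    return signal + noise

sig = np.sin(2 * np.pi * np.arange(1000) / 50.0)
noisy = add_awgn(sig, snr_db=10.0, seed=0)
```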

B. Phase Transition Diagrams
In order to analyze the performance of the proposed algorithm for different levels of sparsity and different dimensions of A, phase transition diagrams are constructed. These diagrams represent the probability of a successful reconstruction over different levels of sparsity and different numbers of measurements. For each point (m/N, s/m), 50 iterations of the proposed method are executed. The success or failure of each iteration is determined by calculating the mean-squared error between the ground truth and the reconstruction result; an error of 0.01 or less is considered a success. The averaged success rate over the 50 iterations is then shown at the corresponding position in the phase transition diagram. Since the sensing matrix is always horizontal (N ≫ m), the number of columns (N) considered for the phase transition diagrams must correspond to the maximum number of available measurements (m). Fig. 8 shows the phase transition diagrams for the direct CS and Subdivision-Fusion algorithms applied to the gapped-band problem. When the number of measurements m is lower, the Subdivision-Fusion algorithm shows a higher success probability than direct CS. In particular, when the ratio m/N goes above 0.2, the success probability rises more sharply for the Fusion method than for the direct CS method. Due to the use of only 50 iterations per point and the random selection of target positions in each iteration, some variance is noticeable and the rise in success probability is not very smooth. However, a clear distinction in performance is obtained, especially when the number of measurements m is low. Fig. 9 gives the phase transition diagrams for the Subdivision-Fusion algorithm using the traditional Fresnel integrals and the PSP approximation [16], [19], [20]. 5 iterations have been used per point.
The probability of success is similar for the same values of s/m and m/N, thereby demonstrating that the PSP provides a good approximation to the Fresnel-integral method discussed in Section III. Fig. 10 shows the phase transition diagrams for the case of two gaps having widths of 0.5 and 0.2 of the whole frequency band. 5 iterations have been used per point for both algorithms. As expected, for
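The phase-transition procedure described above can be sketched as a generic harness. The solver is pluggable; the paper uses BLASSO, which is not reproduced here, so a minimum-norm least-squares stand-in is used just to make the harness run, and the Gaussian sensing matrix is an assumption (the paper uses its radar-structured A).

```python
import numpy as np

def phase_transition(reconstruct, N, m_grid, s_grid, n_iter=50, tol=1e-2, seed=0):
    """Empirical phase-transition diagram: success probability of
    `reconstruct(A, y)` over a grid of (m, s) points, where success means
    MSE <= tol against the ground truth (sketch of the text's procedure)."""
    rng = np.random.default_rng(seed)
    P = np.zeros((len(m_grid), len(s_grid)))
    for a, m in enumerate(m_grid):
        for b, s in enumerate(s_grid):
            ok = 0
            for _ in range(n_iter):
                A = rng.standard_normal((m, N)) / np.sqrt(m)
                x = np.zeros(N)
                x[rng.choice(N, size=s, replace=False)] = 1.0  # random targets
                x_hat = reconstruct(A, A @ x)
                ok += np.mean((x_hat - x) ** 2) <= tol
            P[a, b] = ok / n_iter
    return P

def oracle(A, y):
    """Minimum-norm least-squares stand-in for the CS solver."""
    return np.linalg.lstsq(A, y, rcond=None)[0]

P = phase_transition(oracle, N=50, m_grid=[10, 25, 40], s_grid=[2, 4], n_iter=5)
```

Replacing `oracle` with a BLASSO (or direct CS) solver reproduces the kind of diagram shown in Figs. 8-10.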

C. Application to Real Measurement Data
In order to present a proof of concept using real-world data, an FMCW radar with a bandwidth of 24 GHz has been used to detect a point on a metal plate at a distance of 0.32 m from the radar. The FMCW radar operates using a transceiver unit, i.e., the same unit is used to transmit and receive the signal. A focusing horn lens antenna with a 5° opening angle has been used for this experiment. Fig. 11 shows the measurement setup.
The ground truth is obtained by performing an FFT on the measurements from the entire frequency band. To test the performance of the Subdivision-Fusion algorithm, gaps with widths of 0.1, 0.3, 0.5, and 0.7 of the total bandwidth are introduced, which correspond to gaps in time as described in Section III-C; the dedicated FMCW-based sensing matrix from (20) is used. Fig. 12 shows the corresponding results. In all cases, a prominent peak is observed at the correct range position. The smaller peaks appear due to multiple reflections from the target. It is observed that the results from the Subdivision-Fusion algorithm have a lower mean-squared error than those from direct BLASSO for wider gaps in the frequency band. Since the aim is to recover the ground-truth resolution in the presence of a band gap, the difference in error is not as large as it would be for super-resolution. Nevertheless, the error values show that the proposed algorithm performs better when the coherence is high.

VI. CONCLUSION
This work aims to overcome the limitations of CS applied to gapped-band radar systems. The presence of a continuous gap in the frequency band leads to high coherence values for the typical CS sensing matrix, resulting in an ill-posed CS problem. A new method is proposed based on the subdivision of the sensing matrix and the fusion of the results from the different sub-problems. Simulation results analysing the performance of the method in comparison to existing direct CS methods are presented, and the algorithm is also tested on real measurement data from an FMCW radar. Future work will include the optimization of coherence subject to band-gap width and system placement, where, given a specific gap in the band, the rows and columns leading to the lowest coherence may be selected. A more detailed evaluation of real measurement data with different super-resolution factors and multiple band gaps will also be performed.