Aperture Synthesis With Digital Array Radars and Covariant Change of Wavenumber Variables

Attributes of digital array radars are leveraged in enhancements of wideband frequency-wavenumber (omega-k) methods to achieve 1) single-pulse, short-range imaging from a stationary array; 2) single-pulse, all-range, high-density, digital beamforming-on-receive from a stationary array; 3) multiple-pulse aperture synthesis for short-range imaging with sensor movement; and 4) multiple-pulse inverse aperture synthesis for long-range imaging with tracked object movement. Modifications to conventional omega-k algorithms used in synthetic aperture radar are introduced to accommodate antenna element level data, real array element spacing, large scene size and small array size (compared to scene size). Large scene size with k-space processing is handled by a novel Huygens-Fresnel transfer function that does not fully rely on zero-padding to resolve array and scene size mismatch. Aperture synthesis with generalized pulse-to-pulse sensor-step operations is supported. Connections between omega-k wavenumber migration and a covariant change of variables transform associated with Dirac's spectral models of free and scattered electromagnetic fields are established.


I. INTRODUCTION
ADVANCES in wideband signal processing for aperture synthesis that utilize covariant change of wavenumber variables are presented and shown to be related to the quantum field theory work of P. A. M. Dirac [1]. The introduced omega-k enhancements and capabilities rely on the availability of element (preferred) or subarray (with concomitant grating lobes) data channels of a digital array radar (DAR).
The advanced aperture synthesis methods presented here are based on a novel baseline single-pulse omega-k (frequency-wavenumber domain) method that is also introduced in this paper. The baseline algorithm (usable without aperture synthesis) empowers single-pulse imaging at short range with a baseline (nonsynthesis) resolution. At long range, the single-pulse baseline method also provides means for high-density digital beamforming-on-receive (HD-DBF). The point spread function (PSF) of short-range imaging becomes increasingly overlapped as the PSF morphs with increasing range into the beam spread function (BSF) of long range. The angular resolution of the short-range PSF is the same as that of the long-range BSF. In both cases the single-pulse cross-range spatial resolution degrades with range.
The single-pulse baseline method can be viewed as an all-range digital beamformer that is not constrained by a plane-wave assumption. The single-pulse omega-k method produces the same results achieved by time-space domain spherical backpropagation. Spherical wavefield inversion is accomplished at all ranges with efficient omega-k domain processing.
With use of Dirac's covariant frequency-wavenumber domain descriptions of free-space and scattered electromagnetic (EM) fields, the avoidance of plane-wave signal model approximations empowers a capability to coherently integrate multiple single-pulse data products (images and HD-DBF) for aperture synthesis. If relative movement increases angular dwell or reduces the range between sensor and scene on a pulse-to-pulse basis, then cross-range resolutions progressively improve as single-pulse images are coherently fused in the pixel domain for aperture synthesis. The resolution then approaches a range-independent value that is a fraction of the passband wavelength.
A dense lattice that specifies a pixel grid for imaging and a receive-beam aim-point grid for beamforming is required. This lattice grid spacing matches the DAR's antenna element (AE) spacing. The beam spacing of long-range beamforming is identical to the pixel spacing of short-range imaging; hence, the "high density" adjective that labels the HD-DBF use case. At short range, the "HD" can also imply "high definition".
The single-pulse method can image a large field-of-view (FoV) if illuminated by a single transmit beam. The FoV can be parsed into smaller regions-of-reconstruction (RoR) for processing. The tessellation of RoRs required to cover the transmit beam's FoV can be processed simultaneously in parallel. Hence, the computational efficiencies of omega-k domain processing are further advanced by a system-level solution architecture that does not require full-scene data aggregations in parallel computing systems.
1) Early Observations of "Solopulse": After an extensive literature survey and study conducted in preparation for authoring [2], and after an initial research activity that addressed a formulation of the omega-k algorithm for wideband and three-dimensional synthetic aperture radars (SARs) operating at very short range [3], our research turned to the use of DARs for inverse SAR (ISAR) imaging of highly maneuvering drones. The DARs in this initial study were assumed to operate in a colocated multiple-input/multiple-output (MIMO) mode with time-division orthogonal waveforms. The DAR assumption provided, for each transmitted pulse, multiple receive data channels. Conceptually the synthetic aperture's spatial sampling interval is reduced to that of the element spacing of the DAR; i.e., a fraction of the wavelength of the highest frequency within the transmitted signal's passband.
During this drone surveillance research we observed evidence that suggested a form of imagery was obtainable from a single pulse. This serendipitous observation led us to further investigate and advance this new type of imaging modality, which we eventually labeled "Solopulse".
2) Aperture Synthesis With "Solopulse": Solopulse is intimately related to both digital beamforming-on-receive and to SAR. Since this Special Topics issue is on synthetic apertures, a description of Solopulse from the SAR perspective is emphasized in this presentation.
A SAR processor ingests single-channel-radar data from a collection of pulses gathered with a relatively large spatial sampling interval (determined by the temporal pulse repetition interval and platform speed) to image a scene that is comparatively small relative to the sensor flight-line (see Fig. 1).
Our drone surveillance research took us to complementary situations where the (real) aperture is much smaller (defined by the DAR length without sensor movement) and with fields-of-view (scenes) that are much larger than the DAR's size. A Solopulse processor ingests multiple-channel-DAR data of a single pulse with a spatial sampling interval that is equal to the AE spacing. The single-pulse data sample count is determined by the number of AEs in the DAR.
In this paper, we advance four-dimensional wavenumber migration methods to support this "SAR-with-DAR" or "Solopulse" concept. The resulting adjustments require the development of what we call a Huygens-Fresnel transfer (HF-transfer) function. The HF-transfer handles (without total reliance on zero-padding) the spatial expansion of the measured DAR data set to that of the cross-range extent of a larger scene.
The choice of some HF-transfer reference point within the scene identifies an effective point of reference for spherical backpropagation (via k-space processing) from sensor to scene. Also, if the HF-transfer reference point is updated pulse-to-pulse with relative sensor-scene (or object) movement estimates, then aperture synthesis with generalized sensor stepping modes is supported.
3) Early Validations of "Solopulse": Solopulse was invented at Georgia Tech in 2017 [3], [4], [5]. There has been much follow-on effort to validate and mature the concept and explore potential use cases via modeling and simulation. Initial software simulations included models of both environmental noise and receiver hardware imperfections. Hardware prototyping activities were conducted at the Georgia Tech Research Institute (GTRI) under internal research and development funding during 2019-2021 and also under external funding during 2021-2022 [6]. Research goals included activities to validate and verify the Solopulse concept by collecting and analyzing data measured in an anechoic chamber and in an open laboratory. More advanced front-end array models were then utilized to produce additional simulated, but increasingly realistic, data-cubes at the hardware performance levels of potential Solopulse antenna arrays. Various error tolerances were evaluated, including frequency-independent amplitude/phase offset errors, element dropouts, misaligned array elements and channel response mismatch. These initial studies established that Solopulse has a measure of robustness in the presence of hardware imperfections. The laboratory work of [6] demonstrated that the omega-k domain processing of Solopulse produces the same beam function as time-space domain spherical backpropagation, but with the higher computational efficiency afforded by a k-space approach.
4) Paper Outline: The remainder of this introductory paper on Solopulse with a focus on the aperture synthesis perspective is outlined as follows. Section II provides an introductory overview of Solopulse foundations, including discussions on concepts, covariance, spherical wave theory and inversion, and the algorithmic system model. Section III presents examples and analyses of various Solopulse data products, including single-pulse images and HD-DBF products, and multiple-pulse aperture synthesis images. Section IV gives a detailed explanation of Solopulse signal processing. Section V overviews the various methods used to describe EM spectra related to wave motion equations, with an overview of Dirac's results. Section VI summarizes status and plans.

A. Solopulse Aperture Synthesis Concepts
To better illustrate the dual relationship between Solopulse and SAR, consider the concept shown in Fig. 2, where each spatial pulse repetition interval of SAR is replaced with an imagined DAR of equal length and with a number of (single-pulse) data channels equal to the number of AEs in the DAR. This idea introduces the availability of contiguous element-level array data across the synthetic aperture with DAR-AE spacings. Furthermore, this idea invites consideration of the following questions:
• Can imagery be formed with multiple channel (AE) data from a single pulse with the DAR in a fixed position?
• Can the DAR be more generally maneuvered during aperture synthesis?
• Can coherent fusion, possibly with complex pixel additions, occur post single-look image formation?
This paper establishes that Solopulse provides affirmative answers to these questions.
As illustrated by the SAR-like side-stepping example in Fig. 3, Solopulse produces imagery with each pulse. Multiple images can be coherently fused (in the pixel domain) with no particular sensor-scene motion geometry required to perform aperture synthesis. Aperture synthesis requires that the HF-transfer's reference point be updated pulse-to-pulse. The resulting multiple-pulse data products have image quality (sensitivity and resolution) levels that depend on the mode's step-interval size(s) and aperture length.

B. Covariance
Certain aspects of radar signal propagation modeling become more tractable and realistic when spherical EM wavefields are approached as relativistically covariant time-space fields that are also describable by corresponding frequency-wavenumber domain spectral models. The spectral descriptions of quantum field theory (QFT) and quantum electrodynamics (QED) prove particularly useful in wavenumber domain algorithm development for radar signal processing. Like light, radar signals are electromagnetic and covariant principles can be applied to advantage in algorithm development. This is one of the objectives of Solopulse signal processing.
1) Time-Space Covariance: Covariant analysis harmonizes time and space observations [7], [8]. Within the context of the Special Theory of Relativity, time and space can be viewed as a single entity, time-space [9], [10]. Covariance requires that the square of any change in time-space "distance" (Δs)² between two time-space points (events) satisfy (Δs)² = c²(time interval)² − (space interval)², with c representing the speed of propagation. If the separation interval Δs is infinitesimal, then the difference Δ goes to the differential d and the temporal dt and spatial dx, dy, dz are introduced: (ds)² = (c dt)² − (dx² + dy² + dz²). The time-space interval between any two events is a geometric quantity, and all observers (possibly in relative motion) measure "4-vector" time-space coordinates in a way that preserves, as an invariant, the differential difference-of-squares.
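This invariance is easy to verify numerically. The sketch below (our illustration, not from the paper; the event coordinates and boost speed are arbitrary choices) checks that a Lorentz boost along x preserves the difference-of-squares interval:

```python
import numpy as np

# Numerical check (illustration only) that the interval
# (ds)^2 = (c dt)^2 - (dx^2 + dy^2 + dz^2) is preserved by a Lorentz boost.
c = 299792458.0                      # propagation speed (m/s)
beta = 0.6                           # boost speed as a fraction of c (arbitrary)
gamma = 1.0 / np.sqrt(1.0 - beta**2)

dt, dx, dy, dz = 2.0e-6, 150.0, 40.0, -25.0   # arbitrary event separation

s2 = (c * dt)**2 - (dx**2 + dy**2 + dz**2)    # interval in the original frame

# Boost along x: ct' = gamma*(ct - beta*x), x' = gamma*(x - beta*ct).
ct_p = gamma * (c * dt - beta * dx)
dx_p = gamma * (dx - beta * c * dt)
s2_p = ct_p**2 - (dx_p**2 + dy**2 + dz**2)

assert np.isclose(s2, s2_p, rtol=1e-9)        # invariant under the boost
```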
Classical descriptions of space utilize three-dimensional vectors, or "3-vectors". Extension of radar wave theory from a classical (nonrelativistic) to a covariant (relativistic) form is facilitated by "4-vectors". This extension holds for all relative velocities, whether fast, slow or nil.
Covariance does not place an analysis into the Minkowski space [11]. The noncovariant Minkowski form (ds)² = (jc dt)² + (dx² + dy² + dz²) that uses jct for the time coordinate seeks to retain Euclidean behaviors as noted in [12]. The relativistic and covariant time-space universe in which we all live is non-Euclidean.
2) Frequency-Wavenumber Covariance: Spectral (frequency and wavenumber) properties of free-space EM propagation are more readily obtained when dealt with in relativistic 4-vector forms. Dirac's covariant formulation of electromagnetic fields requires 4-vector analysis of scalar-vector potential fields of both free and scattered EM fields. In quantum mechanical disciplines, unbounded photon (free-space EM field) behaviors can be described in either a 4-vector time-space difference-of-squares χ domain or a 4-vector frequency-wavenumber difference-of-squares κ domain. The complementary k-space model of the κ-domain is such that energy (signal frequency) and momentum (directed radiation) are used in a 4-vector.
3) Covariance and Solopulse: The Solopulse signal spectrum is related to the radar's time-space data by a four-dimensional temporal-spatial Fourier transform. Subsequent signal processing seeks to maintain covariant relationships among wavenumber data samples by moving or "migrating" the wavenumber sample positions. Aperture wavenumber samples k_u are positioned such that the squared magnitude |k_u|² is equal to the square of the signal's temporal wavenumber k_ω². Said another way, the difference-of-squares κ = k_ω² − |k_x|² = 0 is maintained or held as "covariant" by wavenumber migration. The 4-vector Fourier transform between the χ and κ domains ensures that the covariance of the corresponding time-space (t, x) domain manifold χ = (ct)² − |x|² = 0 is also preserved.
Covariant systems preserve the Lorentz invariance of both time-space and frequency-wavenumber entities.
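A minimal numerical sketch of the covariant constraint (our construction, with arbitrary wavenumber values): the along-range component of each migrated sample is positioned so the difference-of-squares κ vanishes.

```python
import numpy as np

# Sketch (illustration only): position the along-range wavenumber component
# so that kappa = k_omega^2 - |k|^2 = 0 is held covariant.
k_omega = np.linspace(80.0, 120.0, 5)   # temporal wavenumbers (rad/m, arbitrary)
k_u = np.linspace(-50.0, 50.0, 7)       # cross-array wavenumbers (rad/m)

KW, KU = np.meshgrid(k_omega, k_u, indexing="ij")
KV = np.sqrt(KW**2 - KU**2)             # along-range component (propagating region)

kappa = KW**2 - (KU**2 + KV**2)         # covariant difference-of-squares
assert np.allclose(kappa, 0.0, atol=1e-8)
```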

C. Spherical Wave Theory
The foundational elements of Solopulse's covariant spherical wave theories are the Huygens wavelet, the Fresnel wave field and an entity that we call the Huygens-Fresnel spectrum.
1) Huygens Wavelet: A Huygens wavelet is an impulsively thin EM sphere centered at r = 0 that expands with increasing time. Since the impulsive Huygens wavelet δ̊ is viewed as a distributed singularity, generalized function theory applies [1], [13], [14]. The impulsive wavelet "density" spatially decays at the rate of 1/r. The spherical attribute is indicated iconically by placing a "•" over δ to obtain δ̊. Bold fonts are used in 3-vector descriptions. Double-scripted notation hh(t, r) is sometimes used to emphasize the existence of both the temporal t and spatial r domains. Ordered upper-case letters are used to indicate the result of a temporal Fourier transform Hh(ω, r) or the result of both temporal and spatial Fourier transforms HH(ω, k_r). A radial "r" variable is sometimes used instead of a rectilinear "x" variable in anticipation that, with point source models, the spatial analysis will have spherical symmetries that depend only on radial distance r = |r|. Scalar analyses of EM vector fields are common when spherical symmetries exist, in which cases differential equations of f(x) often go to rf(r) [2]. The Huygens wavelet was a key element used by Einstein in his development of the Special Theory of Relativity [9].
2) Fresnel Wave Field: The temporal Fourier transform of a Huygens wavelet yields the static (time-independent) Fresnel wave field, where k_ω = ω/c. Note that k_ω may be positive or negative. If specified by a single value k_ω the situation can be called monochromatic; if by a set of values {k_ω}, the situation becomes polychromatic. In radar signal analysis, consideration of a continuous k_ω band, as specified by a bounded set a < k_ω < b for passband sensors, is useful. A Fresnel wave field for just one temporal frequency ω_c is shown in Fig. 4(a). Huygens' wavelet is a solution to a covariant (difference-of-squares) time-space domain wave motion equation derived from the Maxwell-Heaviside equations [15]. Fresnel's wave field is a solution to the (difference-of-squares) frequency-space domain Helmholtz wave motion equation. The Huygens wavelet and the Fresnel wave field both originate at point singularities. Both spherical wave functions are solutions when the forcing functions (ff and Ff) of the corresponding wave motion equations are point singularities. Hence, these can be called the Huygens Green-function and the Fresnel Green-function [16].

3) Huygens-Fresnel Spectrum:
One might expect that the spatial (3D) Fourier transform of the Fresnel wave field Hh(k_ω, r), or the combined temporal-spatial (4D) Fourier transform of the Huygens wavelet hh(t, r), would provide what we call the Huygens-Fresnel (HF) spectrum HH(k_ω, k_r), which is characterized by a difference-of-squares, frequency-wavenumber domain, wave motion equation. The HF-spectrum is anticipated to embody an alternate 4-vector (k_ω, k_x) frequency-wavenumber domain expression of the same Huygens wave motion (1) that occurs in the 4-vector (t, x) time-space domain. The spatial Fourier transform HH of the Fresnel wave field Hh can be obtained computationally with a Discrete Fourier Transform (DFT). Fig. 4(b) shows the computed HH function that results when a three-dimensional DFT is applied to the Fresnel wave field of Fig. 4(a).
A monochromatic sample of the HF-spectrum is related to the Ewald sphere of x-ray crystallography [17], [18], [19], [20], [21]. The Ewald sphere of the computed HH of Fig. 4(b) is clearly evident. The banding on the Ewald sphere is what we call the k_x-space locator sinusoid exp(j x_n · k_x) that expresses the location x_n of the x-space source point δ(x − x_n) of the Fresnel wave field of Fig. 4(a).
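A small numerical sketch can make these structures concrete. The code below (our illustration; a 2D stand-in for the 3D case, with arbitrary grid parameters) verifies that the spatial DFT of a monochromatic point-source field concentrates on a ring of radius k_ω, the 2D analogue of the Ewald sphere. A source offset from the origin would additionally impose the locator-sinusoid banding on this ring.

```python
import numpy as np

# 2D stand-in (illustration only): the spatial DFT of a monochromatic
# point-source field exp(j*k_omega*r)/r concentrates near |k| = k_omega,
# the 2D analogue of the Ewald sphere.
N, dx = 256, 0.1
ax = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(ax, ax, indexing="ij")
r = np.hypot(X, Y)
k_omega = 10.0                                   # rad/unit, well under the pi/dx Nyquist

field = np.exp(1j * k_omega * r) / np.maximum(r, dx)   # regularized at r = 0
spec = np.fft.fftshift(np.fft.fft2(field))

k_ax = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
KX, KY = np.meshgrid(k_ax, k_ax, indexing="ij")
K = np.hypot(KX, KY)

on_ring = np.abs(K - k_omega) < 1.0
off_ring = (np.abs(K - k_omega) > 3.0) & (K < 0.8 * np.pi / dx)

# Spectral magnitude on the ring dominates the off-ring background.
assert np.abs(spec)[on_ring].mean() > 2.0 * np.abs(spec)[off_ring].mean()
```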

4) Spherical Wavefield Inversion:
One approach to modeling a spherically scattered wave field is to describe the interaction of an incident wave on a bounded region that contains scatterers [22], [23], [24], [25]. The incident field is viewed as energizing each scatterer, which in turn, if certain conditions are met [26], can be viewed as each creating its own isotropically scattered field, a portion of which is received back at the antenna. These concepts are based on Huygens principles [27]. An approach to forming an image from scattered wave field data is through wave field inversion [28]. As one option, the wave field inversion task can be formulated in the time-space domain. Such algorithms, sometimes applied in SAR, are predominantly within the class of spherical back-propagation algorithms [29],  [30]. Corresponding SAR inversion methods can be developed in frequency-space and frequency-wavenumber domains.

D. Solopulse System Model
Solopulse is able to image a scene by spherical wavefield inversion performed with the k-space processing illustrated by the block diagram of Fig. 5. If illuminated during transmit actions, the scene can even be the entire surroundings of the radar, thereby delivering on the futuristic vision of Skolnik's "surround" or "ubiquitous radar" [31]. Solopulse signal processing can be configured to create imagery within one or more RoRs, which may be a subset of a larger FoV illuminated by the transmitted pulse. Solopulse reconstructions of multiple RoRs can be performed simultaneously to adapt the Solopulse algorithm to parallel computing architectures. Parallelization reduces the computational latency of large-scene or full-surround Solopulse reconstructions.
The number of RoRs obtainable with a given radar system is determined by the sensitivity (power-aperture product) of the sensor array and the compute capacity of the processing hardware. If sufficient power-aperture product is provided, only one pulse is required to cover the FoV with the tessellation of RoRs that form a mosaic. No transmit beam scanning is required. The size of the individual RoRs determines computing latency. The number of RoRs within the field-of-view mosaic determines the computing throughput requirement. RoR-boundary or seaming degradations over the FoV that might occur can be minimized or eliminated with careful bounded-region design decisions as described in Section IV-B.
Imaging methods based on temporal-spatial isotropic wave field inversions, but implemented with frequency-wavenumber domain operations, can be viewed as holographic [32]. Holographic reconstructions of k-space descriptions of remotely sensed (ex situ) wave fields can be converted to within-scene (in situ) descriptions through k-space operators that we call inverse HF-transfers [3]. Inverse HF-transfers are based on the Fourier transform pair δ(x − x_n) ↔ exp(j x_n · k_x) and are k-space operators that correspond to spatial domain spherical wavefield back-propagation operators. Solopulse signal processing utilizes HF-transfers and covariant wavenumber migration to achieve spherical back-propagation by k-space methods. Inverse HF-transfers of k-space are preferable to the computationally intensive, interpolated, pixel-by-pixel, temporal-spatial, spherical back-propagation methods of SAR or near-field array scanners.
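The Fourier pair underlying the inverse HF-transfer can be checked in one dimension (our sketch; the sign of the exponent follows the DFT convention):

```python
import numpy as np

# 1D check of delta(x - x_n) <-> exp(j*x_n*k_x), and of back-propagation by
# conjugation: multiplying by the conjugate locator phase re-centers the
# point at the reference position.
N, dx = 128, 0.5
n = 17                                   # point-source offset in samples (arbitrary)
f = np.zeros(N, dtype=complex)
f[n] = 1.0                               # delta(x - x_n) on the grid

k_x = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
spec = np.fft.fft(f)
locator = np.exp(-1j * (n * dx) * k_x)   # locator sinusoid (DFT sign convention)
assert np.allclose(spec, locator)

recentered = np.fft.ifft(spec * np.conj(locator))
assert np.argmax(np.abs(recentered)) == 0   # point moved to the reference origin
```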
1) Huygens-Fresnel Transfers: HF-transfer functions are key to managing the frequency of the Ewald sphere banding of the HF-spectrum. Lower frequency banding is advantageous in the design of a required resampling task in Solopulse signal processing. The frequency of Ewald sphere phase banding is reduced by the HF-transfer that changes scattered field descriptions from remotely sensed ex situ descriptions to within-scene in situ descriptions [3]. A single reference point transfer is effective for the entire scene. The resulting scatterer-specific lower frequency in situ tonal fields in k-space describe scatterer-specific offsets relative to the selected reference point. Lower frequency tonal bands on the Ewald sphere are desirable in preparations for the uniform resampling process that occurs either after or as part of a wavenumber migration process [33]. As shown in Fig. 5, part of the Solopulse algorithm requires, before the runtime of the algorithm, computation of an HF-transfer function for each RoR to be imaged. Multiple transfer functions can be simultaneously applied to a single copy of the measured (single-pulse) data to simultaneously form images of multiple RoRs with parallel processing.
2) HF-Transfer Setup: An innovative feature of Solopulse is a means for isotropic wave field inversion with an inverse HF-transfer function expanded from a relatively small stationary array back to a larger scene. In preparation for setting up an expanded HF-transfer function, a reference signal that would be received by a virtual array with length or size matched to the cross-range extent of a desired RoR is produced beforehand by computer simulation. A reference Huygens wavelet hh(t, a − r_c), with a point r_c positioned at the nominal center of the objective RoR, is simulated, with a describing locations of real and virtual AEs that are imagined to exist within and outside the bounds of the DAR (across an extent equal to the RoR size). Solopulse algorithms modify the reference Huygens wavelet from an impulsively thin shell to a radial thickness defined by the transmitted waveform as a function of time. A reference Fresnel wave field Hh(k_ω, a − r_c) is obtained by the temporal Fourier transform of the reference Huygens wavelet hh. Computation of a spatial Fourier transform of the reference Fresnel field yields a reference HF-spectrum, from which the forward HF-transfer function can be obtained. The inverse HF-transfer function is obtained by conjugation.
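The setup sequence can be sketched in one cross-range dimension as follows (our simplification, not the paper's code; the band-limited pulse stand-in and all parameter values are illustrative assumptions):

```python
import numpy as np

# 1D sketch of the HF-transfer setup: simulate the reference signal a
# virtual array spanning the RoR would receive from the RoR center r_c,
# Fourier transform in time then space, and conjugate to obtain the
# inverse HF-transfer. All sizes and the pulse model are assumptions.
c = 3.0e8
n_ae, d_ae = 256, 0.015                 # real + virtual AEs spanning the RoR
fs, n_t = 2.0e9, 512                    # temporal sampling

a = (np.arange(n_ae) - n_ae // 2) * d_ae          # AE positions along the array
r_c = np.array([0.0, 30.0])                       # RoR reference point (x, range)
rng = np.hypot(a - r_c[0], r_c[1])                # |a - r_c| per element

t = np.arange(n_t) / fs
# Reference Huygens wavelet hh(t, a - r_c): a band-limited stand-in for the
# impulsive shell, delayed by the one-way range and weighted by 1/r.
pulse = np.sinc(2.0e8 * (t[None, :] - rng[:, None] / c))
hh = pulse / rng[:, None]

Hh = np.fft.fft(hh, axis=1)                       # temporal FFT -> Fresnel field
HH = np.fft.fft(Hh, axis=0)                       # spatial FFT  -> HF-spectrum
inverse_hf_transfer = np.conj(HH)                 # inverse transfer by conjugation

assert inverse_hf_transfer.shape == (n_ae, n_t)
```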
3) Array Data Zero Padding: Data sample-count mismatches between array and scene sizes create potentially problematic situations in (both real and synthetic) array signal processing. In some SAR algorithm families, for example, if there is a mismatch between the synthetic array length and the typically smaller (in spotlight mode) cross-range scene size, then zero padding has been recommended as a means to adjust data set sizes [34], [35], [36]. Similar recommendations have been made for applications in optics [37]. For Solopulse, the spatial expansion of the HF-transfer function requires that the measured sensor array data be spatially zero-padded to the size of the objective RoR (the expanded HF-transfer function is not zero padded).
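A minimal sketch of the padding step, with illustrative sizes:

```python
import numpy as np

# Spatially zero-pad measured DAR data (n_ae channels) out to the RoR's
# cross-range sample count; the expanded HF-transfer itself is not padded.
# Sizes here are illustrative.
n_ae, n_ror, n_freq = 32, 256, 128
data = np.ones((n_ae, n_freq), dtype=complex)     # stand-in measured data slice

pad = n_ror - n_ae
padded = np.pad(data, ((pad // 2, pad - pad // 2), (0, 0)))

assert padded.shape == (n_ror, n_freq)
assert np.count_nonzero(padded) == n_ae * n_freq  # measured samples preserved
```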

4) Covariant Change-of-Variables for Wavenumber Migration:
Sensor array data in a k-space format obtained by temporal-spatial Fourier transforms of received signals do not immediately satisfy the covariant constraint. Wavenumber migration reformats a sensor array's noncovariant rectilinear spectrum into a covariant HF-spectrum. This migrated spectrum provides an estimate of the scene's angular spectrum. During wavenumber migration, the signal frequency wavenumber k_ω and cross-array wavenumber k_u of the sensor array undergo a change-of-variables (CoV) transformation. The breve accent indicates migrated angular spectrum variables.
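The paper's display equation for the CoV is not reproduced in this text; for illustration, the sketch below assumes a standard Stolt-style mapping and shows that a uniform measurement grid maps to nonuniformly spaced migrated wavenumbers while the covariant constraint is preserved:

```python
import numpy as np

# Illustration only (assumed Stolt-style mapping, not the paper's exact CoV):
# a uniform (k_omega, k_u) grid maps to nonuniformly spaced migrated
# wavenumbers, motivating the resampling step that follows.
k_omega = np.linspace(100.0, 140.0, 64)           # uniform temporal wavenumbers
k_u = 40.0                                        # one cross-array wavenumber bin

k_breve = np.sqrt(k_omega**2 - k_u**2)            # migrated along-range component

spacing = np.diff(k_breve)
assert spacing.std() > 0.0                        # spacing varies: nonuniform
assert np.allclose(k_omega**2, k_u**2 + k_breve**2)  # covariance preserved
```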

5) Uniform Resampling of Migrated Spectrum:
After the CoV transform, data samples of the migrated angular spectrum wavenumber domain k̆_x are nonuniformly positioned. A resampling operation from nonuniform k̆_x to a grid k̈_x of uniformly spaced scene wavenumbers must occur before Fourier inversion with Fast Fourier Transform (FFT) algorithms [38]. Double-dot accents indicate resampled data. Hence, there is a sequence of mappings k_u → k̆_x → k̈_x of measured phase history data positioned at array wavenumber points in the array's rectilinear spectrum k_u, to the migrated wavenumber points in the angular spectrum k̆_x, and on to uniformly resampled image spectrum data samples k̈_x.

6) Least-Squares CoV:
The resampling process can also be viewed as a regridding process sometimes used in various computed imaging tasks [39], [40], [41], [42], [43], [44], [45]. The terminology of "regridding" does not hold consistent meaning throughout the literature. As used here, the meaning and methods of [46] used for magnetic resonance imaging (MRI) are relevant. The MRI approach allows formulations based on linear algebra pseudoinverse image reconstructions. As explained further in Section IV-B, the MRI approach inverts the inherent continuous-to-discrete mapping of the (continuous) scattered field data to (discrete) measurements of migrated non-Cartesian k̆_x-space samples.
The notion of a regridding "transformation" opens the door to approaching the resampling task as an estimation problem. Exact discrete-to-continuous inverse mappings may not exist, and this invites the use of least-squares solutions. However, as a first step, this paper uses a Jacobian-weighted CoV (JW-CoV) method to implement the covariant transfer. Future research will explore the use of a least-squares CoV (LS-CoV) approach.

7) Jacobian-Weighted CoV: In standard signal processing problems, a nonuniform-to-uniform resampling process can be achieved with sinc interpolation. Coincidentally, when a box indicator function is used to both define and bound an objective RoR, Jacobian determinant weighted sinc interpolation can be used to resample to the uniformly spaced k̈_x [33], [47], [48]. Although not an exact interpolation based reconstruction, nor even a least-squares solution [49], [50], Jacobian-weighted sinc-based reconstruction methods for rectilinearly bounded RoRs have been considered appropriate due to implementation efficiency [34]. If the RoR bounding function were circular, then Bessel functions, instead of sinc functions, would be required for regridding [51].
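A sketch of the JW-CoV idea (our construction; the Jacobian determinant is approximated by the local sample spacing, and the smooth test spectrum is an assumption):

```python
import numpy as np

# Jacobian-weighted sinc regridding (illustration only): data at nonuniform
# migrated wavenumbers k_breve is resampled onto a uniform grid k_ddot,
# each sample weighted by its local spacing as a stand-in for the Jacobian
# determinant of the change of variables.
k_omega = np.linspace(100.0, 140.0, 256)
k_breve = np.sqrt(k_omega**2 - 40.0**2)           # nonuniform migrated samples
vals = np.exp(1j * 0.3 * k_breve)                 # smooth test spectrum (assumed)

w = np.gradient(k_breve)                          # Jacobian-like local weights
dk = 0.9                                          # uniform target grid spacing
k_ddot = np.arange(k_breve[0] + 8 * dk, k_breve[-1] - 8 * dk, dk)

kernel = np.sinc((k_ddot[:, None] - k_breve[None, :]) / dk)
regridded = kernel @ (vals * w / dk)              # weighted sinc reconstruction

err = np.abs(regridded - np.exp(1j * 0.3 * k_ddot))
assert err.max() < 0.2                            # approximate, not exact
```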

8) Cauchy Structures in Resampling Matrices:
The resampling task of JW-CoV can be set up as a linear algebra problem. The resampling matrix possesses a Cauchy structure due to x⁻¹ decay functions [52], [53], [54]. The linear algebra formulation allows wavenumber migration to be implemented with simple matrix-vector multiplies, a key to the computational efficiency of Solopulse. The Cauchy matrix-vector product is shown as an icon for each wavenumber path in Fig. 5. These migration operations, implemented as matrix-vector multiplies, can all be implemented in parallel for each of the RoRs, which also can be processed in parallel.

9) Inverse Fourier Transform: After wavenumber migration, inverse Fourier transforms lead to Solopulse imagery or HD-DBF data products. The inverse FFT is shown as a subblock in Fig. 5.
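The Cauchy structure of the resampling matrix (item 8 above) can be sketched numerically: the sinc kernel factors as sin(π(u − v)) times 1/(π(u − v)), and the latter factor is exactly a Cauchy matrix (sample positions below are arbitrary illustrative choices):

```python
import numpy as np

# Illustration of the Cauchy structure: C[i, j] = 1/(u_i - v_j) appears as
# a factor of the sinc interpolation kernel, so resampling reduces to a
# matrix-vector product. Positions are chosen so no u_i equals any v_j.
u = np.linspace(0.05, 5.0, 6)                     # target sample positions
v = np.linspace(-0.4, 4.6, 8)                     # source sample positions

cauchy = 1.0 / (u[:, None] - v[None, :])          # Cauchy matrix
kernel = np.sin(np.pi * (u[:, None] - v[None, :])) / np.pi * cauchy

# The factored kernel equals the normalized sinc of the position differences.
assert np.allclose(kernel, np.sinc(u[:, None] - v[None, :]))

x = np.ones(8)
y = kernel @ x                                    # migration as matrix-vector multiply
assert y.shape == (6,)
```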

III. SOLOPULSE DATA PRODUCTS
This section provides an overview of the variety and attributes of Solopulse data products for both single-pulse imaging and HD-DBF, and for multiple-pulse applications with aperture synthesis. Characteristics of point and beam spread functions are demonstrated and described. A single-pulse image of a small drone at short range is demonstrated. A study of performance levels as SNR and range are varied is provided. Aperture synthesis for short-range surround imaging and long-range HD-DBF are performed. Scenarios that utilize parallel processing are provided with multiple RoRs that span surround FoVs and track-mode HD-DBF FoVs. The surround FoVs demonstrate aperture synthesis with sensor array movement and the HD-DBF FoVs demonstrate inverse aperture synthesis with tracked object movement.
1) Solopulse in a Colocated MIMO Radar: Solopulse can utilize SISO (single-input/single-output), SIMO (single-input/multiple-output) and MIMO transmit/receive configurations. All of these colocated MIMO configurations can be viewed as "single-pulse" operations. Use of correlated or uncorrelated/orthogonal waveforms among the AEs is an option [55]. If the waveforms are correlated, then transmit beamforming occurs during MIMO transmit dwells. However, if the waveforms are uncorrelated or orthogonal so as to broaden the transmit beam, then the Solopulse system operates in a SISO mode. Transmit beamforming, possibly with monostatic spoiling or bistatic pulses from a secondary, smaller, transmit antenna or array, can be utilized. SIMO mode, where one AE transmits and all AEs receive, provides system behaviors like SISO but with an undesirable (but removable) artifact that causes geometric warping of Solopulse imagery at short range.
2) MBOR Comparisons: A useful baseline for Solopulse comparisons is multiple-beams-on-receive (MBOR) data products produced by a conventional plane-wave algorithm. Such Fraunhofer digital beamformers are implemented with time-delays for beam-steer-on-receive in wideband scenarios. An MBOR receive beam can reasonably be called a Fraunhofer beam (F-beam) and a Solopulse beam a Huygens-Fresnel beam (HF-beam).
With conventional DBF, the MBOR cross-range field of view is divided into a number of whole or fractional single-beam intervals. Track-mode MBOR utilizes overlapped beam rosettes with typically tens of overlapped receive beams. The degree of overlap in track mode is typically on the order of half a beamwidth. Search-mode MBOR tends to utilize less overlap. The receive beam density of MBOR used for Solopulse comparisons is increased in this paper. MBOR beams spaced as close as a tenth of a beamwidth apart are utilized to better see attributes of MBOR solutions. This makes Fraunhofer-DBF data products more image-like and easier to compare to Solopulse images and HD-DBF data products.

1) Pixel and Beam Lattices: Operation of the Solopulse baseline imposes a requirement not typical of SAR or DBF: the pixel density of the image or beam-packing lattice is set to match the real array's AE spacing. This beam/pixel packing requirement is indicated by the red lattices of Figs. 2, 3, and 6.
2) Point Spread and Beam Spread Functions: At long range, the beam lattice characterizes the aim-points of a high-density, highly overlapped set of HF-beams, as illustrated in Fig. 6. At short range, the BSF overlap is reduced, and the beam spread function behaves more like the PSF of a computed imaging algorithm.
3) Single-Pulse Cross-Range Resolution: Solopulse with a stationary DAR provides a range-dependent cross-range spatial resolution of Rθ, where R is range. The range-independent angular resolution is θ = λ/D_DAR, with D_DAR being the length of the DAR sensor. Note that with a DAR, the angular resolution is λ/D_DAR = 4/N_AE in SISO mode, where N_AE is the number of AEs and the AE spacing is λ/4. This becomes 2/N_AE in SIMO and MIMO modes with an AE spacing of λ/2.
4) Short-Range Solopulse PSF: Shown in Fig. 7 are two examples of the PSFs of a C-band (5 GHz) digital array at short range. The array has 32 elements spaced λ/4 apart, making the DAR about 44 cm long. The uncoded waveform has a bandwidth of 500 MHz. The simulation is noise free. Fig. 7(a) contains a single scatterer at about 25 m; Fig. 7(b) has a scatterer at about 115 m. The Fraunhofer (2D²_DAR/λ_min) near-field/far-field boundary is 7 m. The λ/D_DAR angular beamwidth is 7.5 degrees. The associated R × λ/D_DAR spatial beamwidths are 3 m and 15.5 m, respectively. The measured 4 dB-down cross-range resolutions are 1.6 m and 8.2 m, respectively; this near factor-of-two difference is expected in SISO mode. The colormap spans a full-scale dynamic range of more than 140 dB, which allows all sidelobe structures to be observed. The curved sidelobes shown in Fig. 7 are typical of Solopulse imagery at short range, or at long range with large cross-range FoVs. Note that there is wrap-around aliasing in Fig. 7(b); this can occur with certain combinations of parameter settings related to RoR and array sizes. Section IV-B provides a detailed analysis of the potential aliasing and ambiguities of spatial and bandlimited reconstruction scenarios. Fig. 7(b) demonstrates that with judicious parameter selection the wrap-around aliasing can be managed, and that the sidelobe curvature is less pronounced at longer range.
5) Solopulse and MBOR Comparison: Sensitivity: Shown in Fig. 8 is a comparison of Solopulse and MBOR as the transmitted power is varied to produce decreasing signal-to-noise ratios (SNRs) of 20, 10, 0, and -10 dB (the columns of images, left-to-right). The top row contains Solopulse images; the middle row contains MBOR "images". The red plots in the bottom row are the cross-range profiles of the Solopulse images; the blue lines are the cross-range profiles of the MBOR images. The same C-band sensor-scatterer configuration of Fig. 7(b) is used in these SNR comparisons, except the number of AEs has been increased to give a larger array size of about 3.5 m (128 AEs). The sensor is also changed to operate in a SIMO mode with just one transmit element. The required power-aperture-gains (PAGs) are, respectively, 16.9, 6.9, -3.1, and -13.1 dB. The larger array size increases the Fraunhofer near-field/far-field boundary to 446.5 m, so this scenario is within the Fraunhofer near-field of the sensor. The scatterer's PSF can be made out at all tested SNR levels in both Solopulse and MBOR images. The measured cross-range (4 dB-down) spatial resolution is about 1.9 m for Solopulse and 2.9 m for MBOR. The Solopulse image (both noise and signal) sits on an elevated energy floor; analysis suggests this energy pedestal comes from the gain of the HF-transfer function of the Solopulse processing flow.
6) Solopulse and MBOR Comparison: Range: Shown in Fig. 9 is a comparison of Solopulse and MBOR as the range is increased.
7) Single-Pulse "Freeze Frame" Image of a Drone: Shown in Fig. 10(b) is the Solopulse image of a small drone obtained with a W-band (77 GHz) digital array at short range. The scatterers of a Swerling 1 model used to simulate the small drone are shown in Fig. 10(a). The array has 1024 elements spaced λ/4 = 0.1 cm apart, making the DAR about 95 cm long. The coded waveform has a bandwidth of 4 GHz and a time-bandwidth product of 10. A round-trip PAG of 8.7 dB is required to deliver 20 dB sensitivity at 50 m with a receiver noise figure of 3 dB. The λ/D_DAR angular beamwidth is 0.23 degrees. The Rλ/D_DAR spatial beamwidth is 20 cm at 50 m range. The Fraunhofer near-field/far-field boundary is 485 m, so this image is well within the near-field of conventional Fraunhofer beamformers. With a pulse duration of 2.5 nanoseconds, the image is essentially a freeze-frame, even with respect to propeller rotation.
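The single-pulse resolution relations above (θ = λ/D_DAR and cross-range resolution Rθ) can be checked numerically. The following is a minimal sketch (the function name is ours, and D_DAR is approximated as N_AE times the element spacing); it approximately reproduces the spatial beamwidths quoted for the C-band example of Fig. 7.

```python
C = 3e8  # speed of light (m/s)

def cross_range_resolution(freq_hz, n_ae, ae_spacing_wl, range_m):
    """Single-pulse cross-range spatial resolution R*theta for a
    stationary DAR, with angular resolution theta = lambda / D_DAR."""
    wavelength = C / freq_hz
    d_dar = n_ae * ae_spacing_wl * wavelength  # approximate array length
    theta = wavelength / d_dar                 # angular resolution (rad)
    return range_m * theta

# C-band example: 5 GHz, 32 AEs at lambda/4 spacing (array ~48 cm here)
res_25 = cross_range_resolution(5e9, 32, 0.25, 25.0)    # ~3.1 m
res_115 = cross_range_resolution(5e9, 32, 0.25, 115.0)  # ~14.4 m
```

Note that the wavelength cancels: θ reduces to 1/(N_AE · spacing-in-wavelengths), which is the 4/N_AE SISO relation stated in the text.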

8) Solopulse Aperture Synthesis:
Multiple Solopulse images, either in time series or concurrent from multiple platforms (i.e., multiple DARs with a centralized processing center for shared data), can be coherently fused with k-space or pixel-domain processing. This capability is possible due to the coherent correctness of the Solopulse spherical wavefield model, where the covariant formulation of spherical wave fields removes approximations associated with plane-wave models. Also, the HF-transfer operation converts each Solopulse image to a common scene-centered coordinate system that is shared across the extended dwell [3]. If relative motion is present, then a change in view angle between sensor and scene can be exploited to enhance resolution beyond that achievable in a static situation. The result is progressive resolution as angular dwells are extended by the relative movement. If the scene is moving and tracked by a stationary sensor, Solopulse progressive resolution is an inverse-SAR solution. Fig. 11(a) shows a single-pulse image and Fig. 11(b) a 10-pulse image with aperture synthesis of a large, forward-looking FoV typical of a radar that might be used in an autonomous vehicle. The FoV extends from 10 m to 100 m in range, and from 4 m on the right side to 12 m on the left side. Just one side of the forward FoV is reconstructed here to allow more detail to be seen in this printed format. The radar is a Ka-band (36 GHz) radar with a 1 GHz bandwidth and a time-bandwidth product of 10. The number of AEs is 128, spaced 0.2 cm apart, operated in a SISO mode, with a forward-facing DAR length of about 26 cm. Sensitivity Time Control (STC) is utilized to counter range-dependent attenuation. A PAG of 9.4 dBW is required to deliver 15 dB sensitivity at 100 meters with a receiver noise figure of 3 dB. The λ/D_DAR angular beamwidth is about 1.8 degrees. The Fraunhofer near-field/far-field boundary is 16.6 m. A 22 × 4 RoR mosaic is used to parse the FoV for parallel processing.
Each RoR is 2048 × 2048 pixels in size, and each RoR contains a single scatterer in this simulation. The display has a partial-scale dynamic range of 30 dB. Note that the PSF broadens with range and angle off boresight. Fig. 11(b) is the same setup as Fig. 11(a), but with 10 frames coherently integrated, pixel-by-pixel, post image formation, to achieve aperture synthesis. The forward-facing sensor moves forward by 1 m on each pulse. Note that the improved resolution varies with each scatterer's angle off-boresight. Cross-range resolution improves more significantly for scatterers at larger angles off-boresight, since these scatterers experience a larger increase of angular dwell pulse-to-pulse. The PSF experiences an undesirable amplitude modulation for scatterers at the shortest range and at the most extreme angles off-boresight. This characteristic can be mitigated by reducing the step size between pulses. These results suggest that a radar point-cloud data product could be easily extracted from Solopulse with aperture synthesis; the point-cloud density would increase pulse-to-pulse as resolution cells improve (become smaller).

9) Surround Imaging With Parallel Processing:
10) Inverse Aperture Synthesis: Fig. 12 shows a Ka-band (36 GHz) progressive-resolution image of a drone at a range of 1 km. Pulse bandwidth is 4 GHz with a time-bandwidth product of 25. The drone is modeled as a Swerling 1 object, as illustrated by the leftmost image in Fig. 12. The number of AEs is 512, spaced 0.2 cm apart, operated in a SISO mode, with a DAR length of about 96 cm. The sensitivity is 20 dB above the noise floor with a receiver noise figure of 3 dB. A PAG of 60.2 dBW is required. The angular beamwidth is 0.47 degrees. The Fraunhofer near-field/far-field boundary is 245.6 m. A 2 × 3 RoR mosaic is used to parse the FoV for parallel processing.

IV. SOLOPULSE SIGNAL PROCESSING
Solopulse signal processing is similar to the wavenumber-domain Stolt transform method [56], [57] used in SAR omega-k (range migration) algorithms [58], [59], [60], [61], [62]. The primary modification is the adaptation of the HF-transfer function (which is sometimes implicit or missing in prior formulations of omega-k processing [33]) to the size difference between the DAR and the RoR.
A detailed description of Solopulse signal processing through the system model follows; Table I provides a quick reference guide for primary variables and symbols. 1) Scene Function: Let a continuous scene function g(x) be modeled as a set of continuous Dirac impulse functions δ(x − x_n), each representing a point scatterer at location x_n; hence g(x) = Σ_n g_n · δ(x − x_n), where g_n is the scatterer strength.
2) Scene Spectrum: An unbounded scene function g(x) and unbounded scene spectrum G(k_x) comprise a Continuous Fourier Transform (CFT) pair, g(x) ↔ G(k_x). The scene spectrum G(k_x) is composed of k_x-domain locator sinusoids exp(jx_n k_x) determined by the CFT pair δ(x − x_n) ↔ exp(jx_n k_x). The scatterer's position x_n determines the frequency of the complex locator sinusoid exp(jx_n k_x) in the k_x-domain.
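The locator-sinusoid correspondence can be verified numerically. Below is a minimal sketch with an arbitrary grid and an on-grid scatterer (our own toy parameters, using the forward-FFT sign convention, under which the spectrum of the impulse is exp(−j x_n k_x)):

```python
import numpy as np

# Discrete scene: one unit scatterer on a uniform grid.
N, dx = 256, 0.5              # sample count and spatial bin (m)
idx = 37
x_n = idx * dx                # scatterer position (on-grid, so exact)
g = np.zeros(N)
g[idx] = 1.0

# The DFT of the impulse is a pure locator sinusoid exp(-j * x_n * k_x).
G = np.fft.fft(g)
k_x = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Recover the scatterer position from the spectrum's phase slope.
x_est = -np.angle(G[1] / G[0]) / (k_x[1] - k_x[0])
```

The recovered x_est equals 18.5 m (= 37 × 0.5 m): the frequency of the locator sinusoid encodes the scatterer location, as stated above.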
3) Transmit Waveform: The transmitted waveform is w(t) for the one transmit-AE in SIMO mode. In SISO or MIMO mode, all AEs are assumed here to transmit w(t), either simultaneously or in a quick time series if orthogonality within a burst is desired.

4) Received Signal: The array receives and measures a continuous time-space signal s(t, u) scattered by the scene function g(x). The received signal is a function of time t and the positions u of transmit/receive elements within the array. In practice, sensor arrays measure discrete output data samples s(ẗ, ü). Fig. 13 shows an example received signal collected by a uniform linear array (ULA). Fig. 14 shows an example received signal collected by a uniform planar array (UPA).

The array should be viewed as collecting an incident (received) phase pattern, rather than directed amplitude and phase information of a single plane-wave along the line-of-sight from each scatterer [63]. This received phase pattern is a linear or planar "slice" of the incident spherical wavefield [2]. Use of expanded HF-transfers makes this viewpoint relevant at both short and long range.

5) Reference Huygens Signal:
In preparation for setting up the inverse HF-transfer function, a reference signal that would be received by a real array with a virtual extension sized to match the cross-range extent of the RoR is produced by computer simulation. The reference signal h(t, u) is set up by selecting the position of a reference scatterer, which may be anywhere within the RoR. Fig. 13(b) shows a reference signal for a ULA and Fig. 14(b) for a UPA.

6) Received Signal Rectilinear Spectrum:
A CFT of s(t, u) in time and space yields the continuous signal spectrum S(k_ω, k_u), where k_ω is the signal wavenumber and k_u is the aperture wavenumber. A DFT of the sampled s(ẗ, ü) yields the discrete signal spectrum S(k̈_ω, k̈_u). The spectral sampling can be modeled by a grid of continuous Dirac impulse functions, in which case both spectra can be dealt with as continuous functions. In other cases, the data samples can be handled as discrete Kronecker data, e.g., with data buffer indexes i_ω and i_u. Due to its lack of a covariant angular structure, the signal spectrum is said to be rectilinear. The phase patterns of scattered fields seen in rectilinear spectra S(k_ω, k_u) are not tonal, i.e., embedded locator sinusoids such as exp(−jx_n k_x) are "warped" in the rectilinear spectrum format.

7) Covariant CoV for Wavenumber Migration:
The phase pattern of a rectilinear sensor spectrum can be "unwarped" or made tonal through covariant wavenumber migration to the k̆_x-domain. Wavenumber migration implements a covariant change-of-variables transformation. Tonal formatting prepares the spectral data to form Solopulse imagery by Fourier inversion, k_x → x. Once migrated, the nonuniformly spaced samples represent locator sinusoids without warping in k̆_x-space, and these comprise the image spectrum. The task of uniformly resampling, k̆_x → k_x, remains. Conceptually, this resampling is performed after the migration from k_u to k̆_x, but in implementations it can be combined with the migration, as done with a Cauchy matrix formulation.
An example of the magnitude of the frequency-wavenumber rectilinear spectrum for a ULA after pulse compression (frequency-domain matched filtering), but before wavenumber migration, is shown in Fig. 15(a). The resulting angular spectrum after wavenumber migration is shown in Fig. 15(b). Corresponding results for the UPA are shown in Fig. 16.
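The migration-plus-resampling step can be sketched for a ULA with one-dimensional interpolation per k_u column. The sketch below uses an illustrative monostatic mapping k_x = sqrt((2k_ω)² − k_u²); the exact scale factors depend on SISO/SIMO/MIMO mode, and an exact method (e.g., the Cauchy matrix formulation mentioned above) would replace the simple np.interp resampling in practice.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def stolt_migrate(S, f, k_u, k_x_out):
    """Wavenumber migration for a ULA rectilinear spectrum S[f_idx, ku_idx].

    Maps each k_u column's samples from the nonuniform migrated wavenumber
    k_x = sqrt((2*k_w)**2 - k_u**2) onto the uniform grid k_x_out by 1-D
    interpolation (real and imaginary parts separately)."""
    k_w = 2 * np.pi * f / C
    out = np.zeros((len(k_x_out), len(k_u)), dtype=complex)
    for j, ku in enumerate(k_u):
        k_x_nonuni = np.sqrt(np.maximum((2 * k_w) ** 2 - ku ** 2, 0.0))
        out[:, j] = (np.interp(k_x_out, k_x_nonuni, S[:, j].real)
                     + 1j * np.interp(k_x_out, k_x_nonuni, S[:, j].imag))
    return out

# Toy usage: a flat (all-ones) spectrum stays flat under migration.
f = np.linspace(4.75e9, 5.25e9, 64)      # 500 MHz band at C-band
k_u = np.linspace(-50.0, 50.0, 9)        # aperture wavenumbers (rad/m)
k_w = 2 * np.pi * f / C
k_x_out = np.linspace(2 * k_w[0], 2 * k_w[-1], 64)
M = stolt_migrate(np.ones((64, 9), dtype=complex), f, k_u, k_x_out)
```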

B. Aliasings and Ambiguities
Continuous-to-discrete and discrete-to-continuous signal analysis is provided here to better enable an understanding of the impact of sampling the sensor data s(ẗ, ü) and its sampled spectrum S(k̈_ω, k̈_u), as eventually migrated to a sampled estimate M(k̆_x) of G(k_x). This analysis of continuous-to-discrete sensing and discrete-to-continuous inversion is also preparatory for application of minimum-norm least-squares image reconstruction methods [46].
The forward Discrete-Time Fourier Transform (DTFT) of a discrete scene function g(ẍ), from uniformly sampled ẍ-space data, creates replicas G̃(k_x) of the unbounded continuous scene spectrum G(k_x) in the continuous k_x-domain (a tilde accent is used to indicate replication). The spectral periodicity creates k_x-domain ambiguities and, possibly, overlap aliasing in the wavenumber domain.
The inverse Discrete-Frequency Fourier Transform (DFFT) of a discrete scene spectrum G(k̈_x), from uniformly sampled k_x-space data, creates replicas g̃(x) of the unbounded continuous scene function g(x) in the continuous x-domain. This spatial periodicity creates x-domain ambiguities and, possibly, overlap aliasing in the spatial domain.
Forward and inverse DFTs induce periodicity in both spatial and wavenumber domains. If spatial and spectral sequences are bounded (finite length) and if sampling rates are sufficiently high, then there is no overlap aliasing in either domain. The following analysis seeks to define boundary functions and to prepare for control of overlap-aliasing effects.
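The spatial wrap-around ambiguity described above can be demonstrated with a toy one-dimensional example (a sketch with arbitrary window and scatterer positions, chosen to land on the sample grid):

```python
import numpy as np

# A spectrum sampled with bin dk = 2*pi/Pi_x fixes an unambiguous
# spatial window Pi_x; scatterers beyond it wrap around (alias).
N = 128
Pi_x = 64.0                      # unambiguous spatial window (m)
dk = 2 * np.pi / Pi_x
k = np.arange(N) * dk
dx = Pi_x / N                    # implied spatial bin (m)

def reconstruct(x_n):
    """Place a point scatterer's locator sinusoid on the sampled k grid
    and return the peak position of the replicated scene estimate."""
    G = np.exp(-1j * x_n * k)    # locator sinusoid of a scatterer at x_n
    m = np.fft.ifft(G)           # replicated scene estimate
    return np.argmax(np.abs(m)) * dx

in_window = reconstruct(10.0)    # inside the window: recovered at 10 m
wrapped = reconstruct(74.0)      # outside: aliases to 74 - 64 = 10 m
```

Both calls peak at 10 m, mirroring the wrap-around aliasing visible in Fig. 7(b).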
1) Band-Limiting and Spatial-Limiting: To manage the possibility of overlap aliasing of the replicated estimate m̃(ẍ) of the unbounded scene function g(ẍ), the objective scene function can be changed from the entire scene g(ẍ) to just a box-bounded subset, expressed rather iconically as g(ẍ)_⊓, which is limited to finite extent by multiplication with a box-shaped spatial bounding function Π(ẍ)_⊓. The bounding box corresponds to an objective RoR. The box-bounded scene forms a Fourier transform pair with a version of the unbounded scene spectrum G(k_x) convolved with a sinc function, g(x) · Π(x)_⊓ ↔ G(k_x) ∗ sinc, so that the objective becomes m(x) ≈ g(x)_⊓.
Since the sensor is passband limited and view-angle limited, the migrated spectrum M[k̆_x(k_ω, k_u)]^sinc is also limited, as expressed by a migrated box-window function Π(k_x)_⊓̆. The preimage (in a functional sense) of the angular spectrum window Π(k_x)_⊓̆ is the rectilinear signal spectrum window Π(k_ω, k_u)_⊓̆. The "⊓̆" subscript represents a migrated version of the "⊓" bounding box. The attributes of the data-supporting angular spectrum window are determined by sensor system parameters (e.g., waveform bandwidth, angular support of the FoV or RoR) as mapped by the covariant CoV transformation. A continuous (view-angle limited, band-limited, and box-bounded) scene estimate from sampled data is obtained by a DFFT pair, where the k̆_x are nonuniformly sampled Dirac impulses. A Kronecker impulse version of the sampled migrated-wavenumber data can be obtained by uniformly resampling, k̆_x → k_x, to obtain a corresponding discrete spectrum and scene estimate related by a DFT pair.

C. Continuous JW-CoV Development
Ignore for a moment the eventual discrete aspect of the end-to-end mapping S(k̈_ω, k̈_u) → M(k̈_x), which involves both a covariant CoV and uniform resampling of migrated data, and consider an unsampled (continuous) version of the migrated spectrum M(k_x). An unreplicated reconstructed scene estimate m(x) can be expressed as an inverse CFT of M(k_x). A JW-CoV transformation of the sampled signal spectrum S(k̈_ω, k̈_u) is desired to estimate the scene spectrum.
First, consider a continuous, unbounded scene estimate m(x) that is the inverse CFT of an angularly windowed, migrated, continuous, passband spectrum M(k_x)_⊓̆. A method of obtaining m(x) not from the inverse CFT of M(k_x)_⊓̆, but from the inverse CFT of S(k_ω, k_u), is desired. The inverse CFT functional can be modified to involve a covariant CoV transformation with a Jacobian matrix determinant weighting |J(k_ω, k_u)| applied to the (continuous) signal spectrum S(k_ω, k_u). This is the continuous version of the JW-CoV. For notational efficiency, let the weighted spectrum be called the scene's Jacobian-weighted spectral estimate (JSE), where the signal bandwidth and aperture wavenumber window function constraint is indicated by the subscript JSE.
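The continuous JW-CoV integral elided above can be written out, consistent with standard omega-k formulations (a plausible reconstruction in our own notation, not verbatim from the source):

```latex
m(x) \;=\; \frac{1}{(2\pi)^2} \iint
  \bigl| J(k_\omega, k_u) \bigr|\,
  S(k_\omega, k_u)\,
  \Pi(k_\omega, k_u)_{\breve{\sqcap}}\,
  e^{\,j\, k_x(k_\omega, k_u)\cdot x}\,
  \mathrm{d}k_\omega\, \mathrm{d}k_u ,
\qquad
J(k_\omega, k_u) \;=\; \frac{\partial\, k_x(k_\omega, k_u)}{\partial\,(k_\omega, k_u)} .
```

The Jacobian determinant compensates for the nonuniform density that the covariant change of variables k_x(k_ω, k_u) imposes on the migrated samples.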

1) Discrete JW-CoV Development:
A bounding box can be applied in the following analysis to set up a bounded version of the scene reconstruction that is subject to replications. This modified objective of wavenumber migration is to have m̃(x)_⊓ provide a satisfactory representation of a bounded g(x)_⊓.
Consider next a sampled version of the scene's Jacobian-weighted spectral estimate. To describe the migrated-wavenumber spectrum of a box-bounded and replicated reconstruction m̃(x)_⊓, synthesis via a windowed version of the inverse DFFT of S(k̈_ω, k̈_u)_JSE used in (5) can be considered. A forward CFT of m̃(x)_⊓ convolves the spectrum of S(k̈_ω, k̈_u)_JSE with a sinc function. To see the connection of the box-bounded m̃(x)_⊓ with a sinc-convolved signal spectrum S(k̈_ω, k̈_u)^sinc_JSE, consider the forward CFT of (6) with respect to a new wavenumber variable k_x, where, with some foresight, M(k_x)^sinc has been annotated with the superscript "sinc." The covert sinc-convolution of (7) can be made overt by dealing with the integral over x first, where i is an index over the dimensions of (8). To simplify notation, the box Π(x)_⊓ = Π(x)^(2X_o @ X_c) is assumed here to be sized the same in each dimension, with half-width X_o and center position X_c = (X_c, Y_c) of a square-bounded section of g(x). Equation (8) also simplifies notation by using bold-font vectors, indicating that the center location of the sinc function k̆_x(k̈_ω, k̈_u) in the k_x-domain holds in each dimension k_x_i. Hence, using (8) in (7), with an interchange of the order of CFT integration and DFFT summations, an estimate of the scene's migrated angular spectrum is obtained from the Jacobian-weighted version of the discrete signal spectrum.
The task of obtaining a discrete version of m̃(ẍ)_⊓ that approximates the discrete version of g(ẍ)_⊓ is accomplished by design of a resampling grid k̈_x. Uncertainty rules related to the Π_x parameter of the RoR, represented by the ⊓ box, determine the required sample spacing Δ_k_x of the k̈_x domain. An inverse DFT is then used to obtain m̃(ẍ)_⊓ as the box-bounded, replicated scene estimate of g(ẍ)_⊓.

D. Uncertainty Principles
Solopulse signal analysis involves signals that are both time and frequency limited, and both space and wavenumber limited; hence they are characterized by (Heisenberg) uncertainty principle bounds [64], [65], [66]. Uncertainty principles govern the support of conjugate parameters in Fourier transform pairs (e.g., x and k_x). These bounds determine the inescapable relationships between the sizes of various observational windows or dwells (e.g., receive time window, bandwidth, aperture size, wavenumber manifold) and the corresponding Nyquist sampling densities required in the conjugate domains. Use of uncertainty principles proves important in the parameter selection process of instantiated Solopulse algorithms, where comparatively small sensor arrays with dense sensor element spacings may handle large, remote (near or far) scattering scenes reconstructed through wave field inversion processes.
For efficiency of language, the size of an observational dwell, such as the support of the scene, the array size, or the size of some k-space manifold, shall generically be called a "box". A sampling interval shall be called a "bin". Boxes are typically large; bins are typically small. The relationship between observational boxes (generically indicated by the upper-case Greek letter Π) and sampling bins (generically indicated by the upper-case Greek letter Δ) is governed in signal analyses and system designs by the uncertainty relationship Π · Δ ≥ 2π [67]. For example, the bins and boxes of the spatial domain (Δ_x and Π_x) and those of the corresponding wavenumber domain (Δ_k_x and Π_k_x) govern signal processing system designs. Select any two of these four as free parameters and the other two are governed by Π · Δ ≥ 2π. If the signal processing is designed such that the bins and boxes satisfy Π · Δ ≥ 2π with equality, then the parameters are critically sampled at Nyquist rates.
Let the cross-range dimension of an ULA, for example, be expressed with the variable u y and the (parallel with respect to u y ) cross-range dimension of the Solopulse reconstructed scene with the variable y. Consider first the aperture box and bin parameters. The AE spacing or aperture bin size Δ u y determines the array wavenumber manifold size Π k uy = 2π/Δ u y , and the array length Π u y determines the required array wavenumber sample density Δ k uy = 2π/Π u y .
Likewise, given a specified cross-range image (RoR) box size Π_y and image bin size Δ_y (pixel spacing, not resolution), the corresponding cross-range image (RoR) wavenumber manifold must satisfy Π_k_y = 2π/Δ_y with sample spacing Δ_k_y = 2π/Π_y. One-half-wavelength element spacing gives an off-boresight angular FoV of ±90 degrees and an Ewald sphere diameter of 2k_ω, where k_ω is the largest within-band signal wavenumber. The uncertainty relationship establishes that a wavenumber-domain manifold of size Π = 2k_ω requires spatial-domain sampling intervals Δ of size 2π/(2k_ω) = λ/2, where λ is the temporal signal wavelength. This bin size (for SIMO and MIMO modes) specifies the pixel-density requirement of Solopulse images and HD-DBF beam fields, as illustrated by the red lattices of Figs. 2 and 3. This bin size is halved and the box size is doubled in SISO mode.
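These bin/box relations are simple to mechanize. The following is a minimal sketch (function names and the specific numbers are ours, chosen to match the C-band ULA example used earlier in the paper):

```python
import math

C = 3e8  # speed of light (m/s)

def conjugate_pair(size):
    """Uncertainty relation at critical (Nyquist) sampling: a box of size
    Pi in one domain fixes a bin Delta = 2*pi/Pi in the conjugate domain,
    and vice versa (Pi * Delta = 2*pi)."""
    return 2 * math.pi / size

# ULA example: AE spacing and array length fix the wavenumber manifold
# size and wavenumber sample spacing, respectively.
ae_spacing = 0.015                      # lambda/4 at 5 GHz (m)
array_len = 0.48                        # 32 such elements (m)
Pi_k_u = conjugate_pair(ae_spacing)     # manifold size Pi_ku (rad/m)
d_k_u = conjugate_pair(array_len)       # sample spacing Delta_ku (rad/m)

# An Ewald-diameter manifold Pi = 2*k_w implies a spatial bin of lambda/2.
f_max = 5.25e9                          # largest within-band frequency (Hz)
k_w = 2 * math.pi * f_max / C
pixel_bin = conjugate_pair(2 * k_w)     # = lambda_min / 2, about 2.86 cm
```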

A. The Search for the Huygens-Fresnel Spectrum
Instead of taking a computational approach to obtain the Huygens-Fresnel spectrum HH, an analytic solution has long been desired. These temporal-spatial Fourier transforms are surprisingly difficult to fully evaluate, and their analysis depends on a variety of preconditions. Certain questions must be resolved before proceeding with a search for the solution. If used as a starting point in the analysis, should the Fresnel field Hh be static or dynamic? Should the Fresnel field Hh be in a sink or source form? Is HH representative of free-space electromagnetic radiation or a scattered field?
Here are some approaches found in the literature for seeking analytic solutions of the spectrum of EM wave motion described by HH:
- Plane-wave synthesis and decompositions [68], [69].
- Approximate methods based on stationary phase [70].
- Asymptotic Fourier analyses that seek insights into the structure of HH by analysis of the singularities of Hh [28].
A "complete" spatial Fourier transform of the Fresnel field solution of the Helmholtz equation was recently "developed" by Schmalz et al. in 2010 [78]. They note that their approach was originally developed and utilized in the 1920s by Dirac in his k-space analysis that established QFT and QED [1], [80]. In QFT and QED, the frequency-wavenumber domain analysis of the complementary (probability amplitude) wave function of a boson (photon) is essentially the same as the energy-momentum analysis of the field, whether free or scattered.

B. Dirac's Approach
Dirac provided two primary forms for the HF-spectrum. His first relates to photon radiation scattered by electrons. His second relates to photon descriptions of free-space EM fields.
1) Dirac's Free-Space Field: Generally, EM field analyses of the HF-spectrum can be qualified by the following attributes of the problem: free versus scattered, out-going versus in-going, noncausal versus causal, analytical versus arbitrary complexity, double-sided versus single-sided, even versus odd, and real versus imaginary. It was clear to Dirac that the free-space solution should be real, odd, and noncausal in the time-space domain and hence purely imaginary in the frequency-wavenumber domain. Atypical of much of the ensuing research performed by others over the intervening decades, Dirac focused on the use of Huygens' hh in the time-space domain as a starting point rather than use of Fresnel's Hh in the frequency-space domain.
In free-space analysis there is no scattering agent to impose causality in time, hence the analysis requires time to be a free parameter with both positive and negative values. To achieve covariance, Dirac generalized the Huygens wavelet to an odd, double-sided (noncausal) light cone (a bidirectional sequence of Huygens wavelet spheres). We call this structure a hypercone. Dirac's covariant version of the Huygens wavelet is the hypercone singularity δ̆(χ), where the accent (Dirac's bow-tie, rendered here as a breve) indicates a hypercone singularity and where covariant time-space is indicated in 4-vector notation by χ = χ_μ χ^μ = (ct)² − |x|². Dirac's solution for the HF-spectrum of free space is indicated here by δ̆(κ), where covariant frequency-wavenumber 4-vector k-space is indicated by κ = κ_μ κ^μ = (k_ω)² − |k_x|². Dirac established through Fourier analysis of (9) that (10) is also a double-sided hypercone (a bidirectional nested series of Ewald spheres): the 4-vector energy-momentum (frequency-wavenumber) spectrum δ̆(κ) is the Fourier transform of the 4-vector time-space double-sided light cone δ̆(χ). This is an example of Dirac's "beautiful mathematics." The Fourier transform is self-similar (see [1], p. 281), δ̆(χ_μ χ^μ) ↔ δ̆(κ_μ κ^μ). We choose to call these the Dirac time-space hypercone and the Dirac frequency-wavenumber hypercone; if context is clear, both can simply be referred to as Dirac hypercones. It is important to remember that these δ̆(χ_μ χ^μ) ↔ δ̆(κ_μ κ^μ) hypercones are functional compositions of an underlying difference-of-squares and, hence, are covariant.
2) Dirac's Scattered-Field Solution: Dirac's scattered-field solution was based on his understanding of the interaction of the (massless) photon and the (massive) electron. The photon interacts with the electron so as to be scattered, and hence gives rise to the scattered EM field. Once assumed scattered, Dirac's assumptions about the EM field's (unbounded) probability amplitude wave functions were narrowed down to just out-going and time-space causal. Hence, Dirac's free-space double-sided hypercone in the time-space domain became single-sided for scattered fields. The Fourier transform of a causal time-space structure is such that the corresponding frequency-wavenumber structure is analytic in the Hilbert transform sense [65], [76], [79], [81], [82], [83], [84], [85].
As an aside, the ability of an analytic signal to convey arbitrarily complex (scatterer) information should not be forgotten, per the Hilbert transform product theorem [86], [87], [88], [89], [90], [91]. This theorem also establishes the ultra-wideband limits of systems based on these theories: the single-sided signal bandwidth can be as large as, but no larger than, the (peak passband) carrier frequency. Solopulse supports such ultra-wideband systems [3]. Similar to the combination of homogeneous and inhomogeneous solutions, the free-space spectrum's purely imaginary, double-sided, odd hypercone remains part of the scattered-field spectrum. Scattering of the time-space EM field induces an additional real component in the free-space version of the HF-spectrum, a covariant pole 1/κ. This is the "complete" solution of Schmalz et al. [78]. The covariant pole 1/κ follows directly from (2). This covariant pole is also a functional composition of a baseline difference-of-squares, 1/κ = 1/(κ_μ κ^μ), and as such has a partial fraction expansion. Dirac seems to be the first to have combined these k-space elements, in his study of quantum electrodynamics [1] (see also [11], p. 71 and [84], p. 224), and this was done again in the work of Lighthill in 1958 [13]. This structure in the 4-vector difference-of-squares κ-space retains the double-sided odd hypercone (bidirectional sequence of Ewald spheres) of the free-space HF-spectrum, but a real part has been added to the spectrum to make it analytically complex, i.e., the real and imaginary parts are related by the Hilbert transform.
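The partial-fraction structure of the covariant pole, and the pairing of the pole with the on-cone singularity under causal (single-sided) truncation, follow from standard identities (our rendering, with ε → 0⁺; the source's exact normalization may differ):

```latex
\frac{1}{\kappa} \;=\; \frac{1}{k_o^2 - |k_r|^2}
  \;=\; \frac{1}{2|k_r|}\left[ \frac{1}{k_o - |k_r|} \;-\; \frac{1}{k_o + |k_r|} \right],
\qquad
\frac{1}{\kappa \mp j\varepsilon}
  \;=\; \mathrm{P}\!\left(\frac{1}{\kappa}\right) \,\pm\, j\pi\,\delta(\kappa) .
```

The second identity (Sokhotski-Plemelj) is the mechanism by which a real principal-value pole and an imaginary delta singularity appear together in the spectrum of a one-sided structure.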
The double-sided light cone is in its entirety a generalized function; but once truncated to be single-sided, the causal light cone has a spread of spectral energy off the Dirac hypercone manifold, as expressed by 1/|κ| for |κ| > 0. We shall refer to this as the "Hilbert spread". The Hilbert spread exists both inside and outside of the Ewald spherical singularities of the Dirac hypercone.

C. Dirac's Analysis Details 1) Dirac Hypercone in Time-Space:
Covariant analysis of time-space is based on the difference-of-squares expression c²t² − x² − y² − z² = 0, which defines the time-space requirement of Lorentz invariance. Using the notation of Dirac, let time scaled by the speed of light, ct, be expressed by x_o, and let the dimensions of conventional 3-space be indicated by x = (x_1, x_2, x_3). Covariant 4-vector time-space is denoted by χ_μ = (x_o, −x) = (x_o, −x_1, −x_2, −x_3) and the contravariant 4-vector by χ^μ = (x_o, +x) = (x_o, +x_1, +x_2, +x_3). The time-space 4-vector scalar product expresses a difference-of-squares, χ_μ χ^μ = x_o² − |x|². This 4-scalar is sometimes written χ = χ_μ χ^μ. Consider a 4-vector time-space singularity described as a functional composition of the difference-of-squares polynomial χ_μ χ^μ = x_o² − |x|² = 0 in a spherical singularity, δ̆(χ_μ χ^μ). The second-order difference-of-squares invariant x_o² − |x|² = 0 can be decomposed into linear factors that specify the two roots of (x_o − |x|)(x_o + |x|) = 0. By the property of generalized functions considered as functional compositions, an important decomposition is obtained: we call δ(x_o − |x|) a right Huygens wavelet sequence as a function of x_o (i.e., one half of the Dirac hypercone) and δ(x_o + |x|) a left Huygens wavelet sequence (i.e., the other half of the hypercone). A double-sided (right and left) set of expanding Huygens wavelet sequences is thereby realized in (11) by the difference-of-squares spherical singularity δ̆(χ_μ χ^μ). Dirac found utility in establishing both even and odd (as a function of time x_o) versions of the time-space difference-of-squares singularity. Dirac noted that the definition (12) gives meaning to the function δ̆(χ) if applied to any covariant 4-vector [1]. Example 4-vectors motivated by physics include time-space, frequency-wavenumber, energy-momentum, and electromagnetic scalar-vector potential functions. Note that for the left (−) singularity, t < t_n is permissible; for the right (+) singularity, t_n < t is permissible. Such offsets lead to the locator sinusoid banding of the Ewald sphere exploited by Solopulse.
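The decomposition invoked above is the standard composition rule for the Dirac delta applied to the difference-of-squares (our rendering of the elided display equation):

```latex
\delta\!\left(\chi_\mu \chi^\mu\right)
 \;=\; \delta\!\left(x_o^2 - |x|^2\right)
 \;=\; \frac{1}{2|x|}\Bigl[\, \delta\!\left(x_o - |x|\right) \;+\; \delta\!\left(x_o + |x|\right) \Bigr],
\qquad |x| \neq 0 ,
```

following the rule δ(f(s)) = Σ_i δ(s − s_i)/|f′(s_i)| over the simple roots s_i of f; the two terms are the right and left Huygens wavelet sequences of the text.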
2) Dirac Hypercone in Frequency-Wavenumber: Covariant analysis within the frequency-wavenumber domain is based on the difference-of-squares expression encountered in the spatial Fourier transform of the frequency-space domain Helmholtz wave-motion equation, $(\omega/c)^2 - |\mathbf{k}_r|^2 = 0$, which defines the requirement of frequency-wavenumber domain covariance, $k_\omega^2 - k_x^2 - k_y^2 - k_z^2 = 0$, where $\mathbf{k}_r = (k_x, k_y, k_z)$. Using the notation of Dirac, let the temporal frequency $k_\omega$ be expressed by $k_o$, and let the dimensions of conventional wavenumber-domain 3-space be indicated by $(k_1, k_2, k_3)$. The covariant frequency-wavenumber 4-vector is denoted by $\kappa_\mu = (k_o, -\mathbf{k}_r) = (k_o, -k_1, -k_2, -k_3)$ and the contravariant 4-vector by $\kappa^\mu = (k_o, +\mathbf{k}_r) = (k_o, +k_1, +k_2, +k_3)$. The frequency-wavenumber 4-vector scalar product is $\kappa_\mu \kappa^\mu = k_o^2 - k_1^2 - k_2^2 - k_3^2 = k_o^2 - |\mathbf{k}_r|^2$. This 4-scalar is sometimes more compactly written $\kappa = \kappa_\mu \kappa^\mu$. The frequency-wavenumber difference-of-squares invariant $k_o^2 - |\mathbf{k}_r|^2 = 0$ can be decomposed into linear factors that specify the two roots of $\kappa_\mu \kappa^\mu = (k_o - |\mathbf{k}_r|)(k_o + |\mathbf{k}_r|) = 0$. By the property of generalized functions considered as functional compositions,

$$\delta(\kappa_\mu \kappa^\mu) = \frac{1}{2|\mathbf{k}_r|}\left[\delta(k_o - |\mathbf{k}_r|) + \delta(k_o + |\mathbf{k}_r|)\right] = {}^{+}\delta(\kappa) + {}^{-}\delta(\kappa).$$

Similar to the time-space difference-of-squares singularity, even and odd versions of the frequency-wavenumber hypercone $\delta(\kappa_\mu \kappa^\mu)$ can be defined. The frequency-wavenumber structure of $\delta(\kappa)$ can be characterized as a 4-vector, double-sided (noncausal) HF-spectrum of EM propagation in free space. Although developed many years ago, Dirac's approach never became the standard treatment of the spatial Fourier transform of the Helmholtz equation and remains little used.
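The support of this free-space spectrum can be visualized numerically: for a monochromatic field, spatial-spectral energy concentrates on the sphere $|\mathbf{k}_r| = k_o$ (a ring in a 2-D cut, i.e., the Ewald sphere referred to above). The sketch below (assuming NumPy; the grid size, sampling interval, and wave directions are illustrative choices, not values from the paper) synthesizes a sum of plane waves sharing a single wavenumber magnitude and checks that the spatial FFT peak lies on that ring.

```python
import numpy as np

# 2-D spatial grid (illustrative parameters).
N, d = 256, 0.1                       # samples per axis, sampling interval
x = np.arange(N) * d
X, Y = np.meshgrid(x, x, indexing="ij")

# Monochromatic free-space field: plane waves in several directions,
# all sharing the single wavenumber magnitude |k_r| = k0 = omega/c.
k0 = 10.0                             # must satisfy k0 < pi/d (spatial Nyquist)
angles = np.array([0.3, 1.1, 2.0, 2.9, 4.2, 5.5])
u = sum(np.exp(1j * k0 * (np.cos(a) * X + np.sin(a) * Y)) for a in angles)

# Spatial spectrum; each FFT bin has radial wavenumber |k| = hypot(kx, ky).
U = np.fft.fft2(u)
k = 2.0 * np.pi * np.fft.fftfreq(N, d)
KX, KY = np.meshgrid(k, k, indexing="ij")
k_rad = np.hypot(KX, KY)

# The spectral peak should sit on the Ewald ring |k| = k0,
# to within roughly one wavenumber bin.
dk = 2.0 * np.pi / (N * d)            # wavenumber bin spacing
peak = np.unravel_index(np.argmax(np.abs(U)), U.shape)
print(abs(k_rad[peak] - k0), dk)
```

Off-grid plane waves leak into neighboring bins, so the peak lands within about one bin of the true ring; finer grids shrink $dk$ and sharpen the shell.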
Our recent recognition of the relevance of Dirac's model of the HF-spectrum to the structure of the Solopulse spectrum filled a lingering gap in our prior theoretical analyses, which until then relied on an ansatz (i.e., the fundamental angle isomorphism of SAR) as explained in [2].

VI. STATUS AND PLANS
Single-pulse signal processing methods for short-range imaging and long-range, high-density, receive beamforming with digital sensor arrays have been developed and demonstrated. Evolving view-angle diversity was shown to achieve progressive resolution with aperture synthesis, where coherent fusion is implemented with pixel-domain additions. Future research plans include more extensive modeling and simulation, prototyping, validation, and further demonstration of these and other use cases. Additional research is required to better understand position-estimation error sensitivities in multiple-pulse use cases, and to demonstrate that microwave video signal processing based on Solopulse freeze-frames is feasible in real time, with and without aperture synthesis. Research is planned to explore use cases related to sensing for autonomous vehicles, as well as other sensor modalities such as ultrasound and sonar.