Synthetic Aperture Scatter Imaging

Diffraction limits the minimum resolvable feature on remotely observed targets to <inline-formula><tex-math notation="LaTeX">$\lambda R_{c}/A_{c}$</tex-math></inline-formula>, where <inline-formula><tex-math notation="LaTeX">$\lambda$</tex-math></inline-formula> is the operating wavelength, <inline-formula><tex-math notation="LaTeX">$R_{c}$</tex-math></inline-formula> is the range to the target and <inline-formula><tex-math notation="LaTeX">$A_{c}$</tex-math></inline-formula> is the diameter of the observing aperture. Resolution is often further reduced by scatter or turbulence. Here we show that analysis of scattered coherent illumination can be used to achieve resolution proportional to <inline-formula><tex-math notation="LaTeX">$\lambda R_{s}/A_{s}$</tex-math></inline-formula>, where <inline-formula><tex-math notation="LaTeX">$R_{s}$</tex-math></inline-formula> is the range between the scatterer and the target and <inline-formula><tex-math notation="LaTeX">$A_{s}$</tex-math></inline-formula> is the diameter of the observed scatter. Theoretical analysis suggests that this approach can yield resolution up to 1000× better than the diffraction limit. We present laboratory results demonstrating <inline-formula><tex-math notation="LaTeX">$>30\times$</tex-math></inline-formula> improvement over direct observation. In field experiments, we use a 23.5 cm aperture telescope at 100 m to resolve 27.78 <inline-formula><tex-math notation="LaTeX">$\mu$</tex-math></inline-formula>m features, improving on diffraction limited resolution by <inline-formula><tex-math notation="LaTeX">$>10\times$</tex-math></inline-formula>. The combination of lab and field results demonstrates the potential of scatter analysis to achieve multiple order of magnitude improvements in resolution in applications spanning microscopy and remote sensing.

Qian Huang, Zhipeng Dong, Gregory Nero, Yuzuru Takashima, Timothy J. Schulz, Member, IEEE, and David J. Brady, Fellow, IEEE
Index Terms-Coherent imaging, non-line-of-sight imaging, phase retrieval, super resolution.

I. INTRODUCTION
The resolution of wave imaging systems is limited by the collecting aperture. Lensing effects by materials in the space between the object and imager may be used to increase effective aperture size. This has most famously been used in astronomical gravitational lenses, but related effects are observed in the atmosphere [1], [2]. Coherent focusing through the atmosphere, however, requires "lucky" conditions [3]. Here, we propose and demonstrate order-of-magnitude improvements on diffraction limited imaging using incoherent secondary scatter, which requires only that such scatter exist. Secondary scatter has previously been used to achieve super-resolution in radar imaging [4], but to our knowledge has not previously been demonstrated at optical frequencies. Previous optical studies have gone beyond diffraction-limited resolution by characterizing the transfer matrix of a disordered scattering medium [5], [6] but have not considered backprojection from a diffuse scatter surface. The major difference between the work reported here and radar imaging is that the amplitude and phase of the diffusely scattered optical field is not directly observable. Similarly, the difference between this work and multiple scatter imaging is that here there is no need to characterize the scattering surface. Rather, we apply phase retrieval to recover the field incident on the scatter and then use the scattering plane as a remote synthetic aperture.

Fig. 1. System geometry for imaging from diffuse scatter. We can record multiple observations of scatter on multiple scatterers, or by displacing a single scatterer (e.g., moving the solid scatterer in the diagram to the position marked in translucency).
As illustrated in Fig. 1, we seek to image an object at a range R_c relative to the observing camera. If a diffraction-limited camera directly observes the object, the minimum object feature resolvable is ≈ λR_c/A_c, where A_c is the camera aperture and R_c is the object range [7]. Alternatively, the camera may choose to observe light scattered first by the object and then scattered again by secondary objects. Observable scatter may arise from air or water borne particles or from secondary reflections off of intermediate surfaces. If the secondary scatter is specular, as with a mirror or smooth surface, the effect is just to change the imaging path. More commonly, the scatter is diffuse, in which case a random phase is imposed on the scattered field. While this means that phase sensitive or holographic detection of the scattered field is unlikely to be useful, incoherent scattering is essential to ensuring that the scattered field is observable at the camera aperture (e.g., the scatterer radiates uniformly in all directions). By measuring only the radiance of the scattered signal one can apply phase retrieval algorithms to recover the phase of the field prior to the scattering event.
Secondary scatter has previously been used in optical imaging in the context of non-line-of-sight imagers [8], [9], [10]. Such systems use pulsed or multispectral illumination to image obscured objects by range gating or synthetic wavelength holography [11]. Resolution is limited by pulse or spectral width and is several orders of magnitude worse than the direct view diffraction limit. While one can imagine systems that combine non-line-of-sight imaging and scatter, whether or not the observer has a direct line of sight to the object is not material to the present study. The point here is that phase retrieval on the scattered field can create a synthetic aperture with a greater angular extent than the direct observation aperture, which in turn enables super resolution.
We represent the optical field scattered by the object as ψ(x). The camera images |ψ|², i.e., the radiant intensity on the surface of the scatterer. Note that while the scatter plane is characterized by its own amplitude and phase reflectance, we assume here that the amplitude is uniform and that the phase is not material because only the irradiance is measured. To backproject the scatter data into an image of the object we need to recover ψ(x). Phase retrieval algorithms achieve this objective by iteratively enforcing prior constraints on the field [12], [13], [14]. Observation of the field under multiple transformation states, for example using illumination diversity as in Fourier ptychography, or by coding illumination or detection patterns, improves conditioning [15]. Here, in Fresnel-zone experiments we capture multiple scatter planes, and in Fraunhofer-zone experiments we apply a support constraint on the reconstructed image, to enable phase retrieval.
Assuming that we are able to recover ψ on the surface of the scatterer, the object is imaged by computational backpropagation. In practice, backpropagation is implemented as part of the phase recovery process by iterating between scatter space and object space. In this process, the minimum resolvable object feature is λR_s/A_s, where R_s is the range between the object and the scattering plane and A_s is the cross-section (diameter) of the scatter pattern. Comparing with the direct view minimum resolvable object feature, we find the net resolution has improved by the ratio

α = (λR_c/A_c)/(λR_s/A_s) = R_c A_s/(R_s A_c). (1)

α may exceed one in diverse applications. For example, in microscopy one may image scatter at a range comparable to the camera range, but the size of the scatter may exceed the camera aperture, in which case α ≈ A_s/A_c. Alternatively, in remote sensing applications one is likely to find that R_c/R_s ≫ 1. Each of these situations is demonstrated in experiments presented below.
The angular resolution of the camera determines the field of view on the object. The maximum reconstructed field of view (FOV) is equal to the ratio of the wavelength to the sampling period on the scatter. Assuming that the camera is diffraction limited, this yields FOV = A_c/R_sc, where R_sc is the range between the camera and the scatter and A_c is the camera aperture. Various illumination, motion, and multiplane sampling strategies may be imagined to increase this field of view. For example, camera motion or a camera array could be used to synthesize a larger aperture as discussed in [16]. Here, however, we limit our focus to a simple demonstration using a single imaging aperture.

II. SYSTEM DESIGN
To demonstrate near-field imaging with R_s/R_c ≈ 1, we used a laboratory system with planar objects and white paper (HP Office20 8.5 × 11 printer paper) as a scatterer, as sketched in Fig. 2. The illumination was a λ = 532 nm collimated continuous-wave laser beam. This plane wave illuminated a planar target mounted on a translation stage. When the stage was at the home position, the distance between the object and the scatterer was R_s = 2654 mm, and the distance between the object and the camera was R_c = 2518 mm. These distances were measured by a laser range finder (RockSeed S2-50). We used a FLIR BlackFly camera with a 12 mm F/1.6 Arducam lens and a 540 × 720 CMOS sensor. The aperture was A_c = 7.5 mm and the pixel pitch Δ_pp = 6.9 μm. The camera was R_sc = 139 mm in front of the screen such that each camera pixel corresponded to 80 μm on the screen. We cropped the recorded image to a 460 × 460 pixel region containing the scatter pattern. The propagation from screen to camera can be regarded as incoherent, therefore we used simple geometric analysis to project the camera image onto screen coordinates. To aid in phase retrieval, we captured two scatter planes when the target was at 0 mm (home position) and 50 mm on the stage, corresponding to R_s = 2654 mm and R_s = 2704 mm.

Fig. 2. System layout. From left to right are: a collimated illumination system that produced a coherent planar wave, a transmissive planar target on the translation stage, a camera and a paper screen. Distances were measured when the target was at the home position (0 mm) on the stage.

Fig. 3. System geometry for remote sensing. A collimated beam illuminates a transmitting planar object to produce a scatter, and the scatter falls on a white screen. A telescope is used to capture the pattern on the screen and image it onto a sensor.
As an example, Fig. 4(a) shows a 1951 USAF resolution chart, the dimensions of which were 5 mm × 4 mm. Fig. 4(b) and (c) shows the scatter images at the two object ranges. Differences between the two scatter images are illustrated in Fig. 4(d). The cross section of the observed scatter field was A_s = 37 mm. Based on the scatter geometry, the minimum resolvable feature size for phase retrieval was λR_s/A_s = 38 μm. Since the camera is also limited by geometric aberration and pixel sampling aside from diffraction, the minimum resolvable object feature that each camera pixel corresponded to was 1.45 mm. The expected resolution improvement relative to the direct camera view was, therefore, 38×.
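The resolution figures above follow directly from the stated geometry. A quick numerical check (a sketch using the values quoted in the text; variable names are ours):

```python
# Sanity check of the lab-geometry resolution numbers quoted above.
wavelength = 532e-9        # illumination wavelength (m)
R_s = 2.654                # object-to-scatterer range at home position (m)
A_s = 37e-3                # cross section of the observed scatter (m)
pixel_on_target = 1.45e-3  # object feature per direct-view camera pixel (m)

# Minimum resolvable feature from scatter-based phase retrieval.
delta = wavelength * R_s / A_s
print(f"phase-retrieval feature size: {delta * 1e6:.1f} um")  # ~38 um

# Expected improvement over the pixel-limited direct camera view.
alpha = pixel_on_target / delta
print(f"expected resolution improvement: {alpha:.0f}x")       # ~38x
```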

III. METHODS
We designed a multiplane error-reduction phase retrieval (PR) algorithm based on [17] to image the object given a set of scatter images |ψ_1|² to |ψ_{n_p}|² and associated distances z_1 to z_{n_p}; for the application demonstrated in this paper, we captured the scatters sequentially. The "scatter image" in this manuscript refers to the irradiance signal from the target on the scatterer. The amplitude and phase modulating properties of the screen are not important here; we simply image the irradiance pattern on the screen directly onto our detector. To model this process, consider the two-dimensional complex field immediately after the target surface to be represented as f(x, y) and its Fourier transform as F(u, v), where (x, y) is a spatial coordinate and (u, v) a spatial frequency. The propagated complex field ψ(x, y) on the screen surface at a distance z from the target can be modeled by scalar diffraction as

ψ(x, y) = ∬ F(u, v) exp(i2πz√(1/λ² − u² − v²)) exp(i2π(ux + vy)) du dv, (2)

where √(u² + v²) < 1/λ for propagating waves. Note that one can also calculate f from ψ and its Fourier transform by backpropagation (z < 0). Hence, estimating the target f is equivalent to estimating the phase of ψ. Phase retrieval can be achieved by many different sampling and processing strategies [15]; here we apply an error-reduction algorithm in combination with numerical diffraction between the object and multiple scatter planes.
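Equation (2) can be evaluated numerically with FFTs. The sketch below (NumPy; our own variable names, not the authors' released code) propagates a sampled field by a distance z, and with z < 0 it backpropagates:

```python
import numpy as np

def angular_spectrum(f, wavelength, z, dx):
    """Propagate field f by distance z using the angular spectrum of (2).

    f: square complex array sampled with pitch dx; z < 0 backpropagates.
    """
    N = f.shape[0]
    fx = np.fft.fftfreq(N, d=dx)              # spatial frequencies u, v
    U, V = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - U**2 - V**2
    prop = arg > 0                            # propagating-wave region
    H = np.zeros((N, N), dtype=complex)
    H[prop] = np.exp(2j * np.pi * z * np.sqrt(arg[prop]))
    return np.fft.ifft2(np.fft.fft2(f) * H)
```

Because a 10 μm sampling pitch keeps all sampled frequencies well inside the 1/λ propagating-wave circle at optical wavelengths, forward propagation followed by backpropagation over the same distance recovers the field.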
Assuming in (2) that f(x, y) is sampled on a Cartesian grid with period δ and is zero-padded to have N samples in the x and y dimensions, the diffracted field may be calculated by discrete Fourier transforms (DFT) with the angular spectrum transfer function [18]. The sampling periods for the DFT are Δx = Δy = δ, and Δu = Δv = 1/(Nδ). Let [m, n] be a discrete spatial coordinate and [p, q] be a discrete frequency coordinate, all running from −N/2 to N/2. The diffracted field is

ψ[m, n] = DFT⁻¹{ DFT{f}[p, q] exp(i2πz√(1/λ² − (pΔu)² − (qΔv)²)) }. (3)

Assuming that the field is most tightly focused at the object, the extent of the field expands on propagation. With a constant space-bandwidth product, this means that the transverse spatial frequencies of the field decrease on propagation (e.g., the field blurs). When the field propagates forward (z > 0), the frequency decreases at a rate proportional to 1/z. Once the Nyquist rate of the field drops significantly below the sampling rate, the field can be downsampled without incurring aliasing. Similarly, the field during backward propagation (z < 0) can be upsampled when appropriate. In light of this, we use a multistage angular spectrum method (MASM) with bicubic down/up-sampling of the field upon significant decreases/increases in the spatial bandwidth. Here a "stage" refers to a single resampling operation and "multi" indicates that resampling can happen more than once. The spatial frequency extent of an object with diameter X can be approximated as X/(λz) in the Fresnel zone. When the downsampling ratio is 2 in the forward propagation, for example, downsampling can be triggered when X/(λz) falls to half of the current sampling bandwidth.
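One MASM stage amounts to: propagate, then resample once the field's bandwidth has dropped sufficiently. The toy below shows the bookkeeping for a single downsampling stage; it is our own sketch, NumPy-only, with 2 × 2 averaging standing in for the bicubic interpolation the paper uses, and the trigger condition written out under the assumptions stated in the text:

```python
import numpy as np

def downsample2(field):
    """Halve the sampling rate of a complex field by 2x2 averaging.

    Stand-in for the bicubic resampling used in MASM; safe once the
    field's Nyquist rate has dropped to half the sampling rate.
    """
    N = field.shape[0]
    assert N % 2 == 0
    return field.reshape(N // 2, 2, N // 2, 2).mean(axis=(1, 3))

def masm_trigger(X, wavelength, z, dx):
    """True when the object bandwidth X/(lambda z) fits within half the
    grid's Nyquist band 1/(2 dx), so a 2x downsampling is alias-free."""
    return X / (wavelength * z) < 0.5 * (1.0 / (2.0 * dx))
```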

Require:
  n_p: number of planes
  a, b: target size in pixels along the m and n axes
  n_i: number of iterations

Main Code: MASM and its inverse lift the constraint that f and ψ share the same sampling rate and enable propagation within our computational budget. In our experiments, δ was 10 μm and β was 0.25. We applied MASM analysis in the short-range phase retrieval algorithm shown in Algorithm 1, where |·| and φ(·) take the amplitude and phase of a complex matrix elementwise, 1 is a column vector ∈ R^{βN} of all 1 entries, and rect(x) is the rectangle function

rect(x) = 1 for |x| ≤ 1/2, and 0 otherwise. (4)

The algorithm starts with a random initialization of f and improves the estimate recursively. Here we assigned a random phase ϕ to one of the amplitude measurements, |ψ_1| for example, and backpropagated it to derive our first estimate of f. For one randomly selected plane c_k, the estimate of f was made to conform to a support constraint based on knowledge of the object's spatial extent, inspired by [19]. Support was limited to a rectangular region consisting of a pixels in the m direction and b pixels in the n direction. The restrained field was propagated a distance z_{c_k} forward. The propagated field ψ_{c_k} kept its phase but replaced its amplitude with the known measurement. Then the new field was propagated inversely, yielding a new estimate of f. Once all the planes were visited, one iteration was finished and the estimate of f became the starting point of the next, until the maximum number of iterations was reached. The above process can be regarded as an improved fusion of the parallel and successive algorithms in [20]. Empirically it reduces noise and converges quickly.
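A minimal NumPy rendition of the loop described above (our own sketch, not the authors' released implementation; a single sampling rate is used throughout, so plain angular spectrum propagation stands in for MASM, and the support and nonnegativity constraints are applied together):

```python
import numpy as np

def prop(f, lam, z, dx):
    """Angular spectrum propagation; assumes all sampled frequencies propagate."""
    fx = np.fft.fftfreq(f.shape[0], d=dx)
    U, V = np.meshgrid(fx, fx, indexing="ij")
    H = np.exp(2j * np.pi * z * np.sqrt(1 / lam**2 - U**2 - V**2))
    return np.fft.ifft2(np.fft.fft2(f) * H)

def multiplane_er(meas, zs, lam, dx, support, n_iter=100, seed=0):
    """Error-reduction phase retrieval from scatter irradiances meas[k]."""
    rng = np.random.default_rng(seed)
    amp0 = np.sqrt(meas[0])
    # Random-phase initialization, backpropagated to the object plane.
    f = prop(amp0 * np.exp(2j * np.pi * rng.random(amp0.shape)),
             lam, -zs[0], dx)
    for _ in range(n_iter):
        for k in rng.permutation(len(zs)):   # visit the planes in random order
            f = np.where(support, np.maximum(f.real, 0), 0)  # support + nonneg
            psi = prop(f, lam, zs[k], dx)
            psi = np.sqrt(meas[k]) * np.exp(1j * np.angle(psi))  # keep phase
            f = prop(psi, lam, -zs[k], dx)   # backpropagate to the object
    return np.where(support, np.maximum(f.real, 0), 0)
```

Each inner step is one projection pair: enforce the object-domain priors, propagate, replace the amplitude with the measurement, and backpropagate.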
Due to the ambiguity of the phase retrieval algorithm [21], the true field f, f with a constant phase shift, and f* reflected about the origin are all acceptable solutions. We removed this ambiguity by requiring f to be real and nonnegative, based on the knowledge that the objects under test are approximately amplitude masks. Hence we enforced this phase constraint in the experiments, on top of the support constraint, by replacing each estimate of f with its nonnegative real part. However, phase modulation is inevitably introduced by real objects with thickness, causing artifacts in PR reconstruction. To relax the phase constraint, we reconsider phase retrieval as an optimization problem

min_f Σ_k ‖ |ψ_k| − |MASM(f, z_k)| ‖_p, (5)

where ‖·‖_p denotes the l_p-norm. In contrast to the PR algorithm that performs alternating projections, the alternative approach plugs in a regularizer and imposes a soft penalty on f when it deviates from the priors, following the plug-and-play (PnP) framework. Denoting the regularizer as R(·) and the weight parameter as μ, (5) is reformulated as follows:

min_f Σ_k ‖ |ψ_k| − |MASM(f, z_k)| ‖_p + μR(f). (6)

Inspired by RED [22], here we combine a denoiser with finite support as our priors. R(·) becomes

R(f) = (1/2) fᵀ(f − D(f)), (7)

where D(·) is a denoiser. The constraint pushes the amplitude of f to be compact and noise-free. In our PnP phase retrieval algorithm, we applied a benchmark neural denoising algorithm, DnCNN [23], following prDeep [24]. We used the FASTA [25] solver. In practice, we initialized FASTA with the result from our PR algorithm and iterated FASTA n_F times while keeping the other default parameters.
When the propagation distance is far greater than the object size, we may pack (2) into the concise form

ψ(x, y) = (e^{i2πz/λ}/(iλz)) e^{iπ(x²+y²)/(λz)} ∬ f(x′, y′) e^{−i2π(xx′+yy′)/(λz)} dx′ dy′, (8)

which is known as the Fraunhofer approximation. The term inside the integral is the Fourier transform of the signal f(x, y), evaluated at frequencies (x/(λz), y/(λz)). This equation reveals that the diffracted field is the Fourier transform of the original signal up to a scale. To be considered far field [21], z should satisfy

z > 2X²/λ, (9)

where X is the diameter of the object. If the diameter X = 200 μm and the wavelength λ = 0.66 μm, for example, z should be larger than 0.12 m. Following the naming convention of the short-range method, the discrete version of the propagation can be written as

ψ[p, q] ∝ DFT{f}[p, q]. (10)

We use FP to represent the above forward Fraunhofer propagation. FP naturally integrates an adaptive sampling rate, which considerably reduces the computation cost. The long-range phase retrieval algorithm follows Algorithm 1, except that MASM and MASM⁻¹ are replaced by FP and FP⁻¹, respectively. The multiplane setting may no longer be necessary, as values of z within the Fraunhofer regime contribute only a scaling of the pattern. Thus, the algorithm can be summarized as Algorithm 2. The support and phase constraints were also enforced to resolve ambiguity.
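The far-field threshold and the discrete Fraunhofer propagation FP reduce to a few lines. In this sketch (NumPy; the function names are ours) FP is a centered 2-D DFT, and the 0.12 m figure quoted in the text can be checked directly:

```python
import numpy as np

def far_field_distance(X, wavelength):
    """Minimum z for the Fraunhofer approximation, z > 2 X^2 / lambda."""
    return 2 * X**2 / wavelength

def FP(f):
    """Forward Fraunhofer propagation: a centered 2-D DFT (up to scale)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f)))

def FP_inv(psi):
    """Inverse Fraunhofer propagation."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(psi)))
```

For X = 200 μm and λ = 0.66 μm, `far_field_distance` returns ≈ 0.12 m, matching the example in the text.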

IV. RESULTS
Results from both the laboratory and field experiments described in the System Design section are presented here. The error-reduction-based phase retrieval (PR) method described in the Methods section is used to reconstruct the targets of interest from the captured scatter images.
The aforementioned resolution chart was used as the target to analyze the resolution of the near-field imaging system. We set the number of planes n_p = 2 and the number of iterations n_i = 500. The support constraints a and b for this and the following experiments were set to focus tightly on the object. As illustrated in Fig. 5, the reconstructed image resolved up to the 4th element of group 3 (11.31 lp/mm). This is equivalent to a 44.2 μm minimum resolvable object feature. Note that the application of the support constraint is crucial to the reconstruction quality: we observed that the reconstruction without the support constraint was unrecognizable. The same conclusion applies to all the following experiments. As the PR algorithm makes heavy use of MASM and its inverse, we parallelized convolutions and Fourier transforms on our GPU using the PyTorch library. Compared with the runtime on a 2.2 GHz AMD EPYC 7552 CPU, MASM and MASM⁻¹ at the experimental values of R_s run 6× faster on an NVIDIA Tesla V100S GPU. In this lab experiment, the total running time was about one hour on the V100S.
To analyze system performance for diverse objects, we printed additional targets on plastic transparency film. Three targets with the text "A", "OSC", and "UoA" were 3 to 5 mm in height and 4 to 7 mm in width, printed clear against a black background. Each target was cropped and mounted on a microscope slide. Fig. 6 shows one of the targets and its scatter image.
We reconstructed each target from two of its associated scatter images. The estimate of each target converged in 200 iterations. Due to inhomogeneity in the plastic films, the outputs from the PR algorithm were not as sharp as for the chrome-on-glass resolution target. We refined the phase retrieval results using our PnP algorithm. We chose the l_2-norm as the data fidelity term and set the weight of the regularization term to 0.1. The number of iterations n_F of the PnP solver FASTA [25] was 50.
Fig. 6. Object "A" and its scatter. (a) Target "A". (b) Scatter image when the target was at the home position (0 mm).

Fig. 7. Lab results. Comparison between our PR reconstructions (rows 2, 3) and direct views (row 4) of 4 targets (1951 USAF resolution test chart, "A", "OSC", and "UoA" in row 1, sequentially). Images were cropped to highlight the regions of interest and rescaled for better alignment in display. The actual size of each target is labeled, which also applies to its reconstruction and direct view. The object pixel pitch of the reconstructions is 10 µm, while that of the direct views is 1.42 mm. As illustrated in the direct view data, each object occupied only a few pixels when observed by the camera.
With the assistance of the PnP algorithm, however, artifacts were drastically reduced. Comparisons in Fig. 7 between direct views and reconstruction results from our phase retrieval algorithms demonstrate significant improvements in resolution. From calibration we measured that each pixel of the direct view corresponded to 1.42 mm on the target at the home position, indicating that the net resolution in practice was improved by the ratio α = 1.42 mm / 44.2 μm ≈ 32.

For longer-range demonstrations, field experiments were performed on a cool morning (about 12 degrees Celsius) in predawn hours to minimize turbulence. The distance from the target to the screen was large enough to be in the Fraunhofer diffraction region for the object size. We recorded only one scatter image, since the only difference between scatters from different target locations on a translation stage within the Fraunhofer diffraction region would be a scaling factor of the diffraction pattern; no significant phase information is to be gained by taking more than one image.
The transmissive target was a 1963A resolution test chart. We masked the test chart to illuminate only regions of interest. Fig. 8 summarizes the field results. The object images were taken with a microscope, the direct view images were taken by back-illuminating the targets and directly observing them from the range R_c = 98 m, the scatter images were taken of the scattered light at a range R_sc = 100 m, and the reconstruction results were generated from their associated scatter images. As detailed in the Methods section, we again applied error-reduction phase retrieval and backpropagation to reconstruct the image, in this case using a support constraint on the object rather than multiple planes. n_i was set to 1000. It took less than 8 seconds for the reconstructions to converge on an NVIDIA Tesla V100S. The H row contains only horizontal bars, each 27.78 μm in height and separated by 27.78 μm in the vertical direction, and the HV row contains a group of horizontal and vertical bars with line spacing of 39.37 μm. This demonstrates that at a range of 100 m we can resolve features of a target that are 27.78 μm in size using our technique, a 10× improvement relative to the diffraction limit (275 μm) and ≈ 36× improvement relative to the actual direct view (1 mm). The achieved angular resolution was 0.28 microradians.
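These field figures are mutually consistent and easy to verify. The check below assumes the 0.66 μm wavelength quoted in the Fraunhofer example of the Methods section (the field wavelength is not stated explicitly, so this is an inference from the 275 μm diffraction limit):

```python
# Consistency check of the field-experiment resolution figures.
wavelength = 0.66e-6  # assumed illumination wavelength (m), inferred
R_c = 98.0            # direct-view range (m)
A_c = 0.235           # telescope aperture diameter (m)
feature = 27.78e-6    # smallest resolved feature (m)

diffraction_limit = wavelength * R_c / A_c
print(f"diffraction limit: {diffraction_limit * 1e6:.0f} um")  # ~275 um
print(f"improvement: {diffraction_limit / feature:.1f}x")      # ~10x
print(f"angular resolution: {feature / R_c * 1e6:.2f} urad")   # ~0.28 urad
```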
Reconstruction quality depends on the input to the phase retrieval algorithm. We have the option to feed the algorithm the captured irradiance image directly, or the square root of this captured image. Ideally, the input to the algorithm is the diffracted field amplitude, which is best represented by the square root of the irradiance. However, we observed that the reconstruction result is better if we use the irradiance as the input to the algorithm, as shown in Fig. 9. We believe this is because taking the square root of the irradiance amplifies noise. Environmental conditions like ambient light may contribute to the signal in an undesired way. A solution to this problem would be to use a narrower band filter to reject as much ambient light as possible.

V. DISCUSSION
As we have seen, phase retrieval on diffuse scatter can be used to improve imager resolution by more than an order of magnitude over direct view performance. Of course, the critical question is: how large can the improvement factor be? To answer this question, we consider the scatter signal generated by object features at the resolution limit. An object feature of cross-section δ reflects radiant power Pδ², where P is the power density illuminating the object. The fraction of this power collected by the camera is σA_c²/(4πR_sc²), where σ is the fraction of the radiant power that is scattered and R_sc is the range between the scattering surface and the camera. As discussed above, the feature size is related to A_s by δ ≈ λR_s/A_s. Assuming that a detectable feature must deliver N_p photons to the camera, one finds

A_s ≤ (λR_s A_c/(2R_sc)) √(σTP/(πN_p)), (11)

where T is the exposure time.
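The bound (11) translates directly into code. The toy below (our own sketch; the parameter values are illustrative, not the paper's) evaluates the limit on A_s and confirms the square-root dependence on the delivered flux σTP:

```python
import math

def max_scatter_aperture(lam, R_s, A_c, R_sc, sigma, T, P, N_p):
    """Photon-budget bound A_s <= (lam R_s A_c / 2 R_sc) sqrt(sigma T P / pi N_p)."""
    return (lam * R_s * A_c / (2 * R_sc)) * math.sqrt(
        sigma * T * P / (math.pi * N_p))

# Illustrative values (hypothetical, not from the paper's experiments):
# 532 nm light, R_s = 10 m, A_c = 10 cm, R_sc = 100 m, sigma = 0.1,
# T = 0.1 s, P in photons per m^2 per s, N_p = 1e5 detected photons.
a1 = max_scatter_aperture(532e-9, 10.0, 0.1, 100.0, 0.1, 0.1, 1e18, 1e5)
a2 = max_scatter_aperture(532e-9, 10.0, 0.1, 100.0, 0.1, 0.1, 4e18, 1e5)
# Quadrupling the illumination P doubles the usable scatter aperture.
```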
To get an idea of the limits of this relationship, one might assume that the illuminating power density is limited by diffraction from the observing aperture, i.e., P = LA_c²/(λ²R_c²), where L is the power of the illuminating source. In this case A_s ≤ (R_s A_c²/(2R_sc R_c)) √(σTL/(πN_p)). Substitution in (1) yields the resolution improvement

α ≤ (A_c/(2R_sc)) √(σTL/(πN_p)), (12)

which separates into two interesting factors: (1) the field of view of the camera on the scatter, A_c/R_sc, and (2) the inverse root of the quantum efficiency for collection of illuminating photons. Since one generally expects that A_c/R_sc ≪ 1, α > 1 requires σTL/(πN_p) > (2R_sc/A_c)².

We explored the relationship between the reconstruction resolution and the exposure in simulation. Fig. 10 shows simulations with the addition of Poisson noise for various exposure levels. The exposure level is listed in photons per pixel in the diffracted field, but the intensity is not uniformly distributed in this field, so the expected flux at important features will greatly exceed 1 photon. At high flux levels α may be limited by the observed aperture, but as illustrated in Fig. 10(a) and (b), as flux drops the effective aperture will drop below the value defined by the camera field of regard. Equation (11) suggests that resolution should fall in proportion to the square root of the flux. Fig. 10(a) and (c) are roughly consistent with this prediction, with a reduction from ≈ 10 line pairs per millimeter (lp/mm) in (c) to ≈ 4 lp/mm in (a) for a 10× reduction in flux.
The fact that the illumination flux, TL, must greatly exceed the minimum detectable flux, N_p, in order for scatter imaging to achieve an advantage is not surprising. The factor σA_c²/(4πR_sc²) reflects the loss in quantum efficiency of scatter imaging relative to a coherent aperture at the same location; this factor might easily be < 10⁻⁷. However, using coherent illumination it is not unreasonable to illuminate targets with flux that will overcome this loss. Since TL may exceed 10²⁰ photons and N_p might be as little as 10⁵, it is not unreasonable to imagine √(σTL/(πN_p)) ≈ 10⁷, which leads to diverse situations with α ≫ 1. For example, if the laser power is 1 kW and the observation time is 0.1 seconds, TL ≈ 10²¹. With this flux, illuminating a target at a range of 10 km with an observing aperture of 10 cm and a scatter efficiency of σ = 0.1, α might exceed 1000.
While the experimental results presented here rely on phase retrieval from planar scattering, one expects that similar analysis of scatter generated over a volume will also be effective. While such analysis involves more sophisticated tomographic field reconstruction, it does not impact the spatial resolution analysis. This suggests that phase retrieval on coherent scatter will be useful in any imaging system where such scatter is observed.

Fig. 4. The 1951 resolution chart used as the target for the laboratory experiment is shown in (a). A camera captures the scattered light on the screen from the target at two different target positions on the translation stage: (b) 0 mm position and (c) 50 mm position. Both scatter images shown in (b) and (c) have been cropped, and we display the square root of the captured image. The normalized absolute difference between (b) and (c) is shown in (d), where whiter regions indicate more deviation.

Fig. 5. Reconstruction of the 1951 USAF resolution test chart from the Fig. 4 data. Elements 1 to 4 of group 3 in the red blocks are magnified and displayed at right.

Fig. 8. Field experiment reconstruction result. Far-field experiment result with two different objects: horizontal (H) bars with 27.78 µm spacing and a group of horizontal and vertical (HV) bars with 39.37 µm spacing (row 1). Direct view images (row 2) were taken 98 m away from the objects, the scatters (row 3) were the raw data from the telescope imaging system, and the results (row 4) were reconstructed by the long-range phase retrieval algorithm [26].

Fig. 9. Field experiment reconstruction results with irradiance input. Reconstructions for (a) horizontal bars and (b) horizontal with vertical bars, with their irradiances as the input to the phase retrieval algorithm.


Qian Huang received the B.S. degree in electronic information science and technology from Nanjing University, Jiangsu, China, in 2018, and the Ph.D. degree in electrical and computer engineering from Duke University, Durham, NC, USA, in 2022. He is a Computer Vision and Deep Learning Engineer with NVIDIA, CA, USA. In 2021 and 2022, he was a Research Intern with the Wyant College of Optical Sciences, University of Arizona, Tucson, AZ, USA. His research interests include computational imaging, computer vision, and deep learning.

Zhipeng Dong received the B.Sc. degree in optical sciences engineering and applied mathematics from the University of Arizona (UA), Tucson, AZ, USA, in 2021, where he is currently working toward the Ph.D. degree in optical sciences with the Wyant College of Optical Sciences. His research interests include super-resolution, ptychography, and imaging processing.

Gregory Nero received the B.S. degree in imaging science from the Rochester Institute of Technology, Rochester, NY, USA, in 2020. They are currently working toward the Ph.D. degree in optical sciences with the James C. Wyant College of Optical Sciences, University of Arizona, Tucson, AZ, USA. Their research interests include designing SLM-based optical engineering solutions for display and communication systems as well as developing computational imaging techniques for super-resolution.

Yuzuru Takashima received the B.S. degree in physics from Kyoto University, Kyoto, Japan, in 1990, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA, in 2004 and 2007, respectively. Since 2011, he has been a Full Professor pioneering MEMS-based lidar and near-to-eye AR display systems with the James C. Wyant College of Optical Sciences, University of Arizona, Tucson, AZ, USA. Prior to joining the University of Arizona, he was a Research Staff with Stanford University working on high density holographic data storage systems. He was also a Research Specialist with the Toshiba Corporate Manufacturing Research Center, Japan, where he designed lens systems and developed ultraprecision manufacturing processes for optical components. He is a Fellow of SPIE and a Senior Member of OPTICA. He serves as a General Co-Chair of SPIE Industrial Optical Systems and Devices (iODS).