Single-pixel diffuser camera

We present a compact, diffuser-assisted, single-pixel computational camera. A rotating ground-glass diffuser is adopted, in preference to the commonly used digital micro-mirror device (DMD), to encode a two-dimensional (2D) image into single-pixel signals. After calibrating the pseudo-random pattern of the diffuser under incoherent illumination, we retrieve images at a sampling ratio of 8.8%. Furthermore, we demonstrate hyperspectral imaging with line-array detection by adding a diffraction grating. The implementation results in a cost-effective single-pixel camera for high-dimensional imaging, with potential for imaging in non-visible wavebands.


I. INTRODUCTION
By encoding depth or spectral information, imaging assisted by a diffuser or thin scatterer can retrieve a three-dimensional (3D) data cube rather than the conventional 2D image obtained from an optical sensor. The concept of a 'diffuser camera' ('DiffuserCam') [1] has gained a lot of interest due to its compact and cost-effective approach. Retrieval of a three-dimensional (3D), multi-view [2], multispectral [3] or hyperspectral [4] image via single-shot computational imaging has recently been demonstrated. However, such approaches are challenging for more esoteric wavelength bands, for instance x-ray or terahertz imaging. As the name suggests, single-pixel imaging [5-9] produces images without the need for a 2D detector, making use of structured detection or illumination of the object to computationally derive an image. As such, single-pixel approaches are of interest in offering alternatives to conventional imaging, both for applications in the visible and as a low-cost alternative in regimes such as x-ray [10,11], infrared [12] and terahertz [13] imaging, and even imaging atoms [14]. Additionally, single-pixel approaches help to inform high-performance imaging techniques, for example 3D depth, time-resolved or multispectral imaging, in which CCD-based systems would be complicated or expensive to implement.
Typically, a single-pixel camera uses a digital micromirror device (DMD), placed in the image plane, to modulate the image of an object with different 2D structured sampling patterns. A single-pixel detector then measures the corresponding total light intensity. By correlating the one-dimensional (1D) single-pixel signals with the modulation patterns, reconstruction algorithms such as compressed sensing can rebuild the 2D image. Alternatively, the DMD can modulate the illumination of the object, an approach commonly called computational ghost imaging (CGI) [6,15,16]. The DMD can be replaced by a liquid-crystal spatial light modulator [17], a rotating ground-glass diffuser [18] or LED arrays [19,20]. Image retrieval can then be achieved using the same algorithms as in the structured-detection scheme.
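As a concrete illustration of the structured-detection scheme described above, the following sketch (illustrative, not code from this work) simulates single-pixel measurements with an orthonormal Hadamard basis, the kind of high-efficiency pattern set a DMD would display, and recovers the image exactly by exploiting the basis orthogonality. All sizes and names here are ours, chosen for the toy example.

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order n (a power of two) via Sylvester's construction."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# 8x8 toy scene: a bright square on a dark background
side = 8
obj = np.zeros((side, side))
obj[2:6, 2:6] = 1.0
o = obj.ravel()

# Each row of H is one +/-1 modulation pattern; a DMD realizes it as two
# complementary binary masks whose bucket signals are subtracted.
H = sylvester_hadamard(side * side)
signals = H @ o                      # one bucket value per pattern

# Orthogonality (H H^T = N I) gives exact inversion at 100% sampling
recon = (H.T @ signals) / (side * side)
print(np.allclose(recon.reshape(side, side), obj))  # True
```

At sub-unity sampling ratios one would keep only a subset of the rows and solve the underdetermined system with a compressed-sensing solver instead of the direct inversion shown here.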
In cases where only passive imaging is needed, the structured-detection scheme offers a more compact and cheaper imaging system, since no controlled light source is required. However, at wavelengths such as x-ray, conventional DMDs cannot be used for modulation. X-ray single-pixel imaging has therefore been realized with the CGI scheme, for example using a monochromatic x-ray beam passing through a slit array and a moving porous gold film [10], or polychromatic x-rays and a sheet of rotating sandpaper [11], to generate pseudothermal illumination speckle patterns.
Here we present a single-pixel diffuser camera (SP-DiffuserCam) that uses a low-cost rotating ground-glass diffuser instead of a DMD for 2D structured-detection modulation with incoherent light. We show that the random but fixed patterns of a simple diffuser placed in the image plane can serve as the 2D light-intensity modulation in single-pixel imaging. This concept can be regarded as a passive version of classical ghost imaging, but with no need for simultaneous measurements of reference patterns. We also show that the concept is readily extendable to hyperspectral imaging.

II. PRINCIPLE

Figure 1 presents the SP-DiffuserCam concept. The first procedure is a calibration process to map the speckle-like patterns of the diffuser, in which the intensity distributions P(x, y, θ) for each angle θ (0° ≤ θ ≤ 360°) are acquired sequentially by rotating the diffuser. Note that this could also be achieved with a laterally translating stage, but here we only consider rotation, for system compactness. The second step is the temporal single-pixel measurement. An object O(x, y) is illuminated by the same incoherent light source and imaged onto the diffuser plane by a lens. The transmitted light intensity I(θ), measured by a single-pixel detector during another repeated rotation, is simply the integral of the light on the detector plane,

I(θ) = ∫∫ O(x, y) P(x, y, θ) dx dy. (1)

We can then acquire the object image R(x, y) using the differential correlation approach [21],

R(x, y) = ⟨I(θ) P(x, y, θ)⟩ − (⟨I(θ)⟩ / ⟨I_P(θ)⟩) ⟨I_P(θ) P(x, y, θ)⟩, (2)

where ⟨·⟩ denotes the ensemble average over the distribution of patterns and I_P(θ) = ∫∫ P(x, y, θ) dx dy denotes the weights of the patterns. For hyperspectral imaging, the detected intensity I(θ, λ) is simply the single-pixel signal corresponding to different wavelengths, measured using a line detector placed after a grating. The spectral image data cube can thus be reconstructed as

R(x, y, λ) = ⟨I(θ, λ) P(x, y, θ)⟩ − (⟨I(θ, λ)⟩ / ⟨I_P(θ)⟩) ⟨I_P(θ) P(x, y, θ)⟩, (3)

i.e. by only replacing I(θ) with I(θ, λ) in Equation 2, without characterizing the surface roughness at the other wavelengths.
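The differential-correlation reconstruction of Equation 2 is straightforward to prototype numerically. The sketch below is a toy simulation under our own assumptions (synthetic pseudo-random grayscale patterns standing in for the calibrated P(x, y, θ), not experimental data): it forms the bucket signals of Equation 1 and then applies the differential correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
nx = ny = 16
n_patterns = 5000                     # heavy oversampling for a clean toy result

# Pseudo-random grayscale patterns standing in for P(x, y, theta)
P = rng.random((n_patterns, ny, nx))

obj = np.zeros((ny, nx))
obj[4:12, 6:10] = 1.0                 # simple transmissive bar object O(x, y)

# Eq. (1): each bucket value is the pattern-weighted integral of the object
I = np.einsum('kij,ij->k', P, obj)
I_P = P.sum(axis=(1, 2))              # pattern weights I_P(theta)

# Eq. (2): differential correlation <I P> - (<I>/<I_P>) <I_P P>
R = (np.einsum('k,kij->ij', I, P)
     - (I.mean() / I_P.mean()) * np.einsum('k,kij->ij', I_P, P)) / n_patterns

# The estimate should correlate strongly with the ground-truth object
corr = np.corrcoef(R.ravel(), obj.ravel())[0, 1]
print(round(corr, 2))
```

The ensemble averages of Equation 2 become simple means over the recorded angles, which is why the reconstruction reduces to two weighted sums over the pattern stack.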

III. EXPERIMENTS

A. Single-color SP-DiffuserCam
A simple proof-of-concept experimental realization of the approach is presented in Figure 2. Figure 2(a) depicts the optical configuration of the single-color SP-DiffuserCam, where the light source is a monochromatic 530 nm LED with a bandwidth of 33 nm (M530D2, Thorlabs). The object, illuminated by the collimated light, is imaged onto the diffuser plane. The ground-glass diffuser (∅ = 24 mm, DG10-120-MD, Thorlabs) is mounted on a motorized rotation stage (PRM1/MZ8, Thorlabs) with a rotation velocity of 25°/s. A silicon amplified photodetector (PDA100A2, Thorlabs) is used to measure the intensity fluctuations. For calibration, the speckle patterns P(x, y, θ) of the diffuser are recorded by a camera (panda 4.2 bi, PCO AG) through the same lens (f = 50 mm) used for the photodetector, to keep the same numerical aperture; the exposure time of 4 ms is the same for both detectors. A motor controller (KDC101, Thorlabs) and a low-cost DAQ device (USB-6002, National Instruments) are used to synchronize the rotator with the detector or the camera. To ensure that our patterns are sufficiently different, we use the edge of the diffuser (about d = 6 mm from its center) as the image-plane area. Due to the small diameter of the diffuser, however, neighbouring patterns still share similar areas along the rotation direction. The objects (Figure 2(b), left) are 3D-printed transmission masks of 'U', 'T' and 'S' with a thickness of 2 mm and a size of about 3 mm. The retrieved 64 × 64 pixel images at sampling ratios of 2.2%, 4.4% and 8.8% are shown in Figure 2(b), the latter corresponding to a minimum acquisition time of 14.4 s, limited by the maximum rotation velocity. We also apply a block-matching and collaborative filtering algorithm [22] for noise suppression, which takes about 0.02 seconds. Outlines of the objects emerge at 2.2%, with image quality increasing as the sampling ratio is raised to 8.8%.
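The quoted sampling ratios and acquisition time follow from simple counting. Assuming one bucket measurement per degree of rotation (so that 8.8% corresponds to 360 measurements per cycle, our reading of the numbers above), a quick check:

```python
# Back-of-envelope check of the sampling ratios and acquisition time,
# assuming one bucket measurement per degree of rotation.
n_pixels = 64 * 64                 # reconstruction grid
rotation_speed = 25.0              # degrees per second (stage maximum)

for n_patterns in (90, 180, 360):  # assumed measurement counts per cycle
    ratio = 100 * n_patterns / n_pixels
    t_acq = n_patterns / rotation_speed
    print(f"{n_patterns} patterns: {ratio:.1f}% sampling, {t_acq:.1f} s")
```

With 360 patterns this reproduces both the 8.8% sampling ratio (360/4096) and the 14.4 s minimum acquisition time (360°/25° s⁻¹).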
Further increasing the number of scattering patterns does not improve the reconstruction quality. The first reason is the 'imperfect' reference patterns, which have overlapping regions between neighbours. The second is that a higher sampling ratio leads to a finer angular step within one cycle of rotation, and hence larger overlap areas between the reference patterns. These effects cause the spatial information perpendicular to the rotation direction to be reconstructed better than that parallel to it. This is evident in Figure 2(b): in the top-right corner of 'U', the bottom of 'T' and the noisy pixels emerging on the left of 'S'. Note that the rotation direction is clockwise for all three masks, with the sampling area in the lower half of the circular diffuser.

B. Comparison of different diffusers
The performance of our setup depends strongly on the choice of diffuser. Here we compare the 120-grit diffuser used in Figure 2 with a 220-grit diffuser (∅ = 24 mm, DG10-220-MD, Thorlabs), which has a smaller grain size on its polished surface. Figure 3(a) shows typical transmission images of the two diffusers, in which the former shows a higher contrast than the latter, due to the coarser polishing. For a quantitative study, we calculate the correlation coefficient C(I_θ1, I_θ2) between the two groups of diffuser patterns, defined as

C(I_θ1, I_θ2) = Σ_{x,y} (I_θ1 − Ī_θ1)(I_θ2 − Ī_θ2) / √[Σ_{x,y} (I_θ1 − Ī_θ1)² Σ_{x,y} (I_θ2 − Ī_θ2)²],

where θ = 0°, 1°, ..., 360°, and Ī_θ1 and Ī_θ2 denote the average values of two arbitrary patterns I_θ1 and I_θ2, respectively. The mean correlation coefficient between different angles is C(I_120) = 0.55 for the 120-grit diffuser, while for the 220-grit one we find C(I_220) = 0.87. Note that these coefficients are computed on 1024 × 1024 pixel reference patterns; they would be even higher under the conditions of Figure 3(b), where the patterns are binned to 64 × 64 matrices with lower contrast. A higher correlation means stronger intrinsic coherence within the sampling basis and leads to a lower detection efficiency for spatial images. Key to contemporary single-pixel imaging is the use of high-efficiency orthonormal patterns, such as the Hadamard [12] or Fourier basis [7]. However, since we adopt a commercial diffuser, the sampling patterns in our setup are actually pseudo-random grayscale matrices, more closely related to classical GI using a laser and dynamic diffuser-induced speckle patterns [18]. According to a study of the influence of speckle size in GI [23], an optimal speckle size exists in the range where the speckle size is comparable to the feature size of the object. This is why we choose the 120-grit diffuser: it is our coarsest diffuser and the one whose grain size most closely approaches the feature size of the masks (about 0.4 mm).
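The coefficient above is the standard Pearson correlation applied to pattern pairs. A small self-contained sketch (synthetic patterns of our own making, not the measured 120-/220-grit data) shows how the pairwise statistic behaves:

```python
import numpy as np

def pattern_correlation(a, b):
    """Pearson correlation C(I_theta1, I_theta2) between two patterns."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def mean_pairwise_correlation(patterns):
    """Mean of C over all distinct pattern pairs, as used for C(I_120), C(I_220)."""
    vals = [pattern_correlation(patterns[i], patterns[j])
            for i in range(len(patterns)) for j in range(i + 1, len(patterns))]
    return float(np.mean(vals))

# Synthetic check: smooth patterns that largely overlap (like neighbouring
# diffuser angles) correlate more strongly than unrelated ones.
rng = np.random.default_rng(1)
smooth = lambda: np.kron(rng.random((8, 8)), np.ones((8, 8)))  # 8-px "grains"
base = smooth()
neighbour = np.roll(base, 1, axis=1)   # shifted copy: large overlap
unrelated = smooth()

print(round(pattern_correlation(base, base), 6))   # 1.0
print(pattern_correlation(base, neighbour) > pattern_correlation(base, unrelated))
```

Coarser synthetic grains behave like the 120-grit diffuser (lower inter-pattern correlation for a given shift), which is the effect the comparison in this section quantifies.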
Future work could focus on the optimization of the diffuser or an integrated mini camera.

C. Hyperspectral SP-DiffuserCam
One of the advantages of the SP-DiffuserCam is that it is easy to integrate other functions into the platform, such as the hyperspectral imaging shown in Figure 4. To do this, we can directly use the pre-recorded reference patterns, without additional calibration at specific wavelengths. Figure 4(a) depicts the optical configuration. Different from the monochrome setup, a low-cost diffraction grating slide (1,000 lines/mm, Rainbow Symphony, $0.40) is used to disperse the incident light. The object image on the diffuser passes through the grating after the imaging lens (f = 50 mm) and is focused onto a line-array detector via the collection lens (f = 35 mm). Here a compact CMOS camera (acA1920-150um, Basler) serves as the line detector by binning the pixels to a 16 × 1 array over a sensor area of 9.5 mm × 3.3 mm. To keep the same numerical aperture as in the single-pixel measurement, we use the same two-lens system for calibration.
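The binning of a 2D camera frame into a 16 × 1 line detector can be sketched as follows (toy frame dimensions of our choosing; the actual sensor region and the mapping of columns to wavelengths are set by the grating dispersion):

```python
import numpy as np

rng = np.random.default_rng(3)
rows, cols, n_bands = 120, 640, 16     # toy sensor; dispersion along columns

frame = rng.random((rows, cols))       # one camera exposure at angle theta

# Software binning to a 16 x 1 "line detector": sum over all rows, then
# group the columns into 16 contiguous spectral channels
line = frame.sum(axis=0).reshape(n_bands, cols // n_bands).sum(axis=1)

print(line.shape)  # (16,)
```

Each element of `line` is then one entry of the row I(θ, ·) used in the reconstruction below; summing rather than averaging keeps the bucket values proportional to total intensity.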
The 1D spectral intensity at each degree (exposure time: 4 ms) is recorded into a 2D matrix I(θ, λ). This matrix is then converted into a spectral data cube R(x, y, λ) by the reconstruction in Equation 3. Here we apply the standardized International Commission on Illumination (CIE) color-matching functions to the spectral cube to produce pseudo-colored images consistent with the original object (made by attaching color filters to the 'T' mask: a blue filter for the '-' and a red filter for the '|' of the 'T'). Figure 4(b) shows the retrieved spectral images in the blue band (461 ± 35 nm) and the red band (627 ± 10 nm). Compared with the object photograph in the lower-right corner, the middle, overlapping area in the reconstruction has much lower intensity, due to refraction at the junction between the two filters. The reconstructed spectra of the two selected areas in Figure 4(b) show a similar trend to measurements of the same two color filters taken with a conventional spectrometer (UV-2401 PC UV-VIS, Shimadzu). Finally, we show the 3D spatio-spectral data cube in Figure 4(d), as well as the spatial mapping images at single wavebands ranging from 426 nm to 637 nm.
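Extending the reconstruction to the spectral cube only changes the bookkeeping: the same calibration patterns are correlated with each wavelength column of I(θ, λ) in turn. A toy sketch under our own assumptions (synthetic patterns and a synthetic two-filter object, not the measured data):

```python
import numpy as np

rng = np.random.default_rng(2)
nx = ny = 16
n_patterns, n_bands = 4000, 16         # angles x spectral channels (toy sizes)

P = rng.random((n_patterns, ny, nx))   # shared calibration patterns P(x, y, theta)

# Toy two-filter object: left half transmits band 3, right half band 12
cube_true = np.zeros((n_bands, ny, nx))
cube_true[3, :, :8] = 1.0
cube_true[12, :, 8:] = 1.0

# Line-detector data I(theta, lambda): one bucket value per angle and band
I = np.einsum('kij,lij->kl', P, cube_true)      # shape (n_patterns, n_bands)
I_P = P.sum(axis=(1, 2))

# Apply the differential correlation band by band, as in Eq. (3)
R = np.empty_like(cube_true)
for l in range(n_bands):
    Il = I[:, l]
    R[l] = (np.einsum('k,kij->ij', Il, P)
            - (Il.mean() / I_P.mean()) * np.einsum('k,kij->ij', I_P, P)) / n_patterns

print(R.shape)  # (16, 16, 16)
```

Because the patterns are wavelength-independent in this model, the loop reuses a single calibration, mirroring the fact that the experiment needs no per-wavelength characterization of the diffuser.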

IV. CONCLUSION
In conclusion, we have reported a passive single-pixel camera that employs a rotating diffuser as the spatial modulator in the image plane and uses incoherent light, for 2D imaging from 1D temporal signals. Note that classical pseudothermal ghost imaging [15,18,21,24] has also used a rotating ground-glass diffuser, illuminated by a laser beam, to generate dynamic speckles, with the beam then split into a reference arm and an object arm: a CCD measures the diffracted speckles while a single-pixel bucket detector simultaneously measures the signal from the object, with the CCD and the object kept at the same distance from the diffuser. By comparison, our setup uses pre-recorded reference patterns and does not need to measure them during the experiment. Another difference is that we use the intensity distribution on the surface of the diffuser, rather than the laser interference speckles formed after it. In fact, our work can be considered a passive version of x-ray ghost imaging with pre-recorded patterns and an incoherent source [10,11]. However, we directly use the patterns of the diffuser as the reference patterns, not the speckles diffracted after propagation over a distance as in x-ray ghost imaging. Thus, our work would help to make such imaging systems more compact. We also demonstrate that our concept is readily extended to low-cost hyperspectral imaging, for 3D spatio-spectral image retrieval from temporally 1D spectral signals. Furthermore, the SP-DiffuserCam could be explored with coherent light, or with other imaging modalities such as time-resolved imaging [25] using fast-response detectors and phase imaging [9,24] by adopting phase-engineered diffusers. We therefore anticipate that this work will open opportunities for developing cost-effective integrated single-pixel cameras, especially in exotic wavebands and for imaging under ultra-low illumination.