Bayesian 3D Reconstruction of Subsampled Multispectral Single-photon Lidar Signals

Light detection and ranging (Lidar) single-photon devices capture range and intensity information from a 3D scene. This modality enables long range 3D reconstruction with high range precision and low laser power. A multispectral single-photon Lidar system provides additional spectral diversity, allowing the discrimination of different materials. However, the main drawback of such systems can be the long acquisition time needed to collect enough photons in each spectral band. In this work, we tackle this problem in two ways: first, we propose a Bayesian 3D reconstruction algorithm that is able to find multiple surfaces per pixel, using few photons, i.e., shorter acquisitions. In contrast to previous algorithms, the novel method processes all the spectral bands jointly, obtaining better reconstructions from fewer photon detections. The proposed model promotes spatial correlation between neighbouring points within a given surface using spatial point processes. Second, we account for different spatial and spectral subsampling schemes, which reduce the total number of measurements, without significant degradation of the reconstruction performance. In this way, the total acquisition time, memory requirements and computational time can be significantly reduced. The experiments performed using both synthetic and real single-photon Lidar data demonstrate the advantages of tailored sampling schemes over random alternatives. Furthermore, the proposed algorithm yields better estimates than other existing methods for multi-surface reconstruction using multispectral Lidar data.


I. INTRODUCTION
Single-photon Lidar devices provide accurate range information by constructing, for each pixel, a histogram of time delays between emitted light pulses and photon detections. Using time-correlated single-photon counting (TCSPC) technology, Lidar systems are able to provide accurate depth information (of the order of centimetres) over very long ranges, while allowing the use of laser sources of low power [1]. The acquired range information (3D structure) has many important applications, such as self-driving cars [2], the study of structures behind dense forests [3] and environmental monitoring [4]. Multispectral Lidar (MSL) systems gather measurements at many spectral bands, constructing one histogram of time delays per wavelength, as illustrated in Fig. 1. The spectral diversity can be obtained either using a supercontinuum laser source [5], [6] or multiple lasers [7]. The detector generally consists of a spatial form of wavelength routing to demultiplex the channels [5]-[7] or wavelength-to-time codification [8].

This work was supported by the Royal Academy of Engineering under the Research Fellowship scheme RF201617/16/31. This work was partly conducted within the ECOS project 'Colored aperture design for compressive spectral imaging' supported by CNRS and Colciencias, and within the STIC-AmSud Project HyperMed.
Recovering spatial and spectral information from MSL data is a challenging task, especially in scenes with strong ambient illumination (i.e., multiple spurious detections) or when the acquisition time is very low (i.e., very few photon detections per histogram). Moreover, in a general setting, it is possible to find more than one object per pixel. This phenomenon occurs in scenes where the light goes through semi-transparent materials (e.g., glass), or when the laser footprint is such that multiple surfaces appear in the field of view. Thus, several signal processing algorithms have been proposed to address these challenges: while many algorithms are available for single-wavelength Lidar, either assuming a single surface per pixel [9]-[11] or allowing multiple surfaces per pixel [12]-[14], to the best of our knowledge, only the single-surface-per-pixel case was studied in the multispectral case [6], [15], [16]. The single-depth assumption greatly simplifies the reconstruction problem, as it significantly reduces memory requirements and overall complexity [15].
In this work, we propose a new method to perform 3D reconstruction with subsampled MSL data, which is capable of finding multiple surfaces per pixel. The novel method draws ideas from a recently published algorithm named MANIPOP [14], which obtains state-of-the-art reconstructions in the multiple surface, single-wavelength case. However, due to the significantly larger dimensionality of multispectral data, we propose to modify the Bayesian model and estimation strategy of MANIPOP. Adopting a Bayesian framework similar to [14], we assign a spatial point process prior to promote spatial correlation and a Gaussian Markov random field prior to regularize the spectral reflectivity within surfaces/objects. Inference using the resulting posterior distribution is performed using a reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithm, coupled with a multiresolution approach that improves the convergence speed and reduces the total computing time of the algorithm. We introduce new RJ-MCMC proposals, which take into account the additional spectral dimension and improve the acceptance ratio, compared to the ones proposed in [14]. Moreover, we propose an empirical Bayes approach to build the prior distribution associated with the background detections, which further improves the convergence of the RJ-MCMC sampler. Datasets containing dozens of wavelengths can be prohibitively large for practical 3D reconstruction algorithms, both in terms of memory and computing requirements. For example, a typical MSL hypercube with 32 wavelengths has more than $10^9$ data voxels.

Fig. 1. Example of an MSL system with three different wavelengths (red, green and blue). On the right, a schematic shows the working principles of a single-photon multispectral Lidar device. The red, green and blue arrows illustrate the laser pulses sent by the laser sources and reflected by the target onto the single-photon detectors. The white arrow depicts the background photons emitted by an ambient illumination source that reach the detectors at random times. The figure on the left shows the collected histograms for a given pixel: the discrete measurements are depicted in red, green and blue, while the underlying Poisson intensity (i.e., the parameters to estimate from the data) is shown in black.

To alleviate this problem, some
compression and subsampling strategies have been proposed. While TCSPC technology hinders compressive techniques along the depth axis, reducing the number of measurements can be achieved by integrating multiple wavelengths in a single histogram [8] or measuring fewer histograms (i.e., subsampling) [15]. The wavelength-to-time approach proposed by Ren et al. [8] compresses L histograms into a single one by shifting in time the photon detections according to each measured wavelength. While significantly reducing the data size, this strategy is not well suited in the presence of multiple surfaces per pixel, as the resulting likelihood becomes highly multimodal and extremely difficult to handle. A recent analysis in [15] studied different random subsampling schemes, without finding any significant differences in terms of reconstruction quality in the low-photon-count regime. In this work, we investigate a new pseudo-random subsampling scheme for low-photon-count MSL data based on ideas from coded aperture design [18], [19]. By choosing the pixels measured for each wavelength in a more principled way, we achieve better results than the completely random schemes of Altmann et al. [15]. Furthermore, the proposed subsampling strategy can be easily implemented in many Lidar systems, reducing the total number of measurements, i.e., the time to acquire an MSL frame. Raster-scan systems using a supercontinuum laser source [5], [6] can be easily modified to measure only a subset of pixels per wavelength. More interestingly, single-wavelength array technology [20] can be combined with coded apertures [18], which measure different wavelengths at each pixel.
The main contributions of this work are:
• a new Bayesian 3D reconstruction algorithm for subsampled MSL data with multiple surfaces per pixel;
• the analysis of an appropriate subsampling scheme based on coded aperture design, which provides better results than completely random alternatives.
The new algorithm is referred to as MUSAPOP, as it models MultiSpectrAl Lidar signals using POint Processes. The remainder of the paper is organized as follows. Section II recalls the classical observation model for single-photon MSL data. Sections III and IV present the Bayesian 3D reconstruction algorithm and the associated RJ-MCMC sampler. Section V details the principled subsampling strategy. Experiments performed with synthetic and real Lidar data are introduced and discussed in Section VI. Conclusions and future work are finally reported in Section VII.

II. SINGLE-PHOTON MULTISPECTRAL LIDAR
A full multispectral Lidar data hypercube $Z \in \mathbb{Z}_+^{N_r \times N_c \times T \times L}$ consists of discrete photon-count measurements, where $\mathbb{Z}_+ = \{0, 1, \dots\}$ is the set of non-negative integers, $N_r$ and $N_c$ are the numbers of vertical and horizontal pixels respectively, $T$ is the histogram length (i.e., the number of time bins) and $L$ is the number of acquired wavelengths. The 3D reconstruction algorithm estimates a point cloud, referred to as $\Phi$, from the data hypercube $Z$. The 3D point cloud is represented by an unordered set of points, that is
$$\Phi = \{(\mathbf{c}_n, \mathbf{r}_n) : n = 1, \dots, N_\Phi\} \qquad (1)$$
where $N_\Phi$ is the total number of points, and $\mathbf{c}_n = (x_n, y_n, t_n)^T \in [1, N_r] \times [1, N_c] \times [1, T] \subset \mathbb{Z}_+^3$ and $\mathbf{r}_n = (r_{n,1}, \dots, r_{n,L})^T \in \mathbb{R}_+^L$ are the coordinate vector and the spectral response of the $n$th point, respectively. In the presence of distributed objects, the observed photon count at pixel $(i,j)$, bin $t$ and spectral band $\ell$ follows a Poisson distribution [15], whose intensity is a mixture of the background level, denoted by $b_{i,j,\ell}$, and the responses of the surfaces present in that pixel, i.e.,
$$z_{i,j,t,\ell} \sim \mathcal{P}\left(g_{i,j,\ell}\left[\sum_{n \in \mathcal{N}_{i,j}} r_{n,\ell}\, h_\ell(t - t_n) + b_{i,j,\ell}\right]\right) \qquad (2)$$
where $\mathcal{P}(\cdot)$ denotes the Poisson distribution, $g_{i,j,\ell} \in \{0, 1\}$ is a known binary variable that indicates whether wavelength $\ell$ at pixel $(i,j)$ has been acquired/observed or not, $\mathcal{N}_{i,j} = \{n : (x_n, y_n) = (i, j)\}$ is the set of points corresponding to pixel $(i,j)$, and $h_\ell(t)$ is the impulse response of the Lidar device at wavelength $\ell$, which can be measured using a spectralon panel during a calibration step. Note that, to lighten notations, we assume that the impulse responses are only wavelength-dependent, but the algorithm can be easily adapted when the responses vary among pixels. Assuming mutual independence between noise realizations in different bins, pixels and spectral bands [1], the likelihood of the proposed model can be written as
$$p(Z|\Phi, B) = \prod_{i,j,\ell\,:\, g_{i,j,\ell}=1}\ \prod_{t=1}^{T} p(z_{i,j,t,\ell}|\Phi, b_{i,j,\ell}) \qquad (3)$$
For clarity in the notation, we will also denote the set of point coordinates as $\Phi_c = \{\mathbf{c}_n, n = 1, \dots, N_\Phi\}$ and the set of spectral responses as $\Phi_r = \{\mathbf{r}_n, n = 1, \dots, N_\Phi\}$.
The set of all background levels is denoted by $B \in \mathbb{R}_+^{N_r \times N_c \times L}$, which is the concatenation of $L$ images $\mathbf{b}_\ell$, one for each wavelength. The cube of binary measurements is designated by $G \in \{0, 1\}^{N_r \times N_c \times L}$, where $[G]_{i,j,\ell} = g_{i,j,\ell}$. Note that the model used in the MANIPOP algorithm [14] can be obtained from (2) by setting all the binary variables to 1, and considering only one band, i.e., $L = 1$.
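To make the observation model concrete, the following Python sketch simulates one pixel's histogram at a single wavelength according to the Poisson mixture model (2). The function and variable names are ours, the impulse response is a toy Gaussian pulse, and the circular shift used to place it is a simplification (a real system would truncate at the histogram borders).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pixel(points, b, h, T, g=1):
    """Simulate one MSL histogram pixel for one wavelength.

    points : list of (t_n, r_nl) pairs -- depth bin and reflectivity of each
             surface present in this pixel at this wavelength
    b      : background level b_{i,j,l}
    h      : impulse response h_l, array of length T (circularly shifted
             here for simplicity)
    g      : binary subsampling variable g_{i,j,l}
    """
    intensity = np.full(T, float(b))
    for t_n, r_nl in points:
        intensity += r_nl * np.roll(h, t_n)
    return rng.poisson(g * intensity)

# toy impulse response: normalized Gaussian pulse
T = 300
t = np.arange(T)
h = np.exp(-0.5 * (t / 3.0) ** 2)
h /= h.sum()

# two surfaces in the same pixel, low background
z = simulate_pixel([(100, 50.0), (180, 30.0)], b=0.01, h=h, T=T)
# an unobserved (g = 0) pixel/band yields an all-zero histogram
z_off = simulate_pixel([(100, 50.0)], b=0.01, h=h, T=T, g=0)
```

Setting $g_{i,j,\ell}=0$ zeroes the Poisson intensity, which is how the subsampling of Section V enters the likelihood.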

III. BAYESIAN 3D RECONSTRUCTION
Recovering the position of the objects c n , their spectral signatures r n and background levels B from the subsampled MSL data Z is an ill-posed inverse problem, as many solutions can explain the observed photon counts. Thus, prior regularization is necessary to promote reconstructions in a set of feasible real-world 3D point clouds. In this work, we adopt a Bayesian framework, which allows us to include prior knowledge about the scene through tailored prior distributions assigned to the parameters of interest.

A. Prior distributions
The proposed model considers prior regularization for the point positions and reflectivity. As explained in Section III-A3, an empirical Bayes prior [21] is assigned to the background levels.
1) Spatial configuration: We adopt the spatial prior distribution of 3D points developed in the MANIPOP algorithm. This distribution is designed to promote spatial correlation between points within one surface and repulsion between points belonging to different surfaces. While only a brief summary is included below, we refer the reader to [14] for a detailed discussion about this prior model. The spatial point process prior for the position of the points is modelled by a density defined with respect to a Poisson point process reference measure [22, Chapter 9], i.e.,
$$f(\Phi_c) \propto f_1(\Phi_c)\, f_2(\Phi_c|\gamma_a, \lambda_a) \qquad (4)$$
where $f_1(\Phi_c)$ and $f_2(\Phi_c|\gamma_a, \lambda_a)$ are the Strauss and area interaction [23] processes respectively. The repulsive (hard-core) Strauss process is written as
$$f_1(\Phi_c) = \begin{cases} 0 & \text{if } \exists\, n \neq n' : x_n = x_{n'},\ y_n = y_{n'} \text{ and } |t_n - t_{n'}| < d_{\min} \\ 1 & \text{otherwise} \end{cases} \qquad (5)$$
where $d_{\min}$ is the minimum separation between two surfaces in the same pixel. Attraction between points within the same surface is promoted by the area interaction process, that is
$$f_2(\Phi_c|\gamma_a, \lambda_a) \propto \lambda_a^{N_\Phi}\, \gamma_a^{-m\left(\bigcup_{n=1}^{N_\Phi} S(\mathbf{c}_n)\right)} \qquad (6)$$
where $m(\cdot)$ denotes the standard Lebesgue measure, $S(\mathbf{c}_n)$ defines a convex set around the point $\mathbf{c}_n$, and $\gamma_a$ and $\lambda_a$ are two hyperparameters, accounting for the amount of attraction and total number of points, respectively. Both densities define Markovian interactions between points, only correlating points in a local neighbourhood. Moreover, the combination of both processes implicitly defines a connected-surface structure, which is used to model 2D manifolds in a 3D space.
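The hard-core constraint of the Strauss process (5) is simple to evaluate: the density vanishes as soon as two points share a pixel while lying closer than $d_{\min}$ bins in depth. The sketch below (our own helper, not the reference implementation) checks that constraint for a candidate configuration.

```python
import numpy as np

def strauss_hardcore_ok(coords, d_min):
    """Check the hard-core Strauss constraint f1: no two points may share a
    pixel while being closer than d_min bins in depth.
    coords is an (N, 3) array-like of (x, y, t) positions."""
    coords = np.asarray(coords)
    for a in range(len(coords)):
        for b in range(a + 1, len(coords)):
            same_pixel = (coords[a, 0] == coords[b, 0]
                          and coords[a, 1] == coords[b, 1])
            if same_pixel and abs(coords[a, 2] - coords[b, 2]) < d_min:
                return False  # density f1 is zero for this configuration
    return True

# two well-separated surfaces in pixel (5, 5): allowed
ok = strauss_hardcore_ok([(5, 5, 100), (5, 5, 400), (6, 5, 102)], d_min=50)
# two surfaces only 20 bins apart in the same pixel: forbidden
bad = strauss_hardcore_ok([(5, 5, 100), (5, 5, 120)], d_min=50)
```

In a full sampler this check is only needed locally, for the point being proposed, which is what makes the moves of Section IV cheap.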
2) Reflectivity: The spectral signatures are related to the materials of the surfaces [16]. Neighbouring points corresponding to a surface composed of a specific material show similar spectral signatures. This prior information is added to the model using a Gaussian Markov random field distribution, where the neighbours are defined by the connected-surface structure of the point process prior. First, in order to avoid the positivity constraint on the intensity $r_{n,\ell}$, we use the canonical form [24]-[26], $m_{n,\ell} = \log(r_{n,\ell})$, where $m_{n,\ell} \in \mathbb{R}$ denotes the log-intensity of the $n$th point at band $\ell$. As multispectral devices only acquire dozens of well-separated wavelengths, the spectral measurements within a pixel do not show significant correlation. Hence, although potential correlations between wavelengths could be modelled, we choose here to neglect this correlation to keep the estimation strategy tractable. As a consequence, we consider the following reflectivity prior model
$$p(\Phi_r|\sigma^2, \beta) = \prod_{\ell=1}^{L} p(\mathbf{m}_\ell|\sigma^2, \beta) \qquad (7)$$
where $\mathbf{m}_\ell = (m_{1,\ell}, \dots, m_{N_\Phi,\ell})^T$. Spatial correlations between log-intensity values in neighbouring pixels are defined according to the distribution
$$p(\mathbf{m}_\ell|\sigma^2, \beta) \propto \exp\left(-\frac{1}{2\sigma^2}\, \mathbf{m}_\ell^T \mathbf{P}\, \mathbf{m}_\ell\right)$$
where $\sigma^2$ and $\beta$ are hyperparameters controlling the level of smoothness. The precision matrix $\mathbf{P}$ is very sparse due to the Markovian structure, being defined by
$$[\mathbf{P}]_{n,\tilde{n}} = \begin{cases} 1 + \beta \sum_{\tilde{n}' \in \mathcal{N}(\mathbf{c}_n)} d(\mathbf{c}_n; \mathbf{c}_{\tilde{n}'})^{-1} & \text{if } \tilde{n} = n \\ -\beta\, d(\mathbf{c}_n; \mathbf{c}_{\tilde{n}})^{-1} & \text{if } \tilde{n} \in \mathcal{N}(\mathbf{c}_n) \\ 0 & \text{otherwise} \end{cases}$$
where $\mathcal{N}(\mathbf{c}_n)$ is the set of neighbours of point $\mathbf{c}_n$, which is obtained using the connected-surface structure illustrated in Fig. 2, and $d(\mathbf{c}_n; \mathbf{c}_{\tilde{n}})$ is the Euclidean distance between two points, normalized according to the camera parameters of the scene to have a physical meaning [27].
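The precision matrix above has a graph-Laplacian structure (distance-weighted off-diagonals plus diagonal loading to keep it proper). The sketch below assembles it for a toy neighbour graph; the names and the dense representation are ours, and a real implementation would use a sparse matrix.

```python
import numpy as np

def gmrf_precision(coords, neighbours, beta):
    """Assemble a GMRF precision matrix P with distance-weighted
    graph-Laplacian structure and unit diagonal loading (an assumption of
    this sketch; it keeps P symmetric positive definite).

    coords     : (N, 3) array of point positions
    neighbours : dict {n: iterable of neighbour indices}, assumed symmetric,
                 as given by the connected-surface structure
    beta       : smoothness hyperparameter
    """
    coords = np.asarray(coords, dtype=float)
    N = len(coords)
    P = np.zeros((N, N))
    for n, nbrs in neighbours.items():
        for k in nbrs:
            d = np.linalg.norm(coords[n] - coords[k])
            P[n, k] -= beta / d   # off-diagonal: attraction between neighbours
            P[n, n] += beta / d   # diagonal: degree term
    P += np.eye(N)                # diagonal loading -> proper GMRF
    return P

# three points forming a tiny connected surface
P = gmrf_precision(
    [(0, 0, 10), (0, 1, 10), (1, 0, 11)],
    {0: [1, 2], 1: [0], 2: [0]},
    beta=1.0,
)
```

Because each point only interacts with its few connected-surface neighbours, evaluating the prior ratio for a single-point move touches a constant number of entries of $\mathbf{P}$.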
3) Background levels: Background detections are due to detector dark counts and ambient illumination, as explained in [12], [28]. If the transceiver system is mono-static [1], the set of background levels can be interpreted as a multispectral passive image of the scene, as background detections generally come from ambient illumination reflected onto the target and collected by the single-photon detector. In this case, the background levels are strongly spatially correlated within each wavelength. However, in bi-static systems [12], the transmit and receive channels do not share the same objective lens aperture, yielding weakly correlated or uncorrelated background detections. Although not showing a strong spatial correlation in this second case, all the background levels have similar values, which also serves as prior information.

Fig. 2. Connectivity at an inter-pixel level. Two different surfaces are denoted by the colours red and blue, where each square represents one pixel. Pixels without points are denoted by white squares. For simplicity, in this example all points are considered to be present at the same depth. Note that each pixel can be connected with at most 8 neighbours.
a) Independent prior distributions: In order to simultaneously model potential spatial correlation and ensure the positivity of the background levels, MANIPOP uses a 2D gamma Markov random field, which was introduced by Dikmen et al. in [29]. However, this prior is not well suited for MSL data, as it introduces an undesired penalization for large background levels, whose negative effects are amplified when considering multiple bands (see Appendix A for details). Other alternatives, such as Gaussian Markov random fields [24], cannot be sampled directly in closed form, requiring proposals with a rejection step, whose mixing and convergence scale badly with the dimension of the spectral cube, as shown in [30]. To alleviate these problems, we assign independent gamma priors, i.e.,
$$p(B|K, \Theta) = \prod_{i,j,\ell} \mathcal{G}(b_{i,j,\ell}; k_{i,j,\ell}, \theta_{i,j,\ell}) \qquad (8)$$
where $\mathcal{G}(\cdot; k, \theta)$ denotes the gamma distribution, and $K = \{k_{i,j,\ell}\}$ and $\Theta = \{\theta_{i,j,\ell}\}$ are the shape and scale hyperparameters of the gamma distributions. Despite using independent priors, we can capture the spatial correlation by setting the hyperparameters $(K, \Theta)$ appropriately. More precisely, in a similar fashion to variational Bayes [31] or expectation propagation [32] methods, in order to simplify the estimation of $B$, we specify (8) such that $p(B|K, \Theta)$ is similar to another distribution $q(B) = \prod_{\ell=1}^{L} q_\ell(\mathbf{b}_\ell)$ which explicitly correlates the background levels in neighbouring pixels and assumes mutual independence between spectral bands. Here, we use the Kullback-Leibler divergence as a similarity criterion, i.e.,
$$(K, \Theta) = \arg\min_{K', \Theta'}\ \mathrm{KL}\left(q(B)\ \|\ p(B|K', \Theta')\right) \qquad (9)$$
As discussed in Appendix B, solving (9) can be achieved by computing expectations with respect to $q(B)$.
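For a gamma family, minimizing (9) reduces to matching $\mathrm{E}_q[b]$ and $\mathrm{E}_q[\log b]$, the same stationary conditions as gamma maximum likelihood. The sketch below fits $(k, \theta)$ for one pixel from samples of $q$ using the classic closed-form approximation for the shape parameter; the function name and the sample-based interface are our own simplifications.

```python
import numpy as np

def fit_gamma_kl(samples):
    """Fit gamma hyperparameters (k, theta) so that the independent gamma
    prior is close, in the KL sense, to a target distribution q represented
    by samples. Matching E_q[b] and E_q[log b] gives the stationary
    conditions; the shape k uses the standard closed-form approximation."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean()
    s = np.log(mean) - np.log(samples).mean()   # >= 0 by Jensen's inequality
    k = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    theta = mean / k
    return k, theta

# sanity check: fitting samples of a known gamma recovers its parameters
rng = np.random.default_rng(1)
q_samples = rng.gamma(4.0, 0.5, size=200_000)
k_hat, theta_hat = fit_gamma_kl(q_samples)
```

By construction $\hat{k}\hat{\theta}$ equals the sample mean, so the fitted prior always reproduces the background level suggested by $q$.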
b) Empirical Bayes approach: To ensure that the prior model (8) is informative, a suitable distribution $q(B)$ should be chosen. Assuming that we have a coarse estimate of the point cloud (this information will be obtained using the multiresolution approach detailed in Section IV-B), we build the distribution $q(B)$ following an empirical Bayes approach, as illustrated in Fig. 3. First, one can discard almost all the signal photons in the dataset by removing the photons detected in the compact support of $h_\ell(t)$ around each point (see Fig. 3, central subplot). The number of bins that is not excluded in each pixel is referred to as $v_{i,j,\ell}$. Secondly, we integrate the remaining photons of each pixel, obtaining a coarse estimate of the per-pixel background photons, denoted by $\bar{z}_{i,j,\ell,b}$. We then define
$$q_\ell(\mathbf{b}_\ell) \propto \exp\left(\sum_{i,j}\left[\bar{z}_{i,j,\ell,b}\, \log\left(v_{i,j,\ell}\, b_{i,j,\ell}\right) - v_{i,j,\ell}\, b_{i,j,\ell}\right] - \alpha_B \left(\mathbf{b}_\ell - \boldsymbol{\mu}\right)^T \mathbf{D} \left(\mathbf{b}_\ell - \boldsymbol{\mu}\right)\right) \qquad (10)$$
where $\mathbf{b}_\ell$ is a vectorized image of background levels at wavelength $\ell$, $\mathbf{D} \in \mathbb{R}^{N_r N_c \times N_r N_c}$ is a positive semidefinite matrix, and $\alpha_B$ is a fixed hyperparameter controlling the degree of smoothness. In mono-static systems, a two-dimensional Laplacian filter is chosen for $\mathbf{D}$ to promote spatial correlation [24], whereas in bi-static systems, $\mathbf{D}$ is replaced by the identity matrix, only penalizing large background levels. $\boldsymbol{\mu}$ is a translation parameter centring $\bar{b}_{i,j,\ell}$ in the linear part of the exponential function. As mentioned above, solving (9) requires the computation of expectations with respect to $q(B)$, which are unfortunately not available in closed form; they are approximated numerically using additional MCMC sampling (detailed in Appendix B). Obtaining samples from (10) is more attractive, both in terms of convergence and computational complexity, than using the original full datacube, which includes mixtures of background and signal photons. Indeed, (10) simply involves integrated photon counts (over the range dimension). Moreover, given the independence property of $q(B)$ among spectral bands, all the bands can be processed independently in parallel.
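The per-pixel statistics $v_{i,j,\ell}$ and $\bar{z}_{i,j,\ell,b}$ entering (10) are cheap to compute: mask the bins within the impulse-response support around each coarsely estimated surface, count the surviving bins, and integrate the surviving photons. The following sketch does this for one pixel (the `support` half-width parameterization is our simplification).

```python
import numpy as np

def background_summary(z, t_points, support):
    """Coarse per-pixel background summary used by the empirical Bayes step.

    z        : (T,) photon-count histogram of one pixel at one wavelength
    t_points : depth bins of the coarsely estimated surfaces in this pixel
    support  : half-width (in bins) of the compact support of the impulse
               response (hypothetical parameterization)

    Returns (v, z_bar): the number of bins kept and the integrated photon
    count after discarding the bins near every estimated surface."""
    z = np.asarray(z)
    keep = np.ones(len(z), dtype=bool)
    for t_n in t_points:
        lo, hi = max(0, t_n - support), min(len(z), t_n + support + 1)
        keep[lo:hi] = False
    return int(keep.sum()), int(z[keep].sum())

z = np.zeros(1000, dtype=int)
z[500:511] = 5          # signal photons around a surface at bin ~505
z[[10, 200, 900]] = 1   # three isolated background photons
v, z_bar = background_summary(z, t_points=[505], support=20)
# v = 959 bins kept (41 masked); z_bar = 3 background photons
```

The ratio $\bar{z}_{i,j,\ell,b} / v_{i,j,\ell}$ then gives a rough per-bin background level for pixel $(i,j)$ at wavelength $\ell$.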

B. Posterior distribution
Following Bayes' theorem, the joint posterior distribution of the model parameters is given by
$$p(\Phi, B|Z, \Psi) \propto p(Z|\Phi, B)\, p(\Phi_r|\sigma^2, \beta)\, p(B|K, \Theta)\, f(\Phi_c)\, \pi(\Phi_c) \qquad (11)$$
where $\Psi$ denotes the set of hyperparameters $\Psi = \{\gamma_a, \lambda_a, \gamma_s, \sigma^2, \beta, K, \Theta\}$ and $\pi(\cdot)$ is the Poisson point process reference measure. Fig. 4 shows the directed acyclic graph associated with the proposed hierarchical Bayesian model.
The point cloud $\Phi$ is estimated using the maximum a posteriori (MAP) estimator. Instead of using the MAP for the background levels, the minimum mean squared error estimator of $B$ [21] is preferred, as we found experimentally that it provides better estimates. This estimator is defined as
$$\hat{B}_{\mathrm{MMSE}} = \mathrm{E}\left[B\,|\,Z\right] \qquad (12)$$
As this expectation cannot be derived analytically, we propose to investigate Markov chain Monte Carlo (MCMC) simulation methods. In particular, we use a reversible jump MCMC algorithm that can handle the varying dimension nature of the spatial point process. This sampler generates $N_m$ samples of $\Phi$ and $B$ from the posterior distribution (11), denoted as $\{\Phi^{(s)}, B^{(s)}\}_{s=0}^{N_m-1}$. These samples are then used to approximate the statistics of interest, i.e.,
$$\hat{\Phi} \approx \Phi^{(\hat{s})}, \quad \hat{s} = \arg\max_{s=0,\dots,N_m-1}\ p(\Phi^{(s)}, B^{(s)}|Z, \Psi) \qquad (13)$$
$$\hat{B} \approx \frac{1}{N_m - N_{bi}} \sum_{s=N_{bi}}^{N_m-1} B^{(s)} \qquad (14)$$
where $N_{bi}$ is the number of burn-in iterations.
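Given a stored chain, both estimators are one-liners: the MAP estimate of $\Phi$ is the sample attaining the largest (log-)posterior value, while the MMSE estimate of $B$ is the post-burn-in average. A minimal sketch (names are ours):

```python
import numpy as np

def chain_estimates(log_posts, B_samples, n_burn):
    """Approximate the MAP estimate (index of the best stored sample) and
    the MMSE estimate of B (average after burn-in) from an MCMC run.

    log_posts : length-N_m sequence of log posterior values of the samples
    B_samples : (N_m, ...) array of background samples
    n_burn    : number of burn-in iterations N_bi
    """
    log_posts = np.asarray(log_posts)
    B_samples = np.asarray(B_samples)
    s_map = int(np.argmax(log_posts))         # index of the MAP sample
    B_mmse = B_samples[n_burn:].mean(axis=0)  # posterior mean of B
    return s_map, B_mmse

# toy chain of 4 samples with scalar background
s_map, B_mmse = chain_estimates(
    [-10.0, -3.0, -5.0, -4.0],
    [[0.0], [1.0], [2.0], [3.0]],
    n_burn=2,
)
# s_map = 1; B_mmse = [2.5]
```

Working with log posterior values avoids the numerical underflow that evaluating $p(\Phi^{(s)}, B^{(s)}|Z, \Psi)$ directly would cause.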

IV. ESTIMATION STRATEGY

A. Reversible jump MCMC moves
While other stochastic simulation algorithms for varying dimensions can be used [22], we choose an RJ-MCMC sampler, as it allows us to build proposals tailored for the MSL reconstruction problem. RJ-MCMC can be interpreted as a natural extension of the Metropolis-Hastings algorithm for problems with an a priori unknown dimensionality. Given the actual state of the chain $\theta = \{\Phi, B\}$ of model order $N_\Phi$, a random vector of auxiliary variables $\mathbf{u}$ is generated to create a new state $\theta' = \{\Phi', B'\}$ of model order $N_{\Phi'}$, according to an appropriate deterministic function $\theta' = g(\theta, \mathbf{u})$. To ensure reversibility, an inverse mapping with auxiliary random variables $\mathbf{u}'$ has to exist such that $\theta = g^{-1}(\theta', \mathbf{u}')$. The move $\theta \to \theta'$ is accepted or rejected with probability $\rho = \min\{1, r(\theta, \theta')\}$, where $r(\cdot, \cdot)$ is defined as
$$r(\theta, \theta') = \frac{p(\theta'|Z)\, \mathcal{K}(\theta|\theta')\, p(\mathbf{u}')}{p(\theta|Z)\, \mathcal{K}(\theta'|\theta)\, p(\mathbf{u})} \left|\frac{\partial g(\theta, \mathbf{u})}{\partial(\theta, \mathbf{u})}\right| \qquad (15)$$
where $\mathcal{K}(\theta'|\theta)$ is the probability of proposing the move $\theta \to \theta'$, $p(\mathbf{u})$ is the probability distribution of the random vector $\mathbf{u}$, and $\partial g(\theta, \mathbf{u})/\partial(\theta, \mathbf{u})$ is the Jacobian of the mapping $g(\cdot)$. The RJ-MCMC algorithm performs birth/death, dilation/erosion, spatial and mark shifts, and split/merge moves with probabilities $p_{\mathrm{birth}}$, $1 - p_{\mathrm{birth}}$, $p_{\mathrm{dilation}}$, $1 - p_{\mathrm{dilation}}$, $p_{\mathrm{shift}}$, $p_{\mathrm{mark}}$, $p_{\mathrm{split}}$ and $1 - p_{\mathrm{split}}$, respectively. Due to the Markovian nature of the prior distributions, all the proposed moves are local, having a complexity proportional to the size of the neighbourhood. These moves are detailed in the following subsections. For ease of presentation, we summarize the main aspects of each move, inviting the reader to consult Appendix C for more details.
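The acceptance step (15) is usually evaluated in the log domain to avoid overflow when posterior ratios are extreme. A generic accept/reject helper might look as follows (the argument decomposition mirrors the factors of the Green ratio; the interface is our own):

```python
import numpy as np

rng = np.random.default_rng(2)

def rj_accept(log_post_new, log_post_old, log_K_rev, log_K_fwd,
              log_pu_rev, log_pu_fwd, log_abs_jac):
    """Generic RJ-MCMC accept/reject step: evaluates rho = min(1, r) with
    the Green ratio r computed in log form, then draws the decision."""
    log_r = (log_post_new - log_post_old
             + log_K_rev - log_K_fwd
             + log_pu_rev - log_pu_fwd
             + log_abs_jac)
    # accept with probability min(1, exp(log_r))
    return bool(np.log(rng.uniform()) < min(0.0, log_r))

# a move whose Green ratio exceeds 1 is always accepted
always = rj_accept(0.0, -5.0, 0.0, 0.0, 0.0, 0.0, 0.0)
```

For fixed-dimension moves (mark and shift) the Jacobian term is zero in the log domain and (15) reduces to the usual Metropolis-Hastings ratio.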
a) Birth and death moves: The birth move proposes a new point $(\mathbf{c}_{N_\Phi+1}, \mathbf{m}_{N_\Phi+1})$ uniformly at random in the 3D cube. The spectral signature of the new point is proposed by extracting a fraction $(1 - u_\ell)$ from the current value of the background level $b_{i,j,\ell}$ according to the signal-to-background ratio (SBR) $w_\ell$, that is, for each wavelength $\ell$,
$$u_\ell \sim \mathcal{U}(0, 1), \qquad r_{N_\Phi+1,\ell} = w_\ell\, T\, (1 - u_\ell)\, b_{i,j,\ell}, \qquad b'_{i,j,\ell} = u_\ell\, b_{i,j,\ell} \qquad (16)$$
where $\mathcal{U}(0, 1)$ denotes the uniform distribution on the interval $(0, 1)$. The death move proposes the removal of a point. In contrast to the birth move, we modify the background level according to the inverse of the mapping (16), i.e.,
$$b'_{i,j,\ell} = b_{i,j,\ell} + \frac{r_{n,\ell}}{w_\ell\, T} \qquad (17)$$
b) Dilation and erosion moves: Birth moves have a low acceptance ratio, as the probability of randomly proposing a point within or close to the surfaces of interest is very low. However, this problem can be overcome by using the current estimate of the surface to propose points in regions of high probability. The dilation move proposes a point inside the neighbourhood of an existing surface with uniform probability across all possible neighbouring positions where a point can be added. Once a new position $\mathbf{c}_{N_\Phi+1}$ is proposed, the spectral signature is sampled in the same way as in the birth move (16). The complementary move (named erosion) proposes to remove a point $\mathbf{c}_n$ with one or more neighbours. In this case, the background is updated in the same way as in the death move.
c) Mark and shift moves: The mark move updates the log-intensity of a randomly chosen point $\mathbf{c}_n$. Each wavelength is updated independently using a Gaussian proposal with variance $\delta_m$ (also known as a Metropolis Gaussian random walk), that is
$$m'_{n,\ell} \sim \mathcal{N}(m_{n,\ell}, \delta_m), \qquad \ell = 1, \dots, L \qquad (18)$$
Similarly, the shift move updates the depth position of a uniformly chosen point using a Gaussian proposal with variance $\delta_t$,
$$t'_n \sim \mathcal{N}(t_n, \delta_t) \qquad (19)$$
The values of $\delta_m$ and $\delta_t$ are adjusted by cross-validation to yield an acceptance ratio close to 41% for each move, which is the optimal value, as explained in [22, Chapter 4].
d) Split and merge moves: Some pixels might present two points with overlapping impulse responses in depth. In such cases, a death move followed by two birth moves would happen with very low probability. Hence, as in MANIPOP, we propose a split move, which randomly picks a point $(\mathbf{c}_n, \mathbf{m}_n)$ and proposes two new points, $(\mathbf{c}_{k_1}, \mathbf{m}_{k_1})$ and $(\mathbf{c}_{k_2}, \mathbf{m}_{k_2})$. The log-intensity is proposed for each wavelength following the mapping
$$u_\ell \sim \mathcal{B}(\eta, \eta), \qquad e^{m_{k_1,\ell}} = u_\ell\, e^{m_{n,\ell}}, \qquad e^{m_{k_2,\ell}} = (1 - u_\ell)\, e^{m_{n,\ell}} \qquad (20)$$
where $\mathcal{B}(\cdot)$ denotes the beta distribution and $\eta$ is a fixed parameter. The new positions are determined according to
$$v \sim \mathcal{B}e(1/2), \qquad t_{k_1} = t_n - v\, L_h, \qquad t_{k_2} = t_n + (1 - v)\, L_h \qquad (21)$$
where $\mathcal{B}e(\cdot)$ denotes the Bernoulli distribution and $L_h$ is the length in bins of the instrumental response at the wavelength with the longest impulse response. The complementary move, named merge move, is performed by randomly choosing two points $\mathbf{c}_{k_1}$ and $\mathbf{c}_{k_2}$ inside the same pixel ($x_{k_1} = x_{k_2}$ and $y_{k_1} = y_{k_2}$) that satisfy the condition $|t_{k_1} - t_{k_2}| \leq L_h$. The merged point $(\mathbf{c}_n, \mathbf{r}_n)$ is obtained by the inverse mapping
$$e^{m_{n,\ell}} = e^{m_{k_1,\ell}} + e^{m_{k_2,\ell}}, \qquad t_n = \frac{t_{k_1} \sum_{\ell=1}^{L} e^{m_{k_1,\ell}} + t_{k_2} \sum_{\ell=1}^{L} e^{m_{k_2,\ell}}}{\sum_{\ell=1}^{L} \left(e^{m_{k_1,\ell}} + e^{m_{k_2,\ell}}\right)} \qquad (22)$$
which preserves the total pixel intensity and weights the spatial shift of each peak according to the sum of the intensity values.
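The key invariant of the split/merge pair is that the per-band intensity is conserved: splitting with fraction $u_\ell$ and then merging recovers the original log-intensities exactly. A minimal sketch of the two intensity mappings (the beta draw of the auxiliary variables is left to the caller here):

```python
import numpy as np

def split_intensity(m_n, u):
    """Split the log-intensities m_n of one point into two, preserving the
    per-band intensity: exp(m1) + exp(m2) = exp(m_n).
    u is a vector of auxiliary variables in (0, 1), one per band (drawn from
    a beta distribution in the actual move)."""
    m_n, u = np.asarray(m_n, float), np.asarray(u, float)
    return np.log(u) + m_n, np.log(1.0 - u) + m_n

def merge_intensity(m1, m2):
    """Inverse mapping of the split move: sum the intensities per band."""
    return np.log(np.exp(np.asarray(m1, float)) + np.exp(np.asarray(m2, float)))

m1, m2 = split_intensity([0.0, 1.0], [0.25, 0.5])
m_back = merge_intensity(m1, m2)
# merging recovers the original log-intensities [0.0, 1.0]
```

Because the mapping is deterministic given the auxiliary variables, its Jacobian enters the Green ratio (15) in closed form.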
1) Sampling the background: In order to exploit the conjugacy between the Poisson likelihood and the gamma priors for the background levels, we use a data augmentation scheme as in [33], which classifies each photon detection according to its source (target(s) or background), i.e.,
$$z_{i,j,t,\ell} = \sum_{n : (x_n, y_n) = (i,j)} \bar{z}_{i,j,t,\ell,n} + \bar{z}_{i,j,t,\ell,b}$$
$$\bar{z}_{i,j,t,\ell,b} \sim \mathcal{P}\left(g_{i,j,\ell}\, b_{i,j,\ell}\right), \qquad \bar{z}_{i,j,t,\ell,n} \sim \mathcal{P}\left(g_{i,j,\ell}\, r_{n,\ell}\, h_\ell(t - t_n)\right)$$
where $\bar{z}_{i,j,t,\ell,n}$ are the photons at bin $t$ associated with the $n$th surface and $\bar{z}_{i,j,t,\ell,b}$ are the ones associated with the background. In the augmented space defined by $(\bar{z}_{i,j,t,\ell,n}, \bar{z}_{i,j,t,\ell,b})$, the background levels are sampled as follows
$$\bar{z}_{i,j,t,\ell,b}\,|\,z_{i,j,t,\ell}, \Phi, b_{i,j,\ell} \sim \mathcal{B}in\left(z_{i,j,t,\ell},\ \frac{b_{i,j,\ell}}{b_{i,j,\ell} + \sum_n r_{n,\ell}\, h_\ell(t - t_n)}\right)$$
$$b_{i,j,\ell}\,|\,\bar{Z}, K, \Theta \sim \mathcal{G}\left(k_{i,j,\ell} + \sum_{t=1}^{T} \bar{z}_{i,j,t,\ell,b},\ \frac{\theta_{i,j,\ell}}{1 + g_{i,j,\ell}\, T\, \theta_{i,j,\ell}}\right) \qquad (23)$$
where $\mathcal{B}in(\cdot)$ denotes the binomial distribution. The transition kernel defined by (23) produces samples of $b_{i,j,\ell}$ distributed according to the marginal posterior distribution of $B$. In practice, we observed that only one iteration of this kernel is sufficient.
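One iteration of this kernel for a single pixel and wavelength can be sketched as follows. The binomial thinning attributes each detection to background versus signal, and the gamma conditional (shape $k$ plus the background photon count, scale $\theta/(1 + T\theta)$) is the standard Poisson-gamma conjugate update assumed by this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_background(z, signal_intensity, b, k, theta):
    """One data-augmented background update for one pixel/wavelength:
    attribute each detection to background vs. signal with a binomial draw,
    then sample b from its conjugate gamma conditional.

    z                : (T,) observed histogram
    signal_intensity : (T,) current signal intensity sum_n r_n * h(t - t_n)
    b                : current background level
    k, theta         : gamma prior shape and scale hyperparameters
    """
    z = np.asarray(z)
    T = len(z)
    p_bkg = b / (b + signal_intensity)   # per-bin background probability
    z_bkg = rng.binomial(z, p_bkg)       # augmented background counts
    shape = k + z_bkg.sum()
    scale = theta / (1.0 + T * theta)
    return rng.gamma(shape, scale)

# toy pixel: one surface at bins 100-109, true background level 2.0
T = 500
signal = np.zeros(T)
signal[100:110] = 20.0
z = rng.poisson(signal + 2.0)
b_new = sample_background(z, signal, b=1.0, k=0.01, theta=100.0)
```

Even starting from a wrong current value (`b=1.0` above), a single draw already concentrates near the true level, which is consistent with the observation that one iteration of the kernel suffices.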

B. Full algorithm
We adopt a multi-resolution approach to speed up the convergence of the RJ-MCMC algorithm, in a fashion similar to [14]. The dataset is downsampled by integrating photon detections in patches of $N_{bin} \times N_{bin}$ pixels. Hence, the number of pixels is reduced by a factor of $N_{bin}^2$, meaning fewer points and background levels to infer with $N_{bin}^2$ times more photons per pixel. The estimated point cloud at the coarse scale is upsampled using a simple nearest neighbour algorithm and used as initialization for the next (finer) scale. In all our experiments we repeat the process for $K = 3$ scales. The background hyperparameters $K$ and $\Theta$ are initialized with non-informative values, i.e., $k_{i,j,\ell} = 0.01$ and $\theta_{i,j,\ell} = 100$ for all pixels $(i,j)$ and wavelengths $\ell$. In finer scales, these hyperparameters are computed using the algorithm detailed in Section III-A3. The multi-resolution approach is finally summarized in Algorithm 1.
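The coarse-scale data construction is a simple block sum over the two spatial axes of the hypercube. A minimal sketch (assuming, for simplicity, spatial dimensions divisible by $N_{bin}$):

```python
import numpy as np

def downsample_cube(Z, n_bin):
    """Coarse-scale data for the multi-resolution scheme: integrate photon
    counts over n_bin x n_bin pixel patches. Z has shape
    (rows, cols, T, L); the spatial dimensions must be divisible by n_bin
    in this sketch."""
    nr, nc = Z.shape[0], Z.shape[1]
    assert nr % n_bin == 0 and nc % n_bin == 0
    Z = Z.reshape(nr // n_bin, n_bin, nc // n_bin, n_bin, *Z.shape[2:])
    return Z.sum(axis=(1, 3))  # sum over the two intra-patch axes

# toy (rows, cols, T, L) cube of ones
Z = np.ones((4, 4, 10, 2), dtype=int)
Z2 = downsample_cube(Z, 2)
# Z2 has shape (2, 2, 10, 2) and every entry equals 4
```

Note that the depth and spectral axes are untouched: only the spatial resolution changes between scales, so the coarse chain explores a much smaller point configuration space with better-populated histograms.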

V. SUBSAMPLING STRATEGY
Despite not being able to design compressive measurements along the depth axis, we can still reduce the number of measurements in the two spatial (horizontal and vertical) dimensions and in the spectral dimension [15]. Given the point positions, recovering their reflectivity profile reduces to a multispectral image restoration problem using measured data corrupted by Poisson noise. While many compressive sensing strategies have been proposed for measurements under this noise assumption, they face a limitation if multiple surfaces per pixel are considered: photon detections belonging to different wavelengths cannot be integrated into a single histogram, as the reconstruction problem becomes highly non-convex, preventing practical reconstruction algorithms.* Indeed, summing histograms associated with different wavelengths and including multiple peaks generates histograms with even more peaks (possibly overlapping), which makes the 3D reconstruction and the reflectivity estimation more difficult. As a consequence, we only consider subsampling of depth histograms, which incorporates all of the practical sampling limitations. Following the formulation of the observation model (2), the subsampling strategy consists of choosing the binary coefficients $G$ for a given compression level $W/L$, with $W$ the average number of observed bands per pixel. Several subsampling strategies have been proposed for different applications, such as halftoning and stippling [37], rendering, compressive spectral imaging [38]-[40], compressive computed tomography [41], [42] and geometry processing [43], amongst others [44]-[46]. These approaches exploit the sampling geometry of the sensing devices to design a set of criteria. Similarly to coded aperture snapshot spectral imaging systems, the distribution of reflectivity profiles of 3D surfaces in real natural scenes suggests a uniform sampling in the row, column and spectral axes. Following the design in Correa et al. [19], the coefficients are chosen according to the spatio-temporal characteristics of blue noise, which distributes the measurements in spectral and spatial dimensions as homogeneously as possible [47]. The binary cube $G$ is obtained by minimizing the variance of the (weighted) measurements per local neighbourhood, i.e.,
$$\min_{G}\ \sum_{\ell=1}^{L}\ \mathrm{VAR}\left\{\sum_{(i',j') \in \mathcal{M}_{ss}^{(i,j)}} w_{i',j'}\, g_{i',j',\ell}\right\} \qquad (24)$$
where $\mathrm{VAR}\{\cdot\}$ denotes the variance operator (taken across the pixels $(i,j)$), $\mathcal{M}_{ss}^{(i,j)}$ denotes the set of pixels in a local neighbourhood of $(i,j)$ and $w_{i',j'}$ are the weights. The minimization is simplified by dividing the data in slices of $L/W$ contiguous bands and running the algorithm introduced in [19] per slice. As shown in Fig. 5, the proposed strategy distributes the measurements uniformly in space, while other random strategies [15] tend to exhibit clusters, leaving some regions without measurements.

*As mentioned in the introduction, the system presented in [8] considers the integration of photons belonging to different histograms, but is limited to one surface per pixel.
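The variance criterion is easy to evaluate for a candidate mask, which also illustrates why structured codes beat random ones. The sketch below compares a purely random mask against a deterministic band-cycling mask (a crude stand-in for a blue noise code; the unweighted criterion and all names are our simplifications).

```python
import numpy as np

def neighbourhood_variance(G, radius=1):
    """Evaluate a binary sampling mask G (rows x cols x bands) by summing,
    over bands, the variance across pixels of the number of measurements in
    each local spatial neighbourhood (unweighted version of the criterion)."""
    nr, nc, L = G.shape
    total = 0.0
    for l in range(L):
        counts = []
        for i in range(radius, nr - radius):
            for j in range(radius, nc - radius):
                counts.append(G[i - radius:i + radius + 1,
                               j - radius:j + radius + 1, l].sum())
        total += np.var(counts)
    return total

rng = np.random.default_rng(4)
nr = nc = 16
L = 4  # W = 1 band measured per pixel in both schemes below

# random scheme: one band chosen independently per pixel
G_rand = np.zeros((nr, nc, L), dtype=int)
for i in range(nr):
    for j in range(nc):
        G_rand[i, j, rng.integers(L)] = 1

# structured scheme: bands cycled along diagonals, spreading each band
# as evenly as possible over the image
G_reg = np.zeros((nr, nc, L), dtype=int)
for i in range(nr):
    for j in range(nc):
        G_reg[i, j, (i + j) % L] = 1
```

Both masks measure exactly one band per pixel, yet the structured mask yields a far smaller per-band neighbourhood variance, i.e., no wavelength is left unmeasured in any local region.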

VI. EXPERIMENTS
To illustrate the efficacy of the proposed method, the new reconstruction algorithm is compared to other alternatives [5] using a synthetic dataset. Subsequently, the new subsampling scheme is compared with other random subsampling choices for a real MSL dataset. In all the experiments, the performance was measured using the following summary statistics:
• True detections $F_{\mathrm{true}}(\tau)$: probability of true detection as a function of the distance $\tau$, considering an estimated point as a true detection if there is another point in the ground truth point cloud in the same pixel ($x_n^{\mathrm{true}} = x_n^{\mathrm{est}}$ and $y_n^{\mathrm{true}} = y_n^{\mathrm{est}}$) such that $|t_n^{\mathrm{true}} - t_n^{\mathrm{est}}| \leq \tau$.
• False detections $F_{\mathrm{false}}(\tau)$: number of estimated points that cannot be assigned to a ground truth point at a distance $\tau$.
• Mean intensity absolute error at distance $\tau$ (IAE): mean across all the points of the intensity absolute error $\sum_{\ell=1}^{L} |r_{n,\ell}^{\mathrm{true}} - r_{n,\ell}^{\mathrm{est}}|$, normalized with respect to the total number of ground truth points. The ground truth and estimated points are coupled using the probability of detection $F_{\mathrm{true}}(\tau)$. Note that if a point was falsely estimated or a ground truth point was not found, then it is considered to have resulted in an error of $\sum_{\ell=1}^{L} |r_{n,\ell}|$.
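As an illustration of the first statistic, the sketch below computes a simplified, greedy version of $F_{\mathrm{true}}(\tau)$ over toy point lists (the paper couples the point sets more carefully; the names are ours).

```python
def f_true(est, gt, tau):
    """Fraction of ground-truth points matched by an estimated point in the
    same pixel within tau depth bins (greedy one-to-one matching).

    est, gt : lists of (x, y, t) tuples
    tau     : depth tolerance in bins
    """
    matched = 0
    used = set()  # each estimated point may be matched at most once
    for (x, y, t) in gt:
        for idx, (xe, ye, te) in enumerate(est):
            if idx in used:
                continue
            if xe == x and ye == y and abs(te - t) <= tau:
                matched += 1
                used.add(idx)
                break
    return matched / len(gt)

gt = [(0, 0, 100), (1, 1, 200)]
est = [(0, 0, 103), (1, 1, 260), (2, 2, 50)]
# tau = 5 matches only the first ground-truth point; tau = 80 matches both
```

The unmatched estimated points (here `(2, 2, 50)` at small $\tau$) are exactly the ones counted by $F_{\mathrm{false}}(\tau)$.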

Hyperparameter | Coarse scale | Fine scale
$\gamma_a$ | $e^2$ | $e^3$
$\lambda_a$ | $(N_r N_c / N_{bin}^2)^{1.5}$ | $(N_r N_c)^{1.5}$

The comparison is done with normalized intensity values, that is, $\sum_{t=1}^{T} h_\ell(t) = 1$ for $\ell = 1, \dots, L$.
• Background normalized mean squared error $\mathrm{NMSE}_B$: mean of the normalized squared error of the estimated background at each wavelength, i.e.,
$$\mathrm{NMSE}_B = \frac{1}{L} \sum_{\ell=1}^{L} \frac{\|\hat{\mathbf{b}}_\ell - \mathbf{b}_\ell\|_2^2}{\|\mathbf{b}_\ell\|_2^2}$$

A. Synthetic data
We first assessed the performance of the proposed algorithm using a synthetic dataset created from the "Art" scene of the Middlebury dataset [48], as shown in Fig. 6a. The measurements were obtained by simulating the single-photon multispectral Lidar system of [16]. The generated dataset has N r = 283 and N c = 231 pixels, L = 3 wavelengths (red, green and blue), a bin width of 0.3 mm and a mean of 9.83 photons per wavelength per pixel, where approximately 3.27 photons are due to the background illumination. We compared the proposed method to a two-stage algorithm that first estimates the point positions using MANIPOP [14] (single-wavelength multiple-return state-of-the-art algorithm) on the most energetic spectral band and then infers the spectral signatures with fixed point positions, following the procedure suggested by Wallace et al. [5], as summarized in Algorithm 2. Fig. 6 shows the estimated 3D point clouds, whereas Fig. 7 illustrates F true (τ ), F false (τ ) and IAE for both methods.
Algorithm 2 Two-stage MANIPOP [5], [14]
Input: MSL waveforms Z
Depth estimation: find the most powerful wavelength w
(Φ, B) ← MANIPOP(Z_w)
for ℓ = 1, …, L and ℓ ≠ w do
  Update (Φ, B) using MANIPOP(Z_ℓ) in a fixed dimensional setting (only using background and reflectivity moves)
end for

The proposed algorithm performs significantly better than the two-stage alternative, as it finds 98.4% of the true points, whereas the two-stage MANIPOP only recovers 87.65% of these points. The proposed MUSAPOP algorithm performs slightly worse in terms of false detections, finding 8990 false points in comparison to 4172 by the competitor. In terms of intensity estimation, MUSAPOP obtains significantly better results, having an asymptotic IAE of 0.88, whereas the two-stage algorithm obtains an IAE of 1.24. The estimated background levels are shown in Fig. 8: the proposed method yields an NMSE of 0.053, whereas the two-stage algorithm leads to an NMSE of 0.264. The total execution time for MUSAPOP was 1300 s, whereas the two-stage MANIPOP required 593 s.

B. Real MSL data
The proposed subsampling scheme was evaluated on a real MSL dataset [16]. The scene consists of L = 32 wavelengths sampled at regular intervals of 10 nm from 500 nm to 810 nm, N_r = N_c = 198 pixels and T = 4500 histogram bins. The target is composed of a series of blocks of different types of clay and two leaves. Fig. 9 shows an RGB image of the scene and the 3D reconstruction using the full acquisition time of 10 ms per wavelength per pixel. We compare the blue noise codes mentioned in Section V with the random schemes introduced in [15], all yielding the same total number of measurements and acquisition time:
1) Random sampling without overlap: W out of L bands per pixel are sampled without replacement (i.e., for a given pixel, each wavelength is measured at most once).
2) Random sampling with overlap: for each wavelength, a fraction W/L of the pixels is sampled without replacement.
3) Proposed sampling method: for each wavelength, a fraction W/L of the pixels is chosen following the scheme presented in Section V.
The codes were evaluated for W = 1, 2, 4, 8, 16 bands per pixel and acquisition times of 0.1, 1 and 10 ms per measurement (i.e., the histogram of one wavelength), using as ground truth the reconstruction obtained with all the measurements and an acquisition time of 10 ms. Fig. 10 shows the percentage of true detections, the IAE and the background NMSE for all codes, acquisition times and numbers of sensed bands per pixel W. In terms of the total number of estimated points, the blue noise codes achieve better performance in high compression scenarios (W = 1, 2) and low acquisition times (0.1 and 1 ms). For example, for an acquisition time of 1 ms, almost all points are reconstructed using blue noise codes, whereas the random codes only yield around 97% of the ground-truth points. The choice of the subsampling strategy has a stronger impact in terms of IAE, with the blue noise codes achieving a smaller IAE for all acquisition times and numbers of bands per pixel. Fig.
11 shows the execution time for an acquisition time of 0.1 ms and different numbers of sensed bands. The proposed RJ-MCMC sampler has a complexity proportional to the number of photon detections in the support of the impulse response around the 3D point being modified, whereas the background update has a complexity proportional to the total number of active histogram bins in the Lidar scene. The background extraction step required around 15% of the total execution time, which could be significantly reduced if all the bands were processed in parallel instead of sequentially, as is done in the current implementation.
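For reference, the first random subsampling scheme above (W of the L bands per pixel, without replacement) can be generated as a binary sensing mask along the following lines; the function name and mask representation are our own:

```python
import random

def random_mask_without_overlap(n_rows, n_cols, n_bands, w, seed=0):
    """Scheme 1: for every pixel, sample w of the n_bands wavelengths
    without replacement, so each (pixel, band) pair is measured at most
    once and every pixel is sensed in exactly w bands."""
    rng = random.Random(seed)
    mask = [[[0] * n_bands for _ in range(n_cols)] for _ in range(n_rows)]
    for i in range(n_rows):
        for j in range(n_cols):
            for band in rng.sample(range(n_bands), w):
                mask[i][j][band] = 1
    return mask
```

The total number of measurements is n_rows × n_cols × w, i.e., a fraction W/L of the full spectral cube, matching the budget used by the other schemes.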

VII. CONCLUSIONS AND FUTURE WORK
This paper has presented MUSAPOP, a new 3D reconstruction algorithm for multispectral Lidar data which is able to find multiple surfaces per pixel. The proposed method leads to better reconstruction quality than other alternatives, as it considers all measured wavelengths in a single observation model. While based on some ideas initially investigated in MANIPOP [14], MUSAPOP also relies on new strategies to deal with the very high dimensionality of the multispectral problem. The first novelty is the use of an empirical Bayes prior for the background levels, which significantly speeds up the RJ-MCMC algorithm. A second new idea is the use of dilation/erosion and split/merge moves, yielding higher acceptance rates in the multispectral case. Finally, the subsampling strategy further reduces both the algorithm's complexity and the total number of measurements, leading to faster acquisitions and reconstructions. The sparse point cloud representation of the proposed method speeds up the computations proportionally to the number of measurements, whereas other dense models [12], [13] would not achieve similar improvements. Further work will be devoted to the design of compressive systems that are not limited to subsampling the spectral cube. Moreover, the compression can also be extended to the depth information, for example using a range-gated camera [17].

APPENDIX A GAMMA MARKOV RANDOM FIELD
Most classical prior distributions (often referred to as regularization terms in the convex optimization literature) for images, such as the Laplacian [24] and total variation [49], only penalize variations between neighbouring pixels, ignoring the mean intensity of the image. However, the gamma Markov random field [29] includes a penalization of the mean intensity, which promotes pixels with smaller values. This can be shown by inspecting the marginal distribution (defined in [30]), where α_B is a hyperparameter controlling the degree of smoothness and M_B denotes the set of pixels in the neighbourhood of pixel (i, j). For an image of constant intensity c, we have b_{i,j,ℓ} = c for all pixels and spectral bands, and the resulting density decreases as c grows. This dependency on the mean promotes reconstructions with lower background levels, decreasing the acceptance ratio of death and erosion moves (which propose to increase the background levels). In the case of MANIPOP, only one band is considered (L = 1), so the bias towards smaller background levels does not significantly impact the overall reconstruction. However, in the MSL case (L ≫ 1), the reconstruction quality is reduced, hindering the use of gamma Markov random fields.
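This difference can be checked numerically: the total variation of a constant image is zero regardless of its intensity, whereas a per-pixel gamma negative log-density grows with the background level. The sketch below uses a single independent gamma term per pixel, with arbitrary shape and scale values, as a stand-in for the (more elaborate, neighbour-coupled) gamma MRF, only to illustrate the mean-intensity penalization:

```python
import math

def tv_penalty(img):
    """Anisotropic total variation: sum of |differences| between horizontal
    and vertical neighbours; it is blind to the overall intensity level."""
    n, m = len(img), len(img[0])
    tv = 0.0
    for i in range(n):
        for j in range(m):
            if i + 1 < n:
                tv += abs(img[i + 1][j] - img[i][j])
            if j + 1 < m:
                tv += abs(img[i][j + 1] - img[i][j])
    return tv

def gamma_neglogpdf(b, k=1.2, theta=0.5):
    """Negative log-density of Gamma(k, theta); grows with b for large b,
    penalizing high background levels (k, theta arbitrary here)."""
    return -((k - 1) * math.log(b) - b / theta
             - k * math.log(theta) - math.lgamma(k))

flat_low = [[0.1] * 4 for _ in range(4)]
flat_high = [[10.0] * 4 for _ in range(4)]
print(tv_penalty(flat_low), tv_penalty(flat_high))   # both 0.0: TV ignores the mean
print(gamma_neglogpdf(0.1) < gamma_neglogpdf(10.0))  # True: gamma penalizes large b
```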

APPENDIX B EMPIRICAL PRIOR FOR THE BACKGROUND LEVELS
The prior for the background levels is chosen to minimize the Kullback-Leibler divergence in (9), where the correlated model q is given by (10). The minimization of (9) decouples across pixels and wavelengths: considering the product of independent gamma distributions in (8), the problem reduces to

(k_{i,j,ℓ}, θ_{i,j,ℓ}) = arg min_{k,θ} { k log θ + log Γ(k) − (k − 1) E_q{log b_{i,j,ℓ}} + E_q{b_{i,j,ℓ}}/θ } (27)

for all pixels i = 1, …, N_r and j = 1, …, N_c and wavelengths ℓ = 1, …, L. The expected values E_q{b_{i,j,ℓ}} and E_q{log b_{i,j,ℓ}} cannot be obtained in closed form for the Poisson-Gaussian model of (10). Thus, we approximate them numerically by obtaining MCMC samples of b_{i,j,ℓ}. As explained in [30], off-the-shelf sampling strategies (e.g., Hamiltonian Monte Carlo [22, Chapter 9]) do not scale well with the dimension of the problem, being inefficient when applied to large multispectral Lidar datasets. Hence, we consider proposals from a Gaussian approximation of (10) (as detailed in [24]) using the perturbation optimization algorithm [50], accepting or rejecting them according to the Metropolis-Hastings rule [22], [24]. We generate 10^3 samples {b^(s)_{i,j,ℓ}, s = 1, …, 10^3} and compute the desired expected values as the corresponding empirical averages over the samples. Finally, the values of the hyperparameters are obtained by setting θ_{i,j,ℓ} = E_q{b_{i,j,ℓ}}/k_{i,j,ℓ} and minimizing (27) with respect to k_{i,j,ℓ} using a one-dimensional Newton method.
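The final fitting step can be sketched as follows: given MCMC samples of b_{i,j,ℓ}, the empirical averages of b and log b are computed, the scale is set from the mean, and the shape is found with a one-dimensional Newton iteration on the stationarity condition ψ(k) − log k = E_q{log b} − log E_q{b}. The digamma/trigamma functions are approximated here by finite differences of lgamma, and all names are illustrative, not the paper's implementation:

```python
import math
import random

def fit_gamma_kl(samples, iters=50):
    """Fit Gamma(k, theta) to samples by matching E{b} and E{log b}.
    Solves psi(k) - log(k) = E{log b} - log(E{b}) with Newton's method,
    then sets the scale as theta = E{b} / k."""
    mean_b = sum(samples) / len(samples)
    mean_log_b = sum(math.log(b) for b in samples) / len(samples)
    target = mean_log_b - math.log(mean_b)  # <= 0 by Jensen's inequality
    h = 1e-5  # finite-difference approximations of digamma and trigamma
    psi = lambda x: (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)
    psi1 = lambda x: (math.lgamma(x + h) - 2 * math.lgamma(x)
                      + math.lgamma(x - h)) / h ** 2
    k = 1.0
    for _ in range(iters):
        grad = psi(k) - math.log(k) - target
        curv = psi1(k) - 1.0 / k  # > 0, so the root is unique
        k = max(k - grad / curv, 1e-3)  # guard against overshooting
    return k, mean_b / k
```

When the samples are drawn from a gamma distribution, the fitted shape and scale recover the generating parameters up to Monte Carlo error, which is the behaviour expected of the KL-optimal gamma approximation.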