UNO: Unlimited Sampling Meets One-Bit Quantization

Recent results in one-bit sampling provide a framework for relatively low-cost, low-power sampling at a high rate by employing time-varying sampling threshold sequences. Another recent development in sampling theory is unlimited sampling, a high-resolution technique that relies on modulo ADCs to yield an unlimited dynamic range. In this paper, we leverage the appealing attributes of these two techniques to propose a novel unlimited one-bit (UNO) sampling approach. In this framework, the information on the distance between the input signal value and the threshold is preserved, and we exploit it to accurately reconstruct the signal from its one-bit samples via the randomized Kaczmarz algorithm (RKA). In the presence of noise, we employ the recent plug-and-play (PnP) priors technique with the alternating direction method of multipliers (ADMM) to integrate state-of-the-art regularizers into the reconstruction process. Numerical experiments with RKA- and PnP-ADMM-based reconstruction illustrate the effectiveness of our proposed UNO, including its superior performance compared to one-bit $\Sigma\Delta$ sampling.


I. INTRODUCTION
Sampling theory lies at the heart of all modern digital processing systems. The original sampling problem entails identifying a continuous function on Euclidean space from discrete data samples. It is addressed by the classical sampling theorem, commonly, and variously, attributed to Cauchy [1], de La Vallée Poussin [2], Whittaker [3], Ogura [4], Kotelńikov [5], Raabe [6], Shannon [7], and/or Someya [8]. A seminal result in this context, referred to as the Whittaker-Kotelńikov-Shannon (or, simply, Shannon's) theorem, states that it is possible to fully recover a bandlimited function from values measured on a regular sampling grid as long as the bandlimitation is an interval with length not exceeding the density of the sampling grid. Restating this in signal processing terms, a lowpass bandlimited signal can be perfectly reconstructed from its discrete samples taken uniformly at a sampling frequency that is at least the Nyquist rate, i.e., twice the signal bandwidth. During the past few decades, several variants and extensions of this result have solidified the extensive role of sampling theory in science and engineering [9]–[11].
Shannon's theorem assumes the existence of samples that are of infinite precision and infinite dynamic range (DR). In practice, however, it is realized by the quantization of signals through analog-to-digital converters (ADCs) that clip or saturate whenever the signal amplitude exceeds the maximum recordable ADC voltage, leading to a significant information loss. The effects of finite-precision quantization are characterized in the form of rate-distortion theory [12,13]. However, investigations into finite DR or clipping effects are relatively recent [14]–[17]. Substantial work has been done and is still ongoing to overcome this problem, and the literature is too large to summarize here; see, e.g., [18] and the references therein for comparisons of various techniques. Overall, these approaches require declipping [19], multiple ADCs [20], or scaling techniques [21], which are expensive and cumbersome. Recently, some studies [18,22,23] have proposed the unlimited sampling architecture to fully overcome this limitation by employing modular arithmetic. To perfectly reconstruct the signal of interest from modulo samples (up to an unknown constant), unlimited sampling theory requires the sampling rate to be slightly higher than the Nyquist rate and a norm estimate of the bandlimited signal to be known.
Conventional multi-bit ADCs require a very large number of quantization levels to represent the original continuous signal at high resolution. Sampling at high data rates with high-resolution ADCs, however, dramatically increases the overall power consumption and manufacturing cost of such ADCs [24]. This problem is exacerbated in systems that require multiple ADCs, such as large array receivers [25]. An immediate solution to these challenges is to use fewer bits for sampling. Therefore, in recent years, the design of receivers with low-complexity one-bit ADCs has been emphasized to meet the requirements of both wide signal bandwidth and low cost/power. One-bit quantization is an extreme quantization scenario, in which the ADCs merely compare the signals with given threshold levels, producing sign (±1) outputs. This enables the signal processing equipment to sample at a very high rate, yet with considerably lower cost and energy consumption than conventional ADCs [24], [26]–[29].
In the classical problem of one-bit sampling, the signal is reconstructed by comparing the signal with a fixed, usually zero, threshold. This leads to difficulties in estimating signal parameters. In particular, when a zero threshold is used, the power information of the input signal x is lost in the one-bit data because the signs of x and ηx are identical for η > 0. This problem has been addressed in a few recent works [28], [30]–[35], which show that time-varying sampling thresholds enable better estimation of the signal characteristics. In particular, time-varying thresholds were considered for covariance recovery from one-bit measurements in [28]. This was extended in [33] for a significantly improved estimation of the signal autocorrelation via the modified arcsine law. In non-stationary scenarios, [33] applied the modified arcsine law to utilize time-varying sampling thresholds. Applications of one-bit sampling to diverse problems such as sparse parameter estimation [31], localization [36], and phase retrieval [32] have also appeared in the contemporary literature.
Evidently, one-bit and unlimited sampling frameworks address complementary requirements. A one-bit ADC only compares an input signal with a given threshold. Therefore, essentially, one-bit sampling is indifferent to DR because, apart from the comparison bit, other information such as the distance between the signal value and the threshold is not stored. On the other hand, the self-reset ADC in unlimited sampling provides a natural approach to producing judicious time-varying thresholds for one-bit ADCs.
In this paper, to harness advantages of both methods, we propose unlimited one-bit (UNO) sampling to design sampling thresholds which are highly informative about the signal of interest.

A. Prior Art
Unlimited sampling of continuous-time signals that are sparse in Fourier domain was discussed in [37].
Extensions to graph signals [38], multi-channel arrays [39], and sparse outliers (noise) [40] have also been proposed. Reconstruction algorithms have included wavelet-based [23], generalized approximate message passing [41], and local average [42] techniques. Very recently, non-idealities in hardware prototyping were considered in [43,44]; a computational sampling strategy in the form of unlimited sampling with hysteresis [45] was found to be more flexible for circuit design specifications.
To reconstruct the full-precision signal from one-bit sampled data, conventional approaches [46,47] include maximum likelihood estimation (MLE) and weighted least squares. However, these methods have high computational cost, especially for high-dimensional input signals. To this end, we propose using the randomized Kaczmarz algorithm (RKA) [48,49], an iterative algorithm for solving the systems of linear inequalities that arise naturally in the one-bit quantization framework. While the deterministic version [50] of the Kaczmarz method usually selects the linear equations sequentially, the RKA is random in its selection in each iteration, leading to faster convergence. The RKA is simple to implement and performs comparably with state-of-the-art optimization methods.
Among prior studies involving both one-bit and unlimited frameworks, the state-of-the-art results in [51] proposed one-bit Σ∆ quantization via unlimited sampling, whose objective is to shrink the DR gap between the input signal and its one-bit samples. This study developed a guaranteed reconstruction as long as the DR of the input signal is less than the DR of the one-bit data (i.e., 1). However, when the ratio of the input signal amplitude to the ADC threshold is large, the imperfect noise shaping in sigma-delta conversion degrades this reconstruction. Contrary to this work, our proposed UNO technique focuses on a different problem, i.e., shrinking the DR gap between the input signal and the time-varying sampling thresholds. One-bit sampling is typically performed at significantly high rates. As a result, the observation inequalities form an overdetermined system. When the difference between the DR of the input signal and that of the thresholds increases, the reconstruction degrades significantly. We show that jointly exploiting both unlimited and one-bit sampling techniques provides a more efficient solution by considerably reducing the aforementioned gap.
In practice, errors arising from quantization noise degrade the reconstruction quality in the unlimited sampling framework. In this context, [18] derived reconstruction guarantees by including this error as bounded additive noise on the modulo samples. Contrary to this approach, we consider the more realistic case of additive noise on the input signal. We show that our RKA-based reconstruction is also effective for noisy one-bit sampled signals because it is independent of the statistical properties of the modulo samples.

B. Our Contributions
Our main contributions in this paper are: 1) Combined unlimited and one-bit sampling framework. In the proposed UNO framework, we leverage the benefits of both one-bit and unlimited sampling techniques. The result is a sampling approach that yields unlimited DR and a low-cost, low-power receiver while retaining a high sampling rate. We design time-varying sampling thresholds for one-bit quantization whose DR is closer to that of the original signal. This aids in accurately storing the information of the distance between the signal values and the thresholds for use in the signal reconstruction task. We show that, compared to one-bit reconstruction with random thresholds [24], our proposed UNO sampling based on time-varying thresholds performs better, especially for high-DR signals.
2) RKA-based reconstruction. The signal reconstruction from one-bit measurements requires solving an overdetermined linear feasibility problem, which we recast as a one-bit polyhedron and efficiently solve via the RKA. By generating an abundant number of one-bit samples, we show that the singular values of the one-bit data matrix that creates the one-bit polyhedron are determined solely by the number of time-varying threshold sequences employed in one-bit sampling. Further, we numerically investigate the effects of the ADC threshold and the signal amplitude on the RKA-based UNO reconstruction.
3) Performance guarantees. Our theoretical analyses show that a proper selection of a sufficient number of samples further enhances the reconstruction performance of the UNO. We prove that the convergence rate of the RKA applied to the one-bit polyhedron depends on the size of the input signal and the total number of RKA iterations. In this context, we also obtain a lower bound on the number of iterations required for perfect reconstruction. 4) Reconstruction in the presence of additive noise. When the input signal is contaminated with additive noise, we apply the recently introduced plug-and-play (PnP) priors [52] within the alternating direction method of multipliers (ADMM) as an additional reconstruction step. In image denoising problems, PnP-ADMM replaces the shrinkage step of the standard ADMM algorithm with any off-the-shelf denoiser to ensure that the noise variance is sufficiently suppressed. Although PnP-ADMM appears ad hoc, it yields better performance than state-of-the-art methods in several different inverse problems [52,53]. For the noisy UNO, we deploy this algorithm to reconstruct the original signal from overdetermined and underdetermined noisy systems. Moreover, we show that additive noise on the input signal contaminates the modulo samples with noise that can be expressed in terms of the input noise.

C. Organization and Notations
In the next section, we provide an introduction to one-bit quantization with time-varying sampling thresholds. In particular, the one-bit sampled signal reconstruction problem is formulated as an overdetermined system of linear inequalities. In Section III, we recall the concept of unlimited sampling as proposed in [18,22]. We introduce the RKA in the context of signal reconstruction in Section IV. This is a prelude to Section V, which proposes UNO sampling to design judicious thresholds and guarantee the one-bit signal reconstruction in the high-DR regime. In Section VI, we provide several numerical experiments to illustrate UNO-based sampling and analyze the reconstruction error. We consider the noisy measurement scenario in Section VII and conclude in Section VIII.
Throughout this paper, we use boldface lowercase, boldface uppercase, and calligraphic letters for vectors, matrices, and sets, respectively. The notations $\mathbb{C}$, $\mathbb{R}$, and $\mathbb{Z}$ represent the sets of complex, real, and integer numbers, respectively. We represent a vector $\mathbf{x}$ in terms of its elements $\{x_i\}$ or $(\mathbf{x})_i$ as $\mathbf{x} = [x_i]$. We use $(\cdot)^{\top}$ and $(\cdot)^{\mathrm{H}}$ to denote the vector/matrix transpose and the Hermitian transpose, respectively. The identity matrix of size $N$ is $\mathbf{I}_N \in \mathbb{R}^{N \times N}$. The Frobenius norm of a matrix $\mathbf{B} \in \mathbb{C}^{M \times N}$ is defined as $\|\mathbf{B}\|_F = \sqrt{\sum_{r=1}^{M} \sum_{s=1}^{N} |b_{rs}|^2}$, where $b_{rs}$ is the $(r,s)$-th entry of $\mathbf{B}$. The function $\operatorname{diag}(\cdot)$ outputs a diagonal matrix with the input vector along its main diagonal. The $\ell_p$-norm of a vector $\mathbf{b}$ is $\|\mathbf{b}\|_p = \left(\sum_i |b_i|^p\right)^{1/p}$.

II. ONE-BIT SAMPLING: OVERDETERMINED LINEAR SYSTEM FORMULATION
Several approaches have been proposed in the literature to reconstruct the signal of interest from one-bit samples, with most of them formulating this task as an optimization problem. For example, the covariance matrix formulation of [24] employs a cyclic optimization method to recover the input autocorrelation elements. A convex program based on Gauss-Legendre integration to recover the input covariance matrix from one-bit sampled data was suggested in [33]. Other recent works exploit the sparsity of the signal and apply techniques such as $\ell_1$-norm minimization [54,55], $\ell_1$-regularized MLE [47,56], log-relaxation [57], and Lasserre's semidefinite program relaxation [36] to lay the ground for signal reconstruction. In the following, we explain our one-bit polyhedron formulation, wherein an efficient and easily implementable solver of linear feasibility problems is applied in lieu of the aforementioned application-specific methods.

A. One-Bit Quantization Using Time-Varying Thresholds
Consider a bandlimited continuous-time signal $x \in \mathrm{PW}_{\Omega}$ that we represent via Shannon's sampling theorem as [10]
$$x(t) = \sum_{k \in \mathbb{Z}} x(kT)\, \operatorname{sinc}\!\left(\frac{t}{T} - k\right), \qquad (1)$$
where $1/T$ is the sampling rate, $\Omega$ is the signal bandwidth, and $\operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}$ is an ideal low-pass filter. Denote the uniform samples of $x(t)$ with the sampling rate $1/T$ by $x_k = x(kT)$.
In practice, the discrete-time samples occupy pre-determined quantized values. We denote the quantization operation on $x_k$ by the function $Q(\cdot)$, which yields the quantized signal $r_k = Q(x_k)$. In one-bit quantization, compared to zero or constant thresholds, time-varying sampling thresholds yield a better reconstruction performance [24,33]. These thresholds may be chosen from any distribution. In this work, to be consistent with the state-of-the-art [24,28,46], we consider a Gaussian non-zero time-varying threshold sequence $\boldsymbol{\tau} = [\tau_k]$. For one-bit quantization with such time-varying sampling thresholds, $r_k = \operatorname{sgn}(x_k - \tau_k)$.
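As an illustrative sketch (in Python with NumPy; the signal, dimensions, and seed here are hypothetical choices, not the paper's experimental settings), one-bit sampling against $m$ Gaussian time-varying threshold sequences amounts to a single sign comparison per sample:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 1000, 50                         # samples, threshold sequences
t = np.arange(n) / 1000.0
x = np.sin(2 * np.pi * 5 * t)           # toy bandlimited input

# m Gaussian time-varying threshold sequences, one per column
Gamma = rng.standard_normal((n, m))

# One-bit data: r_k^(l) = sgn(x_k - tau_k^(l))
R = np.sign(x[:, None] - Gamma)
```

Each column of `R` is one full pass of sign comparisons; only these ±1 bits (and the known thresholds) are available to the reconstruction stage.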

B. One-Bit Polyhedron
The information gathered through one-bit sampling with time-varying thresholds may be formulated in terms of an overdetermined linear system of inequalities. We have $r_k = +1$ when $x_k > \tau_k$ and $r_k = -1$ when $x_k < \tau_k$. Collecting all the elements in the vectors $\mathbf{x} = [x_k]$, $\mathbf{r} = [r_k]$, and $\boldsymbol{\tau} = [\tau_k]$, one can formulate the geometric location of the signal as
$$r_k \left(x_k - \tau_k\right) \geq 0. \qquad (2)$$
Then, the vectorized representation of (2) is
$$\boldsymbol{\Omega}\left(\mathbf{x} - \boldsymbol{\tau}\right) \succeq \mathbf{0}, \qquad (3)$$
where $\boldsymbol{\Omega} \triangleq \operatorname{diag}(\mathbf{r})$. Suppose $\mathbf{x}, \boldsymbol{\tau} \in \mathbb{R}^{n}$, and that $\boldsymbol{\tau}^{(\ell)}$ denotes the time-varying sampling threshold in the $\ell$-th signal sequence, where $\ell \in \{1, \dots, m\}$. For the $\ell$-th signal sequence, (3) becomes
$$\boldsymbol{\Omega}^{(\ell)}\left(\mathbf{x} - \boldsymbol{\tau}^{(\ell)}\right) \succeq \mathbf{0}, \qquad (4)$$
where $\boldsymbol{\Omega}^{(\ell)} = \operatorname{diag}\left(\mathbf{r}^{(\ell)}\right)$. Denote the concatenation of all $m$ sign matrices as
$$\tilde{\boldsymbol{\Omega}} = \left[\boldsymbol{\Omega}^{(1)\top} \cdots \boldsymbol{\Omega}^{(m)\top}\right]^{\top}. \qquad (5)$$
Rewrite the $m$ linear systems of inequalities in (4) as
$$\tilde{\boldsymbol{\Omega}}\, \mathbf{x} \succeq \operatorname{vec}(\mathbf{R}) \odot \operatorname{vec}(\boldsymbol{\Gamma}), \qquad (6)$$
where $\mathbf{R}$ and $\boldsymbol{\Gamma}$ are matrices whose columns are the sequences $\{\mathbf{r}^{(\ell)}\}_{\ell=1}^{m}$ and $\{\boldsymbol{\tau}^{(\ell)}\}_{\ell=1}^{m}$, respectively, and $\odot$ denotes the element-wise product. The linear system of inequalities in (6) associated with the one-bit sampling scheme is overdetermined.
We recast (6) into the one-bit polyhedron
$$\mathcal{P} = \left\{\mathbf{x} \;\middle|\; \tilde{\boldsymbol{\Omega}}\, \mathbf{x} \succeq \operatorname{vec}(\mathbf{R}) \odot \operatorname{vec}(\boldsymbol{\Gamma})\right\}. \qquad (7)$$
Instead of complex high-dimensional optimization with techniques such as MLE, our objective is to employ the polyhedron (7), which encapsulates the desired signal $\mathbf{x}$ and leads to solving linear inequalities with linear convergence in expectation.
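A quick numerical check (with illustrative dimensions of our choosing) confirms that the generating signal satisfies every inequality of the polyhedron (7), since each row reduces to $r_k^{(\ell)} x_k \geq r_k^{(\ell)} \tau_k^{(\ell)}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 20
x = rng.uniform(-1.0, 1.0, n)                  # true signal
Gamma = rng.standard_normal((n, m))            # threshold sequences as columns
R = np.sign(x[:, None] - Gamma)                # one-bit data

# Row by row, Omega~ x >= vec(R) * vec(Gamma) reads r_k * x_k >= r_k * tau_k,
# which holds by construction of the sign data.
lhs = (R * x[:, None]).ravel(order="F")        # entries of Omega~ x
rhs = (R * Gamma).ravel(order="F")             # entries of vec(R) ⊙ vec(Gamma)
print(bool(np.all(lhs >= rhs)))                # True: x lies in the polyhedron
```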

III. UNLIMITED SAMPLING
In a variety of applications, clipping or saturation poses a serious problem to signal reconstruction.
For instance, in scientific imaging systems such as ultrasound [14], radar [58], and seismic imaging [59], strong reflections or pulse echoes blind the sensor. In audio processing, clipped sound results in high-frequency artifacts [15]. In this context, unlimited sampling suggests that, instead of taking point-wise samples of the bandlimited function x(t), the signal is digitized using a self-reset ADC with an appropriately selected threshold λ > 0 such that any signal value outside the range [−λ, λ] is folded back into that range [18,22]. The folding corresponds to introducing a non-linearity in the sensing process [18,22].
We denote the folding by the modulo operator $\mathcal{M}_{\lambda}$, which represents the mapping
$$\mathcal{M}_{\lambda} : x \mapsto 2\lambda \left( \left[\!\left[ \frac{x}{2\lambda} + \frac{1}{2} \right]\!\right] - \frac{1}{2} \right), \qquad (8)$$
where $[\![x]\!] = x - \lfloor x \rfloor$ denotes the fractional part, and $\bar{x}_k = \mathcal{M}_{\lambda}(x(kT))$ are the modulo samples of $x(t)$.
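A minimal sketch of the folding map (the numerical values are purely illustrative):

```python
import numpy as np

def modulo_fold(x, lam):
    """Self-reset ADC map M_lambda: fold amplitudes into [-lam, lam)."""
    return np.mod(x + lam, 2.0 * lam) - lam

x = np.array([0.2, 0.7, -1.3])
y = modulo_fold(x, 0.5)          # -> approximately [0.2, -0.3, -0.3]
# The residue x - y is always an integer multiple of 2*lam (here, 2*lam = 1)
```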
The unlimited sampling theorem [18] (reproduced below) states that, if an estimate of the norm of the bandlimited signal is known, then its perfect reconstruction (up to additive multiples of 2λ) from its modulo samples is possible with a sampling period $T \leq (2\pi e)^{-1}$, where $e$ is Euler's number and the signal bandwidth has been normalized to $\pi$.
Theorem 1 (Unlimited sampling theorem [18]). Assume $x(t)$ to be a finite-energy, bandlimited signal with maximum frequency $\Omega_{\max}$, and let $\bar{x}_k$, $k \in \mathbb{Z}$, in (8) be the modulo samples of $x(t)$ with sampling rate $1/T$. Then, a sufficient condition for the reconstruction of $x(t)$ from $\{\bar{x}_k\}$ (up to additive multiples of $2\lambda$) is $T \leq \frac{1}{2\Omega_{\max} e}$.
Theorem 1 implies that the sampling rate depends only on the bandwidth and is independent of the ratio of the ADC threshold λ to the signal amplitude. In other words, the DR of the input signal is unlimited.
Recently, stable unlimited sampling reconstruction in the presence of noise has also been obtained [18].
The reconstruction of the bandlimited function $x(t)$ from its modulo samples $\{\bar{x}_k\}$ is achieved as follows. Assume that $x(t)$ admits the decomposition [18,22]
$$x(t) = \bar{x}(t) + \varepsilon_x(t), \qquad (9)$$
where $\bar{x}(t) = \mathcal{M}_{\lambda}(x(t))$ and the error $\varepsilon_x$ between the input signal and its modulo samples is
$$\varepsilon_x(t) = 2\lambda \sum_{u \in \mathbb{Z}} e_u\, \mathbb{1}_{\mathcal{D}_u}(t), \quad e_u \in \mathbb{Z}, \qquad (10)$$
where $\bigcup_{u \in \mathbb{Z}} \mathcal{D}_u = \mathbb{R}$ is a partition of the real line into intervals $\mathcal{D}_u$. As indicated by (9), if $\varepsilon_x$ is known, then $x$ can be reconstructed from $\bar{x}$. It follows from (10) that $\varepsilon_x$ takes only values that are integer multiples of $2\lambda$, thereby leading to a robust reconstruction algorithm [18]. To obtain $\varepsilon_x$ (up to an unknown additive constant) and subsequently the desired signal $x(t)$, the reconstruction procedure in [18,22] requires the higher-order finite differences $\Delta^N \bar{\mathbf{x}}$ of the modulo samples $\bar{\mathbf{x}} = [\bar{x}_k]$. Define the inverse-difference operator $\nabla$ as a running sum of a real sequence $\{s_b\}$, i.e.,
$$(\nabla \mathbf{s})_k = \sum_{b=1}^{k} s_b. \qquad (11)$$
Then, applying $\nabla$ repeatedly to $\Delta^N \boldsymbol{\varepsilon}_x$ and rounding the result to the nearest multiple of $2\lambda$ yields $\boldsymbol{\varepsilon}_x$. For a guaranteed and stable reconstruction performance, a suitable choice for the difference order $N$ is [18]
$$N = \left\lceil \frac{\log \lambda - \log \beta_x}{\log(T \Omega e)} \right\rceil, \qquad (12)$$
where $\beta_x$ is chosen such that $\beta_x \in 2\lambda\mathbb{Z}$ and $\|x\|_{\infty} \leq \beta_x$. Algorithm 1 summarizes the unlimited sampling reconstruction procedure.
Algorithm 1 Input signal reconstruction from modulo folded samples.
Output: The approximation of the input signal x.
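The procedure above can be sketched numerically as follows. This is a simplified stand-in for Algorithm 1 under two assumptions: sufficient oversampling so that the $N$-th difference of the true signal stays inside $[-\lambda, \lambda)$, and an input whose first $N$ samples are unfolded, so that the integration constants of the running sums vanish (the demo signal is illustrative):

```python
import numpy as np

def modulo_fold(x, lam):
    """Self-reset ADC map M_lambda: fold amplitudes into [-lam, lam)."""
    return np.mod(x + lam, 2.0 * lam) - lam

def unlimited_reconstruct(y, lam, N):
    """Recover x_k from modulo samples y_k (up to a multiple of 2*lam)."""
    d = np.diff(y, n=N)                  # Delta^N of the modulo samples
    eps_d = modulo_fold(d, lam) - d      # Delta^N of the residue, in 2*lam*Z
    s = eps_d
    for _ in range(N):                   # invert Delta N times (running sums),
        s = np.concatenate(([0.0], np.cumsum(s)))
        s = 2 * lam * np.round(s / (2 * lam))   # snap to the 2*lam grid
    return y + s                         # x = x_bar + eps_x, cf. (9)

# Demo: a 2 V sine folded by a lambda = 0.5 self-reset ADC
t = np.arange(0.0, 1.0, 0.001)
x = 2.0 * np.sin(2 * np.pi * 3.0 * t)
x_rec = unlimited_reconstruct(modulo_fold(x, 0.5), 0.5, N=2)
```

The snap-to-grid step after each running sum reflects the robustness noted after (10): the residue only takes values in $2\lambda\mathbb{Z}$, so small floating-point drift can be rounded away.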

IV. ONE-BIT SIGNAL RECONSTRUCTION
To reconstruct $\mathbf{x}$ from the sign data $\{\mathbf{r}^{(\ell)}\}_{\ell=1}^{m}$, we solve the polyhedron search problem through the RKA because of its optimal projection and linear convergence in expectation [32,49,60].

A. Basic Theory of RKA
The RKA is a subconjugate gradient method for solving overdetermined linear systems, i.e., $\mathbf{C}\mathbf{x} \preceq \mathbf{b}$, where $\mathbf{C}$ is an $m' \times n'$ matrix with $m' > n'$ [48,49]. The conjugate-gradient methods turn this inequality into an equality of the form
$$\mathbf{C}\mathbf{x} + \mathbf{z} = \mathbf{b}, \quad \mathbf{z} \succeq \mathbf{0}, \qquad (13)$$
and then solve it as any other system of equations. Given a sample index set $\mathcal{J}$, without loss of generality, rewrite (13) as the polyhedron
$$\begin{cases} \langle \mathbf{c}_j, \mathbf{x} \rangle \leq b_j, & j \in \mathcal{I}_{\leq}, \\ \langle \mathbf{c}_j, \mathbf{x} \rangle = b_j, & j \in \mathcal{I}_{=}, \end{cases} \qquad (14)$$
where $\{\mathbf{c}_j\}$ are the rows of $\mathbf{C}$ and the disjoint index sets $\mathcal{I}_{\leq}$ and $\mathcal{I}_{=}$ partition $\mathcal{J}$. The projection coefficient $\beta_i$ of the RKA is [49,60,61]
$$\beta_i = \begin{cases} \left(\langle \mathbf{c}_j, \mathbf{x}_i \rangle - b_j\right)^{+}, & j \in \mathcal{I}_{\leq}, \\ \langle \mathbf{c}_j, \mathbf{x}_i \rangle - b_j, & j \in \mathcal{I}_{=}. \end{cases} \qquad (15)$$
The unknown column vector $\mathbf{x}$ is iteratively updated as
$$\mathbf{x}_{i+1} = \mathbf{x}_i - \frac{\beta_i}{\|\mathbf{c}_j\|_2^2}\, \mathbf{c}_j, \qquad (16)$$
where, at each iteration $i$, the index $j$ is drawn from the set $\mathcal{J}$ independently at random following the distribution
$$\Pr\{j = k\} = \frac{\|\mathbf{c}_k\|_2^2}{\|\mathbf{C}\|_F^2}. \qquad (17)$$
Note that (7) has only the inequality partition $\mathcal{I}_{\leq}$. Herein, $m' = m \times n$ and $n' = n$. The row vector $\mathbf{c}_j$ and the scalar $b_j$ in the RKA (14)–(17) are the $j$-th row of $-\tilde{\boldsymbol{\Omega}}$ and the $j$-th element of $-\left(\operatorname{vec}(\mathbf{R}) \odot \operatorname{vec}(\boldsymbol{\Gamma})\right)$, respectively. It may be readily verified that the distribution of choosing a specific sample index $j$ for the inequalities in (7) is uniform, i.e., $\Pr\{j = k\} = \frac{1}{mn}$. In one-bit reconstruction, $\mathbf{c}_j = -\boldsymbol{\omega}_j$, where $\boldsymbol{\omega}_j$ is the $j$-th row of $\tilde{\boldsymbol{\Omega}}$: a coordinate vector with $\pm 1$ as its $j'$-th element, $1 \leq j' \leq n$. This property makes the update process (16) similar to that of the randomized Gauss-Seidel method, which uses a coordinate vector in each iteration [49,62]. This approach is commonly used for solving high-dimensional linear feasibility problems by updating only one dimension in any iteration.
The structure of the matrix $\tilde{\boldsymbol{\Omega}}$ leads to a similarly efficient RKA implementation by updating only the generic element $j'$ at each iteration, i.e., $(\mathbf{x}_{i+1})_{j'} = (\mathbf{x}_i)_{j'} - \beta_i r_{j}$, where $r_{j}$ is the one-bit datum at index $j$.
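A compact sketch of this coordinate-wise update (illustrative sizes and seeds of our choosing; since $\|\mathbf{c}_j\|_2 = 1$ here, projecting onto a violated half-space simply snaps the affected coordinate onto the corresponding threshold):

```python
import numpy as np

def rka_onebit(R, Gamma, n_iter, seed=0):
    """RKA on the one-bit polyhedron r_k^(l) * (x_k - tau_k^(l)) >= 0.

    Every constraint row carries a single +/-1 entry, so each projection
    updates exactly one coordinate of the running estimate.
    """
    rng = np.random.default_rng(seed)
    n, m = Gamma.shape
    x = np.zeros(n)
    for j in rng.integers(0, n * m, size=n_iter):  # uniform row selection
        k, l = j % n, j // n                       # coordinate, sequence index
        resid = R[k, l] * (x[k] - Gamma[k, l])     # constraint value; want >= 0
        if resid < 0:
            x[k] -= resid * R[k, l]                # projection: x[k] <- tau
    return x

# Demo: thresholds with DR_tau exceeding DR_x, so the truth is recoverable
rng = np.random.default_rng(7)
n, m = 20, 500
x_true = rng.uniform(-1.0, 1.0, n)
Gamma = rng.standard_normal((n, m))
R = np.sign(x_true[:, None] - Gamma)
x_hat = rka_onebit(R, Gamma, n_iter=100_000, seed=3)
nmse = np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)
```

With many threshold sequences, each coordinate ends up pinned between the tightest thresholds on either side of the true value, which is what drives the error down.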

B. Error Reconstruction Bound
At the $i$-th iteration, the error between the RKA estimate $\mathbf{x}_i$ and the optimal solution $\mathbf{x}$ has been shown to follow the convergence bound [48,49,60,63]
$$\mathbb{E}\left\{\|\mathbf{x}_i - \mathbf{x}\|_2^2\right\} \leq \left(1 - \tilde{\kappa}^{-2}(\tilde{\boldsymbol{\Omega}})\right)^{i} \|\mathbf{x}_0 - \mathbf{x}\|_2^2, \qquad (19)$$
where $\tilde{\kappa}(\tilde{\boldsymbol{\Omega}}) = \|\tilde{\boldsymbol{\Omega}}\|_F / \sigma_{\min}$ is the scaled condition number [64] of $\tilde{\boldsymbol{\Omega}}$, which is a block matrix of $m$ diagonal matrices per (5), and $\sigma_{\min} = \min\{\sigma_i\}$ is the minimum singular value of $\tilde{\boldsymbol{\Omega}}$ [65] (the maximum singular value $\sigma_{\max}$ is similarly defined). The following Lemma 1 evaluates the singular values of $\tilde{\boldsymbol{\Omega}}$.
Lemma 1. The singular values of $\tilde{\boldsymbol{\Omega}}$ are all equal to $\sqrt{m}$.
Proof: Compute the square matrix
$$\mathbf{P} = \tilde{\boldsymbol{\Omega}}^{\top} \tilde{\boldsymbol{\Omega}} = \boldsymbol{\Omega}^{(1)\top} \boldsymbol{\Omega}^{(1)} + \cdots + \boldsymbol{\Omega}^{(m)\top} \boldsymbol{\Omega}^{(m)} = m\, \mathbf{I}_n,$$
since $\left(\boldsymbol{\Omega}^{(\ell)}\right)^2 = \mathbf{I}_n$ for every $\ell$. Hence, the eigenvalues of $\mathbf{P}$ are all equal to $m$; in other words, the singular values of $\tilde{\boldsymbol{\Omega}}$ are $\sigma_i = \sqrt{m}$.
It follows from (19) and Lemma 1 that $\tilde{\kappa}^2(\tilde{\boldsymbol{\Omega}}) = \frac{\|\tilde{\boldsymbol{\Omega}}\|_F^2}{\sigma_{\min}^2} = \frac{mn}{m} = n$, so the bound (19) becomes
$$\mathbb{E}\left\{\|\mathbf{x}_i - \mathbf{x}\|_2^2\right\} \leq \left(1 - \frac{1}{n}\right)^{i} \|\mathbf{x}_0 - \mathbf{x}\|_2^2. \qquad (23)$$
Set the algorithm termination criterion to the condition
$$\mathbb{E}\left\{\|\mathbf{x}_i - \mathbf{x}\|_2^2\right\} \leq \epsilon_1, \qquad (24)$$
where $\epsilon_1$ is a positive constant. Based on this criterion and (19), the following Proposition 1 states the lower bound on the number of required RKA iterations.
Proposition 1. The number of RKA iterations $i$ required to achieve the optimal solution $\mathbf{x}$ of length $n$ from its one-bit samples within the error specified by (24) is
$$i \geq \frac{\log\left(\epsilon_1 / \omega_0\right)}{\log\left(1 - \frac{1}{n}\right)}, \qquad (25)$$
where $\omega_0 = \|\mathbf{x}_0 - \mathbf{x}\|_2^2$ is the initial squared error (at $i = 0$) and $\epsilon_1$ is a positive constant.
Proof: Define $q \triangleq 1 - \frac{1}{n}$. The termination criterion (24) is met when
$$q^{i}\, \omega_0 \leq \epsilon_1. \qquad (26)$$
Note that $\omega_0$ is a constant scalar that depends only on the initial and optimal solutions. Substituting (23) in (26) and taking the logarithm on both sides yields (25).
Since the optimal solution is unknown, $\omega_0$ may not be precisely determined. However, a suitable number of required iterations may still be selected following Proposition 1 with a reasonable guess for $\omega_0$. For instance, an initial value $\mathbf{x}_0$ may be chosen in the direction of the optimal solution $\mathbf{x}$ so that a reasonable $\omega_0$ is obtained [48,49].
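For instance, the lower bound of Proposition 1 can be evaluated directly (a hypothetical helper of our own, with illustrative numbers; it returns the smallest iteration count satisfying the termination criterion):

```python
import math

def rka_min_iterations(n, omega0, eps1):
    """Smallest i with (1 - 1/n)**i * omega0 <= eps1, per Proposition 1."""
    return math.ceil(math.log(eps1 / omega0) / math.log(1.0 - 1.0 / n))

# For a length-1000 signal, unit initial error, and a 1e-6 error target:
i_min = rka_min_iterations(n=1000, omega0=1.0, eps1=1e-6)
```

As the bound suggests, the required iteration count grows roughly linearly in the signal length $n$ for a fixed error target.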

C. Numerical Example
Fig. 1a illustrates the RKA reconstruction of a sawtooth signal from the one-bit polyhedron in (7) for 10 sweeps (periods) with a fundamental frequency of 50 Hz. We discretized the generated signal $x(t)$ at the sampling rate of 1 kHz (T = 0.001 s). The time-varying sampling thresholds were drawn from the distribution $\boldsymbol{\tau}^{(\ell)} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, for all $\ell$. Define the normalized mean squared error $\mathrm{NMSE} = \frac{\|\hat{\mathbf{x}} - \mathbf{x}\|_2^2}{\|\mathbf{x}\|_2^2}$, where $\mathbf{x}$ and $\hat{\mathbf{x}}$ denote the true (discretized) signal and its reconstructed version, respectively. Since the RKA selects each hyperplane randomly in each iteration, we repeated the reconstruction in Fig. 1a 15 times. The averaged NMSE over all experiments is only ∼0.0012, or −29.2082 dB.

D. Limitations of Conventional One-Bit Reconstruction
Denote the DRs of the desired signal $\mathbf{x}$ and the time-varying threshold $\boldsymbol{\tau}$ by $\mathrm{DR}_{\mathbf{x}}$ and $\mathrm{DR}_{\boldsymbol{\tau}}$, respectively, where we define the DR of a vector as its $\ell_\infty$-norm. If $\mathrm{DR}_{\mathbf{x}} \leq \mathrm{DR}_{\boldsymbol{\tau}}$, then the reconstructed signal $\hat{\mathbf{x}}$ may be found inside the polyhedron (7) with a high probability for an adequate number of samples.
Otherwise, if $\mathrm{DR}_{\mathbf{x}} > \mathrm{DR}_{\boldsymbol{\tau}}$, there is no guarantee of obtaining $\mathbf{x}$, since the desired solution cannot lie inside the finite-volume space imposed by the set of inequalities in (7), indicating an irretrievable information loss. We demonstrate this as follows. Without loss of generality, consider $x_k = \mathrm{DR}_{\mathbf{x}}$ for some $x_k > 0$. If $\mathrm{DR}_{\mathbf{x}} > \mathrm{DR}_{\boldsymbol{\tau}}$, then we have $\tau_k < \mathrm{DR}_{\mathbf{x}} = x_k$ for every threshold. Therefore, to reconstruct the $k$-th entry of the input signal, $x_k$, we always have a gap $\delta = x_k - \tau_k > 0$ that is not covered by any sample to capture the amplitude information of $\mathbf{x}$. Hence, the desired signal is not found inside the finite-volume space imposed by the inequalities in (7).
In Fig. 1a, $\mathrm{DR}_{\boldsymbol{\tau}} = 3$ is larger than $\mathrm{DR}_{\mathbf{x}} = 1$, thereby leading to a low reconstruction NMSE. We now consider $\mathbf{x}$ to be a bandlimited function whose piece-wise constant Fourier transform values are drawn uniformly at random, i.e., $\hat{x}(\omega) \sim \operatorname{unif}(0, 1)$. This signal is the same as the one used in [18]. The time-varying sampling thresholds were generated following the procedure explained in Section IV-C. Fig. 1b shows the RKA-based reconstruction of the bandlimited signal from the polyhedron (7). Around t = 0 (the corresponding sample index is 364 in the plot), the reconstruction severely degrades because $\mathrm{DR}_{\mathbf{x}} = 5$ is set to be larger than $\mathrm{DR}_{\boldsymbol{\tau}} = 3$. Indeed, when the difference between $\mathrm{DR}_{\mathbf{x}}$ and $\mathrm{DR}_{\boldsymbol{\tau}}$ increases further, we observe a significant loss of information in the reconstructed signal (Fig. 1c).

V. TOWARD A RECONSTRUCTION GUARANTEE FOR ONE-BIT SAMPLING
Since RKA does not guarantee an exact signal reconstruction from one-bit measurements in (7) when the DR of the signal exceeds that of the time-varying sampling threshold, it is pertinent to design the time-varying sampling threshold such that DR x ≤ DR τ .This is not always possible because the desired signal is unknown.We address this limitation via UNO, which is our proposed new one-bit sampling method based on the concept of unlimited sampling.
As discussed in Section III, unlimited sampling yields signal amplitudes folded within the range [−λ, λ].
This suggests an alternative time-varying threshold with the same DR as the modulo samples $\bar{\mathbf{x}} = [\bar{x}_k]$, i.e., $\mathrm{DR}_{\boldsymbol{\tau}} = \lambda$. In other words, the thresholds are modified to be closer to the clipping value, and the self-reset ADC is integrated with one-bit sampling. We summarize this UNO sampling framework as follows: 1) Apply the modulo operator defined in (8) to the input signal $\mathbf{x}$ and obtain the modulo samples $\bar{\mathbf{x}} = \mathcal{M}_{\lambda}(\mathbf{x})$. 2) Generate the time-varying sampling threshold sequences $\{\boldsymbol{\tau}^{(\ell)}\}_{\ell=1}^{m}$ with $\mathrm{DR}_{\boldsymbol{\tau}} = \lambda$. 3) Apply the one-bit quantization to the modulo samples as $\mathbf{r}^{(\ell)} = \operatorname{sgn}\left(\bar{\mathbf{x}} - \boldsymbol{\tau}^{(\ell)}\right)$.
Proposition 2 (Judicious threshold design). Under the UNO sampling framework, the following DR guarantee holds: assume each one-bit sampling threshold sequence is distributed as $\boldsymbol{\tau}^{(\ell)} \sim \mathcal{N}\left(\mathbf{0}, \sigma_{\tau}^2 \mathbf{I}\right)$. Then, for the ADC threshold $\lambda$, choosing $\sigma_{\tau} = \frac{\lambda}{3}$ yields $\mathrm{DR}_{\boldsymbol{\tau}} = \lambda$ with a probability of at least 0.99.
Proof: With a probability of at least 0.99, the DR of each $\boldsymbol{\tau}^{(\ell)} \sim \mathcal{N}\left(\mathbf{0}, \sigma_{\tau}^2 \mathbf{I}\right)$ is $3\sigma_{\tau}$ [66]. When $\sigma_{\tau} = \frac{\lambda}{3}$, the time-varying sampling threshold thus has a DR of $\lambda$ with a probability of at least 0.99.
In Proposition 2, we design the time-varying sampling threshold sequences so that their DR is close to that of the input signal. This enables storing the information of the distance between the input signal and the thresholds, without any loss of information, via one-bit sampling. Fig. 3 shows a comparison of conventional one-bit sampling and UNO for the high-DR scenario; the transfer function of the former is plotted in Fig. 3a. We consider the same bandlimited signal as in Section IV-D and a random threshold $\boldsymbol{\tau} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. In the case of one-bit sampling, the signal values and thresholds differ considerably at some points (Fig. 3b) and, consequently, the information on the distance between the signal value and the threshold samples is completely lost. For UNO, the threshold is chosen closer to the folded signal with λ = 0.5 (Fig. 3c). This preserves the information of the input signal in the modulo samples (Fig. 3d).
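The UNO front end can be sketched as follows (illustrative signal and sizes of our choosing; the `in_range` statistic empirically checks the three-sigma claim of Proposition 2):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.5

# Step 1: the self-reset ADC folds the high-DR input into [-lam, lam)
t = np.arange(2000) / 2000.0
x = 5.0 * np.sin(2 * np.pi * 3.0 * t)        # DR_x = 5 >> lam
x_bar = np.mod(x + lam, 2.0 * lam) - lam     # modulo samples M_lam(x)

# Step 2: thresholds with sigma_tau = lam / 3 (Proposition 2),
# so that DR_tau ~ lam matches the modulo samples' dynamic range
m = 200
Gamma = (lam / 3.0) * rng.standard_normal((x_bar.size, m))

# Step 3: one-bit quantization of the folded signal
R = np.sign(x_bar[:, None] - Gamma)

# Fraction of threshold draws inside [-lam, lam]; ~0.997 by the 3-sigma rule
in_range = np.mean(np.abs(Gamma) <= lam)
```

Because the folded samples and the thresholds now share the same dynamic range, every sign bit carries usable distance information, unlike the conventional setting of Fig. 3b.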
For reconstruction of the signal of interest $\mathbf{x}$ from UNO samples, we reformulate the polyhedron (7) for the modulo samples as
$$\bar{\mathcal{P}} = \left\{\bar{\mathbf{x}} \;\middle|\; \tilde{\boldsymbol{\Omega}}\, \bar{\mathbf{x}} \succeq \operatorname{vec}(\mathbf{R}) \odot \operatorname{vec}(\boldsymbol{\Gamma})\right\}. \qquad (28)$$
This overdetermined system of linear inequalities in (28) is then solved via the RKA and, from the resulting modulo sample estimates, the input signal is recovered via Algorithm 1.

Algorithm 2 Signal reconstruction in UNO.
Output: The approximation of the input signal x.
7: Find the modulo signal in P via RKA.
8: for i = 1 : i max do 9: Reconstruct the input signal via Algorithm 1 from x.
12: return x

In Fig. 4, we show that increasing the number m of time-varying sampling threshold sequences guarantees the RKA-based reconstruction, as it leads the space formed by the intersection of half-spaces (the inequality constraints in (28)) to shrink completely onto the true modulo signal $\bar{\mathbf{x}}$ inside the volume imposed by unlimited sampling. This volume is a cube because the constraints applied to the modulo samples are $-\lambda \leq \bar{x}_k \leq \lambda$. Here, the blue planes/lines representing the linear inequalities form a finite-volume space around the optimal point (displayed by the yellow circle inside the cube) as the number of one-bit sampling thresholds increases. In the top panel, we show the specific case of a trihedron (i.e., modulo samples $\bar{\mathbf{x}} \in \mathbb{R}^3$) to represent the effect of increasing the number of threshold sequences on the reconstruction performance. The bottom panel shows the same effect for a 2-D cross-section of the trihedron. The constraints are not enough to create a finite-volume space in Fig. 4a and d.
On the other hand, in Fig. 4b and e, such constraints create the desired finite-volume polyhedron space but are unable to capture the optimal point. Finally, in Fig. 4c and f, the optimal point is successfully captured by the resulting finite-volume space. The following theorem summarizes the UNO guarantees.
Theorem 2 (UNO sampling theorem). Assume $x(t)$ to be a finite-energy, bandlimited signal with maximum frequency $\Omega_{\max}$. Let $\bar{x}_k$, $k \in \mathbb{Z}$, introduced in (8), be the modulo samples of $x(t)$ with sampling rate $1/T$. Assume $\hat{\bar{\mathbf{x}}}$ contains the modulo samples reconstructed by the RKA, and define the reconstruction error as $\mathbf{e} = \bar{\mathbf{x}} - \hat{\bar{\mathbf{x}}}$. Then, a sufficient condition for the reconstruction of the bandlimited samples from the one-bit data, where $\boldsymbol{\tau}^{(\ell)} \sim \mathcal{N}\left(\mathbf{0}, \frac{\lambda^2}{9} \mathbf{I}\right)$, up to additive multiples of $2\lambda$, is
$$T \leq \frac{1}{2^{h}\, \Omega_{\max}\, e}, \qquad (29)$$
where $h \in \mathbb{N}$ is given by
$$h = \left\lceil \log\!\left(\frac{\lambda}{4\, \|\mathbf{e}\|_{\infty}}\right) \right\rceil \qquad (30)$$
and
$$\lambda \geq 4\zeta\, \|\mathbf{e}\|_{\infty}, \quad \zeta > 1. \qquad (31)$$
Proof: While reconstructing the modulo samples from one-bit data via the RKA, the real modulo samples are represented by the linear model
$$\bar{\mathbf{x}} = \hat{\bar{\mathbf{x}}} + \mathbf{e}. \qquad (32)$$
The error in the RKA reconstruction may be viewed as noise on the modulo samples. According to [43, Theorem 3], the sampling rate for the contaminated modulo samples in (32) to reconstruct the bandlimited samples must satisfy
$$T \leq \frac{1}{2^{h}\, \Omega_{\max}\, e}, \qquad (33)$$
where $h \in \mathbb{N}$ is as in (30). Clearly, (29) follows from (33). Moreover, to ensure that the log function used in (30) is positive, we require $\frac{\lambda}{4\|\mathbf{e}\|_{\infty}} \geq \zeta > 1$, leading to the lower bound for the ADC threshold $\lambda \geq 4\zeta \|\mathbf{e}\|_{\infty}$. This completes the proof.
Theorem 2 provides the lower bound for the ADC threshold λ in (31). The upper bound on T for UNO sampling is lower than or equal to that of unlimited sampling (the equality holds when h = 1), which is associated with a higher sampling rate in UNO. As mentioned later in Section VI-B, oversampling is a common scenario in one-bit quantization techniques and not a major concern in the UNO implementation. Note that the resulting error $\mathbf{e}$ of the RKA differs from the noise considered in [43, Theorem 3] in the sense that, unlike the latter, the corresponding reconstructed modulo samples in UNO obey $|\hat{\bar{x}}_k| < \lambda$. This ensures that the $N$ in (12) guarantees $\Delta^N \hat{\bar{\mathbf{x}}} \equiv \mathcal{M}_{\lambda}\left(\Delta^N \hat{\bar{\mathbf{x}}}\right)$; we refer the reader to [18] for more details on this aspect. As a result, UNO perfectly reconstructs the input samples $x_k$ in the sense that $\hat{x}_k = x_k + e_k$ (up to additive multiples of $2\lambda$) with the same $N$ considered in the noiseless unlimited sampling reconstruction of [43, Section IV.B].

VI. UNO RECONSTRUCTION: NUMERICAL ILLUSTRATIONS AND ERROR ANALYSES
We assessed the performance of UNO reconstruction through extensive numerical experiments. In particular, we validate that the size of the cube imposed by self-reset ADCs (red contours and shaded regions in Fig. 4), and hence the reconstruction error, depends on the ADC threshold $\lambda$. We then investigate the effect of the input signal amplitude $\|\mathbf{x}\|_{\infty}$ on the reconstruction performance. In all experiments, we considered the same high-DR input signal as in Section IV-D.

A. Varying ADC Threshold
The number of time-varying sampling thresholds was set to $m = 400$. In each experiment, the generated signals have the same DR, $\mathrm{DR}_{\mathbf{x}} = 8$, but the ADC threshold $\lambda$ changes. For a given $\lambda$, the sequences of time-varying sampling thresholds are drawn randomly as $\{\boldsymbol{\tau}^{(\ell)} \sim \mathcal{N}(\mathbf{0}, \frac{\lambda^2}{9}\mathbf{I})\}_{\ell=1}^{m}$. Fig. 5 shows the resulting reconstructions; the inset shows the same plot on a larger scale. We observe that increasing $\lambda$ leads to a higher NMSE because the volume of the unlimited sampling cube grows, and consequently more hyperplanes may be required to contain a specific volume around the optimal point in the feasible region.
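Because every sign constraint in this experiment involves a single sample, the Kaczmarz projection onto a violated half-space reduces to moving that coordinate onto the offending threshold. The sketch below mirrors this setting under our own simplifying assumptions (stand-in modulo samples drawn uniformly, hypothetical sizes, and the function name `rka_onebit` is ours):

```python
import numpy as np

def rka_onebit(r, tau, lam, iters=500_000, seed=1):
    """Randomized Kaczmarz for the one-bit feasibility constraints
    r[l, k] * (z[k] - tau[l, k]) >= 0 with the amplitude bound |z[k]| <= lam.
    Each constraint touches one coordinate, so projecting onto a violated
    half-space reduces to a scalar move onto that threshold."""
    m, n = tau.shape
    rng = np.random.default_rng(seed)
    ls = rng.integers(m, size=iters)
    ks = rng.integers(n, size=iters)
    z = np.zeros(n)
    for l, k in zip(ls, ks):
        if r[l, k] * (z[k] - tau[l, k]) < 0:   # constraint violated
            z[k] = tau[l, k]                   # project onto its hyperplane
    return np.clip(z, -lam, lam)               # cube imposed by the modulo ADC

# Stand-in modulo samples and thresholds (hypothetical sizes).
rng = np.random.default_rng(0)
lam, m, n = 0.5, 400, 128
x_mod = rng.uniform(-lam, lam, n)
tau = rng.normal(0.0, lam / 3.0, (m, n))
r = np.sign(x_mod[None, :] - tau)

z = rka_onebit(r, tau, lam)
nmse = np.sum((z - x_mod) ** 2) / np.sum(x_mod ** 2)
```

With $m = 400$ thresholds per sample, the feasible interval around each sample is narrow and the NMSE is small, consistent with the trend reported in this section.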

B. Varying Input Signal Amplitude
Here, we generated input signals with varying DRs. In each experiment, the ADC threshold was fixed at $\lambda = 0.5$, for which we generated the sequences of time-varying sampling thresholds as $\{\boldsymbol{\tau}^{(\ell)} \sim \mathcal{N}(\mathbf{0}, \frac{\lambda^2}{9}\mathbf{I})\}_{\ell=1}^{m}$. Next, we study the reconstruction of a signal with an extremely high DR, $\|x(t)\|_{\infty} = 1000$.
In theory, the unlimited sampling theorem guarantees reconstruction with $T \leq \frac{1}{2\Omega_{\max} e}$. In practice, however, signal reconstruction from unlimited samples has its own limitations due to error propagation by the finite-difference operator. Specifically, when the DR of the input signal is large compared to that of the ADC threshold $\lambda$, the order $N$ of the difference operator must also be large. But a large $N$ amplifies the quantization/round-off noise, leading to unstable reconstruction. In this scenario, more samples (given by the oversampling factor) are required to decrease $N$. Note that, unlike in conventional ADCs, an abundant number of samples does not increase the power consumption, manufacturing cost, or per-bit chip area of one-bit ADCs. Fig. 6d shows an accurate UNO reconstruction for $\lambda = 1$ and $\|\mathbf{x}\|_{\infty} = 1000$.
Although UNO and the one-bit $\Sigma\Delta$ method [51] differ in their theoretical foundations and applications, here we compare their reconstruction performance on the same signal. The ADC threshold was set to $\lambda = 1$, and the sequences of time-varying sampling thresholds were drawn as $\{\boldsymbol{\tau}^{(\ell)} \sim \mathcal{N}(\mathbf{0}, \frac{\lambda^2}{9}\mathbf{I})\}_{\ell=1}^{m}$. For the specific case of $\|\mathbf{x}\|_{\infty} = 40$, Fig. 7 compares the UNO-reconstructed signal $\bar{\mathbf{x}}$ with the one-bit unlimited $\Sigma\Delta$-reconstructed signal $\bar{\mathbf{x}}_{\Sigma\Delta}$ when the ratio $\eta = \frac{\|\mathbf{x}\|_{\infty}}{\lambda}$ between the input signal amplitude and the ADC threshold is large. The one-bit unlimited $\Sigma\Delta$ reconstruction degenerates over some parts of the input samples, while UNO accurately reconstructs the signal. Table III further compares the reconstruction NMSE, averaged over 15 experiments, of both sampling methods for different amplitudes $\|\mathbf{x}\|_{\infty} \in \{20, 50\}$. The degradation in one-bit $\Sigma\Delta$ reconstruction for large $\eta$ is caused by the round-off noise in software and, primarily, by imperfect noise shaping in sigma-delta conversion, which corrupts samples.
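The instability caused by a large difference order $N$ is easy to check numerically: the worst-case gain of $\Delta^{N}$ on a noise sequence is $\sum_k \binom{N}{k} = 2^{N}$, so the noise floor grows geometrically with $N$. A small sketch (the noise level and lengths are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
e = rng.uniform(-1e-3, 1e-3, 1000)   # per-sample quantization/round-off error

def nth_diff(v, N):
    """Apply the finite-difference operator N times."""
    for _ in range(N):
        v = np.diff(v)
    return v

# Worst-case amplification of the N-th order difference is 2^N, which is
# why a large N (forced by a large DR-to-lambda ratio) destabilizes the
# unlimited-sampling inversion unless the oversampling factor grows.
amp = [np.max(np.abs(nth_diff(e, N))) / np.max(np.abs(e)) for N in (1, 4, 8)]
```

The measured amplification increases sharply with $N$ while always staying below the $2^{N}$ worst-case bound.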

C. Analysis of Reconstruction Error
To ensure a bounded reconstruction error, the feasible region in (28) cannot have an infinite volume in an asymptotic sense when amplitude constraints are imposed by unlimited sampling. As mentioned before, by introducing more samples, it is possible to obtain a polyhedron with a bounded volume that contains the desired point. Further, as illustrated in Fig. 4, adding more inequality constraints to (28) shrinks this polyhedron. We now prove this result in a probabilistic sense: increasing the number of samples drives the reconstruction error toward zero, and the resulting overdetermined linear system of inequalities guarantees the convergence of the RKA [32, 49, 60]. In other words, using an abundant number of samples (i.e., oversampling in the one-bit regime) increases the probability of creating the finite-volume space around the desired point $\mathbf{x}$. Define the distance between the optimal solution $\mathbf{x}$ and the $j$-th hyperplane of (28) as $d_j(\mathbf{x}, \boldsymbol{\tau}^{(\ell)}) = \|\boldsymbol{\omega}_j \mathbf{x} - \boldsymbol{\tau}^{(\ell)}\|_2^2$, where $\boldsymbol{\omega}_j$ is the $j$-th row of $\boldsymbol{\Omega}$. This distance is also the residual error of (28). Intuitively, reducing the distances between $\mathbf{x}$ and the constraint-associated hyperplanes generally increases the possibility of capturing the optimal point. For a given sample size $m' = mn$, when the volume of the finite space around the optimal point is reduced, the mean of $\{d_j(\mathbf{x}, \boldsymbol{\tau}^{(\ell)})\}_{j=1}^{m'}$ [49], i.e., $T_{\mathrm{ave}} = \frac{1}{m'}\sum_{j=1}^{m'} d_j(\mathbf{x}, \boldsymbol{\tau}^{(\ell)})$, also decreases. Denote $D(\mathbf{x}, \boldsymbol{\tau}^{(\ell)}) = \|\mathbf{x} - \boldsymbol{\tau}^{(\ell)}\|_2^2$. Then, $T_{\mathrm{ave}}$ takes the form (35). In the one-bit phase retrieval approach studied in [32], a Chernoff bound was derived to quantify the probability of creating the above-mentioned finite volume and the number of samples required for the RKA.
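The shrinkage of the feasible region with the number of constraints can be illustrated in one dimension: for a scalar sample pinned between its nearest lower and upper thresholds, the feasible interval narrows roughly like $1/m$. A Monte Carlo sketch under our own toy choices of signal value and trial counts:

```python
import numpy as np

rng = np.random.default_rng(3)
x, lam = 0.4, 1.0

def feasible_width(m):
    """Width of the interval pinned down by m one-bit comparisons of the
    scalar x against thresholds tau ~ N(0, lam^2/9), intersected with the
    amplitude constraint |z| <= lam imposed by unlimited sampling."""
    tau = rng.normal(0.0, lam / 3.0, m)
    lo = max(tau[tau < x].max(initial=-lam), -lam)
    hi = min(tau[tau > x].min(initial=lam), lam)
    return hi - lo

# Average feasible width for increasing numbers of thresholds.
w = [np.mean([feasible_width(m) for _ in range(200)]) for m in (10, 100, 1000)]
```

The average width, which upper-bounds the per-sample reconstruction error, decreases monotonically as more thresholds are added, matching the behavior of $T_{\mathrm{ave}}$ described above.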
We apply this result in Theorem 3 below to the UNO reconstruction from one-bit samples. Here, we have replaced the error between the true signal and the initial value, $\|\mathbf{x}_0 - \mathbf{x}\|_2^2$, with the residual error $\|\boldsymbol{\omega}_j \mathbf{x} - \boldsymbol{\tau}^{(\ell)}\|_2^2$ in the Chernoff bound. The latter captures the distance between the hyperplanes and the true value $\mathbf{x}$ by including the sampling threshold sequences in its expression.
Theorem 3 ([32, Theorem 1]). Assume the distances $\{d_j(\mathbf{x}, \boldsymbol{\tau}^{(\ell)})\}_{j=1}^{m}$ between the desired point $\mathbf{x}$ and the hyperplanes of the polyhedron defined in (28) are independent and identically distributed random variables. Then, 1) the Chernoff bound of $T_{\mathrm{ave}}$ is given by (36), where $a$ is an average-distance point in space at which the finite-volume space around the desired signal is created, $\mathrm{M}_T$ is the moment generating function (MGF) of the reconstruction error, $\mu_{d_j} = \mathbb{E}\{d_j^{\kappa}\}$, and $\mathcal{O}(m)$ denotes the higher-order terms.
2) $\mathrm{M}_T$ decreases as the sample size increases, which raises the lower bound in (36).
Theorem 3 states that the abundant number of samples in conventional one-bit quantization significantly affects the RKA reconstruction performance for the system of linear inequalities in (28). Based on this result, Claim 1 shows the efficacy of UNO sampling.
Claim 1. Without unlimited sampling, increasing the number of time-varying sampling threshold sequences $m$ is not, by itself, an effective approach to guarantee the desired signal reconstruction with the RKA.
Proof: For the RKA-based reconstruction in Section IV, assume that we increase the number of time-varying sampling threshold sequences from $m$ to $m + \kappa$. Then, from (36) of Theorem 3, the Chernoff bound of the reconstruction error is (37), where $\mathrm{P}_T = \inf_{t \geq 0} \mathrm{M}_T\, e^{-ta}$ and $d_j(\mathbf{x}, \boldsymbol{\tau}^{(\ell)}) = \|\boldsymbol{\omega}_j \mathbf{x} - \boldsymbol{\tau}^{(\ell)}\|_2^2$. Without loss of generality, assume $x_k = \mathrm{DR}_{\mathbf{x}}$ for $x_k > 0$, and let $\delta = x_k - \mathrm{DR}_{\boldsymbol{\tau}}$ when $\mathrm{DR}_{\mathbf{x}} > \mathrm{DR}_{\boldsymbol{\tau}}$. The infimum of the distance in (38) occurs when the thresholds attain their largest values, yielding (39). The term $\delta^n$ in (39) does not depend on the number of time-varying sampling thresholds $m + \kappa$. In other words, increasing $m$ does not guarantee the reconstruction of the desired signal in the polyhedron (7) via the RKA. This phenomenon is also observed in connection with $\mathrm{P}_T$: a considerable difference between the signal values and the thresholds leads to larger values of $d_j(\mathbf{x}, \boldsymbol{\tau}^{(\ell)})$, thereby increasing the moments $\mu_{d_j}^{(\kappa)}$ or the MGF $\mathrm{M}_T$. Therefore, the dependence of $\mathrm{P}_T$ on $\mathrm{M}_T$ is unaffected when $m$ is increased.
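Claim 1 has a simple numerical illustration: when the signal amplitude exceeds the threshold DR, every one-bit comparison saturates to the same sign, so adding thresholds contributes no information, whereas folding the signal into the threshold range first restores sign diversity. A toy sketch with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(4)
x_val, dr_tau = 20.0, 1.0   # signal value far beyond the threshold DR

def informative_fraction(m):
    """Fraction of one-bit comparisons whose sign differs from the first one.
    When |x| exceeds every threshold, all signs agree and the measurements
    carry no amplitude information, no matter how large m becomes."""
    tau = rng.uniform(-dr_tau, dr_tau, m)
    r = np.sign(x_val - tau)
    return np.mean(r != r[0])

fracs = [informative_fraction(m) for m in (10, 1000, 100000)]

# With unlimited (modulo) sampling, the folded value lies inside the
# threshold range again, and the comparisons become informative.
x_fold = np.mod(x_val + dr_tau, 2 * dr_tau) - dr_tau
tau = rng.uniform(-dr_tau, dr_tau, 1000)
signs = np.unique(np.sign(x_fold - tau))
```

Here `fracs` stays at zero for every $m$, while the folded comparisons produce both signs, which is precisely the distinction Claim 1 draws.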
Note that, by using unlimited sampling and imposing amplitude constraints, the considered distances become bounded and $T_{\mathrm{ave}}$ in (35) is guaranteed to be smaller than a small $a$. Then, the volume of the resulting finite space will be smaller than that of the cube imposed by unlimited sampling. In Fig. 8, we show that the UNO reconstruction NMSE, averaged over 15 experiments, improves significantly as the number of time-varying threshold sequences $m$ increases. The ADC threshold was set to $\lambda = 0.5$ and the signal DR was $\|\mathbf{x}\|_{\infty} = 20$.
Remark 1. According to Theorem 3, when the number of time-varying threshold sequences $m$ is increased, the reconstruction error $\mathbf{e} = \bar{\mathbf{x}} - \tilde{\mathbf{x}}$ and its norm $\|\mathbf{e}\|_{\infty}$ become smaller. This yields a smaller lower bound on $h$ defined in (30) and, consequently, a lower sampling rate from (29). In other words, a larger $m$ yields a smaller UNO oversampling factor.

VII. RECONSTRUCTION IN THE PRESENCE OF NOISE
In the presence of noise, one-bit $\Sigma\Delta$ sampling currently lacks similar guarantees. In the one-bit noisy models of [47, 67], a linear measurement model with additive Gaussian noise was considered, and the input signal was then recovered based on the MLE formulation for the Gaussian likelihood function. However, in the case of non-Gaussian contamination, the MLE objective is nonconvex and the recovered solution is not unique. Moreover, MLE-based reconstruction is computationally more complex for high-dimensional signals.
Previously, for unlimited sampling, [18] showed recovery of noisy bandlimited samples from their modulo samples up to an unknown additive constant, where the noise is entry-wise additive on the modulo samples, i.e., $\tilde{\mathbf{y}} = \tilde{\mathbf{x}} + \boldsymbol{\epsilon}$, with $\boldsymbol{\epsilon}$ the noise vector. In contrast, we propose an approach to reconstruct the unlimited one-bit sampled signal when the noise is additive on the input signal, which itself is linear in a desired parameter vector. This linear model for the noisy measurement $\mathbf{y}$ is given in (40), where $\boldsymbol{\theta}$ is the desired parameter vector and the noise follows $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}_m)$. Here, we may have $\mathbf{y} \notin [-\lambda, \lambda]^m$. Our goal is to estimate $\boldsymbol{\theta}$ from the UNO samples of the noisy measurement $\mathbf{y}$, obtained as in (41). Our recovery approach comprises using the RKA and Algorithm 1 (with $N$ specified by (55)) to reconstruct the noisy measurements from one-bit data, and then exploiting the PnP-ADMM method to estimate the desired parameters from linear overdetermined or underdetermined systems.

A. PnP-ADMM-Based UNO Reconstruction
From the UNO samples (41), we reconstruct $\mathbf{y}$ via Algorithm 2. The reconstructed signal $\bar{\mathbf{y}}$ also follows the linear model (40). Therefore, we use $\bar{\mathbf{y}}$ to estimate $\boldsymbol{\theta}$ through the regularized problem (42), where $\rho(\boldsymbol{\theta})$ is the penalty term and $\eta > 0$ is a real-valued regularization parameter. There is a rich body of literature on selecting the penalty function $\rho(\cdot)$, including the $\ell_1$-norm [68], the smoothly clipped absolute deviation (SCAD) [69], the adaptive least absolute shrinkage and selection operator (LASSO) [70], and the minimax-concave (MC) penalty, which is related to Huber functions [71].
One of the standard approaches to solving regularized problems such as (42) is ADMM, which relies on variable splitting [72]. We consider the split formulation (43). Using the augmented Lagrangian, we reformulate problem (43) as (44), where $\mathbf{p}$ is the dual variable and $\beta$ is a real-valued design parameter. Denoting the scaled dual variable $\mathbf{u} = \frac{\mathbf{p}}{\beta}$ yields (45). ADMM tackles (45) by alternating the minimization over $\boldsymbol{\theta}$ and $\boldsymbol{\nu}$. The update of $\boldsymbol{\nu}$ is essentially a denoising of $\boldsymbol{\theta}^{k} + \mathbf{u}^{k-1}$ under the regularization $\eta\rho(\boldsymbol{\nu})$. This is the key idea behind PnP-ADMM, where the proximal projection is replaced with an appropriate denoiser $\mathcal{D}(\cdot)$. For further details on the various denoisers used in PnP techniques, we refer the interested reader to [52]. Algorithm 3 summarizes the noisy UNO reconstruction procedure.
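The alternating scheme above can be sketched compactly. The following is a minimal illustration, not the paper's Algorithm 3: it uses soft-thresholding (the prox of $\eta\|\cdot\|_1$) as the simplest stand-in denoiser, and the problem sizes, noise level, and parameter values are our own assumptions:

```python
import numpy as np

def pnp_admm(A, y, denoise, beta=1.0, iters=100):
    """PnP-ADMM sketch for min_theta 0.5*||y - A theta||^2 + eta*rho(theta):
    the proximal step for rho is replaced by a plug-in denoiser D(.)."""
    _, s = A.shape
    theta, nu, u = np.zeros(s), np.zeros(s), np.zeros(s)
    G = np.linalg.inv(A.T @ A + beta * np.eye(s))   # cached normal-equation factor
    for _ in range(iters):
        theta = G @ (A.T @ y + beta * (nu - u))     # quadratic (data-fit) step
        nu = denoise(theta + u)                     # plug-and-play denoising step
        u = u + theta - nu                          # scaled dual update
    return nu

def soft(v, t=0.05):
    """Soft-thresholding: prox of t*||.||_1, used here as the denoiser D(.)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Hypothetical overdetermined instance of the linear model (40).
rng = np.random.default_rng(6)
A = rng.normal(0.0, 1.0, (200, 50))
theta_true = np.zeros(50)
theta_true[:5] = 1.0
y = A @ theta_true + rng.normal(0.0, 0.1, 200)

theta_hat = pnp_admm(A, y, soft)
rel_err = np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)
```

Swapping `soft` for a learned or BM3D-style denoiser is the "plug-and-play" step; the ADMM skeleton is unchanged.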

B. ADC Threshold Selection in Noisy UNO
Theorem 4 certifies that noise that is additive on the input signal results in additive noise in the modulo domain.
Theorem 4. Assume the noise vector in the measurement model $\mathbf{y} = \mathbf{x} + \mathbf{z}$ is $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \sigma_z^2 \mathbf{I}_m)$, and let $\tilde{\mathbf{y}} = \mathcal{M}_{\lambda}(\mathbf{y})$. Then the noise remains additive in the modulo domain; i.e., $\tilde{\mathbf{y}}$ can be written in terms of the modulo samples of $\mathbf{x}$ and an additive term $\tilde{z}_k = \operatorname{mod}(z_k, 2\lambda) - 2(1 - q_k)\lambda$ with $q_k \in \{0, 1\}$.
Proof: Applying the modulo operator $\mathcal{M}_{\lambda}$ in (8) to the noisy measurements $\mathbf{y}$ produces (48), where $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \sigma_z^2 \mathbf{I}_m)$. Since $\lfloor a \rfloor + \lfloor b \rfloor \leq \lfloor a + b \rfloor$ for two arbitrary real numbers $a$ and $b$, it follows from (48) that (49) holds. Using the identity $\lfloor a + b \rfloor \leq \lfloor a \rfloor + \lfloor b \rfloor + 1$, we obtain (50). A binary combination of the right-hand sides of (49) and (50) is equivalent to (51), where $q_k \in \{0, 1\}$. Rewriting (51) as (52), with $\tilde{z}_k = \operatorname{mod}(z_k, 2\lambda) - 2(1 - q_k)\lambda$, completes the proof.
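Theorem 4 can be sanity-checked numerically: the folded noise term $\tilde{z}_k$ differs from $z_k$ only by an integer multiple of $2\lambda$, so the modulo samples of the noisy signal agree with the modulo samples of "clean modulo plus folded noise" up to additive multiples of $2\lambda$. A sketch under our own toy signal and noise choices:

```python
import numpy as np

def modulo(v, lam):
    """Ideal modulo ADC folding into [-lam, lam)."""
    return np.mod(v + lam, 2 * lam) - lam

def agree_mod(a, b, lam, tol=1e-9):
    """True if a and b agree up to additive integer multiples of 2*lam."""
    d = (a - b) / (2 * lam)
    return np.all(np.abs(d - np.round(d)) < tol)

rng = np.random.default_rng(5)
lam = 1.0
x = rng.uniform(-5.0, 5.0, 1000)   # clean high-DR samples
z = rng.normal(0.0, 0.3, 1000)     # additive input-domain noise

# Noise additive on the input stays additive in the modulo domain:
# M(x + z) matches M(M(x) + z_tilde) with
# z_tilde_k = mod(z_k, 2*lam) - 2*(1 - q_k)*lam, for q_k in {0, 1}.
y_mod = modulo(x + z, lam)
z_t1 = np.mod(z, 2 * lam)              # q_k = 1
z_t0 = np.mod(z, 2 * lam) - 2 * lam    # q_k = 0
```

Both choices of $q_k$ shift $z_k$ by a multiple of $2\lambda$, so either folded-noise branch reproduces `y_mod` in the modulo sense.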
It follows from Theorem 4 that the noise corruption in the input signal carries over to the modulo samples. The following theorem provides the UNO reconstruction guarantee in the presence of noise.
Theorem 5 (UNO sampling with noise). Assume $x(t)$ is a finite-energy, bandlimited signal with maximum frequency $\Omega_{\max}$. Let $y(t)$ denote the noisy signal following the linear model $y(t) = x(t) + z(t)$, where $z(t)$ is the additive noise signal. Denote the pre-filtered versions of $y(t)$, $x(t)$, and $z(t)$ by $y_{\phi}(t)$, $x_{\phi}(t)$, and $z_{\phi}(t)$, respectively, where $\phi \in \mathrm{PW}_{\Omega}$ has cut-off frequency $\Omega_{\max}$, so that $y_{\phi}(t) = x_{\phi}(t) + z_{\phi}(t)$. Denote the samples of $y_{\phi}(t)$, $x_{\phi}(t)$, $z_{\phi}(t)$ and the modulo samples of $y_{\phi}(t)$ by $(y_{\phi})_k$, $(x_{\phi})_k$, $(z_{\phi})_k$, and $(\tilde{y}_{\phi})_k$, respectively, where the sampling rate is $1/T$. Let $\bar{\mathbf{y}}_{\phi}$ denote the modulo samples reconstructed by the RKA, with reconstruction error $\mathbf{e} = \bar{\mathbf{y}}_{\phi} - \tilde{\mathbf{y}}_{\phi}$. Then, the sufficient condition to reconstruct the bandlimited samples $x_k$ from the UNO samples $\{\mathbf{r}^{(\ell)} = \operatorname{sgn}(\mathcal{M}_{\lambda}(\mathbf{y}_{\phi}) - \boldsymbol{\tau}^{(\ell)})\}_{\ell=1}^{m}$, where $\boldsymbol{\tau}^{(\ell)} \sim \mathcal{N}(\mathbf{0}, \frac{\lambda^2}{9}\mathbf{I})$, up to additive multiples of $2\lambda$, in the sense that $(\bar{y}_{\phi})_k = (x_{\phi})_k + (\tilde{z}_{\phi})_k + e_k$, is given by (53), where $(\tilde{z}_{\phi})_k$ is defined in Theorem 4 and $h \in \mathbb{N}$ is determined by (54).
Proof: The proof, ceteris paribus, follows by repeating the proof of Theorem 2 with the RKA error $\mathbf{e}$ replaced by $\tilde{\mathbf{z}}_{\phi} + \mathbf{e}$.
According to Theorem 5, noisy UNO sampling requires more samples (i.e., a larger oversampling factor), as specified by (53), than the noiseless case in (29). This is similar to other conventional noisy samplers; for example, Cadzow denoising [73], used to suppress the effect of noise in sparse samplers, similarly requires oversampling [74].

C. Numerical Examples
We investigated the PnP-ADMM-based noisy UNO reconstruction with $\mathbf{A} = [a_{ij}]$, $a_{ij} \sim \mathcal{N}(0, 1)$, and $\mathbf{y} = \mathbf{y}_t + \boldsymbol{\epsilon}$, where $\mathbf{y}_t$ was generated as in Section IV-D and $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}_m)$. Figs. 9a and 9b show accurate noisy UNO reconstruction of the parameter vector with fixed $\sigma^2 = 0.1$ for, respectively, the overdetermined ($r = 728$, $s = 100$) and underdetermined ($r = 728$, $s = 1000$) systems in (40). Fig. 9c demonstrates the efficacy of noisy UNO in estimating the desired parameter $\boldsymbol{\theta}$ from (40) when only UNO samples of the noisy measurement $\mathbf{y}$ are available. Following Theorem 5, the ADC threshold was set to $\lambda = 1.5$.

VIII. SUMMARY
The design of alternative sampling schemes to enable practical implementations of Shannon's theorem, from theory to praxis, has been an active research topic for decades. In this context, our proposed UNO presents a framework that merges one-bit quantization and unlimited sampling. This sampling framework naturally facilitates a judicious design of time-varying sampling thresholds by properly utilizing the information on the distance between the signal values and the thresholds in a high-DR regime. The noiseless UNO reconstruction exploits the RKA, while the noisy reconstruction is based on the PnP-ADMM heuristic. These low-complexity approaches are preferable to existing, costlier optimization-based reconstruction approaches [52, 53].
The UNO framework achieves the multiple objectives of a high sampling rate, unlimited DR, and a less complex, potentially low-power implementation. Our numerical and theoretical analyses demonstrate accurate reconstruction in several different scenarios. Some theoretical questions remain open, e.g., a closed-form relationship between the number of threshold sequences $m$ and the reconstruction error. This may help in finding the number of threshold sequences required for perfect reconstruction. Further, a hardware verification of UNO along the lines of unlimited sampling in [43] is also desirable.
The function $\operatorname{sgn}(\cdot)$ yields the sign of its argument. The operators $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ denote the floor and ceiling functions, respectively. The function $\log(\cdot)$ denotes the natural logarithm, unless its base is otherwise stated. The notation $x \sim \mathcal{U}(a, b)$ means a random variable drawn from the uniform distribution over the interval $[a, b]$, and $x \sim \mathcal{N}(\mu, \sigma^2)$ represents the normal distribution with mean $\mu$ and variance $\sigma^2$. The operator $\operatorname{mod}(a, b)$ returns the remainder of the division $a/b$.

Figure 1. (a) The input sawtooth wave signal $\mathbf{x}$ is reconstructed from one-bit measurements using the RKA to yield $\bar{\mathbf{x}}$. Here, $\mathrm{DR}_{\mathbf{x}} = 1$ and $\mathrm{DR}_{\boldsymbol{\tau}} = 1$. The inset shows the same plot on a larger scale. (b) As in (a), but for the bandlimited input signal from [18] with $\mathrm{DR}_{\mathbf{x}} = 5$. (c) As in (b), but for $\mathrm{DR}_{\mathbf{x}} = 8$.

From Lemma 1, all singular values of $\boldsymbol{\Omega}$ are equal, with $\sigma_{\min}^2 = n$. Conventionally, the condition number of a matrix is defined as $\frac{\sigma_{\max}}{\sigma_{\min}}$; hence, $\kappa_{\Omega} = n \frac{\sigma_{\max}}{\sigma_{\min}}$ is indeed the condition number scaled by $n$.

Figure 2. The UNO sampling architecture. The proper choice of the sampling interval $T$ in the middle block is specified by Theorem 2.

Fig. 2 illustrates the various steps of our UNO sampling technique. Proposition 2 below states the UNO threshold design.
Figure 3. (a) Transfer function of a conventional one-bit ADC, where the $i$-th element of the input signal, $(x)_i$, is compared with a randomly selected threshold $\tau$. (b) High-DR input signal $\mathbf{x}$ and its threshold samples $\boldsymbol{\tau}$. (c) As in (a), but for UNO with the judicious time-varying threshold $\lambda$. (d) The unlimited samples $\tilde{\mathbf{x}}$ compared with the thresholded samples $\boldsymbol{\tau}$ and $\lambda$.

Figure 4. Top: Trihedron space (polyhedron (28) in 3 dimensions) (blue), unlimited sampling cube (red), and true value of the modulo signal x ∈ R 3 (yellow) for (a) m = 2 (b) m = 6 and (c) m = 20.Bottom: As in the top panel, but only a cross-section (unshaded with same color boundary) at Z = 0 plane is shown for (d) m = 2 (e) m = 6 and (f) m = 20.Each inequality constraint is shown by a half-space whose feasible region is marked by black arrows.

Figure 5. Reconstruction of the input signal from one-bit measurements using UNO when the ADC threshold is (a) λ = 1, (b) λ = 0.5, and (c) λ = 0.2.(d)-(f) As in, respectively, (a)-(c) but the true unlimited samples are compared with their reconstructed samples.

Figure 6. Reconstruction of the input signal from one-bit measurements using UNO Algorithm 2 when the ADC threshold is set to $\lambda = 0.5$ and the input signal amplitude $\|\mathbf{x}\|_{\infty}$ is (a) 10, (b) 15, and (c) 20. (d) As in (a), but for $\lambda = 1$ and $\|\mathbf{x}\|_{\infty} = 1000$.

Figure 8. Average NMSE for RKA-based UNO reconstruction versus the number of time-varying threshold sequences $m$, for $\lambda = 0.5$ and $\|\mathbf{x}\|_{\infty} = 20$.

Figure 9. Reconstruction of the desired parameter vector $\boldsymbol{\theta}$ following the linear model (40) using PnP-ADMM-based UNO for (a) an overdetermined system with $\mathbf{A} \in \mathbb{R}^{728 \times 100}$ and (b) an underdetermined system with $\mathbf{A} \in \mathbb{R}^{728 \times 1000}$. Here, to facilitate a better visual presentation, the number of threshold sequences starts from $m = 500$. (c) Reconstruction of the noisy input signal from one-bit measurements using PnP-ADMM-based UNO.
$\|\mathbf{x}\|_{\infty} = \max_k |x_k|$. For a vector $\mathbf{x}$, $\Delta \mathbf{x} = x_{k+1} - x_k$ denotes the finite difference, and applying it recursively yields the $N$-th order difference $\Delta^{N} \mathbf{x}$. We denote the $\Omega$-bandlimited Paley-Wiener subspace of the square-integrable function space $L^2$ by $\mathrm{PW}_{\Omega}$, such that $\mathrm{PW}_{\Omega} = \{f : f, \hat{f} \in L^2, \operatorname{supp}(\hat{f}) \subset [-\Omega, \Omega]\}$, where $\hat{f}$ is the Fourier transform of $f$. The Hadamard (element-wise) product of two matrices $\mathbf{B}_1$ and $\mathbf{B}_2$ is $\mathbf{B}_1 \odot \mathbf{B}_2$. The column-wise vectorized form of a matrix $\mathbf{B}$ is $\operatorname{vec}(\mathbf{B})$. Given a scalar $x$, we define the operator $(x)^{+}$ as $\max\{x, 0\}$. For an event $\mathcal{E}$, $\mathbb{1}(\mathcal{E})$ is the indicator function of that event: $\mathbb{1}(\mathcal{E})$ is 1 if $\mathcal{E}$ occurs and 0 otherwise.
Fig. 6 shows accurate UNO reconstruction for different values of $\|\mathbf{x}\|_{\infty}$. Table II reports the corresponding NMSE averaged over 15 experiments.

Table I. Averaged UNO reconstruction NMSE for fixed $\mathbf{x}$.