Covariance Recovery for One-Bit Sampled Non-Stationary Signals With Time-Varying Sampling Thresholds

The recovery of the input signal covariance values from its one-bit sampled counterpart has been deemed a challenging task in the literature. To deal with its difficulties, some assumptions are typically made to find a relation between the input covariance matrix and the autocorrelation values of the one-bit sampled data. These include the arcsine law and the recently proposed modified arcsine law, which delivers promising performance in covariance recovery by taking advantage of time-varying sampling thresholds. However, the modified arcsine law also assumes that input signals are stationary, which is typically a simplifying assumption for real-world applications. In fact, in many signal processing applications, the input signals are readily known to be non-stationary with a non-Toeplitz covariance matrix. In this paper, we propose an approach to extend the arcsine law to the case where one-bit ADCs apply time-varying thresholds while dealing with input signals that originate from a non-stationary process. In particular, the proposed recovery methods are shown to accurately recover the time-varying variance and autocorrelation values. Furthermore, we extend the formulation of the Bussgang law to the case where non-stationary input signals are considered.


I. INTRODUCTION
COVARIANCE matrix recovery plays an important role in statistical signal processing applications such as direction-of-arrival (DOA) estimation, radar waveform design, target parameter estimation, communication channel estimation, and adaptive radar detection [2], [3], [4], [5], [6], [7], [8], [9]. When digital signal processing is concerned, using one-bit quantization and digitization, in which the input signals are compared with given threshold levels, allows for sampling at a very high rate and with lower energy consumption [10], [11], [12], [13]. As a result of employing one-bit sampling, however, we can only use the sign data as the partially available information to recover the signal covariance, and second-order statistics in general, which makes the task more challenging. In [14], [15], [16], [17], the authors have considered the input signal as a stationary zero-mean Gaussian process, and with this assumption, the input covariance is recovered by taking advantage of the arcsine law, which connects the covariance of an unquantized signal with that of its quantized counterpart [12], [18]. In [19], the relationship between the cross-correlation matrix of the input signal and the one-bit output data is characterized as the Bussgang law for stationary zero-mean Gaussian signals. Note that the sampling threshold levels are considered to be zero in these research efforts. The zero threshold values can give rise to some difficulties in signal amplitude recovery, and a considerable portion of signal information may be lost.
In particular, the power information of the input signal x is lost in one-bit data when employing a zero threshold, since there is no difference between the sign of x and that of ηx for η > 0 [20]. As a natural alternative, time-varying thresholds have been utilized in recent works, leading to enhancements in recovery performance [1], [21], [22], [23], [24]. It is worth pointing out that the main difference between the one-bit sampling scheme with time-varying thresholds utilized in this paper and delta-sigma (ΔΣ) sampling is that the one-bit ΔΣ ADC may not properly sample an input signal with a large slope, which may result in a loss of amplitude information.
Owing to the successful performance of time-varying sampling thresholds in signal amplitude recovery, such time-varying thresholds were considered for the covariance recovery problem [1], and more extensively in [25], exhibiting a significantly improved performance in the estimation of signal autocorrelation values via a modified arcsine law. Moreover, taking time-varying thresholds into consideration for cross-correlation matrix recovery, promising results were demonstrated with a modified Bussgang law in [25].
A critical restriction of the arcsine law, as well as the modified arcsine law, lies in the necessary assumption of a stationary input signal [1], [14], [25]. In real-world communication and digital signal processing applications, however, input signals are non-stationary in general and have time-varying variances [26], [27], [28]. In such scenarios, covariance recovery is an even more prominent tool in the analysis of non-stationary processes and systems, and can provide useful insights into their innate dynamics [29], [30], [31], [32]. Nevertheless, in a non-stationary environment, the expected accuracy of covariance recovery is typically diminished.
In this paper, we present an approach to extend our modified arcsine law for time-varying sampling thresholds, discussed in [25], to recover signal covariance matrices with an arbitrary non-Toeplitz structure. Moreover, a Bussgang law with time-varying thresholds is established for the non-stationary scenario.

A. Contribution of the Paper
We study covariance recovery for non-stationary input signals in one-bit quantization systems deploying time-varying thresholds. In particular, we formulate an integral-based relation between the autocorrelation function of the one-bit sampled data and the generic covariance matrix entries of the input signal. Moreover, a closed-form formulation for the mean of the input signal is obtained and then utilized to recover the time-varying signal variances. It is demonstrated that to recover the autocorrelation values, we should evaluate the obtained integral, which appears to be analytically intractable. To approach this problem, we first employ a one-point piece-wise Padé approximation (PA) to recast the integrands as rational expressions which are readily integrable. Next, we formulate an estimation criterion to recover the desired input autocorrelation values. The accuracy of the PA is also investigated. In the next step, two well-known numerical integration techniques are employed to estimate the input autocorrelation values; namely, the Gauss-Legendre quadrature and the Monte-Carlo integration techniques. Interestingly, the proposed estimation criteria for these approaches take a convex form, facilitating an accelerated recovery. Lastly, a modified Bussgang law for non-stationary input signals is presented. By using the modified Bussgang law, the matrix elements associated with the cross-correlation between the input signal and the one-bit sampled data can be recovered. Several numerical results are presented to illustrate the effectiveness of the proposed methodologies. Compared to [20] and the conference version of this work, we can summarize our contributions as follows:
• Obtaining an integral-based relation between the autocorrelation function of the one-bit sampled data and the generic covariance matrix entries of non-stationary input signals.
Specifically, we first obtain the time-varying variance of non-stationary input signals using the sample mean of the output one-bit data, and then evaluate the aforementioned integral to estimate the off-diagonal elements of the input covariance matrix.
• Approximating the mentioned integral-based relation via PA, and subsequently solving its associated estimation criterion by a non-convex optimization method. Moreover, we investigate the PA in terms of a fitness analysis, which introduces a bound on the threshold mean. This bound plays an important role in obtaining a good estimate of the input covariance matrix from one-bit sampled data using non-convex programming.
• Approximating the mentioned integral-based relation by numerical methods, namely the Gauss-Legendre and the Monte-Carlo integration approaches, laying the ground for deploying convex programming tools in optimizing the associated estimation criteria. Therefore, obtaining the global solution of our estimation problem is guaranteed, as further validated by the various numerical examples presented in this paper.
• Formulating the maximum likelihood estimation (MLE) criterion in order to estimate the parameters of the time-varying threshold merely by using the available one-bit information. Additionally, for the first time in the literature, we formulate the Bussgang law for time-varying sampling thresholds, referred to as the modified Bussgang law, to estimate the cross-correlation matrix from one-bit sampled data.

B. Organization of the Paper
Section II is dedicated to formulating the autocorrelation function of the one-bit sampled data with time-varying thresholds in the case of non-stationary inputs. In Section III, the time-varying variances are recovered by using the proposed formula for the mean of the one-bit sampled data. Section IV presents our Padé approximation (PA) approach to recover the input signal autocorrelation sequence. Subsequently, Sections V and VI discuss two widely-known numerical integration techniques, i.e., the Gauss-Legendre quadrature and the Monte-Carlo integration methods, applied to our arbitrary non-Toeplitz covariance matrix recovery problem. Section VII is where the various methods proposed for covariance recovery are compared. A proper thresholding for covariance recovery through the estimation of the threshold parameters is discussed in Section VIII. The modified Bussgang law for time-varying thresholds in the case of non-stationary signals is presented in Section IX. Finally, Section X concludes the paper.
Notation: Throughout this paper, we use bold lowercase letters for vectors, bold uppercase letters for matrices, and uppercase letters for matrix entries. For instance, R_x and R_x(i, j) denote the autocorrelation matrix and the ij-th element of the autocorrelation matrix of the vector x, respectively. (·)^T and (·)^H denote the vector/matrix transpose and the Hermitian transpose, respectively. The expected value is denoted by E{·}. The Frobenius norm of a matrix B ∈ C^(M×N) is defined as ‖B‖_F = (Σ_{i,j} |B(i, j)|²)^(1/2). For an event E, I(E) is the indicator function for that event; i.e., I(E) is 1 if E occurs and 0 otherwise. The Q-function is defined as Q(x) = (1/√(2π)) ∫_x^∞ e^(−t²/2) dt, and Q^(−1) denotes its inverse. The error function is defined as erf(x) = (2/√π) ∫_0^x e^(−t²) dt. The (lower) incomplete Gamma function is defined as γ(s, x) = ∫_0^x t^(s−1) e^(−t) dt. Finally, the cumulative distribution function (CDF) of a zero-mean Gaussian random variable z ∼ N(0, ζ) is Φ_ζ(x) = (1/2)[1 + erf(x/√(2ζ))].

II. COVARIANCE RECOVERY IN NON-STATIONARY SCENARIO
We assume that the input signal is a zero-mean non-stationary Gaussian process x ∼ N(0, R_x), where R_x is the non-Toeplitz covariance matrix of x with time-varying diagonal elements. Specifically, the input signal is supposed to be non wide-sense stationary (WSS), or weakly stationary (for brevity we use "non-stationary" instead of "non-WSS"). A signal is WSS (weakly stationary) if and only if: (a) its mean is constant over time; (b) its autocorrelation function depends only on the time lag, which in particular implies a constant variance over time; and (c) its second moment is finite [34], [35]. In our case, the input signal can satisfy (a) and (c); however, the autocorrelation function does not depend only on the lag: R_x is a non-Toeplitz covariance matrix with distinct diagonal elements (time-varying variance). Consequently, our signal is non-stationary, or marginally heteroskedastic [33], [35]. A simple but famous example of such signals is the Wiener process (Brownian motion), η_t ∼ N(0, t). The input signal x ∈ R^N is considered to be an arbitrary temporal sequence of the distribution ensembles {x(k)} with k ∈ {1, . . . , N_x}.

A. Modified Arcsine Law for Non-Stationary Input Signals
We consider a non-zero time-varying Gaussian threshold τ that is independent of the input signal with the distribution τ ∼ N (d = 1d, Σ), and define a new random process w such that w = x − τ. Clearly, w is a non-stationary Gaussian process with w ∼ N (−d, R x + Σ = P) where P is a non-Toeplitz matrix. The autocorrelation function of the one-bit quantized output process is formulated in the following.
Theorem 1: Suppose p_0i = P(i, i), p_0j = P(j, j), and p_ij = P(i, j), where P is the covariance matrix of w. Consider the one-bit quantized random variable y = f(w), where f(·) is the sign function. Then, the autocorrelation function of y takes the form in (5), where α_n and β_n are evaluated as in (6). Proof: The covariance matrix of y can be written as in (7), where the argument is a column vector. Clearly, I is a matrix including only entries of the form ±1. Note that one can write the output covariance matrix as R_y = [E{y_i y_j}]_(N×N). Therefore, the autocorrelation of f(w_i) and f(w_j) is given by (8), where κ and λ(d) are defined subsequently. The autocorrelation function in (8) can be rewritten as (11), which we can simplify using the definition of κ. By employing the polar coordinates w_i = ρ cos θ, w_j = ρ sin θ, we can recast the integral in (12) as (13), where α_n and β_n are readily defined in (6). Integrating (13) with respect to ρ leads to (15); more detailed derivations of this transition can be found in Appendix A. This completes the proof.
It remains to evaluate the integral in (5) in terms of {p_0i}, {p_0j}, and {p_ij}, which must be estimated; this task is central to our efforts in the rest of this paper. Finding {p_0i}, {p_0j}, and {p_ij} yields the time-varying input variances and autocorrelation values, which can be achieved by considering the relation R_x(i, j) = P(i, j) − Σ(i, j) in (16). For i = j, the input variance is hence given by R_x(i, i) = p_0i − σ_τ².

III. TIME-VARYING VARIANCE RECOVERY
To recover the time-varying variances {r 0i }, the following lemma would be useful.
Lemma 1: The first moment (mean) of the one-bit sampled data depends on the threshold distribution and the power of the sampled data via the relation μ_i = E{y_i} = 2Q(d/√(r_0i + σ_τ²)) − 1, given in (17). Proof: We have E{y_i} = P(w_i > 0) − P(w_i < 0) = 2P(w_i > 0) − 1, where w_i ∼ N(−d, r_0i + σ_τ²). We can further simplify (18) by noting that P(w_i > 0) = Q(d/√(r_0i + σ_τ²)), which completes the proof. In light of the above, a relation between the input variance and the mean of the one-bit sampled data is established, which provides an additional avenue to estimate the variances {p_0i}. More precisely, according to Lemma 1 and (16), the input time-varying variances are estimated as r̂_0i = d²/[Q^(−1)((μ̂_i + 1)/2)]² − σ_τ², given in (20), where {r̂_0i} denote the estimates of {r_0i}, σ_τ² is the threshold variance, and {μ̂_i} denote the entries of μ, which may be estimated via the sample mean [34].
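The variance-recovery step can be sketched as a minimal simulation, assuming (consistent with Lemma 1) that the sign-data mean obeys μ = 2Q(d/√(r_0i + σ_τ²)) − 1, so that r_0i is recovered by inverting the Q-function; the parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
Nx = 200_000           # number of one-bit samples at one time index
d, var_tau = 0.5, 0.2  # threshold mean and variance (as in the experiments)
r0 = 0.6               # true time-varying variance at this time index

# One-bit sampling with a Gaussian time-varying threshold.
x = rng.normal(0.0, np.sqrt(r0), Nx)
tau = rng.normal(d, np.sqrt(var_tau), Nx)
y = np.sign(x - tau)

# Invert the sign-mean relation mu = 2 Q(d / sqrt(r0 + var_tau)) - 1,
# where Q(x) = norm.sf(x) and Q^{-1} = norm.isf.
mu_hat = y.mean()
p0_hat = (d / norm.isf((mu_hat + 1) / 2)) ** 2
r0_hat = p0_hat - var_tau
print(r0, r0_hat)  # the estimate should be close to the true variance
```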

A. Numerical Results
We will examine the effectiveness of (20) in estimating the time-varying input variances. In all experiments, the input signals were generated as zero-mean Wiener process sequences with time-varying variance ranging from 0.2 to 0.8. Also, the number of states is 100 (i.e., x ∈ R^100). Accordingly, we made use of the time-varying thresholds with d = 0.5 and diagonal Σ whose diagonal entries are set to 0.2. In the non-stationary input signal case, the true input variance is not a constant number. Therefore, we define the experimental mean square error (MSE) of the estimate r̂_0i of a variance r_0i as MSE = (1/E) Σ_{e=1}^{E} (r^e_0i − r̂^e_0i)², given in (21), where {r^e_0i, r̂^e_0i} are the time-varying variances and their estimates in the e-th experiment. Also, the number of experiments is set to E = 15. As can be seen in Fig. 1, we can accurately estimate the time-varying variance elements of an input signal based on (20) for i = 2 and j = 8 (r_02 ≠ r_08). The results are obtained for the number of ensembles N_x ∈ {1000, 3000, 6000, 10000}, with fixed d and Σ for each experiment. It is observed that the accuracy of time-varying variance recovery improves significantly as the number of one-bit samples grows large.
To further investigate the effectiveness of our proposed approach, we generate a non-stationary Gaussian process according to x ∼ N(0, σ_t² I), where {σ_t²} are generated based on the generalized autoregressive conditional heteroskedasticity (GARCH) model of order one, i.e., GARCH(1, 1), which may be written as [33] ε_t | ψ_{t−1} ∼ N(0, σ_t²), with σ_t² = ζ_0 + ζ_1 x_{t−1}² + ζ_2 σ_{t−1}², where {x_t} are the elements of x, ε_t | ψ_{t−1} denotes the conditional random variable ε_t given the set of its previous ensembles ψ_{t−1}, and {ζ_0, ζ_1, ζ_2} are the GARCH model parameters. In Fig. 2, we present an example of time-varying variance sequence recovery. The true input signal time-varying variance σ_t² and the values estimated by our approach are presented when t indexes a temporal sequence of length 20.
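The GARCH(1,1) variance recursion σ_t² = ζ_0 + ζ_1 x_{t−1}² + ζ_2 σ_{t−1}² can be sketched as follows; the parameter values are hypothetical, since the paper's {ζ_0, ζ_1, ζ_2} are not specified here.

```python
import numpy as np

rng = np.random.default_rng(2)

def garch11_variances(T, zeta0, zeta1, zeta2, rng):
    """Generate a GARCH(1,1) variance path sigma_t^2 and the series x_t,
    following sigma_t^2 = zeta0 + zeta1 * x_{t-1}^2 + zeta2 * sigma_{t-1}^2."""
    sig2 = np.empty(T)
    x = np.empty(T)
    sig2[0] = zeta0 / (1 - zeta1 - zeta2)  # unconditional variance as a start
    x[0] = rng.normal(0.0, np.sqrt(sig2[0]))
    for t in range(1, T):
        sig2[t] = zeta0 + zeta1 * x[t - 1] ** 2 + zeta2 * sig2[t - 1]
        x[t] = rng.normal(0.0, np.sqrt(sig2[t]))
    return sig2, x

# Hypothetical parameters (zeta1 + zeta2 < 1 for a stable variance process).
sig2, x = garch11_variances(20, zeta0=0.1, zeta1=0.2, zeta2=0.5, rng=rng)
print(sig2[:5])
```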
So far, we have obtained the time-varying variance elements of the input covariance matrix. To recover the input autocorrelation values ({r_ij}, i ≠ j), the integral in (5) must be evaluated, which appears difficult to do in closed form. Therefore, in the following, we deploy various approximations to facilitate its evaluation, which enables the recovery of all elements of the covariance matrix R_x.

IV. ANALYTIC APPROACH FOR COVARIANCE RECOVERY
The first part of the integration in (5) can be analytically evaluated in closed form, as given in (23). If the threshold is considered as a zero-mean Gaussian process (τ ∼ N(0, Σ)), one can resort to the well-known arcsine law relation for non-stationary signals, i.e., R_y(i, j) = (2/π) sin^(−1)(p_ij/√(p_0i p_0j)). However, in the general case, we must evaluate all parts of the integration in (5). Computing the integral in (5), whose integrands contain the term e^(α_n²/(4β_n)), with respect to θ appears to be a difficult task. Thus, in the following section, the Padé approximation (PA) [36], [37], [38] is utilized to approximate the integrands D_1 and D_2. This facilitates the recovery of {p_ij} in Section IV-B.
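The zero-mean-threshold special case can be checked numerically: a minimal sketch, with illustrative values for p_0i, p_0j, and p_ij, comparing the empirical one-bit autocorrelation against the arcsine-law prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Covariance of w = x - tau for one pair (i, j); values are illustrative.
p0i, p0j, pij = 1.0, 0.8, 0.35
P = np.array([[p0i, pij], [pij, p0j]])

# Zero-mean thresholds make w zero-mean; one-bit sample it.
w = rng.multivariate_normal([0.0, 0.0], P, size=N)
y = np.sign(w)

empirical = np.mean(y[:, 0] * y[:, 1])  # sample estimate of E{y_i y_j}
arcsine = (2 / np.pi) * np.arcsin(pij / np.sqrt(p0i * p0j))
print(empirical, arcsine)  # the two values should agree closely
```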

A. Padé Approximation
As in [25], we use PA to approximate D_1 and D_2. To obtain a better fitness, we again use the idea of piece-wise PA with three distinct subintervals of the integration domain [0, π/2]. A similar approximation can be proposed for D_1. It is straightforward to verify that the two functions D_1(θ; p_0i, p_0j, p_ij, d) and D_2(θ; p_0i, p_0j, p_ij, d) are analytic at the expansion points. Also, the Q-function in (5) is approximated by the Q̃-function of [39]. Substituting D_2(θ; p_0i, p_0j, p_ij, d) with its approximation and evaluating the integration in the associated parts of (5) yields the corresponding closed-form terms.
Fig. 3. Example plot of the estimation criterion G(p_ij) with respect to p_ij, showing its multi-modality, i.e., having multiple local optima.
Similar approximations can be obtained for terms associated with the function D 1 (θ; p 0i , p 0j , p ij , d).
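To illustrate how a Padé approximant recasts a function as a readily integrable rational expression, the following sketch builds a [2/2] approximant of e^x from its Taylor coefficients using SciPy. This is a generic illustration only, not the paper's piece-wise one-point PA of D_1 and D_2, whose expansion points and coefficients are not reproduced here.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0, up to degree 4.
an = [1 / factorial(k) for k in range(5)]

# [2/2] Pade approximant: a ratio of two quadratics matching the series.
p, q = pade(an, 2)

x = 0.3
approx = p(x) / q(x)
print(approx, np.exp(x))  # the rational approximant is accurate near 0
```

The rational form p(x)/q(x) is what makes the subsequent integration tractable, which is the same motivation the paper gives for applying PA to the integrands of (5).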

B. Recovery Criterion
Based on our discussions in Section III, p_0i and p_0j may be immediately obtained by (20). Then, {p_ij} are estimated by formulating a minimization problem. For this purpose, one may consider a least-squares criterion in which the autocorrelation of the output signal (R_y) is estimated via the sample covariance matrix (SCM) in (30), with {y(k)} being the observed sign vectors, and χ the same as defined in (14). Note that by now we have derived an approximated version of (5) using PA. Let H_n(p_0i, p_0j, p_ij) denote this approximation. Therefore, we can alternatively consider the criterion in (31). A numerical investigation of (31) reveals that it is multi-modal, i.e., it has multiple local minima; see Fig. 3 for an example of the optimization landscape of G(p_ij). Taking the feasible region of {p_ij} into account, we can formulate the recovery problem in (32), where p_m = min{p_0i, p_0j}. The non-convex problem in (32) may be solved via gradient descent numerical optimization with multiple random initial points. Once p_ij is estimated, we can estimate the autocorrelation values of x via (16). The acquired optimum recovery results will be presented in the following.
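The multi-start strategy for the non-convex problem (32) can be sketched as follows; the criterion G below is a hypothetical stand-in with multiple local minima, mimicking the landscape of Fig. 3, not the paper's PA-based criterion.

```python
import numpy as np
from scipy.optimize import minimize

# A stand-in multi-modal criterion on a bounded interval; the paper's actual
# criterion involves the PA-based approximation H_n, not reproduced here.
def G(p):
    return (p - 0.4) ** 2 + 0.3 * np.cos(12 * p)

p_m = 1.0  # feasibility bound: p_ij is searched over [-p_m, p_m]

# Multi-start local optimization: run gradient-based descent from several
# initial points spread over the feasible region and keep the best minimum.
best = None
for p0 in np.linspace(-p_m, p_m, 20):
    res = minimize(lambda p: G(p[0]), x0=[p0], bounds=[(-p_m, p_m)])
    if best is None or res.fun < best.fun:
        best = res
print(best.x[0], best.fun)
```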

C. Numerical Results
We will examine the effectiveness of the PA method by comparing its recovery results with the true input signal autocorrelation values in the non-stationary case. In all experiments, the input signals were generated as zero-mean Wiener process sequences with time-varying variance ranging from 0.2 to 0.8. The number of states is set to 100 (i.e., x ∈ R 100 ). Accordingly, we make use of the time-varying thresholds with d = 0.5 and diagonal Σ whose diagonal entries are set to 0.2.
We first present an example of autocorrelation sequence recovery. The true input signal autocorrelation and the estimated autocorrelation values by our approach are shown in Fig. 4, where i = 2 and j is a temporal sequence of length 13. Fig. 4 appears to confirm the possibility of recovering the autocorrelation values from one-bit sampled data with time-varying thresholds in the non-stationary case.

D. Fitness Analysis of the Proposed Approximations
In this section, we further examine the capability of the PA approach to approximate the integrands in (5). In fact, a precise approximation of the integrands is connected to the one-bit comparison thresholds that are used for sampling. To see how, note that the exponential term in (5), i.e., e^(α_n²/(4β_n)), should remain bounded to guarantee a well-behaved Padé approximation. This is due to the fact that, in (5), β_n is non-zero and e^(α_n²/(4β_n)) is the only term that can grow very fast and create a round-off numerical error. Consider a bounding of this term in the form e^(α_n²/(4β_n)) < γ_1, as in (33).
Since ln(·) is a strictly increasing function, (33) can be alternatively expressed as α_n²/(4β_n) < ln γ_1, given in (34). Note that α_n in (6) is directly scaled by the sampling threshold mean d. By setting α_n = d α_e, (34) can be written as d² α_e²/(4β_n) < ln γ_1, given in (35), where α_n and β_n are defined in (6). To guarantee the bound in (35) for all θ, we should have d ≤ (ln γ_1 / max_θ {α_e²/(4β_n)} I(0 ≤ θ ≤ π/2))^(1/2), as in (36). The inner optimization problem max_θ α_e²/(4β_n) I(0 ≤ θ ≤ π/2) can be solved via the golden section search and parabolic interpolation method [40]. Note that γ_1 is greater than one. To prove this claim, since α_e² is always non-negative, it is sufficient to show that β_n is positive. By plugging θ = 0 into β_n, we obtain β_n|_(θ=0) = 0.5 p_0j (p_0i p_0j − p_ij²)^(−1), which is always positive.
To assess the goodness of the considered approximations, consider the integrand term Δ(·). We compare Δ(·) with its approximated forms for θ ∈ [0, π/2]; the results are plotted in Fig. 5 with parameters p_0i = 0.8, p_0j = 0.7, p_ij = 0.05, and d = 0.7. Note that, in this case, (36) is satisfied by considering γ_1 = 2. As can be seen, PA appears to promise good fitness, with a small mean square error (MSE) of ∼1.1699e−04.

V. GAUSSIAN QUADRATURE TECHNIQUE FOR COVARIANCE RECOVERY
In this section, the Gauss-Legendre quadrature approach [25], [41] is adopted to evaluate the integration in (5). This lays the ground for the recovery of {p ij } since p 0i and p 0j are obtained by (20). At first, an approximated version of (5) is obtained based on the Gauss-Legendre quadrature technique in Section V-A. Then, a criterion will be presented to recover {p ij }, and subsequently, the input autocorrelation values in Section V-B. Finally, the efficacy of this approach in estimating the input autocorrelation values is numerically evaluated.

A. Gauss-Legendre Quadrature Method for Integral Approximation
The central assumption in the use of the Gauss-Legendre quadrature technique is that the integrand should be finite within the domain of integration [41]. The integrands in (5) meet this assumption; it is easy to verify that num(β_n) ≠ 0, where num(·) denotes the numerator of the fractional argument. Therefore, by employing the Gauss-Legendre quadrature technique, the relation in (5) can be approximated as in (39), where θ_ε denotes the ε-th Gauss node. Note that the first part of the above integration was readily given in closed-form in (23).
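A minimal sketch of Gauss-Legendre quadrature mapped to the integration domain [0, π/2]; the integrand here is a simple stand-in (cos θ, whose integral over the domain is exactly 1) rather than the D terms of (5).

```python
import numpy as np

# Gauss-Legendre nodes/weights on [-1, 1], affinely mapped to [0, pi/2].
E = 16
u, w = np.polynomial.legendre.leggauss(E)
a, b = 0.0, np.pi / 2
theta = 0.5 * (b - a) * (u + 1) + a    # the Gauss nodes theta_eps
weights = 0.5 * (b - a) * w

# Weighted sum of integrand values at the nodes approximates the integral.
approx = np.sum(weights * np.cos(theta))   # exact value: 1
print(approx)
```

For a smooth, finite integrand such as this one, a modest number of nodes already gives near machine-precision accuracy, which is why the choice of E matters mainly for the more oscillatory integrands of (5).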

B. Covariance Recovery via Convex Optimization
Based on our discussion in Section III, the values of p_0i and p_0j are simply given by (20). The parameters of interest {p_ij} are then estimated by formulating a minimization problem; namely, we consider a least-squares criterion for which the autocorrelation of the output signal R_y can be estimated using the SCM in (30), and χ is the same as that in (14). Recall that we have obtained an approximated version of (5) using the Gauss-Legendre quadrature in (39). Let J_n(p_ij) denote this approximation. As a result, we can alternatively use the criterion in (41). It is interesting to note that the criterion in (41) is a convex function with respect to p_ij (a proof is provided in Appendix B); see Fig. 6 for an example of the optimization landscape of Ψ(p_ij). By considering the feasible region of p_ij, the recovery problem in (42) is obtained, where p_m is defined in Section IV-B. The convex problem in (42) may be solved efficiently using the golden section search and parabolic interpolation approach. Once {p_ij} are obtained, one can estimate the autocorrelation values of x via (16). The recovery results will be presented in the following.
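A sketch of the bounded scalar search, in the spirit of golden section search with parabolic interpolation (here via SciPy's bounded scalar minimizer), applied to a stand-in convex criterion; the actual Ψ(p_ij) is not reproduced here.

```python
from scipy.optimize import minimize_scalar

# A stand-in convex criterion on the feasible interval [-p_m, p_m]; the
# paper's criterion sums squared residuals of the quadrature approximation.
p_m = 0.7
Psi = lambda p: (p - 0.25) ** 2 + 0.1 * p ** 4

res = minimize_scalar(Psi, bounds=(-p_m, p_m), method='bounded')
print(res.x)  # the unique minimizer of a convex function on the interval
```

Because the criterion is convex on the interval, any such derivative-free bracketing method converges to the global minimizer, which is the key practical advantage over the PA-based non-convex formulation.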

C. Numerical Results
We examine the usefulness of the Gauss-Legendre quadrature technique by comparing its recovery results with the true input signal autocorrelation values in the non-stationary case. In all experiments, the input signals were generated as zero-mean Gaussian sequences with time-varying variance ranging from 0.2 to 0.8. Accordingly, we made use of the time-varying thresholds with d = 0.3 and diagonal Σ whose diagonal entries are equal to 0.1.
We present an example of autocorrelation sequence recovery. The true input signal autocorrelation and our estimated autocorrelation values are shown in Fig. 7, with i = 2 and j being a temporal sequence of length 13. Fig. 7 appears to confirm not only the possibility of recovering the autocorrelation values from one-bit sampled data with time-varying thresholds in the non-stationary case, but also the effectiveness of the Gauss-Legendre technique.

VI. MONTE-CARLO INTEGRATION FOR COVARIANCE RECOVERY
In this section, the Monte-Carlo integration approach [25], [42] is utilized to evaluate the integral in (5). At first, we formulate an approximated version of (5) based on the Monte-Carlo integration approach in Section VI-A. We then present a new criterion to recover {p ij } based on this approximation. The efficacy of this approach in estimating the input autocorrelation values is numerically evaluated.

A. Monte-Carlo Method for Integral Evaluation
The Monte-Carlo integration technique can be utilized to recover the input signal covariance matrix. Here, the same idea is adopted to evaluate the integration in (5) for the non-stationary input signal scenario. More concretely, by employing the Monte-Carlo integration technique, one can approximate (5) as follows: where θ ε denotes the ε-th random number generated from the uniform distribution in the interval [0, π 2 ]. Note that the first part of the above integral was readily evaluated in closed-form in (23).
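A minimal sketch of the Monte-Carlo rule: draw θ_ε uniformly on [0, π/2] and scale the sample mean of the integrand by the interval length; again with cos θ as a stand-in integrand in place of the D terms of (5).

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte-Carlo estimate of the integral of g over [a, b]:
# (b - a) times the sample mean of g at uniform random nodes theta_eps.
E = 200_000
theta = rng.uniform(0.0, np.pi / 2, E)
approx = (np.pi / 2) * np.mean(np.cos(theta))   # exact value: 1
print(approx)
```

The estimate converges at the usual O(1/√E) Monte-Carlo rate, which is why the number of random nodes E must be chosen with care, as noted in Section VII.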

B. Convex Covariance Recovery
Similar to the previously proposed approaches, we begin by estimating p_0i and p_0j through (20). We then aim at estimating the unknown parameters {p_ij} by formulating a minimization problem; namely, we consider a least-squares criterion where the autocorrelation of the output signal R_y can be estimated via (30). Let F_n(p_ij) denote the approximation of (5) obtained using the Monte-Carlo integration. Therefore, we can consider the corresponding alternative criterion Γ(p_ij). Similar to the previous criterion in (41), Γ(p_ij) appears to be a convex function with respect to p_ij, whose proof of convexity is similar to that for Ψ_m(·) in Appendix B; see Fig. 8 for an example of the optimization landscape associated with Γ(p_ij). By considering the feasible region of the parameters of interest {p_ij}, one can formulate the recovery problem in (46), where p_m is defined in Section IV-B. The convex problem in (46) may be tackled by the same tools as proposed in Section V-B. Recovery of {p_ij} leads to estimating the autocorrelation values of x via (16). The procedure of the proposed covariance matrix recovery approach is summarized in Algorithm 1. The optimum recovery results will be presented in the following.

C. Numerical Results
Herein, we examine the Monte-Carlo integration technique by comparing its recovery results with the true input signal autocorrelation values in the non-stationary case. In all experiments, the input signals were generated as zero-mean Gaussian sequences with time-varying variances ranging from 0.2 to 0.8. Accordingly, we made use of the time-varying thresholds with d = 0.3 and diagonal Σ whose diagonal entries are set to 0.1. We present an example of autocorrelation sequence recovery. The true input signal autocorrelation and the estimated autocorrelation values are shown in Fig. 9, with i = 2 and j being a temporal sequence of length 13. It can be observed from Fig. 9 that the Monte-Carlo based approach presents satisfactory recovery results in the non-stationary case as well.

VII. COMPARING THE PROPOSED RECOVERY METHODS
It is of interest to compare the discussed covariance recovery approaches in the non-stationary setting: (i) employing the Padé approximation of the integrands in (5), (ii) applying the Gauss-Legendre quadrature technique, and (iii) applying the Monte-Carlo integration to evaluate the integral in (5). To this end, we generate a non-stationary input signal x ∈ R^5 with the ensemble length N_x = 10000 and the following non-Toeplitz covariance matrix. The time-varying threshold is generated using the same settings as in Section IV-C. Table I illustrates the squared Frobenius norm of the error, normalized by the squared Frobenius norm of the desired covariance matrix R_x, i.e., ‖R_x − R̂_x‖_F² / ‖R_x‖_F², where R̂_x is the recovered covariance matrix. The presented results are averaged over 5 experiments.
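The normalized error metric can be sketched as follows, with a hypothetical diagonal R_x and a slightly perturbed recovery standing in for the paper's matrices.

```python
import numpy as np

def frobenius_nmse(R_true, R_hat):
    """Squared Frobenius norm of the error, normalized by that of R_true."""
    return (np.linalg.norm(R_true - R_hat, 'fro') ** 2
            / np.linalg.norm(R_true, 'fro') ** 2)

# Hypothetical 5x5 covariance and a recovery perturbed on the diagonal.
R = np.diag([0.2, 0.35, 0.5, 0.65, 0.8])
R_hat = R + 0.01 * np.eye(5)
print(frobenius_nmse(R, R_hat))
```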
In the non-stationary input signal scenario, similar to the stationary case, all three approaches show promising recovery results; see Table I. The Gauss-Legendre method performs better in recovering the input signal autocorrelation values than the PA technique and the Monte-Carlo integration. It is also worth noting that the two proposed numerical approaches other than the PA technique boil down to simplified convex programs, hence ensuring convergence to the global optimum. However, a proper selection of the number of nodes and quadrature points in the Gauss-Legendre quadrature and the Monte-Carlo integration techniques is crucial and may present itself as a bottleneck in an effective recovery. This is not an obstacle in applying the PA technique. As a result, one may wish to run the PA-based recovery to help with the proper deployment of the other two techniques.
Table II: Comparison between the time-varying thresholds and the constant threshold.
To compare our approach, which deploys time-varying sampling thresholds, with that of [20], which relies on a constant threshold for non-stationary input signals, we generate input signals with time-varying variance ranging from 0.2 to 1.5. Accordingly, we made use of the time-varying thresholds with d = 0.4 and diagonal Σ whose diagonal entries are set to 0.3, and of the constant threshold d = 0.4. Also, we consider i = 2 and j = 8 as our temporal indices. As can be observed in Table II, our approach provides a considerable enhancement over the constant threshold in terms of the MSE defined in (21). To see why, consider two distinct arbitrary sample indices j and k such that r_i = sgn(x_i − d), where sgn(·) is the sign function, d denotes the constant threshold, and r_i and x_i are the i-th entries of the one-bit data r and the input signal x, respectively. For the sake of this example, we can assume that x_j = αx_k for α > 1. Now suppose d is chosen such that r_k = sgn(x_k − d) = +1.
Similarly, we will have r j = sgn(x j − d) = +1 in this case. As a result, the amplitude information of the input signal will be lost with constant threshold deployment. However, by selecting proper time-varying sampling thresholds, we can overcome this issue.
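This amplitude-information argument can be illustrated numerically: with a zero threshold the sign data of x and αx coincide exactly (as noted in the introduction), whereas Gaussian time-varying thresholds yield distinguishable sign patterns; the signal and threshold parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

x = rng.normal(0.0, 1.0, 1000)
alpha = 3.0   # scale factor: x and alpha*x carry different power

# Zero threshold: the sign data of x and alpha*x are identical, so all
# amplitude (power) information is lost in the one-bit samples.
same = np.array_equal(np.sign(x), np.sign(alpha * x))

# Time-varying Gaussian thresholds: the sign patterns of the two scaled
# signals now differ, so the one-bit data retains amplitude information.
tau = rng.normal(0.4, np.sqrt(0.3), 1000)
differ = not np.array_equal(np.sign(x - tau), np.sign(alpha * x - tau))
print(same, differ)
```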

VIII. JUDICIOUS SELECTION OF SAMPLING THRESHOLDS
While the use of time-varying thresholds for one-bit sampling has shown promise in various signal recovery problems, tuning the applied thresholds presents both an opportunity and a challenge. In this section, we discuss an approach to effectively set the sampling threshold mean d, whose significance was already shown in our analysis in Section IV-D. The threshold mean d is one of the parameters in our recovery cost functions and, consequently, impacts the effectiveness of the autocorrelation sequence recovery by the various proposed approaches.

A. Problem Formulation for Threshold Mean Optimization
Consider a set of thresholds distributed as τ ∼ N(d = 1d, Σ = σ_τ² I). To design thresholds that are independent of the unknown zero-mean Gaussian signal x merely from one-bit data, we use the CDF of the observed sign data y and formulate a maximum likelihood estimation (MLE) problem. The goal is to determine the threshold mean d solely from the sign data y. The one-bit samples are generated as

y = sgn(x − τ).

The probability vector p for the one-bit measurement vector y may be written as p = [p_1, ..., p_N]^T, where

p_k = P(y_k = +1) = Φ(−d / √(r_0(k) + σ_τ²)),

with Φ(.) denoting the standard normal CDF. The associated log-likelihood function is hence given by

L(d) = Σ_k [ ((1 + y_k)/2) log p_k + ((1 − y_k)/2) log(1 − p_k) ],   (52)

where r_0 is a vector containing the diagonal entries of the covariance matrix of the input signal x. As mentioned earlier, in the non-stationary scenario, the covariance matrix has an arbitrary non-Toeplitz structure. The entries of r_0, appearing in the CDF, are the variances of the elements of the input signal.
To directly derive our desired parameter d from (52), we define the following statistical linear model for our threshold:

τ = 1d + n,   n ∼ N(0, Σ).   (53)

Therefore, the MLE is formulated as

d̂ = argmax_d L(d) subject to an equality constraint,   (54)

where the equality constraint is obtained from (17).
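A toy sketch of this threshold-mean MLE is given below. It assumes, consistently with the CDF description above, the model P(y_k = +1) = Φ(−d/√(r_0(k) + σ_τ²)), and replaces the constrained program with a simple grid search over [0, 1]; the variance profile and parameter values are illustrative, not from the paper.

```python
import math
import random

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(d, y, r0, sigma_tau2):
    # log-likelihood of the sign data under the assumed model
    # P(y_k = +1) = Phi(-d / sqrt(r0_k + sigma_tau^2))
    ll = 0.0
    for yk, v in zip(y, r0):
        p = min(max(phi(-d / math.sqrt(v + sigma_tau2)), 1e-12), 1 - 1e-12)
        ll += math.log(p) if yk > 0 else math.log1p(-p)
    return ll

# --- simulate one-bit data y = sgn(x - tau) with a non-stationary x ---
random.seed(0)
N, d_true, sigma_tau2 = 20_000, 0.3, 0.1
r0 = [0.2 + 1.3 * k / (N - 1) for k in range(N)]   # time-varying variances
y = []
for v in r0:
    w = random.gauss(0.0, math.sqrt(v)) - random.gauss(d_true, math.sqrt(sigma_tau2))
    y.append(1 if w > 0 else -1)

# --- MLE for the threshold mean d via a simple grid search over [0, 1] ---
grid = [i / 100 for i in range(0, 101)]
d_hat = max(grid, key=lambda d: log_likelihood(d, y, r0, sigma_tau2))
```

Since the likelihood is correctly specified under these assumptions, `d_hat` concentrates around the true mean as N grows.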

B. Numerical Illustrations for Threshold Mean Design
To numerically scrutinize our approach, the input signal is generated using the same settings as described in Section IV-C. The desired time-varying threshold is generated as a Gaussian process with d = 0.3 and Σ = 0.1I. The sign data y is generated accordingly and utilized to estimate the desired threshold mean via the MLE problem in (54). The results are presented in Fig. 10 based on the NMSE between the desired threshold mean d and the recovered mean d̂, defined as

NMSE = (d − d̂)² / d².

Each presented data point is averaged over 5 experiments. As can be seen in Fig. 10, the proposed method can accurately estimate the mean of a time-varying threshold. The results are obtained with sequence lengths N_x ∈ {1000, 3000, 6000, 10000}. Moreover, σ_τ² is estimated using (54) with an average NMSE of ∼ 10⁻³ for N_x = 10000.

IX. MODIFIED BUSSGANG LAW FOR TIME-VARYING SAMPLING THRESHOLDS
The modified Bussgang law for stationary input signals was derived in [25]. This modified law provides a useful relation in the stochastic analysis of stationary input signals sampled with time-varying thresholds. In this section, the modified Bussgang law is extended to the case where non-stationary input signals are considered in such settings.

A. Modified Bussgang Law for Non-Stationary Input Signals
By considering time-varying thresholds, the cross-correlation matrix between the one-bit sampled data and the non-stationary input signal can be formulated as follows.
Theorem 2: Suppose τ ∼ N(d = 1d, Σ) is a time-varying threshold, and x is a non-stationary input signal. Let y = g(w) denote the one-bit sampled data, where w = x − τ is distributed as w ∼ N(−d, R_x + Σ = P), with p_0j = P(j, j) and p_ij = P(i, j). Then, the cross-correlation matrix between y and x satisfies the relation in (56), where ε_1 and ε_2 are given by (57). In particular, for i = j, the relation in (56) yields (58).

Proof: Suppose w_i and w_j are the i-th and the j-th entries of w (i ≠ j) with E{w_i} = E{w_j} = −d, and that p_0i = P(i, i), p_0j = P(j, j), and p_ij = P(i, j), where P denotes the covariance matrix of w. Consider the quantized random variables y_i = g(w_i) and y_j = g(w_j), where g(.) denotes a non-linear transformation function. The cross-correlation function between w_i and y_j can therefore be obtained as in (59), where κ and λ(d) are defined in (9) and (10), respectively. We first calculate the integral in (59) with respect to w_i, which yields (60), with ε_1 and ε_2 given in (61). A detailed proof of the results in (60) and (61) is presented in Appendix C. Based on (61), it can be seen that the values of ε_1 and ε_2 depend on the entry index j. As a result, the modified Bussgang law for the non-stationary input signal can be presented as in (62). If the non-linear function g(.) is the sign function, which is the case in one-bit quantization, ε_1 and ε_2 can be obtained using similar steps as presented in [25]; however, in this case, p_0j should be utilized in lieu of p_0. For i = j, (62) boils down to (58), a proof of which is presented in Appendix D.

Fig. 11. The recovery of the cross-correlation between the input signal and the one-bit sampled data by the modified Bussgang law applied in conjunction with various one-bit autocorrelation recovery approaches for a sequence of length 13, with the true values plotted alongside the estimates.
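For the diagonal case i = j, the cross-correlation E{x_i y_i} can be sanity-checked by Monte Carlo against a closed form obtained from the joint Gaussianity of x and w = x − τ. The sketch below uses illustrative parameters and a folded-normal identity for E|w| in place of (58), which is not reproduced here; the identity E{x sgn(w)} = (σ_x²/p_0) E{(w + d) sgn(w)} follows from E[x | w] = (σ_x²/p_0)(w + d).

```python
import math
import random

def phi_cdf(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(0)
sx2, d, st2, M = 1.0, 0.3, 0.1, 200_000   # var(x), threshold mean/var, trials
p0 = sx2 + st2                            # var(w), with w = x - tau ~ N(-d, p0)

# Monte-Carlo estimate of E{ x * sgn(x - tau) }
acc = 0.0
for _ in range(M):
    x = random.gauss(0.0, math.sqrt(sx2))
    tau = random.gauss(d, math.sqrt(st2))
    acc += x * (1.0 if x - tau > 0 else -1.0)
mc = acc / M

# closed form: E{x sgn(w)} = (sx2/p0) * (E|w| + d * E{sgn(w)}),
# with E|w| the folded-normal mean for w ~ N(-d, p0)
s = math.sqrt(p0)
e_abs_w = s * math.sqrt(2.0 / math.pi) * math.exp(-d * d / (2.0 * p0)) \
          - d * (1.0 - 2.0 * phi_cdf(d / s))
e_sgn_w = 2.0 * phi_cdf(-d / s) - 1.0     # since P(w > 0) = Phi(-d / s)
closed = (sx2 / p0) * (e_abs_w + d * e_sgn_w)
```

The empirical average `mc` matches `closed` to within Monte-Carlo error, consistent with a Bussgang-type proportionality between the input and its one-bit samples.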
Based on (62), the cross-correlation matrix between the input and the output one-bit data is computed, where {p_0j} is obtained by (20) and {p_ij} can be recovered using (32), (42), or (46). Note that the cross-correlation matrix between the threshold vector τ and the output vector y can be estimated via the sample cross-correlation matrix

R̂_τy = (1/M) Σ_{m=1}^{M} τ^(m) (y^(m))^T,

where M denotes the number of available realizations.
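The sample cross-correlation estimate between the thresholds and the one-bit outputs can be sketched as follows (illustrative parameters, not from the paper; the estimate is accumulated over M independent realizations of τ and y):

```python
import random

random.seed(1)
N, M = 4, 20_000            # vector length, number of realizations
d, s_tau, s_x = 0.3, 0.3, 1.0

R_hat = [[0.0] * N for _ in range(N)]
for _ in range(M):
    tau = [random.gauss(d, s_tau) for _ in range(N)]           # thresholds
    x = [random.gauss(0.0, s_x) for _ in range(N)]             # input signal
    y = [1 if xi - ti > 0 else -1 for xi, ti in zip(x, tau)]   # one-bit data
    for i in range(N):
        for j in range(N):
            R_hat[i][j] += tau[i] * y[j] / M                   # running average
```

For i ≠ j, τ_i is independent of (x_j, τ_j) here, so all off-diagonal entries share a common expected value, which the sample estimate reproduces up to Monte-Carlo error.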

B. A Numerical Investigation of the Modified Bussgang Law
We now examine the modified Bussgang law for non-stationary input signals by comparing its recovery results with the true cross-correlation values between the input signal and the one-bit quantized data. In all experiments, the input signal settings are the same as in Section IV-C. The true cross-correlation between the input signal and the one-bit sampled data and the estimated cross-correlation values obtained using our approach are shown in Fig. 11, for i = 2 and j ranging over a random sequence of length 13. Our results appear to confirm the possibility of recovering the cross-correlation values from one-bit sampled data with time-varying thresholds by employing any of the three recovery methods (PA, the Gauss-Legendre method, and Monte-Carlo integration).

X. CONCLUSION
We studied a generalization of the modified arcsine law discussed in [25], through Padé approximations, the Gauss-Legendre quadrature approach, and Monte-Carlo integration, to cases where the input signal is assumed to be non-stationary. The numerical results demonstrate the efficacy of all three approaches in covariance matrix recovery. Moreover, a modified Bussgang law was established for the one-bit sampling of non-stationary input signals with time-varying thresholds.

APPENDIX A DETAILED DERIVATIONS FOR THE INTEGRAL IN (13)
The focus herein is on obtaining the ultimate formalism for R_y(i, j) in (15) from the relation in (13). In particular, based on (13), we define

ζ(α_n, β_n) ≜ ∫_0^∞ e^(−β_n ρ²) (e^(−α_n ρ) + e^(α_n ρ)) ρ dρ,

where the inner integral and the outer integral are called L_1 and L_2, respectively. The inner integral may be evaluated as
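As a consistency check on ζ(α_n, β_n) = ∫_0^∞ e^(−β_n ρ²)(e^(−α_n ρ) + e^(α_n ρ)) ρ dρ, the sketch below compares a closed form, obtained here by completing the square in each exponential (our own derivation, not the appendix's L_1/L_2 route), against direct numerical quadrature.

```python
import math

def zeta_closed(alpha, beta):
    # completing the square in each exponential term gives
    # zeta = 1/beta + (alpha*sqrt(pi) / (2*beta^{3/2}))
    #        * exp(alpha^2 / (4*beta)) * erf(alpha / (2*sqrt(beta)))
    return (1.0 / beta
            + alpha * math.sqrt(math.pi) / (2.0 * beta ** 1.5)
            * math.exp(alpha * alpha / (4.0 * beta))
            * math.erf(alpha / (2.0 * math.sqrt(beta))))

def zeta_numeric(alpha, beta, hi=20.0, n=200_000):
    # plain trapezoidal rule on [0, hi]; the integrand decays like exp(-beta*rho^2)
    h = hi / n
    total = 0.0
    for k in range(n + 1):
        rho = k * h
        f = math.exp(-beta * rho * rho) \
            * (math.exp(-alpha * rho) + math.exp(alpha * rho)) * rho
        total += (0.5 if k in (0, n) else 1.0) * f
    return total * h
```

As a quick sanity check, for α = 0 the integral reduces to ∫_0^∞ 2ρ e^(−βρ²) dρ = 1/β, which the closed form reproduces.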