Sparsity Order Estimation for a Compressed Sensing System Using a Sparse Binary Sensing Matrix

We present a composite Compressed Sensing system for the acquisition and recovery of compressible signals, where a sparse Binary Sensing Matrix aids Sparsity Order Estimation, and a Gaussian Sensing Matrix aids reconstruction. The Binary Sensing Matrix is deterministic and is adapted according to the varying nature of the sparsity order. We estimate the sparsity order by exploiting the sparse structure of the Binary Sensing Matrix and the statistics of the obtained measurements. We refine the estimates of the sparsity order using a Kalman filter with a discrete Markov model that characterizes the sparsity order variation. A Binary Sensing Matrix-Aided Orthogonal Matching Pursuit is developed for faster recovery of compressible signals. Simulation results on real-world and synthetic data demonstrate the merits of the proposed sparsity order estimation and recovery methods compared to other existing methods. Our proposed methods are practical and recover compressible signals at least 25% faster than the existing methods.


I. INTRODUCTION
Traditional data acquisition and transform-domain compression involve (i) sampling of the compressible signal at a rate at least equal to the Nyquist rate and (ii) retaining only the significant coefficients and their locations when the acquired samples are transformed using a set of orthogonal bases such as Fourier, cosine, and wavelet. The number of significant transform-domain coefficients is the sparsity order, and the collection of their locations is the support of the compressible signal. The Nyquist rate depends on the bandwidth of the compressible signal, which is several orders of magnitude greater than the sparsity of the signal, thereby resulting in more samples (measurements). Because hardware resources and processing power requirements are directly proportional to the number of measurements, there is a pressing need to reduce them significantly. Thus, considering the sparsity order instead of the bandwidth paves the way for breaching this Nyquist barrier, drastically reducing the number of measurements by unifying sensing and compression, and resulting in the development of Compressed Sensing (CS) [1], [2], [3].
During CS acquisition, an $N$-dimensional compressible signal $\mathbf{x}$ with sparsity order $K$ is directly sensed and compressed into an $M$-dimensional measurement vector $\mathbf{y}$ using an $M \times N$-dimensional sensing matrix $\Phi$. Mathematically, CS acquisition is represented as $\mathbf{y} = \Phi\mathbf{x} + \mathbf{w}$ (1), where $\mathbf{w}$ is the measurement noise. Here, $K < M \ll N$, $M \geq cK\log(N/K)$ for some constant $c$ [4], [5], and the Compression Ratio (CR) achieved is $N/M$. During CS recovery, the compressible signal can be reconstructed from the measurement vector using the same sensing matrix used during acquisition. CS recovery is primarily performed using either convex relaxation or greedy techniques. The key components of CS are the combination of a sensing matrix and the inherent sparsity of the signal in a suitable transform domain. For a practical CS system, knowledge of the exact sparsity order of a compressible signal is of great importance during CS acquisition and recovery. In the CS acquisition stage, the sparsity order dictates the minimum number of measurements, i.e., the size of the measurement vector to be acquired. In the CS recovery stage, the sparsity order controls the quality of estimation of the compressible signal.
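The acquisition model above can be sketched numerically in a few lines. The dimensions, noise level, and the use of an i.i.d. Gaussian sensing matrix below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, K = 256, 64, 8                 # signal length, measurements, sparsity order (illustrative)

# K-sparse signal: K nonzero coefficients at random locations
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.normal(0.0, 1.0, size=K)

# i.i.d. Gaussian sensing matrix and additive measurement noise
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
w = rng.normal(0.0, 0.01, size=M)

y = Phi @ x + w                      # CS acquisition: y = Phi x + w

compression_ratio = N / M            # CR = N / M
```

Note that $M = 64$ measurements suffice here because $M \geq cK\log(N/K)$ is easily met for $K = 8$, $N = 256$.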

A. THE PURPOSE OF SPARSITY ORDER ESTIMATION
The number of CS measurements $M$ required for the perfect recovery of an $N$-dimensional compressible signal depends on the sparsity order $K$ of the underlying compressible signal and is given in [4], [5] as $M \geq 2K\log(N/K)$ (2) when the components of the sensing matrix are independent and identically distributed (i.i.d.) random Gaussian variables. In many recovery algorithms, the optimal tuning of parameters requires knowledge of the sparsity order of the signal. For example, in the LASSO [1] techniques of convex relaxation-based CS recovery, the signal recovered from the CS measurement vector $\mathbf{y} = \{y_1, y_2, \ldots, y_M\}$ with knowledge of the sensing matrix $\Phi$ is obtained by solving an $\ell_1$-regularized least-squares problem, where the tuning parameter $\lambda = \sigma_w\sqrt{2\log(N/K)}$ is a function of the sparsity order and the measurement noise level $\sigma_w$ for perfect recovery. For greedy algorithms, such as Orthogonal Matching Pursuit (OMP) [6] and Compressive Sampling Matching Pursuit (CoSaMP) [7], the recovery performance and the number of iterations depend on the sparsity order $K$.
Most of the CS works assume that sparsity order is known beforehand and is not time-varying. This assumption makes the practical applicability of CS theory difficult. An improper assumption of sparsity order during CS acquisition results in either an insufficient or an excess number of measurements and affects the quality of reconstructed signal during recovery. Similarly, the improper assumption during CS recovery results in either early or late termination of CS recovery algorithms leading to either poor reconstruction or wastage of resources. Some greedy recovery algorithms such as Sparsity adaptive Matching Pursuit (SaMP) and its variants [8], [9], [10], [11], [12], [13] and Kronecker-based recovery [14], [15] do not necessarily require the knowledge of the sparsity order for recovery. However, the efficiency of such recovery algorithms depends on the 'step size' parameter that is greatly influenced by the knowledge of sparsity order. If sparsity order knowledge is not considered, a fixed smaller step size leads to a longer recovery time, whereas a fixed larger step size leads to poor recovery. Thus, Sparsity Order Estimation (SOE) has become an important topic for both CS acquisition and recovery.
Direct SOE methods using specially designed sensing matrices are available in [16], [17], [18]. Lopes [16] introduced SOE from random Cauchy sensing matrix-based measurements, which are not helpful for recovery. Moreover, the distribution of the Cauchy sensing matrix entries depends on knowledge of the noise statistics. The use of sparse random Gaussian sensing matrices for the SOE of true sparse signals was proposed in [17]. Both methods [16], [17] require a priori information about the signal statistics to construct the sensing matrix, which is practically infeasible. A specially designed sensing matrix possessing a Khatri-Rao structure for SOE was recently presented in [18], where the matrix construction depends on the signal type and the SOE performance is limited to lower sparsity orders.
Direct SOE methods that exploit the characteristics of sensing matrices are available in [19], [20], [21], [22]. In [19], a spectrum sensing algorithm solved an optimization problem to remove the measurement noise effect, followed by an energy minimization problem using QR decomposition of the sensing matrix, and applied a threshold to obtain the sparsity order. This algorithm requires the signal and noise power to be known a priori to tune the threshold parameter and is not suitable for estimating a higher sparsity order. By exploiting the autocorrelation and cross-correlation properties of the column vectors of the Gaussian Sensing Matrix (GSM), two-step adaptive compressive spectrum sensing (TS-ACSS) was proposed in [20]. In the first step, a coarse SOE was performed by identifying the slope change in the ordered arrangement of the inner products of the column vectors of the sensing matrix with the obtained measurements. The SOE was then refined in the second step using the CS recovery of the spectrum and by comparing the estimates of the binary channel occupation over multiple iterations. This method not only requires more measurements for SOE but also involves multiple iterations that make it computationally intensive and slow.
Recently, SOE was performed iteratively by exploiting the Restricted Isometry Property (RIP) of the random sensing matrix in [21]. However, this method requires the knowledge of two parameters: a weak-matching factor and an estimation factor, which are dependent on the sparsity order and are difficult to fix in practical scenarios because the sparsity order is unknown and varies over time. In [22], a sensing dictionary was initially constructed for CS recovery with the same dimensions as the measurement matrix but with weak mutual correlation. A sparsity adaptive estimation method was presented, similar to that in [21], with the difference of considering the mutual correlation constant instead of the isometry constant. The efficiency of this method depends on the step size parameter related to the unknown sparsity order.
Within the Multiple Measurement Vector (MMV) framework, the direct SOE method based on the trace of the covariance matrix of the measurements was presented in [23], with the assumption that the signal power is known a priori. By identifying the slope change in the ordered eigenvalues of the covariance matrix of the measurements, SOE was performed in [24]. However, this method is computationally intensive for finding eigenvalues and unsuitable for estimating a higher sparsity order.
A relative threshold-based sparsity estimation method (RTSE) was proposed in [8]. It performs SOE based on a reconstruction algorithm. The threshold for finding the largest components is based on a training set and cannot be fixed a priori, limiting its application. The SaMP algorithm [9] and its variants, the Adaptive Step size-SaMP (AS-SaMP) algorithm [10], the Modified CoSaMP (MCoSaMP) algorithm [11], and the sparsity adaptive segmented orthogonal matching pursuit (SAStOMP) algorithm [12], estimate the sparsity indirectly through a variable step size and gradually grow the estimated support set to match the original. However, these SaMP algorithms require a step size whose optimal value depends on the unknown sparsity order. An Optimized Adaptive Matching Pursuit (OAMP) algorithm was proposed in [13], which is similar to SaMP except for the energy entropy-based order determination in updating the support. Recently, the deterministic binary block diagonal (DBBD) matrix-based sensing [14] and Kronecker-based recovery [15] have been proposed for acquiring and recovering compressible signals. The DBBD sensing matrix reduces hardware complexity. However, its structure accumulates the energy of significant neighboring components, making support estimation difficult under noisy settings and leading to degraded recovery performance.

C. MOTIVATION
The issues with the existing SOE methods are summarized as follows.
1) Existing SOE methods make assumptions and impose constraints that are not suitable for practical applications.
Therefore, there is a strong need to develop an SOE algorithm suitable for both CS acquisition and recovery. Motivated by the purpose of SOE and to overcome the aforementioned limitations of existing SOE methods, we propose an efficient and practically implementable CS measurement system that performs a novel instantaneous SOE on the fly using the same set of measurements acquired for use during recovery. The proposed SOE method is suitable for estimating the time-varying sparsity order and does not require any prior information regarding signal and noise statistics. We estimate the signal statistics from the obtained measurements and adapt our CS measurement system according to the time-varying signal statistics and sparsity order.
Inspired by the SOE methods that perform estimation exploiting the characteristics of sensing matrix design, we propose a composite CS measurement system for maximizing the SOE and recovery performance. In general, sensing or measurement matrices can be classified into two categories: random and deterministic. The entries of the random sensing matrices are i.i.d. Gaussian or Bernoulli variates. Random sensing matrices satisfy an important property called RIP with a high probability required for the perfect recovery of the signal from the obtained measurements. They are nonadaptive and suitable for recovering all types of sparse or compressible signals. However, they are not optimal because they are primarily unstructured and not designed to exploit the structure of the signals.
Compared to random sensing matrices, deterministic sensing matrices are fully structured and are signal- or application-specific. Chirp sensing matrices, Reed-Muller sensing matrices, and Binary Sensing Matrices (BSM) such as Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) matrices and binary Bose-Chaudhuri-Hocquenghem (BCH) matrices are examples of deterministic sensing matrices. Recently, sparse BSM-based CS measurement systems [25], [26], [27], [28], [29], [30], [31] have been considered because they are multiplier-less (for time-domain sparse signals) and perform faster compression. Although these matrices are simple to implement, they possess a weaker RIP and require more measurements for recovery. However, the measurements obtained using these matrices have specific statistical properties that are suitable for SOE.
Considering the guaranteed RIP and SOE offered by random and deterministic matrices, respectively, the sensing matrix can be a composite matrix comprising both types for efficient CS acquisition and recovery. Studies have been conducted on composite sensing matrices [16], [32] for SOE and support estimation. A Hybrid Compressed Sensing (HCS) method was proposed in [32], where the two submatrices are a sparse complex-valued submatrix for support estimation and a random dense real-valued submatrix for reducing the number of measurements. In this method, the success of signal recovery depends on the size of both submatrices, which is a function of the sparsity order. The Lopes method [16] uses a different composite CS measurement system comprising two random sensing submatrices, i.e., Cauchy and Gaussian matrices. The random Cauchy matrix preserves the $\ell_1$ norm, suitable for SOE, whereas the random Gaussian matrix preserves the $\ell_2$ norm, suitable for recovery. However, this method is inefficient because the Cauchy matrix is designed with a priori knowledge of the noise statistics, and the measurements obtained with the Cauchy matrix are not helpful for recovery. Thus, there is a need for a composite sensing matrix suitable for both SOE and recovery. We propose a composite CS measurement system that comprises a deterministic sparse BSM and a random GSM to maximize the SOE and recovery performances, respectively. Here, the measurements obtained using both the BSM and the GSM are helpful for recovery.
As real-world compressible signals are time-varying, the sparsity order also varies with time. Here, we consider a birth-death process to model the time-varying sparsity order. The model employs a Kalman filter to improve the estimates of time-varying sparsity order.

D. CONTRIBUTIONS
Our main contributions are summarized as follows.
(1) A composite CS system comprising BSM and GSM matrices is proposed for the sensing and recovery of compressible signals with better performance. (2) A novel BSM-based SOE (BSOE) method for estimating the sparsity order of compressible signals is derived by exploiting the sparse structure of the BSM and the statistics of the significant coefficients of the signal obtained from the measurements. (3) The estimates of the time-varying sparsity order are refined using a Kalman filter. The Kalman filter uses the temporal correlation of the sparsity order, making the estimate less sensitive to measurement fluctuations and further minimizing the SOE error (SOEE). (4) A novel BSM-aided greedy recovery algorithm is proposed for CS recovery. The proposed algorithm reduces the number of iterations required to identify the support, resulting in faster recovery without compromising the quality of the recovered signal. We demonstrate that the proposed CS system and SOE methods are suitable for both CS acquisition and recovery, unlike most existing SOE methods, which are designed only for CS recovery. The design of the BSM in our CS system aids not only SOE but also the recovery of the support indices; thus, all the obtained measurements are useful. The ability to perform SOE in the absence of signal and noise statistics makes the proposed system suitable for real-world applications.
The remainder of this paper is organized as follows. Section II introduces CS theory. Section III introduces a composite CS acquisition model for acquiring compressible signals and a model for sparsity order variation. In Section IV, the BSM-based measurements are used to derive the BSOE for both true sparse and compressible signals. The properties of the BSOE are discussed, and it is shown that the BSOE is Maximum Likelihood (ML) optimal. In Section V, the Kalman filter is designed to refine the BSOE. The sensitivity of the variance of the BSOE to the sparsity of the BSM is discussed in Section VI. The practical implementation of the proposed composite CS method using simple hardware elements is presented in Section VII. In Section VIII, BSM-aided CS recovery and its performance are discussed. Section IX investigates the performance metrics of the proposed composite CS and Kalman-filtered BSOE methods during the acquisition and recovery of synthetic compressible signals and real-world vibration signals. Finally, Section X presents a discussion of the results.
The operators commonly used in this paper are $\|\cdot\|_p$, $\Pr[\cdot]$, $\mathbb{E}\{\cdot\}$, and $\mathrm{VAR}\{\cdot\}$, which refer to the $\ell_p$ norm, probability, expectation, and variance functions, respectively. The operators $\lfloor\cdot\rceil$, $\lfloor\cdot\rfloor$, and $\lceil\cdot\rceil$ round the argument to the nearest integer, the greatest preceding integer, and the least succeeding integer, respectively. The notations $\mathbb{R}$, $\mathbb{Z}$, and $\mathcal{O}$ indicate the real-number domain, the integer-number domain, and the order of complexity, respectively. The expression $\binom{n}{k}$ refers to the number of ways to choose $k$ elements from a set of $n$ elements. A Gaussian random variable $x$ with mean $\mu$ and variance $\sigma^2$ is denoted by $x \sim \mathcal{N}(\mu, \sigma^2)$, and a Bernoulli random variable $b$ such that $\Pr[b = 0] = p$ and $\Pr[b = 1] = 1 - p$ is denoted by $b \sim \mathcal{B}(0, p)$.

II. BACKGROUND TO CS THEORY
CS is an alternative to the well-known Nyquist-Shannon theory provided that the signal under consideration is either sparse or compressible. CS theory involves (i) sparse representation of the signal, (ii) CS acquisition to obtain compressed samples or measurements from the signal, and (iii) CS recovery to reconstruct the signal using the obtained measurements.

A. SPARSE REPRESENTATION
The majority of signals in nature exhibit inherent redundancy in a suitable transform domain with the help of (i) orthonormal basis functions, such as Fourier, wavelet, and cosine, or (ii) overcomplete dictionaries containing a combination of different orthonormal basis functions. A sparse representation identifies redundancy and determines the most concise representation of a signal in terms of a linear combination of vectors of a basis function or atoms of an overcomplete dictionary. A concise representation is a true sparse or compressible version of the original signal.

1) TRUE SPARSE SIGNAL
A signal $\mathbf{x}$ is true sparse if it can be represented on a suitable orthonormal basis or dictionary $\Psi$ such that $\mathbf{x} = \Psi\theta$, and the representation $\theta$ has very few nonzero components or coefficients compared to its dimension. Some real-world examples of true sparse signals are the channel state information (CSI) of OFDM channels and the spectrum occupancy state of cognitive radio.

2) COMPRESSIBLE OR SPARSE-APPROXIMATED SIGNAL
The signal is compressible or sparse-approximated if its representation $\theta$ has all nonzero coefficients and the magnitudes of the coefficients, sorted in descending order, obey a power-law decay, i.e., $|\tilde{\theta}_i| \leq C i^{-q}$, where $|\tilde{\theta}_i|$ is the $i$-th sorted magnitude and $C > 0$ and $q > 0$ are constants. A faster decay indicates that only a few coefficients are significant with larger magnitudes, while the remaining coefficients are insignificant with near-zero magnitudes. Most real-world signals, such as images, video, and audio, are compressible.
The sparsity order is the number of nonzero coefficients in the sparse representation for the true sparse signal or the number of significant coefficients above a certain threshold in the sparse representation for the compressible signal.
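The two notions above can be made concrete in a short sketch that generates power-law-decaying coefficient magnitudes and counts how many of the largest coefficients capture a given fraction of the energy. The decay constants ($C = 5$, $q = 1.2$) and the $0.95$ energy fraction are illustrative assumptions:

```python
import numpy as np

N = 512
i = np.arange(1, N + 1)
theta = 5.0 * i ** (-1.2)        # sorted magnitudes obeying |theta_i| <= C * i^(-q)

# Sparsity order of a compressible signal: smallest K such that the
# K largest coefficients capture a fraction eta_s of the total energy
eta_s = 0.95
energy = np.cumsum(np.sort(theta ** 2)[::-1])    # cumulative energy, largest first
K = int(np.searchsorted(energy, eta_s * energy[-1]) + 1)
```

With this decay rate, only a handful of the 512 coefficients are significant, which is exactly why treating the signal as $K$-sparse is so effective.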

B. CS ACQUISITION
CS acquisition is a linear mapping of the $N$-dimensional signal $\mathbf{x}$ using an $M \times N$-dimensional measurement or sensing matrix $\Phi$ to obtain an $M$-dimensional measurement vector $\mathbf{y}$, as shown in (1). The measurement matrix $\Phi$ is designed such that it satisfies the RIP, a property akin to the orthonormality of Fourier or wavelet matrices, for recovering all signals with sparsity order $K$. The RIP is given as $(1 - \delta_K)\|\mathbf{x}\|_2^2 \leq \|\Phi\mathbf{x}\|_2^2 \leq (1 + \delta_K)\|\mathbf{x}\|_2^2$, where $\delta_K \in (0, 1)$ is the isometry constant. The RIP guarantees that no two sparse signals of sparsity order $K$ can be mapped to the same $\mathbf{y}$ through $\Phi$.
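Verifying the RIP exactly is combinatorially hard, but the near-isometry it expresses can be spot-checked empirically on random $K$-sparse vectors. The following sketch, with illustrative dimensions, estimates how far $\|\Phi\mathbf{x}\|_2^2 / \|\mathbf{x}\|_2^2$ strays from 1 over many random draws (a heuristic check, not a certificate):

```python
import numpy as np

rng = np.random.default_rng(7)

N, M, K = 256, 96, 8
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))   # unit column energy in expectation

ratios = []
for _ in range(500):                 # spot-check on random K-sparse signals
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.normal(0.0, 1.0, K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

# Empirical lower bound on the isometry constant over the sampled vectors
delta_est = max(1.0 - min(ratios), max(ratios) - 1.0)
```

For Gaussian matrices the ratios concentrate around 1, which is the sense in which random matrices satisfy the RIP "with high probability."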

C. CS RECOVERY
Since (1) is a system of underdetermined equations, as $M \ll N$, it has infinitely many solutions. However, utilizing the fact that the desired solution is sparse, CS recovery solves (1) using either convex relaxation techniques or greedy techniques. Convex relaxation techniques are based on $\ell_1$ norm minimization. These techniques are accurate; however, they have higher computational complexity and recovery time.
For faster reconstruction, greedy techniques such as OMP and CoSaMP are generally used. These greedy techniques exploit the orthonormality properties of the column vectors of the measurement matrix to search for the support indices of the signal iteratively. In each step, one or more support indices are identified based on greedy rules. The identified support indices are stored, and their effects are nullified from the obtained measurements. The number of steps required to complete the search depends on the sparsity order. After completing the search, the signal is recovered using the least-squares method as $\hat{\theta}_S = (\Phi_S^T \Phi_S)^{-1} \Phi_S^T \mathbf{y}$, where the submatrix $\Phi_S$ contains the column vectors of $\Phi$ indexed by the identified support $S$ of the signal. It has been shown that if the measurement matrix satisfies the RIP, greedy techniques perform similarly to convex relaxation techniques with overwhelming probability [6], [7].
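The greedy loop just described can be sketched as a textbook OMP (not the BSM-aided variant proposed later in the paper); the dimensions and noiseless setting are illustrative assumptions:

```python
import numpy as np

def omp(Phi, y, K):
    """Plain OMP: greedily select K support indices by correlation with the
    residual, then re-estimate coefficients by least squares each step."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    for _ in range(K):                                   # iteration count set by K
        idx = int(np.argmax(np.abs(Phi.T @ residual)))   # greedy correlation rule
        support.append(idx)
        Phi_S = Phi[:, support]
        theta_S, *_ = np.linalg.lstsq(Phi_S, y, rcond=None)  # least-squares step
        residual = y - Phi_S @ theta_S                   # nullify chosen indices
    x_hat = np.zeros(N)
    x_hat[support] = theta_S
    return x_hat

rng = np.random.default_rng(1)
N, M, K = 128, 48, 4
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(0.0, 1.0, K)
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))
x_hat = omp(Phi, Phi @ x, K)        # noiseless measurements; recovery typically exact
```

Note how the loop terminates after exactly $K$ iterations: this is precisely where knowledge of the sparsity order enters a greedy solver.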
Thus, the sparsity order is a vital parameter governing all three aspects of CS: sparse representation, CS acquisition, and CS recovery. In this paper, we assume that the signal is compressible on a known orthonormal basis. We focus on modeling the CS acquisition and recovery systems, followed by the derivation, analysis, and efficiency of the proposed SOE method.

III. THE PROPOSED CS ACQUISITION SYSTEM
The proposed CS acquisition system, as shown in Fig. 1, comprises the following elements: (A) the compressible signal, (B) the composite sensing matrix, (C) the measurement vector, (D) the SOE system, and (E) the measurement-size estimator (which sets the number of measurements, i.e., entries in the measurement vector $\mathbf{y}(n)$). The continuous-time compressible signal $x(t)$ is acquired using a composite sensing system comprising a total of $M(n)$ Gaussian and impulse basis functions (both basis functions are modulated a priori using the inverse of the compressible signal's representation basis). The basis functions independently multiply the compressible signal and perform integrate-and-dump every $T$ seconds to produce the measurement vector $\mathbf{y}(n)$.
Here, $t$ represents the continuous-time index, $nT$ represents the sampling time index, and $n$ represents the discrete time step. Using $\mathbf{y}(n)$ and $\Phi(n)$, the sparsity order $K(n)$ is estimated using the BSM-based SOE technique and is refined using the Kalman filter by exploiting the temporal correlation of the time-varying sparsity order. We know that the sparsity order $K(n)$ has to be estimated from the obtained measurements $\mathbf{y}(n)$. However, the measurement size $M(n)$ of $\mathbf{y}(n)$ must be estimated based on the sparsity order $K(n)$ before obtaining $\mathbf{y}(n)$ using (2). Thus, it is practically impossible to simultaneously estimate the sparsity order and the measurement size for the current time step $n$. Hence, the estimated sparsity order $\hat{K}(n)$ for the current time step $n$ is used to determine the number of measurements $M(n+1)$ for the next time step $n+1$, because naturally occurring time-varying compressible signals are quasi-static and exhibit strong temporal correlation. The models for (i) the compressible signal, (ii) its sparsity order, and (iii) the CS measurement of the proposed composite CS acquisition system are discussed in this section. The SOE system and the measurement-size estimator are explained in subsequent sections.

Consider a continuous-time dynamic compressible signal $x(t)$ that has an $N$-dimensional representation $\theta(t) = \{\theta_1(t), \theta_2(t), \ldots, \theta_N(t)\}$ on an orthonormal sparsifying basis $\Psi = \{\psi_1(t), \psi_2(t), \ldots, \psi_N(t)\}$ at time $t$, as given below,
where $\theta_i(t)$ is the coefficient of the $i$-th basis function $\psi_i(t)$. Using the basis $\Psi$ given in (6), the sparse representation $\theta(t)$ has only $K$ significant coefficients, and the remaining coefficients are insignificant. The significant coefficients have magnitudes well above a specified threshold and contain most of the energy of the compressible signal, thus defining the sparsity order $K$ of the signal. The collection of indices of these significant coefficients is the support set $S_\theta$. Any classical compression technique retains these $K$ significant coefficients as well as their support $S_\theta$ and leaves out the remaining $N - K$ insignificant coefficients to obtain the signal $\theta_K(t)$, which is a $K$-sparse approximation of $\theta(t)$. The energy of the error due to this approximation is bounded by $(1 - \eta_s)\|\theta(t)\|_2^2$, where $\eta_s$ is user defined and typically $\geq 0.95$; it determines the threshold for distinguishing between the significant and insignificant coefficients. Thus, the representation $\theta(t)$ can be written as the sum of two disjoint signals, a $K$-sparse signal $\theta_K(t)$ and an $(N - K)$-sparse signal $\theta_e(t)$, i.e., $\theta(t) = \theta_K(t) + \theta_e(t)$, where $\theta_K(t)$ contains the $K$ nonzero significant coefficients and $N - K$ zeros, and $\theta_e(t)$ contains $N - K$ nonzero insignificant coefficients and $K$ zeros.
Based on the compressible distributions given in [33], the insignificant coefficients are approximated as i.i.d. Gaussian noise such that $\theta_i(t) \sim \mathcal{N}(0, \sigma_e^2)$ when the $i$-th coefficient is insignificant. The significant coefficients are independent with different means and variances, i.e., $\theta_i(t) \sim \mathcal{N}(\mu_{s_i}, \sigma_{s_i}^2)$ when the $i$-th coefficient is significant.

B. COMPOSITE CS ACQUISITION MODEL
The CS acquisition model acquires $x(t)$ every $T$ seconds to obtain an $M$-dimensional measurement vector $\mathbf{y}(t)$ using $M$ composite sensing basis functions $\phi_i(t)$. The $i$-th measurement $y_i(nT)$, $n \in \mathbb{Z}$, is obtained as follows: a few of the $\phi_i(t)$ are generated using sparse impulse basis functions, and the rest with dense Gaussian basis functions.
Without loss of generality, the CS acquisition model is represented in the discrete domain from now on for clarity.
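In discrete form, the composite sensing matrix can be sketched by stacking sparse binary rows (each with a fixed number of ones) on top of dense Gaussian rows. The row counts, the Bernoulli parameter $p$, and the scaling below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 256
p = 0.95                              # Pr[entry = 0] in a BSM row (illustrative)
ones_per_row = int(round(N * (1 - p)))
M_bsm, M_gsm = 40, 60                 # rows devoted to SOE and to recovery (illustrative)

# Sparse binary rows: a fixed number of ones at random positions in each row
Phi_bsm = np.zeros((M_bsm, N))
for i in range(M_bsm):
    Phi_bsm[i, rng.choice(N, ones_per_row, replace=False)] = 1.0

# Dense Gaussian rows for RIP-friendly recovery
Phi_gsm = rng.normal(0.0, 1.0 / np.sqrt(M_gsm), (M_gsm, N))

Phi = np.vstack([Phi_bsm, Phi_gsm])   # composite sensing matrix
```

The binary block is multiplier-less in hardware (each measurement is a sum of a few signal samples), while the Gaussian block carries the recovery guarantee.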

STATISTICS OF BSM MEASUREMENTS
For brevity, we drop the time step notation $n$ in this subsection. From (17), it may be noted that the $i$-th BSM measurement $(y_B)_i$ of $\mathbf{y}$ is a random sum of significant coefficients, i.e., the sum runs over the indices in $\{S_{B_i} \cap S_\theta\}$, where $S_{B_i}$ is the support set of the $i$-th row of $\Phi_{BSM}$ and the set $\{S_{B_i} \cap S_\theta\}$ contains the indices common to both support sets. Since each $(\theta_K)_j \sim \mathcal{N}(\mu_{s_j}, \sigma_{s_j}^2)$, every BSM measurement $(y_B)_i$ is a random sum of Gaussians and is approximated as $(y_B)_i \sim \mathcal{N}(\bar{\ell}_s \mu_s, \bar{\ell}_s \sigma_s^2)$ (19), where $\bar{\ell}_s = K(1 - p)$ is the average number of significant coefficients contributing to $(y_B)_i$, $\mu_s = \frac{1}{K}\sum_{j \in S_\theta} \mu_{s_j}$ is the mean value of the significant coefficients, and $\sigma_s^2 = \frac{1}{K}\sum_{j \in S_\theta} \sigma_{s_j}^2$ is the average variance of the significant coefficients.
Here, the statistics $\mu_s$ and $\sigma_s^2$ are unknown a priori. Using the thresholding factor $\eta_s$ to separate the significant and insignificant coefficients, an estimate of $\sigma_e^2$ can be obtained from the measurements based on the concentration of measure. It may be noted that an estimate $\hat{\sigma}_w^2$ of the noise variance is available either from calibration, by acquiring measurements in the absence of any signal, or from the Particle Swarm Optimisation (PSO) method [34]. The PSO method computes the eigenvalues of the covariance matrix of the measurements and uses the Minimum Description Length (MDL) criterion to separate the eigenvalues corresponding to noise components and calculate the noise variance.

C. MODELING TIME-VARYING SPARSITY ORDER
The sparsity order $K(n)$ varies over time due to the continuous birth of new significant coefficients and the death of existing significant coefficients. Thus, the sparsity order variation can be modeled as a birth-death process, as given below.
As each row of sparse BSM has very few ones compared to the number of zeros, there exists a finite probability of obtaining measurements that have contributions only from insignificant coefficients and measurement noise. This finite probability is exploited here to estimate the sparsity order of the underlying compressible signal, which is the topic of the next section.
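The birth-death variation of the sparsity order described above can be sketched as a simple random walk on $K(n)$, where at each step a significant coefficient may be born and an existing one may die. The birth/death probabilities and the cap $K_{\max}$ are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_sparsity_order(K0, n_steps, p_birth=0.1, p_death=0.1, K_max=64):
    """Birth-death walk: at each step a new significant coefficient may be
    born (+1) and an existing one may die (-1), keeping K within [0, K_max]."""
    K = [K0]
    for _ in range(n_steps - 1):
        k = K[-1]
        k += (rng.random() < p_birth)                    # birth event
        k -= (rng.random() < p_death) if k > 0 else 0    # death event
        K.append(int(min(max(k, 0), K_max)))
    return np.array(K)

K_path = simulate_sparsity_order(K0=10, n_steps=200)
```

Because births and deaths change $K(n)$ by at most one per step, consecutive sparsity orders are strongly correlated, which is the temporal structure the Kalman filter later exploits.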

IV. BSM-BASED SPARSITY ORDER ESTIMATION (BSOE)
The sparse BSM $\Phi_{BSM}$ has $M_{BSM}$ rows such that a fixed number of $N(1 - p)$ ones is available in each row. Thus, the probability that any randomly chosen element $\phi_{ij}$ in any row of $\Phi_{BSM}$ equals 1 is $1 - p$. The number of rows $M_{BSM}$ is also time-varying, and it varies with the sparsity order $K(n)$.

A. CASE 1: TRUE SPARSE SIGNALS UNDER NOISELESS SETTINGS
For true sparse signals, the insignificant coefficients are zeros. In other words, $\theta_e(n) = \mathbf{0}$ and $\theta(n) = \theta_K(n)$. In the absence of measurement noise, the measurement $y_i(n)$ using the BSM is a random sum of the significant coefficients of $\theta(n)$. There exists a finite probability that $y_i(n) = (\Phi_{BSM})_i \theta(n) = 0$, i.e., the zeros of the $i$-th BSM row $(\Phi_{BSM})_i$ and of the sparse representation $\theta(n)$ align with the nonzero entries of the other vector, resulting in zero-valued measurements. The probability $P_0(n)$ of obtaining such zero-valued measurements can be used to estimate the sparsity order $K(n)$ for true sparse signals.
The following definition derives the sparsity order estimator $\hat{K}_{BSOE}$ for true sparse signals acquired under noiseless conditions. Derivation: Let $S_{B_i}$ be the support of the $i$-th row of the sparse BSM $\Phi_{BSM}$ and $S_\theta(n)$ be the support of the sparse representation $\theta(n)$. If the number of elements in the support $S_\theta(n)$ is $K(n)$, then the probability $P_\ell(n)$ that the support sets $S_{B_i}$ and $S_\theta(n)$ have $\ell$ common elements is Binomial distributed and is given as $P_\ell(n) = \binom{K(n)}{\ell}(1 - p)^\ell \, p^{K(n) - \ell}$ (23), where $0 \leq \ell \leq K(n)$. As $P_\ell(n)$ is a function of $K(n)$, $p$, and $\ell$, an estimate of $K(n)$ can be obtained from it. When the support sets $S_{B_i}$ and $S_\theta(n)$ have no common element, i.e., $\ell = 0$, (23) reduces to $P_0(n) = p^{K(n)}$ (24), which gives the probability of having nonoverlapping sets $S_{B_i}$ and $S_\theta(n)$ and therefore of a zero-valued measurement. The sparsity order derived using (24) is $K(n) = \log P_0(n) / \log p$ (25). Thus, a simple sparsity order estimator $\hat{K}_{BSOE}(n)$ is obtained by estimating the probability $P_0(n)$. The probability $P_0(n)$ is Maximum Likelihood (ML) estimated as the proportion of zero-valued measurements $y_i(n)$ in a total of $M_{BSM}(n)$ measurements, i.e., $\hat{P}_0(n) = \frac{1}{M_{BSM}(n)} \sum_{i=1}^{M_{BSM}(n)} \mathbb{1}[y_i(n) = 0]$ (26). Substituting $\hat{P}_0(n)$ in (25) and rounding to the nearest integer, the sparsity order estimator $\hat{K}_{BSOE}(n)$ is $\hat{K}_{BSOE}(n) = \lfloor \log \hat{P}_0(n) / \log p \rceil$ (27).
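The derivation of (24)-(27) can be checked numerically. The sketch below builds BSM rows with a fixed number of ones (which the i.i.d. Bernoulli analysis approximates), counts zero-valued measurements, and inverts $P_0 = p^K$; the dimensions, $p$, and coefficient statistics are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

N, K, p = 512, 12, 0.97
M_bsm = 8000                              # many rows so the proportion concentrates
ones_per_row = int(round(N * (1 - p)))    # fixed row weight, approx. N(1 - p)

# True sparse, noiseless signal: K significant coefficients, rest exactly zero
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.normal(1.0, 0.3, K)

Phi_bsm = np.zeros((M_bsm, N))
for i in range(M_bsm):
    Phi_bsm[i, rng.choice(N, ones_per_row, replace=False)] = 1.0

y = Phi_bsm @ theta

# (26)-(27): ML-estimate P0 as the fraction of zero measurements, invert (24)
P0_hat = np.mean(np.abs(y) < 1e-12)
K_hat = int(np.round(np.log(P0_hat) / np.log(p)))
```

For these parameters roughly 70% of the BSM rows miss the support entirely, and the estimate typically lands within a couple of units of the true $K = 12$.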

B. CASE 2: COMPRESSIBLE SIGNALS UNDER NOISE SETTINGS
The procedure for obtaining the sparsity order estimator for compressible signals is similar to the above procedure for true sparse signals, with certain changes in the computation of the probability $P_0(n)$.
For compressible signals in the presence of additive noise, both $\theta_e(n)$ and $\mathbf{w}(n)$ are nonzero. Thus, nonoverlapping sets $S_{B_i}$ and $S_\theta(n)$ result in measurements without contributions from the significant coefficients, i.e., $y_i(n) = (y_e)_i(n) + w_i(n) \neq 0$, and here $P_0(n)$ is the probability of obtaining such a measurement devoid of contributions from the significant coefficients, which is the same as given in (24). Observe that here $P_0(n)$ corresponds to obtaining measurements lying in a bounded interval centered around the origin. When an $i$-th measurement $y_i$ is devoid of significant coefficients, i.e., when all the 1's in the $i$-th row span only insignificant components, the variance of $y_i$ equals the variance of the $N(1 - p)$ insignificant components plus the measurement noise variance, i.e., $N(1 - p)\sigma_e^2 + \sigma_w^2$. Hence, the $i$-th measurement has a Gaussian pdf whose variance is $N(1 - p)\sigma_e^2 + \sigma_w^2$. Over 99% of the area of this pdf is covered by $\pm 3\sqrt{N(1 - p)\sigma_e^2 + \sigma_w^2}$. Hence, the bounding threshold $\tau$ is set as $\tau = 3\sqrt{N(1 - p)\sigma_e^2 + \sigma_w^2}$. Thus, the probability $P_0(n)$ is estimated as the proportion of measurements, out of $M_{BSM}(n)$ measurements, with a magnitude less than the threshold $\tau$. However, there exists a possibility that two or more significant coefficients are also present in the sum $y_i(n)$ while still satisfying the above condition, i.e., $|y_i(n)| = |(y_s)_i(n) + (y_e)_i(n) + w_i(n)| \leq \tau$, akin to false alarms in detection problems. Such conditions result in upward-biased estimates of $P_0(n)$. Therefore, the necessary correction is to subtract the probability $P_f(n)$ of having two or more significant coefficients whose sum is insignificant. By extending Definition 1, the BSOE for compressible signals is given by the following definition.
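The thresholded variant can be sketched as below, replacing the exact-zero test with the $\pm\tau$ interval; the bias correction $P_f(n)$ is omitted for brevity, and all dimensions and statistics are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

N, K, p = 512, 12, 0.97
M_bsm = 8000
r = int(round(N * (1 - p)))               # ones per BSM row

sigma_e, sigma_w = 0.01, 0.01             # insignificant-coefficient and noise std
theta = rng.normal(0.0, sigma_e, N)       # compressible: all coefficients nonzero
theta[rng.choice(N, K, replace=False)] = rng.normal(2.0, 0.2, K)  # significant ones

Phi = np.zeros((M_bsm, N))
for i in range(M_bsm):
    Phi[i, rng.choice(N, r, replace=False)] = 1.0

y = Phi @ theta + rng.normal(0.0, sigma_w, M_bsm)

# Bounding threshold tau = 3 * sqrt(N(1-p) sigma_e^2 + sigma_w^2)
tau = 3 * np.sqrt(N * (1 - p) * sigma_e ** 2 + sigma_w ** 2)
P0_hat = np.mean(np.abs(y) < tau)         # proportion inside +/- tau (P_f correction omitted)
K_hat = int(np.round(np.log(P0_hat) / np.log(p)))
```

Because the significant coefficients here sit far above $\tau$, false alarms are rare and the uncorrected estimate still lands near the true $K = 12$; with weaker significant coefficients the $P_f(n)$ correction becomes necessary.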
The probability q_{ℓ,s}(n) depends on μ_s and σ_s², which are functions of K(n). Thus, both q_ℓ(n) and q_{ℓ,s}(n) are functions of K(n), whose estimation is the very objective of this paper; that is, to estimate K(n), an estimate of q(n) is required, but q(n) itself depends on K(n). To address this further chicken-and-egg problem, q_ℓ(n) can be precomputed for all values of ℓ and K from 0 to K_max, where K_max, the maximum possible sparsity order of the compressible signal, is known beforehand. In the case of q_{ℓ,s}(n), integrals of the type given in (32) can be evaluated using a Maclaurin expansion to the desired level of accuracy.
Considering the pdf given in (34), estimating q(n) becomes an optimization problem: given the measurement vector y(n), find the optimum triple {K(n), μ_s(n), σ_s²(n)} that yields the probability p₀(n). A method for the joint estimation of these quantities from the obtained measurements is discussed next.

C. JOINT ESTIMATION OF STATISTICS OF SIGNIFICANT COEFFICIENTS AND SPARSITY ORDER
The estimates of μ_s(n) and σ_s²(n) for the significant coefficients in (21) are as follows.
Here, the rounding-to-the-nearest-integer operation in K̂_BSOE(n) is omitted to simplify the analysis. Using a first-order Taylor series approximation, VAR{log(p̂₀(n))} ≈ VAR{p̂₀(n)} / (E{p̂₀(n)})². (38) The mean μ_{p̂₀}(n) and variance σ²_{p̂₀}(n) of p̂₀(n) are computed using (35). Remark: It is observed from (41) that the estimator K̂_BSOE(n) is sensitive to the randomness in the estimate p̂₀(n). Hence, for any fixed ρ, the estimator K̂_BSOE(n) must be stabilized against this inherent sensitivity. Its robustness can be improved either by increasing the number of BSM measurements M_BSM(n) or by utilizing an appropriate model for the time-varying sparsity order K(n). Here we use a discrete Markov model to characterize K(n) and a Kalman filter, which is the optimal Linear Minimum Mean Square Error (LMMSE) estimator, to refine K̂_BSOE(n).
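The delta-method step in (38) can be checked numerically. This is a sketch with illustrative parameters (M_BSM = 200 trials, p₀ = 0.8), not the paper's simulation setup: p̂₀ is modeled as a binomial proportion, and the approximation VAR{log p̂₀} ≈ VAR{p̂₀}/E{p̂₀}² is compared against a Monte-Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
# Model p0_hat as a binomial proportion over M_BSM Bernoulli trials.
M_bsm, p0 = 200, 0.8
p0_hat = rng.binomial(M_bsm, p0, size=200_000) / M_bsm
# First-order Taylor (delta-method) approximation, as in (38):
approx = p0_hat.var() / p0_hat.mean() ** 2
# Direct Monte-Carlo estimate of VAR{log p0_hat}.
empirical = np.var(np.log(p0_hat))
```

For these parameters the two quantities agree to within a few percent, confirming that the first-order approximation is adequate when M_BSM·p₀ is large.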

V. KALMAN FILTERING OF SPARSITY ORDER ESTIMATE OBTAINED FROM BSOE TECHNIQUE
The observation model in (42) relates the raw estimate K̂_BSOE(n) to the true sparsity order K(n), where v(n) denotes a zero-mean random statistical fluctuation. Observe that the nonlinear relation in (42) forbids one from applying the Kalman filter directly to obtain an estimate of K(n). To overcome this problem, we apply the logarithm and arrive at a modified form of the observation model, given below, with state-dependent noise v(n). (Footnote: for a continuous and differentiable function g(p̂₀(n)), the Taylor series approximation to its variance is VAR{g(p̂₀(n))} ≈ [g′(E{p̂₀(n)})]² VAR{p̂₀(n)}.)
The procedure for estimating the sparsity order using the BSOE followed by Kalman filtering is summarized in Algorithm 1.
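The refinement stage can be sketched with a scalar Kalman filter. This is a minimal illustration only: it assumes a simple random-walk state model K(n) = K(n−1) + u(n) with process variance q and an additive observation noise with variance r, whereas the paper uses a discrete Markov model and state-dependent noise statistics.

```python
import numpy as np

def kalman_refine(k_raw, q=1.0, r=25.0):
    """Scalar Kalman filter smoothing a noisy sparsity-order sequence.

    Sketch under assumed statistics: random-walk state (variance q),
    raw BSOE estimates treated as observations K(n) + v(n) (variance r).
    """
    k_est, p = k_raw[0], r                 # initialize with first raw estimate
    out = []
    for z in k_raw:
        p = p + q                          # predict: random-walk state model
        g = p / (p + r)                    # Kalman gain
        k_est = k_est + g * (z - k_est)    # update with raw BSOE estimate
        p = (1.0 - g) * p                  # posterior variance
        out.append(k_est)
    return np.array(out)
```

On a constant true sparsity order corrupted by zero-mean noise, the filtered sequence has markedly lower variance than the raw estimates, which is the effect exploited by KBSOE.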

VI. OPTIMUM VALUE OF ρ FOR MINIMIZING THE VARIANCE OF THE SPARSITY ORDER ESTIMATOR
The parameter ρ defines the sparsity of the BSM. It is crucial, as it determines both the SOE performance and the recovery performance. It is observed from (41) that VAR{K̂_BSOE(n)} is a function of ρ, and the optimum value of ρ minimizing VAR{K̂_BSOE(n)} is computed by solving (53). It is observed that ρ must increase as K(n) increases, i.e., the BSM must be made very sparse to achieve the minimum variance of K̂_BSOE(n). However, as ρ increases, the recovery performance degrades, because the number of measurements spanning the significant coefficients decreases. From (17), each measurement y_i(n) is the sum of (i) random sums of significant and insignificant coefficients and (ii) noise. As ρ approaches 1, the number of ones in each row reduces toward zero, so that with high probability more measurements contain contributions from insignificant coefficients and noise alone, satisfying |y_i(n)| ≤ 3√(N(1−ρ)σ_z² + σ_w²), which is the condition required for better SOE. At the same time, such measurements are not helpful for recovery, as they carry no contributions from significant coefficients; only the remaining measurements, which do contain significant coefficients, help recovery. There is thus a tradeoff, governed by ρ, between the number of BSM-based measurements used to estimate the sparsity order K(n) and the number of remaining BSM-based measurements useful for recovery. Hence, ρ must be chosen such that the errors in both SOE and recovery are minimized.
Using the fact that ρ ≈ 1, (53) can be simplified as ρ ≈ exp(−1.6/K). (54) As the sparsity order K(n) for the current time step is not known, ρ is computed from the previous estimate K̂(n−1), i.e., ρ ≈ exp(−1.6/K̂(n−1)). The computed value of ρ is used to construct the BSM.
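The simplification in (54) is trivially cheap to evaluate at each time step; a minimal sketch follows (`bsm_sparsity` is an illustrative name):

```python
import math

def bsm_sparsity(k_prev):
    """rho ~= exp(-1.6 / K_hat(n-1)), per the simplification in (54)."""
    return math.exp(-1.6 / k_prev)
```

For K̂(n−1) = 250 this gives ρ ≈ 0.9936, matching the value used in the simulations of Section IX.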

A. CONSTRUCTION OF BSM
The BSM is constructed such that its rows span all N coefficient positions, so that the statistics μ_s and σ_s² of the significant coefficients, distributed across the N positions, can be estimated. As each row of the BSM has N(1−ρ) ones, the BSM is designed by horizontally stacking ⌈N(1−ρ)⌉ identity matrices of dimension M_BSM × M_BSM, where M_BSM = ⌈1/(1−ρ)⌉. As the structure of the BSM changes with K̂(n), it would ordinarily have to be transmitted to the recovery process at every time step along with the obtained measurements. However, the deterministic construction of the BSM avoids this overhead, as the recovery process performs the same sparsity order estimation from the measurements obtained during acquisition.

1) CASE 1: TIME-INVARIANT SPARSITY ORDER
The reduced variance of the Kalman-filtered estimate K̂_KBSOE is verified for different time-invariant sparsity order values K, as shown in Table II. Here, at every time step, for a given sparsity order K, only the support T of the significant coefficients is varied.

2) CASE 2: TIME-VARYING SPARSITY ORDER
A simulation example establishing the improvement from Kalman filtering for a time-varying sparsity order K(n) is shown in Fig. 2. In the simulation setup, the sparsity order is varied using a Markov process controlled by the probabilities Pr[ΔK(n) = 0] = 0.8, Pr[ΔK(n) < 0] = 0.1, and Pr[ΔK(n) > 0] = 0.1. A time-varying compressible signal is generated with K(0) = 100 significant components (above a certain threshold), the rest being insignificant components (below the threshold) obeying a power-law decay. The number of significant components is allowed to vary, i.e., K(n), n > 0, is generated by the Markov process and represents the true sparsity order. Fig. 2 shows that the Kalman-filtered estimates K̂_KBSOE(n) track the true sparsity order with reduced error compared to the estimates K̂_BSOE(n) obtained using the BSOE alone.
Author Name: Preparation of Papers for IEEE Access (February 2017) VOLUME XX, 2017 13

VII. PRACTICAL IMPLEMENTATION ASPECTS
The practical real-time composite CS acquisition hardware for the proposed SOE method is shown in Fig. 3. There are M identical and independent modulator circuits working in parallel.
The hardware components of a modulator circuit are (i) a multiplexer to select between the sensing basis g_i(t) (the continuous-time version of the rows of the GSM multiplied by Ψ⁻¹) and the sensing basis b_i(t) (the continuous-time version of the rows of the BSM multiplied by Ψ⁻¹), and (ii) a multiplier and an Integrate-and-Dump (I&D) circuit that multiply the compressible signal by the sensing basis and integrate the product for T seconds to output a measurement.
The select signal s_i(t) of the multiplexer takes the value 0 for i ≤ M_BSM, selecting the basis b_i(t), and 1 for M_BSM < i ≤ M, selecting the basis g_i(t). From the obtained measurements, the proposed KBSOE method estimates the sparsity order and determines s_i(t) and M for the next T seconds.
The proposed architecture is similar to the practical RMPI hardware [36], differing only in the multiplexer and select signal used to multiplex the sensing bases b_i(t) and g_i(t). The real-time determination of the total number of measurements from the estimated sparsity order is explained as follows.

A. DETERMINING THE NUMBER OF MEASUREMENTS FOR THE COMPOSITE SENSING SYSTEM
As discussed in Section II, simultaneously estimating the sparsity order K(n) from the measurement vector and the size of the measurement vector, i.e., the number of measurements M(n), from the sparsity order results in a chicken-and-egg problem. Hence, we compute the number of measurements from the previously estimated sparsity order K̂(n−1) and obtain the measurement vector y(n). After obtaining the measurement vector, the sparsity order K(n) is estimated to determine the number of measurements M(n+1) for the next time step. As the composite sensing system is built using the BSM and the GSM, the number of measurements required for each is computed as follows.

2) NUMBER OF GSM-BASED MEASUREMENTS
The number of GSM-based measurements M_GSM(n) is computed using (2) and is given in (58).

B. PROCEDURE FOR DETERMINING THE NUMBER OF MEASUREMENTS
When acquisition starts at n = 0, the previous estimate K̂(−1) is not available. Hence, it is assumed that K̂(−1) = K_max, and the initial number of measurements M(0) is computed from K_max. Once the M(0) measurements are obtained, the proposed SOE method provides the estimate K̂(0), which in turn determines M(1), and this process continues. Thus both the number of measurements M(n) and the sparsity order are estimated sequentially, i.e., K̂(n−1) determines M(n), and the M(n) measurements provide the estimate K̂(n).
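The sequential loop can be sketched as follows. All names here are illustrative: `n_measurements` stands in for the budget rules of (57)-(58), `estimate_sparsity` for the KBSOE estimator, and the "sensing" step is a placeholder slice rather than an actual matrix product.

```python
# Sketch of the sequential acquisition loop: K_hat(n-1) fixes M(n),
# and the M(n) measurements yield K_hat(n).
def acquire_stream(segments, K_max, n_measurements, estimate_sparsity):
    k_prev = K_max                     # no prior estimate at n = 0
    estimates = []
    for x in segments:
        m = n_measurements(k_prev)     # measurement budget from K_hat(n-1)
        y = x[:m]                      # placeholder for the actual sensing
        k_prev = estimate_sparsity(y)  # K_hat(n), reused at the next step
        estimates.append(k_prev)
    return estimates
```

The point of the sketch is the data dependence: each step's budget is computed before that step's estimate exists, which is how the chicken-and-egg cycle is broken.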

C. IMPACT OF COMPOSITE SENSING ON THE HARDWARE COMPLEXITY
As each measurement is obtained using an independent hardware component, the hardware complexity is directly proportional to the number of measurements. A sparsity-aware CS system obtains M(n) = M_GSM(n) measurements, whereas the proposed sparsity-unaware CS system obtains M(n) = M_BSM(n) + M_GSM(n) measurements; the M_BSM(n) measurements are thus the overhead of the proposed system. However, considering (57) and (58), M_BSM(n) < 0.1·M_GSM(n), i.e., the impact of the additional BSM measurements is minimal, as shown in Fig. 4. In addition, we show in subsequent sections that the BSM measurements are useful for CS recovery, as they provide an initial estimate of the support of the underlying compressible signal. This initial estimate reduces the number of iterations required by the greedy CS recovery algorithm, yielding faster recovery. Thus, the cost of the additional BSM measurements is largely compensated during recovery.


VIII. THE PROPOSED CS RECOVERY SYSTEM
We propose a BSM-aided CS recovery system, as shown in Fig. 5. The composite sensing matrices used during recovery are the same as those used during CS acquisition. Our proposed recovery system uses the BSM-based measurements along with the GSM-based measurements; it differs from the conventional CS recovery system, which uses GSM-based measurements alone. The BSM measurements are used both for the SOE and for recovery. The SOE technique used during recovery is the same as that used during acquisition. The estimated sparsity order K̂(n) is the input to the proposed BSM-Aided OMP (BAOMP) recovery algorithm. We choose OMP for recovery because it is simple to implement and has robust recovery performance [6]. In OMP, the probable support indices are identified one by one in each iteration; as a K-sparse compressible signal has K support indices, there are K iterations. As the BSM-based measurements provide a few initial estimates of the support indices, the OMP algorithm needs to identify only the remaining support indices.

A. THE PROPOSED BAOMP METHOD
Some BSM measurements have magnitudes greater than the threshold τ = 3√(N(1−ρ)σ_z² + σ_w²) (the threshold used in Section IV for identifying measurements having no contributions from the significant coefficients). Each such measurement, with |y_i^BSM| > τ, contains at least one significant component's contribution, and the corresponding BSM row indicates the probable support indices through the locations of its ones. For example, suppose the dimension of the BSM is 10 × 250, constructed by horizontally stacking 25 identity matrices of dimension 10 × 10. If the first measurement has a magnitude greater than the threshold, then the compressible signal may have significant components located at position 1, or 11, or 21, and so on up to position 241. The union of such candidate positions forms an initial support estimate containing part of the original support of the compressible signal.
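The support-hint extraction can be sketched as follows; `candidate_support` is an illustrative helper operating on the BSM rows, not the paper's exact routine.

```python
import numpy as np

def candidate_support(bsm, y_bsm, tau):
    """Candidate support indices from BSM measurements exceeding tau.

    A measurement above the threshold must contain at least one
    significant coefficient; the ones in its BSM row mark the candidate
    positions used to warm-start OMP (sketch).
    """
    hot = np.abs(y_bsm) > tau
    # Union of the column positions of the ones in the "hot" rows.
    return set(np.flatnonzero(bsm[hot].any(axis=0)))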

IX. PERFORMANCE COMPARISON OF PROPOSED KBSOE AND BAOMP METHODS WITH OTHER EXISTING METHODS
The proposed KBSOE method is used for SOE during CS acquisition and recovery, and the proposed BAOMP method is used for CS recovery. In this section, performance measures such as the SOE Error (SOEE), NRE, computational complexity, and CR of the proposed methods are compared with those of existing methods using synthetic signals and real-world vibration signals. The Signal-to-Noise Ratio (SNR) used when evaluating the SOEE and NRE performance is calculated at the acquisition side. For all the simulations shown here, a Windows 7 PC with a 3 GHz processor and 4 GB RAM was used.
In the simulation, an N = 2500-dimensional synthetic compressible signal is generated, and the sparsity order is kept constant at K = 250. At every time step, the number of BSM measurements taken is ⌈1/(1−ρ)⌉ ≈ 157 for the optimal value ρ = 0.9936. For the Lopes method, Cauchy sensing-matrix-based measurements are obtained at each time step to compute the ℓ₁ norm for the SOE; for the trace-based method, GSM-based measurements are obtained. The total number of time steps considered is N/2. Thus, the total number of measurements obtained by the KBSOE method is far smaller than the number obtained by the other methods. Throughout the simulation, the support set remains the same, whereas the amplitudes of the significant coefficients vary according to a normal distribution. The estimated sparsity order at every step is averaged for the Lopes, 2-GMM, TS-ACSS, SPAMP, RTCE, and proposed KBSOE methods; for the trace-based method, only a single estimate is available after all its measurements are obtained. For the SPAMP method, the weak-matching parameter is chosen as 0.5 and the estimation factor is kept at 0.2 for better results. For the RTCE method, the correct-rate parameter is chosen optimally according to the SNR.
The performance is evaluated in terms of the SOEE for different SNR values, as shown in Fig. 7. The simulation results show that the KBSOE method has better SOEE performance than the other methods. The performance of the Lopes method is inferior and substantially invariant to the SNR, owing to its use of a random Cauchy sensing matrix, whose entries have infinite variance. The 2-GMM SOE method requires knowledge of the energy of the significant coefficients to construct a sparse Gaussian matrix, which is seldom known a priori, and its Expectation-Maximization (EM) algorithm adds to the complexity. The trace-based method requires a very large number of measurements, which is expensive compared to the other existing methods. The TS-ACSS performs a two-step SOE: the first step produces a coarse estimate, whose accuracy deteriorates for larger sparsity order values, and the second step refines the coarse estimate with the help of signal recovery, which is a time-consuming process; moreover, the first step is accurate only with additional measurements. Compared with the other existing methods, the proposed KBSOE method requires about three times fewer measurements while delivering better performance.

1) PERFORMANCE COMPARISON WITH OTHER BSM METHODS FOR DIFFERENT SPARSITY ORDER VALUES
A set of synthetic compressible signals of dimension N = 5000 with sparsity order values from K = 50 to K = 400 is generated. The generated signals are acquired and recovered using: (i) the proposed composite sensing-KBSOE-BAOMP method, (ii) random sparse BSM sensing followed by the BAOMP method, and (iii) DBBD matrix sensing [14] followed by the modified Kronecker-based CS recovery [15]. For a given sparsity order of the compressible signal, the proposed composite sensing matrix is designed using (57) and (58). The random sparse BSM is designed with the same ρ value as the proposed deterministic BSM, but with the ones in each row randomly distributed. Both the random sparse BSM and DBBD methods obtain a fixed M = 1800 measurements for K < 300 and a fixed M = 2500 measurements for 300 ≤ K ≤ 400, which is greater than the number of measurements obtained by composite sensing for any given K. The NRE performance for the 10 dB SNR setting is shown in Fig. 8.
The simulation results show that the proposed composite sensing matrix outperforms both the random sparse BSM and DBBD sensing matrices, with fewer measurements, for all given sparsity order values. The probability of missing a significant component during acquisition is higher for the random sparse BSM, resulting in poor NRE performance; similarly, the high probability of incorrect support selection during recovery results in inferior NRE performance for the DBBD matrix-based method. The better performance of the composite sensing matrix is due to its use of the GSM, which has a better RIP. When K < 300, the lower variance of KBSOE results in a minimal and nearly invariant NRE. For the random sparse BSM and DBBD methods, the NRE remains invariant to K because M = 1800 measurements are adequate for K < 300. When K ≥ 300, the NRE performance starts degrading as the variance of KBSOE increases for the composite sensing matrix; even so, the NRE of KBSOE remains better than that of the other BSM-based methods. For K ≥ 300, the sensing matrix becomes too sparse for the random sparse BSM and DBBD methods, degrading their NRE performance; they would require more measurements (M > 2500) for improved NRE.

2) PERFORMANCE COMPARISON FOR DIFFERENT SNR VALUES
A time-varying synthetic compressible signal with dimension N = 2500 is simulated for different SNR settings. Throughout the simulation, the sparsity order is kept constant at K = 250 with varying support and amplitudes. After acquisition and recovery using different CS methods, the NRE performance is compared.
The simulation results in Fig. 9 show (i) the improved performance of the proposed KBSOE-based SOE followed by BAOMP-based recovery compared to the 2-GMM-based SOE followed by OMP-based recovery, the sparsity adaptive matching pursuit algorithms (AS-SaMP, OAMP, MCoSaMP, and SAStOMP), and DBBD-Kronecker-based CS recovery, and (ii) performance comparable to the GSM followed by Basis Pursuit, especially under low SNR conditions. The high probability of support estimation errors in the DBBD-Kronecker method results in poor NRE performance, and the inaccuracy in estimating the signal statistics degrades the 2-GMM method. The sparsity adaptive matching pursuit methods are fed the optimal parameters for sparsity order K = 250; thus, their NRE performance is similar to that of the proposed method. However, tuning these parameters in real time is challenging for a time-varying sparsity order. Although Basis Pursuit has better NRE performance, its higher computational complexity and recovery time make it unsuitable for real-time recovery.

C. PERFORMANCE COMPARISON USING RECOVERY RUNNING TIME
A set of synthetic compressible signals of dimension N = 2500 with sparsity order values from K = 50 to K = 400 is generated, and the SNR is kept at 10 dB. The generated signals are acquired and recovered using (i) the proposed composite sensing followed by the KBSOE-BAOMP method, (ii) GSM sensing followed by the sparsity-aware OMP method, and (iii) GSM sensing followed by the sparsity adaptive matching pursuit algorithms: AS-SaMP, OAMP, MCoSaMP, and SAStOMP. The running time for recovering the compressible signal is shown in Fig. 10, which shows that KBSOE-BAOMP is faster and outperforms all existing methods for all sparsity order values.

D. PERFORMANCE COMPARISON USING REAL WORLD VIBRATION SIGNALS
CS has recently been investigated for vibration signals in [37], [38], [39], [40], [41], where it has been shown that vibration signals can be acquired efficiently using CS. Hence, the proposed KBSOE method is applied to real-world vibration signals available from Mide Technologies [42] for performance evaluation.
Here, the vibration signals acquired during the climb of an aircraft and during the transit of a semi-trailer truck are analyzed.


Fig. 11. Vibration signal measured outside an aircraft during its climb, and its time-frequency spectrogram, which shows only a few significant coefficients with a time-varying sparsity order.
Fig. 12. Time-varying sparsity order of 1-second segments of the vibration signal on different basis functions; the DCT sparsity orders are lower than those of the DFT and DWT.
A snapshot of the smoothly varying vibration signal, acquired using an accelerometer mounted on the surface of an aircraft during its climb, is shown in Fig. 11(a). The signal is sampled at 2500 samples per second (sps), and its time-frequency spectrogram is shown in Fig. 11(b). The spectrogram reveals that the vibration signal is compressible in the frequency domain and that the number of significant frequency components varies with time. Fig. 12 shows the time-varying sparsity order of the vibration signal when it is represented using different orthonormal transforms. The Discrete Cosine Transform (DCT) represents the vibration signal with a lower sparsity order than the Discrete Fourier Transform (DFT) and the Daubechies-4 and Coiflet wavelet-based Discrete Wavelet Transforms (DWT). Hence, the DCT is used here as the sparse representation matrix Ψ when analyzing the vibration signal.
Similarly, the entire vibration signal acquired from a semi-trailer truck during transit is shown in Fig. 13 along with its frequency spectrum. It is observed that this vibration signal is also compressible.
These vibration signals are analyzed using the methods listed in Table III. The 2-GMM method is provided with knowledge of the energy of the significant coefficients for constructing its sparse GSM. The estimated sparsity order is the input for the CS recovery of the vibration signals: the proposed BAOMP method for KBSOE and the OMP algorithm for the 2-GMM method. In the traditional CS method, the OMP algorithm is provided with the original sparsity order value. For the SPAMP method, the weak-matching parameter is chosen as 0.5 and the estimation factor is kept at 0.2 for better results. For the vibration signal measured outside the aircraft, each analysis segment contains 2500 samples; for the vibration signal measured from the semi-trailer truck, each analysis segment contains 5000 samples. Each analysis segment is normalized to unit energy. For the KBSOE, 2-GMM, and SPAMP methods, the sparsity order is estimated for each analysis segment. Based on the estimated sparsity order K̂(n), M(n+1) measurements are obtained for recovery at the (n+1)-th time step. Using these measurements, the vibration signal is reconstructed using the respective recovery methods. Subsequently, the NRE performance is compared with the traditional random GSM-based CS method, the DBBD-Kronecker-based CS method, and the DCT-based compression method. For the traditional CS and DBBD methods, M = 4K_max measurements are obtained.
The NRE results for both vibration signals, for every 20 analysis segments, are plotted in Fig. 14 and Fig. 15. A snapshot of the reconstructed vibration signal obtained using the KBSOE method followed by the BAOMP method, for the vibration signal acquired outside the aircraft during its climb, is shown in Fig. 16 along with the original signal. During recovery, the BAOMP method denoises the insignificant coefficients; hence, the reconstructed signal is smoother over time than the original signal. Although the proposed KBSOE-based CS method exhibits a slight degradation in NRE compared to the classical DCT method, its hardware complexity for acquiring a compressible signal is O(MN) (due to the M × N sensing matrix), which is less than the DCT method's O(N²) (due to the N × N DCT matrix). Thus, it requires fewer hardware resources, less storage, and less power for the acquisition of time-varying compressible signals, at the cost of a tolerable degradation in recovery performance relative to the DCT method. It also provides a good CR, as it obtains a minimal number of measurements based on the estimated sparsity order, compared with the other CS methods.
The existing SOE methods are not optimal during acquisition and result in a lower CR, as they either (i) obtain a fixed number of measurements based on the conservative assumption of the maximum sparsity order K_max, or (ii) obtain an additional set of measurements for the SOE.

E. COMPUTATIONAL COMPLEXITY COMPARISON
The computational complexities of the existing SOE and recovery methods are compared with those of the proposed KBSOE and BAOMP methods. The computational complexity of the SOE methods depends on the number of measurements and iterations involved. The KBSOE method estimates the sparsity order directly from M_BSM < M measurements. The computational complexity of the proposed KBSOE depends on (i) the computation of the probability p₀, (ii) the iterations involved in the joint estimation of the statistics of the significant coefficients and the sparsity order, and (iii) the Kalman Filtering (KF) step. The probability p₀ is estimated by identifying, among the M_BSM measurements, those devoid of significant coefficients; this identification requires O(M_BSM) operations. The joint estimation of the probability q, the statistics of the significant coefficients, and the sparsity order using the BSOE requires N(1−ρ) computations per iteration. Simulations show that at most ten iterations are required for the joint estimation to converge. Since the number of iterations is fixed, independent of the M_BSM measurements, the joint estimation step costs O(N(1−ρ)). The KF used to refine the BSOE estimate is scalar, with complexity O(1), which is less than the complexity of the BSOE. Thus, the overall computational complexity of KBSOE is O(M_BSM + N(1−ρ)).
Other existing SOE methods use all M measurements for the SOE. Given M measurements, the trace-based method requires complex matrix operations with a high computational cost. The computational complexity of the EM algorithm in the 2-GMM method likewise scales with M. The Lopes method computes the median of the Cauchy-sensed measurements and the mean of the energy of the Gaussian-sensed measurements, with a computational complexity of O(M log M), to estimate the ℓ₁ and ℓ₂ norms for the SOE. Since M_BSM + N(1−ρ) ≪ M, the proposed KBSOE method is very advantageous for the SOE in terms of computational complexity compared to the other existing methods.
The computational complexity of the OMP- and SPAMP-based CS recovery methods grows with the full number of iterations K̂, whereas the proposed BAOMP method needs only K̂ − |T₁| iterations, where |T₁| is the size of the initial support estimate provided by the BSM measurements. As K̂ − |T₁| < K̂, the BAOMP method requires fewer computations.
The runtime complexities of the proposed KBSOE and BAOMP methods are evaluated and compared with those of other existing SOE methods on the real-world aircraft vibration signal. The average runtimes for the SOE and recovery of each analysis segment of the vibration signal are listed in Table IV and Table V, respectively. The runtimes of the proposed KBSOE and BAOMP methods are significantly lower than those of the other existing methods. Considering the acquisition of fewer measurements, the reduced computational and runtime complexity, and the better recovery performance, the proposed methods are strong candidates for the efficient acquisition, compression, and recovery of vibration and similar compressible signals.

X. DISCUSSIONS ON RESULTS
The following discussion shows how the proposed methods enhance CS.

A. KBSOE DURING CS ACQUISITION
The major problem in CS is estimating the sparsity order K of the compressible signal so as to reduce the number of measurements M. Real-time applications require that the sparsity order be estimated from the measurements themselves. Our proposed method of composite sensing followed by KBSOE solves this chicken-and-egg problem, in which the sparsity order determines the number of measurements while the measurements determine the sparsity order. We combined a sparse BSM and a dense GSM for the composite sensing of compressible signals during CS acquisition. As the sparse BSM has very few ones in each of its columns, it exhibits a weak RIP and is unsuitable for recovering the compressible signal from limited measurements; however, we exploited this weak RIP in Section IV for the SOE. The challenge in designing the BSM is to estimate the time-varying sparsity order of the compressible signal from limited measurements. Our solution is a BSM that adjusts its dimensions and entries according to the time-varying sparsity order. The proposed BSM is deterministic, suiting practical implementation of CS acquisition and recovery systems.
We derived a blind BSOE method that does not require any a priori knowledge of the signal and noise statistics; these are estimated from the statistics of the measurements and the BSM entries. We then proposed KBSOE, i.e., Kalman filtering, to reduce the variance of the BSOE and improve the SOE performance. The proposed KBSOE thus yields an optimal number of CS measurements, enabling efficient use of the CS acquisition hardware. A simulation of the SOEE performance (Fig. 7) showed that the proposed KBSOE method performed better under all SNR conditions, even with three times fewer measurements than the other SOE methods.

B. KBSOE-BAOMP DURING CS RECOVERY
The KBSOE method performs the SOE from the CS measurements obtained during CS recovery; the SOE during recovery is identical to that during acquisition. The better SOEE performance of KBSOE results in better NRE performance compared to other existing SOE and support estimation methods, as demonstrated on synthetic and real-world signals in Figs. 8, 9, 14, and 15. It can be observed from Figs. 14 and 15 that the CS-based compression methods yield a slightly higher NRE than the traditional DCT-based compression method. The reason is as follows. The DCT method knows the support and amplitudes of the significant coefficients among the N coefficients and approximates the insignificant coefficients by zeros; hence, its NRE is the energy of the insignificant coefficients, which is 0.05. The CS recovery methods have no a priori knowledge of the significant coefficients and must estimate the support and amplitudes from the available M < N measurements. Some of the weakest significant DCT coefficients have amplitudes very close to the threshold that distinguishes significant from insignificant coefficients. The CS recovery algorithms reliably detect the stronger information-bearing coefficients well above the threshold, but they may fail to select the weakest significant coefficients near the threshold, as the CS sensing matrices are not perfectly orthonormal and cannot always identify the support of such coefficients. This effect is observed in Fig. 17, which shows the DCT spectrum of one analysis segment of the vibration signal measured outside an aircraft: the DCT coefficients before and after sparse approximation, together with the DCT coefficients estimated using the proposed KBSOE-BAOMP-based CS recovery method.
It is observed that the DCT coefficients with indices from 83 to 87 and 100 to 102 have magnitudes slightly below and above the threshold, respectively, and are not detected during CS recovery, contributing to the slightly higher NRE compared to DCT-based compression and recovery. However, among the CS methods, the proposed KBSOE-BAOMP method performs better than the 2-GMM, DBBD-Kronecker, SPAMP, and AS-SaMP methods and is comparable to the traditional GSM-based method.
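The sparse-approximation step discussed above can be sketched as follows; the stand-in DCT spectrum and the threshold tau are synthetic, chosen only to show how the discarded near-threshold coefficients determine the NRE.

```python
# Sketch of hard-thresholding a DCT spectrum and measuring the
# normalized residual energy (NRE) of the discarded coefficients.
# The "spectrum" here is synthetic (Laplace-distributed coefficients
# with a few amplified entries), not data from the paper.
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=1.0, size=256)   # stand-in DCT spectrum
coeffs[:8] *= 50                            # a few strong coefficients
tau = 2.0                                   # significance threshold

# Keep coefficients at or above the threshold, zero out the rest.
approx = np.where(np.abs(coeffs) >= tau, coeffs, 0.0)
nre = np.sum((coeffs - approx) ** 2) / np.sum(coeffs ** 2)
# nre is the energy fraction of the insignificant coefficients;
# coefficients sitting just below tau dominate this residual, which
# is why recovery methods that miss near-threshold coefficients
# incur a slightly higher NRE than DCT-based compression.
```

This makes concrete why the DCT baseline attains NRE = 0.05 exactly (it knows the support), while CS recovery pays a small extra penalty for every weak coefficient it fails to place in the support.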

C. IMPACT OF KBSOE ON CS ACQUISITION AND RECOVERY
As the sparsity order determines the number of measurements, the SOE must be accurate for efficient CS acquisition and recovery, in terms of both optimal use of hardware resources and quality of the recovered signal. Let the SOE error be ε = K̂ − K, where K is the true sparsity order and K̂ = K + ε is its estimate. With M(K) = 2.63 K log(N/K) measurements required for the true value and M(K̂) required for the estimate, the error in the number of measurements is M(K̂) − M(K) = 2.63 (K + ε) log(N/K̂) − 2.63 K log(N/K) ≈ 2.63 ε log(N/K) for |ε| ≪ K. Thus, the difference in the number of measurements is linearly proportional to the SOE error ε.
• Case 1: If ε > 0, then M(K̂) > M(K), resulting in excessive measurements and inefficient use of hardware resources; the recovery performance, however, is not affected.
• Case 2: If ε < 0, then M(K̂) < M(K), resulting in fewer measurements than required and degrading the recovery quality.
Section VII shows that Kalman filtering reduces the variance of the BSOE, keeping the estimation error |ε| ≪ K, which results in a minimal error in M and a negligible impact on CS acquisition and recovery.
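The sensitivity of the measurement budget to the SOE error can be checked numerically, assuming the rule M(K) = ceil(2.63 K log(N/K)); N and the sparsity orders below are illustrative values.

```python
# Numerical sketch of the measurement-budget sensitivity to the SOE
# error epsilon = K_hat - K, using M(K) = ceil(2.63 * K * log(N / K)).
# N = 1024 and K = 40 are illustrative choices, not the paper's setup.
import math

def num_measurements(k, n=1024):
    # measurements required for sparsity order k
    return math.ceil(2.63 * k * math.log(n / k))

K_true = 40
M_true = num_measurements(K_true)
# Extra (or missing) measurements caused by an SOE error of eps:
budget = {eps: num_measurements(K_true + eps) - M_true for eps in (-8, 0, 8)}
# eps > 0 (Case 1) wastes measurements; eps < 0 (Case 2) undersamples
# and degrades recovery quality.
```

The over- and under-shoot grow roughly linearly with ε, matching the approximation M(K̂) − M(K) ≈ 2.63 ε log(N/K).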

1) IMPACT ON CS ACQUISITION
During acquisition, CS yields M ≪ N compressed samples in a given time interval τ. Since the proposed KBSOE method determines M based on the previously estimated sparsity order K̂(i − 1), the time taken for the SOE must be less than τ. The time complexity of the KBSOE method depends on the number of iterations involved in estimating the probability p₀, as given in Step 5 of Algorithm 1. Typically, ten iterations are sufficient for the convergence of the KBSOE method, and the time required for convergence is significantly less than τ seconds. For example, Section IX shows that the KBSOE method requires 42 ms against τ = 1 s while analyzing the real-world vibration signal. Thus, as long as the time required for the SOE is less than the fixed interval τ, the KBSOE method does not reduce the CS acquisition rate.

2) IMPACT ON CS RECOVERY
For greedy recovery algorithms, the sparsity order is an input. For real-time applications, when K is unknown and is assumed to be K_max, the iterations involved in greedy algorithms impose the constraint K_max · t_itr < τ, where t_itr is the time elapsed per iteration. Using the proposed KBSOE and BAOMP methods, the constraint becomes (K̂ − |Λ_B|) · t_itr < τ, where Λ_B is the partial support identified with the aid of the BSM. Since (K̂ − |Λ_B|) · t_itr < K_max · t_itr, the KBSOE method is fast, and real-time recovery of fast-varying compressible signals is possible. For applications requiring offline recovery and analysis, CS execution becomes faster by a factor of K_max / (K̂ − |Λ_B|) using the proposed methods.
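The iteration-budget argument can be checked with illustrative numbers; t_itr, K_max, the KBSOE estimate, and the BSM-identified support size below are all hypothetical values chosen only to show the accounting.

```python
# Sketch of the greedy-recovery iteration budget described above.
# All values are hypothetical, not measurements from the paper.
t_itr = 0.5e-3         # time elapsed per greedy iteration (s)
tau = 1.0              # acquisition interval (s)
K_max = 200            # worst-case sparsity order assumed without SOE
K_hat = 60             # KBSOE estimate of the sparsity order
bsm_support = 25       # coefficients already located via the sparse BSM

iters_naive = K_max                 # iterations without KBSOE/BAOMP
iters_baomp = K_hat - bsm_support   # iterations with KBSOE + BAOMP
speedup = iters_naive / iters_baomp # K_max / (K_hat - |support|)
# The real-time constraint iters_baomp * t_itr < tau is now far
# easier to satisfy than iters_naive * t_itr < tau.
```

With these numbers the speedup factor is K_max / (K̂ − |Λ_B|) = 200/35 ≈ 5.7, illustrating why offline analysis also benefits, not just real-time recovery.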

D. APPLICABILITY OF THE KBSOE METHOD
The KBSOE method applies to sparse and compressible signals. It should be noted that some signals, such as noise, are neither sparse nor compressible, and compressible signals sometimes become non-sparse due to disturbances. For example, the vibration signal of a rocket is almost random during the transonic regime, which is rich in significant components, making the signal non-sparse. We do not consider such non-sparse signals or conditions here; they are detected while performing the SOE using the KBSOE method. When the sparsity order K of the compressible signal increases, the number of measurements below the threshold γ decreases, i.e., the probability p₀ = p₀(K) decreases. If the maximum sparsity order for a signal to be considered compressible is K_max, then the minimum probability is p₀,min = p₀(K_max). Thus, the acquired signal is neither sparse nor compressible when the estimated probability p̂₀ < p₀,min. In such cases, the sensing matrix dimension matches the signal's dimension, satisfying Nyquist sampling conditions.
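The compressibility check can be sketched under a simplified model (our assumption, not the paper's exact derivation): if each sparse-BSM row contains d ones, a measurement falls below the threshold when the row misses all K significant coefficients, so p₀(K) ≈ (1 − d/N)^K, which also inverts to a sparsity estimate.

```python
# Hedged sketch of the compressibility check, under the assumed model
# p0(K) = (1 - d/N)**K for a sparse BSM with d ones per row.
# N, d, and K_max are illustrative values.
import math

N, d, K_max = 1024, 16, 120
p0_min = (1 - d / N) ** K_max          # minimum admissible probability

def is_compressible(p0_hat):
    # declare the signal compressible only if p0_hat >= p0(K_max)
    return p0_hat >= p0_min

def sparsity_from_p0(p0_hat):
    # invert the assumed model to get a sparsity-order estimate K_hat
    return math.log(p0_hat) / math.log(1 - d / N)
```

When `is_compressible` returns False, the signal fails the p̂₀ ≥ p₀,min test described above, and acquisition falls back to a sensing matrix matching the signal dimension, i.e., Nyquist-rate sampling.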

XI. CONCLUSION
In this paper, a composite CS system is presented, where (i) a deterministic sparse BSM-based SOE method (BSOE) with Kalman filtering (KBSOE) is proposed for the SOE of compressible signals and (ii) the BAOMP method is proposed for the recovery. The BSOE estimator is ML optimal, and the Kalman filter, which refines the BSOE estimates, is LMMSE optimal. Hence, we present an optimal tracking algorithm for the time-varying sparsity order. The KBSOE method provides a better estimate of the time-varying sparsity order on the fly for the efficient CS acquisition and recovery of compressible signals, including real-world vibration signals. The BAOMP method reduces the recovery time by at least 25% compared with other existing methods. The proposed KBSOE and BAOMP methods show better performance in terms of various metrics than the other methods published thus far.