IEEE Transactions on Signal Processing

Issue 7 • April 1, 2014

Displaying Results 1 - 25 of 33
  • [Front cover]

    Page(s): C1
    PDF (366 KB)
    Freely Available from IEEE
  • IEEE Transactions on Signal Processing publication information

    Page(s): C2
    PDF (135 KB)
    Freely Available from IEEE
  • Table of Contents

    Page(s): 1615 - 1616
    PDF (210 KB)
    Freely Available from IEEE
  • Table of Contents

    Page(s): 1617 - 1618
    PDF (212 KB)
    Freely Available from IEEE
  • Two-Way Range Estimation Utilizing Uplink and Downlink Channels Dependency

    Page(s): 1619 - 1633
    PDF (3438 KB) | HTML

    Range estimation between a base station (BS) and a target that are not synchronized in time is obtained by two-way ranging, in which both sides transmit and receive. In the conventional two-way range estimation approach, the times of arrival at the BS and at the target are estimated separately, so the correlation between the uplink and downlink channels is ignored. In this paper, we develop a maximum likelihood range estimator for multipath conditions, in which the distribution of the received signals is approximated as Gaussian and the times of arrival at the BS and the target are estimated jointly. The estimator is convenient to implement and exploits the statistical dependency between the uplink and downlink, and hence obtains a performance advantage over the conventional suboptimal approach. A theoretical analysis of the estimator is also developed and used to gain insight into its performance.
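
    The offset cancellation that motivates two-way ranging can be made concrete with a toy numeric sketch. All values here are hypothetical, and this is only the conventional round-trip computation, not the paper's joint ML estimator:

```python
C = 299_792_458.0        # speed of light, m/s

true_range = 1500.0      # hypothetical BS-target distance (m)
prop = true_range / C    # one-way propagation delay
turnaround = 50e-6       # known reply delay at the target
# The unknown BS/target clock offset never enters: both timestamps below are
# taken on the BS clock, so the offset cancels by construction.

t_tx = 0.0
t_rx = t_tx + prop + turnaround + prop   # round trip as seen by the BS

range_est = C * (t_rx - t_tx - turnaround) / 2.0
```

    The paper's point is that treating the two one-way delays jointly, rather than via this separate round-trip bookkeeping, recovers information the conventional scheme discards.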

  • Dependence of the Stability of the Least Mean Fourth Algorithm on Target Weights Non-Stationarity

    Page(s): 1634 - 1643
    PDF (2054 KB) | HTML

    The paper investigates a new stability problem of the least mean fourth (LMF) algorithm: the dependence of the algorithm's stability on the time variation of the target weights of the adaptive filter. The analysis is done in the context of tracking a Markov plant with a stationary white Gaussian input. It is found that the algorithm diverges if the mean square increment of the plant parameter vector exceeds a threshold value that depends on the step size, input variance, and noise moments. The paper also derives a closed-form expression for the steady-state mean square deviation without the usual strong-noise assumption. A comparison of the tracking capabilities of the LMF and LMS algorithms is provided, in terms of the minimum mean square deviation attained by each algorithm over the stability range of its step size. Gaussian, uniform, and binary noise distributions are considered, and conditions that make one algorithm outperform the other are determined. Analytical results are supported by simulations.
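
    For context, the LMF recursion is an LMS-like update with the error cubed. A minimal sketch on a stationary plant (illustrative parameters; the paper's analysis concerns a time-varying Markov plant, where exactly this cubed-error update is what makes stability depend on the plant's non-stationarity):

```python
import numpy as np

def lmf(x, d, mu, n_taps):
    """Least mean fourth adaptive filter: w <- w + mu * e^3 * u
    (LMS uses e instead of e^3)."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # regressor, most recent sample first
        e = d[k] - w @ u                    # a priori error
        w = w + mu * e**3 * u
    return w

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(20000)                                  # white Gaussian input
d = np.convolve(x, w_true)[:len(x)] + 0.02 * rng.standard_normal(len(x))
w_hat = lmf(x, d, mu=0.005, n_taps=3)
```

    Because the update scales with e cubed, large transient errors are amplified, which is the mechanism behind the divergence threshold the paper characterizes.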

  • On ℓq Optimization and Sparse Inverse Covariance Selection

    Page(s): 1644 - 1654
    PDF (2942 KB) | HTML

    Graphical models are well established in providing meaningful conditional-probability descriptions of complex multivariable interactions. In the Gaussian case, conditional independencies between variables correspond to zero entries in the precision (inverse covariance) matrix. Hence, there has been much recent interest in sparse precision matrix estimation in areas such as statistics, machine learning, computer vision, pattern recognition, and signal processing. A popular estimation method optimizes a penalized log-likelihood, where the penalty is responsible for inducing sparsity; a common choice is the convex ℓ1 norm. Even though the ℓ0 penalty is the natural choice guaranteeing maximum sparsity, it has been avoided due to its lack of convexity. In this paper, we bridge the gap between these two penalties and propose the non-concave ℓq penalized log-likelihood problem for sparse precision matrix estimation, where 0 ≤ q < 1. A novel algorithm is developed for the optimization, and we provide some of its theoretical properties that are useful in sparse linear regression. We illustrate on synthetic and real data, comparing the reconstruction quality of the sparsity-inducing penalties ℓ0, ℓq with 0 < q < 1, ℓ1, and SCAD.
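
    The two endpoint penalties being bridged have simple scalar proximal (thresholding) operators, which is a quick way to see the gap the ℓq family fills; the ℓq prox for 0 < q < 1 generally lacks such a closed form. A sketch of the two endpoints only:

```python
import numpy as np

def hard_threshold(z, t):
    """Scalar prox of the l0 penalty (up to tie-breaking): keep or kill each entry."""
    return np.where(np.abs(z) > t, z, 0.0)

def soft_threshold(z, t):
    """Scalar prox of the l1 penalty: shrink every entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([-3.0, -0.5, 0.2, 1.5])
```

    Hard thresholding keeps survivors unbiased but is combinatorial in nature; soft thresholding is convex-friendly but biases every surviving entry by t. An ℓq penalty with 0 < q < 1 interpolates between these two behaviors.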

  • Modal Analysis With Compressive Measurements

    Page(s): 1655 - 1670
    PDF (3605 KB) | HTML

    Structural Health Monitoring (SHM) systems are critical for monitoring aging infrastructure (such as buildings or bridges) in a cost-effective manner. Such systems typically involve collections of battery-operated wireless sensors that sample vibration data over time. After the data is transmitted to a central node, modal analysis can be used to detect damage in the structure. In this paper, we propose and study three frameworks for Compressive Sensing (CS) in SHM systems; these methods are intended to minimize power consumption by allowing the data to be sampled and/or transmitted more efficiently. At the central node, all of these frameworks involve a very simple technique for estimating the structure's mode shapes without requiring a traditional CS reconstruction of the vibration signals; all that is needed is to compute a simple Singular Value Decomposition. We provide theoretical justification (including measurement bounds) for each of these techniques based on the equations of motion describing a simplified Multiple-Degree-Of-Freedom (MDOF) system, and we support our proposed techniques using simulations based on synthetic and real data.
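
    The SVD step at the central node can be sketched on synthetic free-response data: for well-separated modal frequencies and distinct modal amplitudes, the left singular vectors of the sensors-by-time data matrix recover the mode shapes up to sign. Mode shapes and frequencies below are made up, and the compressive-measurement part of the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 20, 0.01)                           # 2000 time samples
Phi, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # orthonormal "mode shapes"
amps = np.array([3.0, 2.0, 1.0])                     # distinct modal amplitudes
freqs = np.array([1.0, 2.3, 3.7])                    # well-separated frequencies (Hz)

Q = amps[:, None] * np.cos(2 * np.pi * freqs[:, None] * t)  # modal coordinates
X = Phi @ Q                                          # sensors-by-time free response

U, s, Vt = np.linalg.svd(X, full_matrices=False)
match = np.abs(np.diag(U.T @ Phi))                   # |cosine| between estimate and truth
```

    With distinct amplitudes the singular values are well separated, so each left singular vector aligns with one true mode shape.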

  • Decomposition Approach for Low-Rank Matrix Completion and Its Applications

    Page(s): 1671 - 1683
    PDF (6690 KB) | HTML

    In this paper, we describe a low-rank matrix completion method based on matrix decomposition. An incomplete matrix is decomposed into sub-matrices, which are filled using a proposed trimming step and then recombined to form a low-rank completed matrix. The divide-and-conquer approach significantly reduces computational complexity and storage requirements. Moreover, the proposed decomposition can be naturally incorporated into any existing matrix completion method to attain further gain. Unlike most existing approaches, the proposed method is based neither on norm minimization nor on the SVD, so it can be applied beyond the real domain to arbitrary fields, including finite fields. The effectiveness of the method is demonstrated through extensive numerical results on randomly generated and real matrix completion problems, and on a concrete application: video denoising. The numerical experiments show that the algorithm reliably solves a wide range of problems significantly faster than recent algorithms. For denoising, we present a patch-based video denoising algorithm that groups similar patches and formulates noise removal as low-rank matrix completion via the decomposition approach. Experiments show that the approach robustly removes mixed noise such as impulsive, Poisson, and Gaussian noise from natural noisy videos, and outperforms state-of-the-art denoising techniques such as VBM3D and 3DWTF in both speed and quality. Our technique also achieves a significant speed advantage over other matrix completion methods.
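
    To make the problem being solved concrete, here is a minimal SVD-based completion baseline (iterative rank projection). Note that this is exactly the kind of SVD, real-field approach the proposed decomposition method avoids; it is shown only as a reference point, with illustrative sizes and rank:

```python
import numpy as np

def complete_by_rank_projection(M_obs, mask, r, n_iter=500):
    """Alternate between projecting onto rank-r matrices (SVD truncation)
    and re-imposing the observed entries."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]    # best rank-r approximation
        X[mask] = M_obs[mask]              # keep the known entries
    return X

rng = np.random.default_rng(7)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank-2 truth
mask = rng.random((20, 20)) < 0.6                                # ~60% observed
X = complete_by_rank_projection(np.where(mask, M, 0.0), mask, r=2)
rel = np.linalg.norm(X - M) / np.linalg.norm(M)
```

    The repeated full-size SVDs are what make such baselines expensive, which is the cost the paper's sub-matrix decomposition targets.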

  • Hierarchical Radio Resource Optimization for Heterogeneous Networks With Enhanced Inter-Cell Interference Coordination (eICIC)

    Page(s): 1684 - 1693
    PDF (3178 KB) | HTML

    Interference is a major performance bottleneck in heterogeneous networks (HetNets) due to their multi-tier structure. We propose the almost blank resource block (ABRB) for interference control in HetNets. When an ABRB is scheduled at a macro BS, a resource block (RB) with a blank payload is transmitted, which eliminates the interference from this macro BS to the pico BSs. We study a two-timescale hierarchical radio resource management (RRM) scheme for HetNets with dynamic ABRB control. The long-term controls, such as dynamic ABRB, are adaptive to the large-scale fading at an RRM server for co-tier and cross-tier interference control. The short-term control (user scheduling) is adaptive to the local channel state information at each BS to exploit multi-user diversity. The two-timescale optimization problem is challenging due to the exponentially large solution space. We exploit the sparsity in the interference graph of the HetNet topology and derive structural properties for the optimal ABRB control. Based on these, we propose a two-timescale alternating optimization solution for user scheduling and ABRB control. The solution has low complexity and is asymptotically optimal at high SNR. Simulations show that the proposed solution achieves significant gains over various baselines.

  • Greedy Algorithms for Joint Sparse Recovery

    Page(s): 1694 - 1704
    Multimedia
    PDF (2546 KB) | HTML

    Five known greedy algorithms designed for the single measurement vector setting in compressed sensing and sparse approximation are extended to the multiple measurement vector scenario: Iterative Hard Thresholding (IHT), Normalized IHT (NIHT), Hard Thresholding Pursuit (HTP), Normalized HTP (NHTP), and Compressive Sampling Matching Pursuit (CoSaMP). Using the asymmetric restricted isometry property (ARIP), sufficient conditions for all five algorithms establish bounds on the discrepancy between the algorithms' output and the optimal row-sparse representation. When the initial multiple measurement vectors are jointly sparse, ARIP-based guarantees for exact recovery are also established. The algorithms are then compared via the recovery phase transition framework. The strong phase transitions describing the family of Gaussian matrices that satisfy the sufficient conditions are obtained via known bounds on the ARIP constants. The algorithms' empirical weak phase transitions are compared for various numbers of multiple measurement vectors. Finally, the performance of the algorithms is compared against a known rank-aware greedy algorithm, Rank Aware Simultaneous Orthogonal Matching Pursuit + MUSIC. Simultaneous recovery variants of NIHT, NHTP, and CoSaMP all outperform the rank-aware algorithm.
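
    A minimal sketch of the MMV extension of IHT: a gradient step on the Frobenius-norm data fit followed by a row-wise hard threshold. Step-size rule, problem sizes, and sparsity level are illustrative; the normalized and pursuit variants studied in the paper differ in their step-size and support-refinement rules:

```python
import numpy as np

def simultaneous_iht(Y, A, s, n_iter=500):
    """IHT for the MMV model Y = A X with X row-sparse: gradient step on
    ||Y - A X||_F^2, then keep only the s rows of X with largest l2 norm."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # conservative fixed step size
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = X + step * A.T @ (Y - A @ X)
        weak = np.argsort(np.linalg.norm(X, axis=1))[:-s]
        X[weak] = 0.0                             # row-wise hard threshold
    return X

rng = np.random.default_rng(2)
m, n, s, L = 30, 60, 3, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = np.sort(rng.choice(n, s, replace=False))
X0 = np.zeros((n, L))
X0[support] = rng.uniform(1, 2, (s, L)) * rng.choice([-1.0, 1.0], size=(s, L))
Y = A @ X0                                        # noiseless joint-sparse measurements
X_hat = simultaneous_iht(Y, A, s)
```

    Thresholding by row norms, rather than entry by entry, is what couples the multiple measurement vectors and gives the joint model its advantage.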

  • A Committee Machine Approach for Compressed Sensing Signal Reconstruction

    Page(s): 1705 - 1717
    Multimedia
    PDF (3143 KB) | HTML

    Although many sparse recovery algorithms have been proposed recently in compressed sensing (CS), it is well known that the performance of any sparse recovery algorithm depends on many parameters, such as the dimension of the sparse signal, the level of sparsity, and the measurement noise power. It has been observed that satisfactory performance of the sparse recovery algorithms requires a minimum number of measurements, and this minimum differs across algorithms. In many applications, the number of measurements is unlikely to meet this requirement, so any scheme that improves performance with fewer measurements is of significant interest in CS. Empirically, it has been observed that the performance of the sparse recovery algorithms also depends on the underlying statistical distribution of the nonzero elements of the signal, which may not be known a priori in practice. Interestingly, the performance degradation of the sparse recovery algorithms in these cases does not always imply a complete failure. In this paper, we study this scenario and show that by fusing the estimates of multiple sparse recovery algorithms, which work on different principles, we can improve sparse signal recovery. We present theoretical analysis to derive sufficient conditions for performance improvement of the proposed schemes, and we demonstrate the advantage of the proposed methods through numerical simulations for both synthetic and real signals.
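
    The fusion idea can be sketched with the simplest possible committee: given two estimates from different algorithms, take the affine combination along the line between them that minimizes the measurement-domain residual. This is a toy rule with made-up estimators; the paper's fusion schemes and their sufficient conditions are more elaborate:

```python
import numpy as np

def fuse_two(y, A, x1, x2):
    """Combine two estimates as a*x1 + (1-a)*x2, choosing a in closed form to
    minimize the residual ||y - A x||_2 (a toy committee rule)."""
    d = A @ (x1 - x2)
    dd = d @ d
    a = (d @ (y - A @ x2)) / dd if dd > 0 else 0.5
    return a * x1 + (1 - a) * x2

rng = np.random.default_rng(8)
m, n, s = 30, 80, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x0 + 0.01 * rng.standard_normal(m)

x1 = A.T @ y                       # crude estimate 1: matched filter
ls = np.linalg.pinv(A) @ y         # minimum-norm least squares
x2 = np.zeros(n)
keep = np.argsort(np.abs(ls))[-s:]
x2[keep] = ls[keep]                # crude estimate 2: truncated least squares
xf = fuse_two(y, A, x1, x2)
res = lambda v: np.linalg.norm(y - A @ v)
```

    Because both inputs correspond to a = 1 and a = 0 in the search, the fused residual can never exceed the better of the two, which is the basic reason a committee cannot hurt in this metric.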

  • Single-Site Localization via Maximum Discrimination Multipath Fingerprinting

    Page(s): 1718 - 1728
    PDF (2924 KB) | HTML

    A novel approach to single-site localization based on maximum-discrimination multipath fingerprinting is presented. In contrast to the existing approach, which extracts each fingerprint only from the data of that location, the new approach also uses the data of all the other locations in the database, leveraging it to extract a fingerprint that is as different as possible from the other fingerprints in the database. The performance of this approach, validated with both simulated and real data, is superior to the existing approach, demonstrating single-site localization accuracy of 1 m in typical indoor environments. The new approach also has lower computational complexity.

  • Design of IIR Digital Differentiators Using Constrained Optimization

    Page(s): 1729 - 1739
    PDF (2168 KB) | HTML

    A new optimization method for the design of fullband and lowpass IIR digital differentiators is proposed. In the new method, the passband phase-response error is minimized under the constraint that the maximum passband amplitude-response relative error is below a prescribed level. For lowpass IIR differentiators, an additional constraint is introduced to limit the average squared amplitude response in the stopband so as to minimize any high-frequency noise that may be present. Extensive experimental results are included, which show that the differentiators designed using the proposed method have much smaller maximum phase-response error for the same passband amplitude-response error and stopband constraints when compared with several differentiators designed using state-of-the-art competing methods.
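
    The two error measures being traded off can be sketched on the simplest possible differentiator, the first difference H(z) = 1 - z^(-1). This FIR stand-in is not one of the paper's IIR designs; it only illustrates how amplitude and phase errors against the ideal response jw are measured:

```python
import numpy as np

w = np.linspace(0.01, 0.4, 50)        # passband frequencies (rad/sample)
H = 1.0 - np.exp(-1j * w)             # first-difference frequency response

# Relative amplitude-response error vs the ideal |jw| = w
# (|H| = 2 sin(w/2) undershoots w slightly at low frequencies).
amp_err = np.abs(np.abs(H) - w) / w

# After removing the half-sample delay, the phase should be exactly pi/2.
phase_err = np.angle(H * np.exp(1j * w / 2)) - np.pi / 2
```

    The proposed designs push down the phase error while capping exactly this kind of relative amplitude error in the passband.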

  • Deterministic Blind Identification of IIR Systems With Output-Switching Operations

    Page(s): 1740 - 1749
    PDF (2658 KB) | HTML

    In this paper, a deterministic blind identification approach is proposed for linear output-switching systems, which are modeled by multiple infinite impulse-response (IIR) dynamic functions. By adopting a new over-sampling strategy, the single-input-single-output (SISO) output-switching system of interest is equivalently transformed into a time-invariant multi-input-multi-output (MIMO) system. Further, by exploiting the mutual relations among the multiple inputs, the time-invariant MIMO system model, and subsequently the output-switching system model, are identified uniquely up to a scalar constant using the proposed identification approach. Sufficient identifiability conditions are provided for output-switching systems, and numerical simulations are carried out to validate the proposed approach.

  • On the Linear Convergence of the ADMM in Decentralized Consensus Optimization

    Page(s): 1750 - 1761
    PDF (3235 KB) | HTML

    In decentralized consensus optimization, a connected network of agents collaboratively minimizes the sum of their local objective functions over a common decision variable, with information exchange restricted to neighboring agents. To this end, one can first reformulate the problem and then apply the alternating direction method of multipliers (ADMM), which alternates iterative computation at the individual agents with information exchange between neighbors. This approach has been observed to converge quickly and is deemed powerful. This paper establishes its linear convergence rate for decentralized consensus optimization with strongly convex local objective functions. The theoretical convergence rate is given explicitly in terms of the network topology, the properties of the local objective functions, and the algorithm parameter. This result is not only a performance guarantee but also a guideline for accelerating ADMM convergence.
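
    The decentralized iteration can be sketched on a toy problem: scalar quadratic local objectives f_i(x) = (x - b_i)^2 / 2 on a 5-node ring, whose consensus optimum is the average of the b_i. The edge-based update below is one common ADMM reformulation, written from memory and simplified, with an illustrative penalty parameter and iteration count:

```python
import numpy as np

def decentralized_admm(b, neighbors, c=1.0, n_iter=500):
    """Decentralized consensus ADMM for local objectives f_i(x) = 0.5*(x - b_i)^2.
    Each agent i keeps a primal x_i and a dual alpha_i and communicates only
    with its neighbors."""
    n = len(b)
    deg = np.array([len(nb) for nb in neighbors], dtype=float)
    x = np.zeros(n)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        nb_sum = np.array([x[nb].sum() for nb in neighbors])
        # Primal step: local gradient plus penalty coupling to neighbors.
        x = (b - alpha + c * (deg * x + nb_sum)) / (1.0 + 2.0 * c * deg)
        nb_sum = np.array([x[nb].sum() for nb in neighbors])
        # Dual step: drives each x_i toward the sum of its neighbors.
        alpha = alpha + c * (deg * x - nb_sum)
    return x

b = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
ring = [[4, 1], [0, 2], [1, 3], [2, 4], [3, 0]]
x = decentralized_admm(b, ring)    # every agent should approach mean(b) = 4.0
```

    The observed geometric decay of the consensus error on such quadratics is exactly the linear rate the paper quantifies in terms of the topology, the objectives' strong convexity, and the parameter c.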

  • Smoothing and Decomposition for Analysis Sparse Recovery

    Page(s): 1762 - 1774
    PDF (5072 KB) | HTML

    We consider algorithms and recovery guarantees for the analysis sparse model in which the signal is sparse with respect to a highly coherent frame. We consider the use of a monotone version of the fast iterative shrinkage-thresholding algorithm (MFISTA) to solve the analysis sparse recovery problem. Since the proximal operator in MFISTA does not have a closed-form solution for the analysis model, it cannot be applied directly. Instead, we examine two alternatives based on smoothing and decomposition transformations that relax the original sparse recovery problem, and then implement MFISTA on the relaxed formulation. We refer to these two methods as smoothing-based and decomposition-based MFISTA. We analyze the convergence of both algorithms and establish that smoothing-based MFISTA converges more rapidly when applied to general nonsmooth optimization problems. We then derive a performance bound on the reconstruction error using these techniques. The bound proves that our methods can recover a signal sparse in a redundant tight frame when the measurement matrix satisfies a properly adapted restricted isometry property. Numerical examples demonstrate the performance of our methods and show that smoothing-based MFISTA converges faster than the decomposition-based alternative in real applications, such as MRI image reconstruction.
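
    The smoothing route can be sketched concretely: replace the nonsmooth analysis penalty ||Dx||_1 by a Huber smoothing and run an accelerated gradient (FISTA-type) method, keeping the best iterate so the objective never increases. This is a simplified stand-in for the paper's smoothing-based MFISTA; the operator, sizes, and parameters are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.cumsum(rng.standard_normal(n) * (rng.random(n) < 0.2))  # piecewise constant
y = A @ x_true + 0.01 * rng.standard_normal(m)

D = (np.eye(n) - np.eye(n, k=1))[:-1]   # finite-difference (coherent) analysis operator
lam, mu = 0.1, 0.01                     # penalty weight, Huber smoothing parameter

def obj(x):
    z = D @ x
    huber = np.where(np.abs(z) <= mu, z**2 / (2 * mu), np.abs(z) - mu / 2)
    return 0.5 * np.sum((A @ x - y)**2) + lam * np.sum(huber)

def grad(x):
    z = D @ x
    return A.T @ (A @ x - y) + lam * D.T @ np.clip(z / mu, -1.0, 1.0)

L = np.linalg.norm(A, 2)**2 + lam * np.linalg.norm(D, 2)**2 / mu   # Lipschitz bound
x = np.zeros(n); zk = x; t = 1.0; best = x
for _ in range(500):
    x_new = zk - grad(zk) / L                      # gradient step on smoothed objective
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    zk = x_new + (t - 1) / t_new * (x_new - x)     # FISTA momentum
    if obj(x_new) < obj(best):
        best = x_new                               # monotone (MFISTA-style) bookkeeping
    x, t = x_new, t_new
```

    The smoothing parameter mu trades approximation accuracy against the Lipschitz constant L, and hence against the step size, which is the tension the paper's convergence analysis makes precise.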

  • Decentralized Data Reduction With Quantization Constraints

    Page(s): 1775 - 1784
    PDF (2321 KB) | HTML

    A guiding principle for data reduction in statistical inference is the sufficiency principle. This paper extends the classical sufficiency principle to decentralized inference, i.e., settings where data reduction needs to be achieved in a decentralized manner. We examine the notions of local and global sufficient statistics and the relationship between the two for decentralized inference under different observation models. We then consider the impact of quantization on decentralized data reduction, which is often needed when communications among sensors are subject to finite capacity constraints. The central question is: if each node in a decentralized inference system has to summarize its data using a finite number of bits, is it still optimal to implement data reduction using global sufficient statistics prior to quantization? We show that the answer is negative using a simple example, and proceed to identify conditions under which sufficiency-based data reduction followed by quantization is indeed optimal. These include the well-known case where the data at the decentralized nodes are conditionally independent, as well as a class of problems with conditionally dependent observations that admit a conditional independence structure through the introduction of an appropriately chosen hidden variable.

  • Performance Analysis and Optimization for Interference Alignment Over MIMO Interference Channels With Limited Feedback

    Page(s): 1785 - 1795
    PDF (2925 KB) | HTML

    In this paper, we address the problem of interference alignment (IA) over MIMO interference channels with limited channel state information (CSI) feedback based on quantization codebooks. Due to limited feedback and, hence, imperfect IA, there is residual interference across different links and different data streams. As a result, the performance of IA is closely tied to the CSI accuracy (namely, the number of feedback bits) and the number of data streams (namely, the transmission mode). To improve the performance of IA, it makes sense to optimize these system parameters according to the channel conditions. Motivated by this, we first give a quantitative performance analysis for IA under limited feedback and derive a closed-form expression for the average transmission rate in terms of the feedback bits and the transmission mode. By maximizing the average transmission rate, we obtain an adaptive feedback allocation scheme as well as a dynamic mode selection scheme. Furthermore, through asymptotic analysis, we obtain several clear insights into the system performance and provide some guidelines on system design. Finally, simulation results validate our theoretical claims and show that a clear performance gain can be obtained by adjusting the feedback bits dynamically or selecting the transmission mode adaptively.

  • Sub-Nyquist Radar via Doppler Focusing

    Page(s): 1796 - 1811
    PDF (3032 KB) | HTML

    We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets sparsely populated in the radar's unambiguous time-frequency region. Several past works employ compressed sensing (CS) algorithms for this type of problem, but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here, we describe a sub-Nyquist sampling and recovery approach called Doppler focusing, which addresses all of these problems: it performs low rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement that scales linearly with P, obtaining good detection performance even at SNR as low as -25 dB. The recovery is based on the Xampling framework, which reduces the number of samples needed to accurately represent the signal directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype.
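
    The focusing operation itself reduces to a coherent (Fourier) sum over pulses at each delay bin, which is where the P-fold SNR gain comes from. A toy single-delay sketch with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
P, f0, sigma = 64, 10, 0.5        # pulses, true Doppler bin, per-pulse noise level

# Matched-filter output at the target's delay bin: one complex sample per pulse,
# rotating at the target's Doppler frequency, buried in noise.
noise = sigma * (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)
y = np.exp(2j * np.pi * f0 * np.arange(P) / P) + noise

F = np.fft.fft(y) / P             # Doppler focusing: coherent sum over pulses
```

    At the true Doppler bin the P pulse contributions add in phase while the noise averages down, so the focused peak stands out even when each individual pulse is noisy.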

  • On the Pilot Carrier Placement in Multicarrier-Based Systems

    Page(s): 1812 - 1821
    PDF (1743 KB) | HTML

    In traditional multicarrier systems, the pilot carriers used to estimate the channel are placed as uniformly as possible over the bandwidth. However, with the rise of cognitive radio systems, where the multicarrier system is used by secondary users, parts of the bandwidth are unavailable for transmission while primary users are active in those bands. The multicarrier system must therefore introduce guard bands in which carriers may not be used, and a problem arises in placing the pilot carriers. In this paper, we investigate the effect of the pilot carrier positions on the MSE performance of channel estimation, and look for the pilot placement that minimizes the MSE. We do not restrict our attention to multicarrier systems with a cyclic prefix, but also consider other types of guard interval used for multicarrier transmission. An equidistantly-spaced distribution of the pilot carriers is in general not the optimal placement, as the corresponding MSE can become very high. In this paper, we use a heuristic algorithm to search for the best pilot placement, which delivers a pilot carrier placement that outperforms the maximum distance distribution from [S. Song and A. C. Singer, "Pilot-Aided OFDM Channel Estimation in the Presence of the Guard Band," IEEE Trans. Commun., vol. 55, no. 8, pp. 1459-1465, Aug. 2008] in terms of the MSE, and results in an MSE close to the case where no guard bands are present. Based on the results of the algorithm, we derive some simple rules of thumb for selecting the pilot carrier positions.
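
    The objective such a search minimizes can be sketched directly: for least-squares estimation of a length-L channel impulse response, the MSE is governed by the pilot rows of the DFT matrix, and clustering the pilots (as a guard band can force) inflates it. Sizes and noise level below are illustrative:

```python
import numpy as np

def ls_mse(pilots, n_carriers=64, chan_len=8, noise_var=1e-2):
    """MSE of least-squares channel estimation from the given pilot subcarriers:
    noise_var * trace((F^H F)^{-1}), where F holds the DFT rows at the pilots."""
    k = np.asarray(pilots)[:, None]
    F = np.exp(-2j * np.pi * k * np.arange(chan_len) / n_carriers)
    G = F.conj().T @ F
    return noise_var * np.real(np.trace(np.linalg.inv(G)))

spread = list(range(0, 64, 8))       # 8 equidistant pilots
clustered = list(range(8))           # 8 adjacent pilots, all pushed to one edge
```

    With equidistant pilots the Gram matrix F^H F is a scaled identity, giving the minimal MSE, while adjacent pilots make it badly conditioned; the paper's heuristic searches for good placements when guard bands rule out the equidistant ideal.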

  • Linear-Quadratic Blind Source Separation Using NMF to Unmix Urban Hyperspectral Images

    Page(s): 1822 - 1833
    PDF (2814 KB) | HTML

    In this work, we propose algorithms to perform Blind Source Separation (BSS) for the linear-quadratic mixing model. The linear-quadratic model is less studied in the literature than the linear one. In this paper, we propose original methods that are based on Non-negative Matrix Factorization (NMF). This class of methods is well suited to many applications where the data are non-negative. We are here particularly interested in spectral unmixing (extracting reflectance spectra of materials present in pixels and associated abundance fractions) for urban hyperspectral images. The originality of our work is that we developed extensions of NMF, which is initially suited to the linear model, for the linear-quadratic model. The proposed algorithms are tested with simulated hyperspectral images using real reflectance spectra and the obtained results are very satisfactory.
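
    The NMF building block the paper extends can be sketched with the classical Lee-Seung multiplicative updates. This is plain linear NMF only; the paper's linear-quadratic extension adds quadratic mixing terms, and all sizes here are illustrative:

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H under the Frobenius loss.
    Entries stay non-negative because each update multiplies by a non-negative ratio."""
    rng = np.random.default_rng(5)
    W = rng.random((V.shape[0], r))
    H = rng.random((r, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(6)
V = rng.random((20, 3)) @ rng.random((3, 30))    # exactly rank-3, non-negative data
W, H = nmf(V, r=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    In the unmixing reading, the columns of W play the role of endmember spectra and the rows of H the abundances, which is why non-negativity of both factors matters.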

  • Robust Beamforming by Linear Programming

    Page(s): 1834 - 1849
    PDF (3639 KB) | HTML

    In this paper, a robust linear programming beamformer (RLPB) is proposed for non-Gaussian signals in the presence of steering vector uncertainties. Unlike most existing beamforming techniques, which are based on the minimum variance criterion, the proposed RLPB minimizes the ℓ∞-norm of the output to exploit non-Gaussianity. We make use of a new definition of the ℓp-norm (1 ≤ p ≤ ∞) of a complex-valued vector, based on the ℓp-modulus of complex numbers. To achieve robustness against steering vector mismatch, the proposed method constrains the ℓ∞-modulus of the response to any steering vector within a specified uncertainty set to exceed unity. The uncertainty set is modeled as a rhombus, which differs from the spherical or ellipsoidal uncertainty regions widely adopted in the literature. The resulting optimization problem is cast as a linear program and hence can be solved efficiently. The proposed RLPB is computationally simpler than robust counterparts that require solving a second-order cone program. We also address the issue of appropriately choosing the size of the uncertainty region. Simulation results demonstrate the superiority of the proposed RLPB over several state-of-the-art robust beamformers and show that its performance can approach the optimal performance bounds.

  • Secrecy Wireless Information and Power Transfer With MISO Beamforming

    Page(s): 1850 - 1863
    PDF (3870 KB) | HTML

    The dual use of radio signals for simultaneous wireless information and power transfer (SWIPT) has recently drawn significant attention. To meet the practical requirement that an energy receiver (ER) operate with significantly higher received power than a conventional information receiver (IR), ERs need to be deployed closer to the transmitter than IRs in a SWIPT system. However, due to the broadcast nature of wireless channels, a critical issue arises: messages sent to IRs can be eavesdropped on by ERs, which have better channels from the transmitter. In this paper, we address this new physical-layer security problem in a multiuser multiple-input single-output (MISO) SWIPT system, where one multi-antenna transmitter sends information and energy simultaneously to an IR and multiple single-antenna ERs. Two problems with different practical aims are investigated: the first maximizes the secrecy rate for the IR subject to individual harvested-energy constraints at the ERs, while the second maximizes the weighted sum-energy transferred to the ERs subject to a secrecy rate constraint for the IR. We solve these two non-convex problems optimally by a general two-stage procedure. First, fixing the signal-to-interference-plus-noise ratio (SINR) target for the ERs or the IR, we obtain the optimal transmit beamforming and power allocation by applying the technique of semidefinite relaxation (SDR). Then, each problem is solved by a one-dimensional search over the optimal SINR target. Furthermore, suboptimal solutions of lower complexity are proposed for each problem.

  • Bayesian Tracking in Underwater Wireless Sensor Networks With Port-Starboard Ambiguity

    Page(s): 1864 - 1878
    PDF (3555 KB) | HTML

    Port-starboard ambiguity is an important issue in underwater tracking systems for anti-submarine warfare applications, especially in wireless sensor networks based on autonomous underwater vehicles. In monostatic systems this ambiguity produces a ghost track of the target, symmetrically displaced with respect to the sensor. Removal of such artifacts is usually handled by rough, heuristic approaches. In the context of Bayesian filtering approximated by means of particle filtering techniques, we show that optimal disambiguation can be pursued by deriving the full Bayesian posterior distribution of the target state. The analysis is corroborated by simulations that show the effectiveness of the particle-filtering tracker. A full validation of the approach relies upon real-world experiments conducted by the NATO Science and Technology Organization - Centre for Maritime Research and Experimentation during the sea trials Generic Littoral Interoperable Network Technology 2011 and Exercise Proud Manta 2012, whose results are also reported.
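
    A toy bootstrap particle filter makes the ghost track visible: when the sensor observes only an unsigned coordinate, the posterior stays bimodal and the particle cloud carries both the true track and its mirror image. This is a 1-D caricature with made-up numbers, far simpler than the paper's underwater setting:

```python
import numpy as np

rng = np.random.default_rng(9)
N, T, dt, sigma_z = 5000, 10, 1.0, 0.3
p0, v0 = 3.0, 1.0            # true initial position and velocity (hypothetical)

# Particles: columns [position, velocity]; symmetric prior around the sensor at 0.
part = np.column_stack([rng.uniform(-6, 6, N), rng.standard_normal(N)])

for k in range(1, T + 1):
    part[:, 0] += part[:, 1] * dt + 0.1 * rng.standard_normal(N)   # motion model
    part[:, 1] += 0.05 * rng.standard_normal(N)
    z = abs(p0 + v0 * k * dt) + sigma_z * rng.standard_normal()    # sign-blind measurement
    w = np.exp(-0.5 * ((z - np.abs(part[:, 0])) / sigma_z) ** 2)   # symmetric likelihood
    w /= w.sum()
    part = part[rng.choice(N, N, p=w)]                             # bootstrap resampling

pos = part[part[:, 0] > 0, 0]      # cluster on the true side
neg = part[part[:, 0] < 0, 0]      # mirrored "ghost" cluster
```

    Because the likelihood is symmetric, both modes persist; the paper's point is that keeping the full bimodal posterior, rather than heuristically pruning one mode, is the principled route to disambiguation once asymmetric information arrives.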


Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses, and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.

Meet Our Editors

Editor-in-Chief
Zhi-Quan (Tom) Luo
University of Minnesota