
IEEE Transactions on Signal Processing

Issue 11 • Date: June 1, 2013

  • [Front cover]

    Publication Year: 2013 , Page(s): C1
  • IEEE Transactions on Signal Processing publication information

    Publication Year: 2013 , Page(s): C2
  • Table of Contents

    Publication Year: 2013 , Page(s): 2741 - 2742
  • Table of Contents

    Publication Year: 2013 , Page(s): 2743 - 2744
  • Enhanced Adaptive Volterra Filtering by Automatic Attenuation of Memory Regions and Its Application to Acoustic Echo Cancellation

    Publication Year: 2013 , Page(s): 2745 - 2750
    Cited by:  Papers (5)

    This paper presents a novel scheme for nonlinear acoustic echo cancellation based on adaptive Volterra Filters with linear and quadratic kernels, which automatically prefers those diagonals contributing most to the output of the quadratic kernel with the goal of minimizing the overall mean-square error. In typical echo cancellation scenarios, not all coefficients will be equally relevant for the modeling of the nonlinear echo, but coefficients close to the main diagonal of the second-order kernel will describe most of the nonlinear echo distortions, such that not all diagonals need to be implemented. However, it is difficult to decide the most appropriate number of diagonals a priori, since there are many factors that influence this decision, such as the energy of the nonlinear echo, the shape of the room impulse response, or the step size used for the adaptation of kernel coefficients. Our proposed scheme incorporates adaptive scaling factors that control the influence of each group of adjacent diagonals contributing to the quadratic kernel output. An appropriate selection of these factors serves to emphasize or neglect diagonals of the model as required by the present situation. We provide adaptation rules for these factors based on previous works on combination of adaptive filters, and comprehensive simulations showing the reduced gradient noise reached by the new echo canceller.
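
    The diagonal-wise kernel structure described in the abstract can be sketched numerically. The following is a minimal, hypothetical illustration (not the authors' implementation): a second-order Volterra filter whose quadratic kernel is stored diagonal by diagonal, with one scaling factor per diagonal playing the role of the adaptive attenuation weights.

```python
import numpy as np

def volterra_out(x, h1, h2_diags, lam):
    """Second-order Volterra filter output at the latest sample.
    h2_diags[d] holds the quadratic-kernel coefficients on diagonal d,
    and lam[d] is a scaling factor that emphasizes or attenuates that
    diagonal (standing in for the paper's adaptive factors)."""
    N = len(h1)
    xv = np.asarray(x, float)[::-1][:N]   # newest N samples, newest first
    y = float(h1 @ xv)                    # linear-kernel contribution
    for d, (h_d, l_d) in enumerate(zip(h2_diags, lam)):
        # diagonal d pairs x[n-k] with x[n-k-d]
        y += l_d * sum(h * xv[k] * xv[k + d] for k, h in enumerate(h_d))
    return y

# toy coefficients: a 3-tap linear kernel and two quadratic diagonals
h1 = np.array([0.5, 0.0, 0.0])
h2 = [np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0])]
print(round(volterra_out([1.0, 2.0, 3.0], h1, h2, lam=[1.0, 0.5]), 6))  # 2.4
```

    Setting a diagonal's factor `lam[d]` to zero removes that diagonal from the model entirely, which is the effect the adaptive scheme achieves gradually.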

  • Optimal Wireless Communications With Imperfect Channel State Information

    Publication Year: 2013 , Page(s): 2751 - 2766
    Cited by:  Papers (1)

    This paper studies optimal transmission over wireless channels with imperfect channel state information available at the transmitter side in the context of point-to-point channels, multiuser orthogonal frequency division multiplexing, and random access. Terminals adapt transmitted power and coding mode to channel estimates in order to maximize expected throughput subject to average power constraints. To reduce the likelihood of packet losses due to the mismatch between channel estimates and actual channel values, a backoff function is further introduced to enforce the selection of more conservative coding modes. Joint determination of optimal power allocations and backoff functions is a nonconvex stochastic optimization problem with infinitely many variables that despite its lack of convexity is part of a class of problems with null duality gap. Exploiting the resulting equivalence between primal and dual problems, we show that optimal power allocations and channel backoff functions are uniquely determined by optimal dual variables. This affords considerable simplification because the dual problem is convex and finite dimensional. We further exploit this reduction in computational complexity to develop iterative algorithms to find optimal operating points. These algorithms implement stochastic subgradient descent in the dual domain and operate without knowledge of the probability distribution of the fading channels. Numerical results corroborate theoretical findings.
  • Sparse Spatial Spectral Estimation: A Covariance Fitting Algorithm, Performance and Regularization

    Publication Year: 2013 , Page(s): 2767 - 2777
    Cited by:  Papers (6)

    In this paper, the sparse spectrum fitting (SpSF) algorithm for the estimation of directions-of-arrival (DOAs) of multiple sources is introduced, and its asymptotic consistency and effective regularization under both asymptotic and finite sample cases are studied. Specifically, through the analysis of the optimality conditions of the method, we prove the asymptotic, in the number of snapshots, consistency of SpSF estimators of the DOAs and the received powers of uncorrelated sources in a sparse spatial spectra model. Along with this result, an explicit formula of the best regularization parameter of SpSF estimator with infinitely many snapshots is obtained. We then build on these results to investigate the problem of selecting an appropriate regularization parameter for SpSF with finite snapshots. An automatic selector of such regularization parameter is presented based on the formulation of an upper bound on the probability of correct support recovery of SpSF, which can be efficiently evaluated by Monte Carlo simulations. Simulation results illustrating the effectiveness and performance of this selector are provided, and the application of SpSF to direction-finding for correlated sources is discussed.
  • Cooperative Cognitive Networks: Optimal, Distributed and Low-Complexity Algorithms

    Publication Year: 2013 , Page(s): 2778 - 2790
    Cited by:  Papers (4)

    This paper considers the cooperation between a cognitive system and a primary system where multiple cognitive base stations (CBSs) relay the primary user's (PU) signals in exchange for more opportunity to transmit their own signals. The CBSs use amplify-and-forward (AF) relaying and coordinated beamforming to relay the primary signals and transmit their own signals. The objective is to minimize the overall transmit power of the CBSs given the rate requirements of the PU and the cognitive users (CUs). We show that the relaying matrices have unity rank and perform two functions: matched filter receive beamforming and transmit beamforming. We then develop two efficient algorithms to find the optimal solution. The first one has a linear convergence rate and is suitable for distributed implementation, while the second one enjoys superlinear convergence but requires centralized processing. Further, we derive the beamforming vectors for the linear conventional zero-forcing (CZF) and prior zero-forcing (PZF) schemes, which provide much simpler solutions. Simulation results demonstrate the improvement in terms of outage performance due to the cooperation between the primary and cognitive systems.
  • Time Varying Autoregressive Moving Average Models for Covariance Estimation

    Publication Year: 2013 , Page(s): 2791 - 2801
    Cited by:  Papers (3)

    We consider large scale covariance estimation using a small number of samples in applications where there is a natural ordering between the random variables. The two classical approaches to this problem rely on banded covariance and banded inverse covariance structures, corresponding to time varying moving average (MA) and autoregressive (AR) models, respectively. Motivated by this analogy to spectral estimation and the well known modeling power of autoregressive moving average (ARMA) processes, we propose a novel time varying ARMA covariance structure. Similarly to known results in the context of AR and MA, we address the completion of an ARMA covariance matrix from its main band, and its estimation based on random samples. Finally, we examine the advantages of our proposed methods using numerical experiments.
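
    As a toy illustration of the banded (MA-type) structure this line of work starts from (not the proposed ARMA estimator itself), one can band a sample covariance matrix by zeroing every entry more than k off-diagonals away from the main diagonal:

```python
import numpy as np

def band_matrix(S, k):
    """Keep only entries within k off-diagonals of the main diagonal
    (the classical MA-type banded covariance estimate); zero the rest."""
    p = S.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= k
    return S * mask

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))      # 50 samples of an 8-variate series
S = np.cov(X, rowvar=False)           # sample covariance
S_banded = band_matrix(S, k=2)        # banded (bandwidth-2) estimate
print(np.count_nonzero(S_banded))     # 34 entries survive the band
```

    Banding the inverse covariance instead would correspond to the AR-type model; the paper's contribution is to combine both structures.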

  • Analysis of Sum-Weight-Like Algorithms for Averaging in Wireless Sensor Networks

    Publication Year: 2013 , Page(s): 2802 - 2814
    Cited by:  Papers (1)

    Distributed estimation of the average value over a Wireless Sensor Network has recently received a lot of attention. Most papers consider single variable sensors and communications with feedback (e.g., peer-to-peer communications). However, in order to use the broadcast nature of the wireless channel efficiently, communications without feedback are advocated. To ensure convergence in this feedback-free case, the recently introduced Sum-Weight-like algorithms, which rely on two variables at each sensor, are a promising solution. In this paper, the convergence towards consensus on the average of the initial values is analyzed in depth. Furthermore, it is shown that the squared error decreases exponentially with time. In addition, a powerful algorithm relying on the Sum-Weight structure and taking into account the broadcast nature of the channel is proposed.
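
    The two-variable Sum-Weight idea can be sketched with a simple push-sum style gossip (a generic illustration, not the specific algorithm proposed in the paper): each node keeps a pair (s, w), pushes half of its pair to a random neighbor without any feedback, and the ratio s/w at every node converges to the network average.

```python
import random

def push_sum(values, neighbors, rounds=200, seed=0):
    """Feedback-free averaging: node i holds (s[i], w[i]); the total of
    each variable is conserved, so s[i]/w[i] -> average of `values`."""
    rng = random.Random(seed)
    n = len(values)
    s = list(values)          # sum variables, initialized to measurements
    w = [1.0] * n             # weight variables, initialized to 1
    for _ in range(rounds):
        new_s, new_w = [0.0] * n, [0.0] * n
        for i in range(n):
            j = rng.choice(neighbors[i])   # push half to one neighbor
            new_s[i] += s[i] / 2; new_w[i] += w[i] / 2
            new_s[j] += s[i] / 2; new_w[j] += w[i] / 2
        s, w = new_s, new_w
    return [si / wi for si, wi in zip(s, w)]

# fully connected 4-node example; true average is 3.0
vals = [1.0, 2.0, 3.0, 6.0]
nbrs = [[j for j in range(4) if j != i] for i in range(4)]
print(push_sum(vals, nbrs))  # each entry ≈ 3.0
```

    The exponential decrease of the squared error analyzed in the paper is visible here: the per-node ratios contract towards the average at a geometric rate.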

  • Variational Bayesian Algorithm for Quantized Compressed Sensing

    Publication Year: 2013 , Page(s): 2815 - 2824
    Cited by:  Papers (1)

    Compressed sensing (CS) concerns the recovery of high-dimensional signals from their low-dimensional linear measurements under a sparsity prior, and digital quantization of the measurement data is inevitable in practical implementations of CS algorithms. In the existing literature, the quantization error is typically modeled as additive noise, and the multi-bit and 1-bit quantized CS problems are dealt with separately using different treatments and procedures. In this paper, a novel variational Bayesian inference based CS algorithm is presented, which unifies multi-bit and 1-bit CS processing and is applicable to various cases of noiseless/noisy environments and unsaturated/saturated quantizers. By decoupling the quantization error from the measurement noise, the quantization error is modeled as a random variable and estimated jointly with the signal being recovered. This novel characterization of the quantization error results in superior performance of the algorithm, which is demonstrated by extensive simulations in comparison with state-of-the-art methods for both multi-bit and 1-bit CS problems.
  • Estimation of FARIMA Parameters in the Case of Negative Memory and Stable Noise

    Publication Year: 2013 , Page(s): 2825 - 2835

    In this paper, we extend a method of estimation of parameters of the fractional autoregressive integrated moving average (FARIMA) process with stable noise to the case of negative memory parameter d. We construct an estimator that is a modification of that of Kokoszka and Taqqu and prove its consistency for -1/2 < d < 0. We show that the estimator is accurate and possesses a low variance for FARIMA time series with both light- and heavy-tailed noises. It is illustrated by means of Monte Carlo simulations. Finally, we compare the introduced method of estimation of d with classical methods such as R/S, modified R/S, and the variance method. The results show that the proposed estimator is vastly superior to them.
  • Achieving Asymptotic Efficient Performance for Squared Range and Squared Range Difference Localizations

    Publication Year: 2013 , Page(s): 2836 - 2849
    Cited by:  Papers (3)

    The estimation of a source location directly from range or range difference measurements is difficult and requires a numerical solution, owing to the highly nonlinear relationship between the measurements and the unknown. A computationally efficient, non-iterative algebraic solution can be obtained by squaring the measurements before solving for the unknown. However, a recent study has shown that such a solution is suboptimal in reaching the CRLB performance, and the localization accuracy can be significantly worse in some localization geometries. This paper demonstrates that when range weighting factors are introduced to the squared measurements, the resulting solution is able to reach the CRLB accuracy. Both the squared range and squared range difference cases are considered, and the mean-square error (MSE) and the bias of the resulting solutions are derived. The asymptotic efficiency of the proposed cost functions is proven theoretically and validated by simulations. The effects of the range weighting factors on the localization performance under different sensor numbers, noise correlations, and localization geometries are examined. Introducing range weightings to the squared range measurements increases the bias, but the effect on the MSE is negligible. Having range weightings in the squared range difference measurements improves both the MSE and the bias.
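
    The squaring step that makes the problem algebraically solvable can be illustrated as follows (an unweighted sketch, without the paper's range weighting factors): expanding ||x - a_i||^2 = r_i^2 and subtracting the first anchor's equation cancels the quadratic term ||x||^2, leaving a linear least-squares problem in the source position x.

```python
import numpy as np

def srls(anchors, ranges):
    """Squared-range least squares: difference each squared-range
    equation against the first anchor's to obtain a linear system."""
    a = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (a[1:] - a[0])
    b = (r[0]**2 - r[1:]**2
         + np.sum(a[1:]**2, axis=1) - np.sum(a[0]**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# four anchors at the corners of a square, noiseless ranges to (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
src = np.array([3.0, 4.0])
ranges = [np.linalg.norm(src - np.array(p)) for p in anchors]
print(srls(anchors, ranges))  # ≈ [3. 4.]
```

    With noisy ranges, this plain solution exhibits the suboptimality the paper describes; its weighted variant is what restores CRLB-level accuracy.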

  • Identifying Infection Sources and Regions in Large Networks

    Publication Year: 2013 , Page(s): 2850 - 2865
    Cited by:  Papers (5)

    Identifying the infection sources in a network, including the index cases that introduce a contagious disease into a population network, the servers that inject a computer virus into a computer network, or the individuals who started a rumor in a social network, plays a critical role in limiting the damage caused by the infection through timely quarantine of the sources. We consider the problem of estimating the infection sources and the infection regions (subsets of nodes infected by each source) in a network, based only on knowledge of which nodes are infected and their connections, and when the number of sources is unknown a priori. We derive estimators for the infection sources and their infection regions based on approximations of the infection sequences count. We prove that if there are at most two infection sources in a geometric tree, our estimator identifies the true source or sources with probability going to one as the number of infected nodes increases. When there are more than two infection sources, and when the maximum possible number of infection sources is known, we propose an algorithm with quadratic complexity to estimate the actual number and identities of the infection sources. Simulations on various kinds of networks, including tree networks, small-world networks and real world power grid networks, and tests on two real data sets are provided to verify the performance of our estimators.
  • Multi-Input Single-Output Nonlinear Blind Separation of Binary Sources

    Publication Year: 2013 , Page(s): 2866 - 2873

    The problem of blindly separating multiple binary sources from a single nonlinear mixture is addressed through a novel clustering approach without the use of any optimization procedure. The method is based on the assumption that the source probabilities are asymmetric in which case the output probability distribution can be expressed as a linear mixture of the sources. We are then able to solve the problem by using a known linear Multiple-Input Single-Output (MISO) blind separation method. The overall procedure is very fast and, in theory, it works for any number of independent binary sources and for a wide range of nonlinear functions. In practice, the accuracy of the method depends on the estimation accuracy of the output probabilities and the cluster centers. It can be quite sensitive to noise especially as the number of sources increases or the number of data samples is reduced. However, in our experiments we have been able to demonstrate successful separation of up to four sources.
  • Dual-Domain Adaptive Beamformer Under Linearly and Quadratically Constrained Minimum Variance

    Publication Year: 2013 , Page(s): 2874 - 2886
    Cited by:  Papers (3)

    In this paper, a novel adaptive beamforming algorithm is proposed under a linearly and quadratically constrained minimum variance (LQCMV) beamforming framework, based on a dual-domain projection approach that can efficiently implement a quadratic-inequality constraint with a possibly rank-deficient positive semi-definite matrix, and the properties of the proposed algorithm are analyzed. As an application, relaxed zero-forcing (RZF) beamforming is presented which adopts a specific quadratic constraint that bounds the power of residual interference in the beamformer output with the aid of interference-channel side-information available typically in wireless multiple-access systems. The dual-domain projection in this case plays a role in guiding the adaptive algorithm towards a better direction to minimize the interference and noise, leading to considerably faster convergence. The robustness issue against channel mismatch and ill-posedness is also addressed. Numerical examples show that the efficient use of interference side-information brings considerable gains.
  • On Projection Matrix Optimization for Compressive Sensing Systems

    Publication Year: 2013 , Page(s): 2887 - 2898
    Cited by:  Papers (7)

    This paper considers the problem of designing the projection matrix Φ for a compressive sensing (CS) system in which the dictionary Ψ is assumed to be given. The optimal projection matrix design is formulated in terms of finding those Φ such that the Frobenius norm of the difference between the Gram matrix of the equivalent dictionary ΦΨ and the identity matrix is minimized. A class of the solutions is derived in a closed-form, which is a generalization of the existing results. More interestingly, it is revealed that this solution set is characterized by an arbitrary orthonormal matrix. This freedom is then used to further enhance the performance of the CS system by minimizing the coherence between the atoms of the equivalent dictionary. An alternating minimization-based algorithm is proposed for solving the corresponding minimization problem. Experiments are carried out and simulations show that the projection matrix obtained by the proposed approach significantly improves the signal recovery accuracy of the CS system and outperforms those by existing algorithms.
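
    The two quantities the design trades off can be evaluated directly: form the equivalent dictionary ΦΨ, normalize its atoms, and measure both the Frobenius distance of its Gram matrix from the identity and the mutual coherence (the largest off-diagonal Gram entry). A minimal evaluation sketch, with arbitrary matrix sizes and a random projection as baseline:

```python
import numpy as np

def gram_objective(Phi, Psi):
    """Return (||G - I||_F, mutual coherence) for the equivalent
    dictionary D = Phi @ Psi with unit-norm columns, G = D^T D."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0)     # unit-norm atoms
    G = D.T @ D
    off = G - np.eye(G.shape[1])
    return np.linalg.norm(off, 'fro'), np.max(np.abs(off))

rng = np.random.default_rng(1)
Psi = rng.standard_normal((64, 80))       # given (overcomplete) dictionary
Phi = rng.standard_normal((16, 64))       # random projection baseline
fro, mu = gram_objective(Phi, Psi)
print(f"||G - I||_F = {fro:.2f}, coherence = {mu:.2f}")
```

    An optimized Φ of the kind the paper derives would reduce both numbers relative to this random baseline.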

  • Transmit Optimization With Improper Gaussian Signaling for Interference Channels

    Publication Year: 2013 , Page(s): 2899 - 2913
    Cited by:  Papers (13)

    This paper studies the achievable rates of Gaussian interference channels with additive white Gaussian noise (AWGN), when improper or circularly asymmetric complex Gaussian signaling is applied. For the Gaussian multiple-input multiple-output interference channel (MIMO-IC) with the interference treated as Gaussian noise, we show that the user's achievable rate can be expressed as a summation of the rate achievable by the conventional proper or circularly symmetric complex Gaussian signaling in terms of the users' transmit covariance matrices, and an additional term, which is a function of both the users' transmit covariance and pseudo-covariance matrices. The additional degrees of freedom in the pseudo-covariance matrix, which is conventionally set to be zero for the case of proper Gaussian signaling, provide an opportunity to further improve the achievable rates of Gaussian MIMO-ICs by employing improper Gaussian signaling. To this end, this paper proposes widely linear precoding, which efficiently maps proper information-bearing signals to improper transmitted signals at each transmitter for any given pair of transmit covariance and pseudo-covariance matrices. In particular, for the case of two-user Gaussian single-input single-output interference channel (SISO-IC), we propose a joint covariance and pseudo-covariance optimization algorithm with improper Gaussian signaling to achieve the Pareto-optimal rates. By utilizing the separable structure of the achievable rate expression, an alternative algorithm with separate covariance and pseudo-covariance optimization is also proposed, which guarantees the rate improvement over conventional proper Gaussian signaling.
  • Point-Process Nonlinear Models With Laguerre and Volterra Expansions: Instantaneous Assessment of Heartbeat Dynamics

    Publication Year: 2013 , Page(s): 2914 - 2926
    Cited by:  Papers (7)

    In recent decades, mathematical modeling and signal processing techniques have played an important role in the study of cardiovascular control physiology and heartbeat nonlinear dynamics. In particular, nonlinear models have been devised for the assessment of the cardiovascular system by accounting for short-memory second-order nonlinearities. In this paper, we introduce a novel inverse Gaussian point process model with Laguerre expansion of the nonlinear Volterra kernels. Within the model, the second-order nonlinearities also account for the long-term information given by the past events of the nonstationary non-Gaussian time series. In addition, the mathematical link to an equivalent cubic input-output Wiener-Volterra model allows for a novel instantaneous estimation of the dynamic spectrum, bispectrum and trispectrum of the considered inter-event intervals. The proposed framework is tested with synthetic simulations and two experimental heartbeat interval datasets. Applications on further heterogeneous datasets such as milling inserts, neural spikes, gait from short walks, and geyser geologic events are also reported. Results show that our model improves on previously developed models and, at the same time, it is able to provide a novel instantaneous characterization and tracking of the inherent nonlinearity of heartbeat dynamics.
  • Modeling and State Estimation for Dynamic Systems With Linear Equality Constraints

    Publication Year: 2013 , Page(s): 2927 - 2939
    Cited by:  Papers (4)

    The problem of modeling and estimation for linear equality constrained (LEC) systems is considered. The exact constrained dynamic model usually is not readily available or is too complicated, and hence in many studies an auxiliary dynamic model is employed in which the state does not necessarily obey the constraint strictly. Based on the understanding that the constraints, as prior information about the state, should be incorporated into the dynamics modeling, an LEC dynamic model (LECDM) is constructed first. The model optimally fuses the linear equality constraint (LEC) and the auxiliary dynamics. Some of its superior properties are presented. Next, the linear minimum mean squared error (LMMSE) estimate of the LEC state is proved to satisfy the constraint. The LMMSE estimator for linear systems, called the LEC Kalman filter (LECKF), and two approximate LMMSE estimators for nonlinear systems are presented. The LECKF is compared with other constrained estimators, and a sufficient condition is also provided under which the estimate projection method mathematically equals the LECKF. Furthermore, extensions of the LECDM for the LEC systems with uncertain or unknown constraint parameters are discussed. Finally, illustrative examples are provided to show the effectiveness and efficiency of the LECKF and to verify the theoretical results given in the paper.
  • A High-Throughput Trellis-Based Layered Decoding Architecture for Non-Binary LDPC Codes Using Max-Log-QSPA

    Publication Year: 2013 , Page(s): 2940 - 2951
    Cited by:  Papers (5)

    This paper presents a high-throughput decoder architecture for non-binary low-density parity-check (LDPC) codes, where the q-ary sum-product algorithm (QSPA) in the log domain is considered. We reformulate the check-node processing such that an efficient trellis-based implementation can be used, where forward and backward recursions are involved. In order to increase the decoding throughput, bidirectional forward-backward recursion is used. In addition, layered decoding is adopted to reduce the number of iterations based on a given performance. Finally, a message compression technique is used to reduce the storage requirements and hence the area. Using a 90-nm CMOS process, a 32-ary (837,726) LDPC decoder was implemented to demonstrate the proposed techniques and architecture. This decoder can achieve a throughput of 233.53 Mb/s at a clock frequency of 250 MHz based on the post-layout results. Compared to the decoders presented in previous literature, the proposed decoder can achieve the highest throughput based on a similar/better error-rate performance.
  • A Fixed Point Iterative Method for Low n-Rank Tensor Pursuit

    Publication Year: 2013 , Page(s): 2952 - 2962
    Cited by:  Papers (1)

    The linearly constrained tensor n-rank minimization problem is an extension of matrix rank minimization. It is applicable in many fields that use multi-way data, such as data mining, machine learning and computer vision. In this paper, we adapt an operator splitting technique and a convex relaxation technique to transform the original problem into a convex, unconstrained optimization problem, and propose a fixed point iterative method to solve it. We also prove the convergence of the method under some assumptions. By using a continuation technique, we propose a fast and robust algorithm for solving the tensor completion problem, called FP-LRTC (Fixed Point for Low n-Rank Tensor Completion). Our numerical results on randomly generated and real tensor completion problems demonstrate that this algorithm is effective, especially for “easy” problems.
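
    The n-rank being minimized is simply the tuple of matrix ranks of the tensor's mode-n unfoldings. A quick check on a rank-1 tensor (a hypothetical illustration, not the FP-LRTC algorithm itself):

```python
import numpy as np

def n_ranks(T):
    """n-rank of a tensor: the tuple of ranks of its mode-n unfoldings,
    where the mode-n unfolding flattens all modes except mode n."""
    return tuple(
        int(np.linalg.matrix_rank(
            np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1))))
        for n in range(T.ndim))

# outer product of three vectors -> every unfolding has rank 1
a, b, c = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 3.0)
T = np.einsum('i,j,k->ijk', a, b, c)
print(n_ranks(T))  # (1, 1, 1)
```

    The convex relaxation in the paper replaces each of these matrix ranks with the corresponding nuclear norm of the unfolding.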

  • MIMO Radar Transmit Beampattern Design With Ripple and Transition Band Control

    Publication Year: 2013 , Page(s): 2963 - 2974
    Cited by:  Papers (4)

    The waveform diversity of multiple-input-multiple-output (MIMO) radar systems offers many advantages in comparison to the phased-array counterpart. One of the advantages is that it allows one to design MIMO radar with flexible transmit beampatterns, which has several useful applications. Many researchers have proposed solutions to this problem over the past decade. However, these designs pay little attention to certain aspects of the performance such as the ripples within the energy focusing section, the attenuation of the sidelobes, the width of the transition band, the angle step-size, and the required number of transmit antennas. In this paper, we first propose several methods that can indirectly or directly control the ripple levels within the energy focusing section and the transition bandwidth. These methods are based on existing problem formulations. More importantly, we reformulate the design as a feasibility problem (FP). Such a formulation enables a more flexible and efficient design that achieves the most preferable beampatterns with the least system cost. Using this formulation, an empirical MIMO radar beampattern formula is obtained. This MIMO radar beampattern formula is similar to Kaiser's formula in conventional finite-impulse-response (FIR) filter design. The performances of the proposed methods and formulations are evaluated via numerical examples.
  • Collaborative Human Decision Making With Random Local Thresholds

    Publication Year: 2013 , Page(s): 2975 - 2989
    Cited by:  Papers (1)

    This paper considers a collaborative human decision making framework in which local decisions made at the individual agents are combined at a moderator to make the final decision. More specifically, we consider a binary hypothesis testing problem in which a group of n people makes individual decisions on which hypothesis is true based on a threshold based scheme and the thresholds are modeled as random variables. We assume that, in general, the decisions are not received by the moderator perfectly and the communication errors are modeled via a binary asymmetric channel. Assuming that the moderator does not have the knowledge of exact values of thresholds used by the individual decision makers but has probabilistic information, the performance in terms of the probability of error of the likelihood ratio based decision fusion scheme is derived when there are two agents in the decision making system. We show that the statistical parameters of the threshold distributions have optimal set of values which result in the minimum probability of error and we analytically derive these optimal values under certain conditions. We further provide detailed performance comparison to the case where the likelihood ratio based decision fusion is performed at the moderator with exact knowledge of the thresholds used by individual agents. For an arbitrary number of human agents n (n > 2), we derive the performance of decision fusion with the majority rule using certain approximations when the individual thresholds are modeled as random variables.
  • Algorithms and Bounds for Dynamic Causal Modeling of Brain Connectivity

    Publication Year: 2013 , Page(s): 2990 - 3001

    Recent advances in neurophysiology have led to the development of complex dynamical models that describe the connections and causal interactions between different regions of the brain. These models are able to accurately mimic the event-related potentials observed by EEG/MEG measurement systems, and are considered to be key components for understanding brain functionality. In this paper, we focus on a class of nonlinear dynamic causal models (DCM) that are described by a set of connectivity parameters. In practice, the DCM parameters are inferred using data obtained by an EEG or MEG sensor array in response to a certain event or stimulus, and then used to analyze the strength and direction of the causal interactions between different brain regions. The usefulness of these parameters in this process will depend on how accurately they can be estimated, which in turn will depend on noise, the sampling rate, number of data samples collected, the accuracy of the source localization and reconstruction steps, etc. The goals of this paper are to present several algorithms for DCM parameter estimation, derive Cramér-Rao performance bounds for the estimates, and compare the accuracy of the algorithms against the theoretical performance limits under a variety of circumstances. The influence of noise and sampling rate will be explicitly investigated.

Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.


Meet Our Editors

Editor-in-Chief
Sergios Theodoridis
University of Athens