IEEE Transactions on Signal Processing

Issue 3 • March 2003

Displaying Results 1 - 25 of 29
  • Spatial modulation over partially coherent multiple-input/multiple-output channels

    Page(s): 794 - 804

    Communication over multiple-input/multiple-output (MIMO) channels of arbitrary coherence is considered in light of a mean square estimation error (MSEE) criterion. Earlier work in the field has focused on fully coherent channels and determined that use of a singular value decomposition (SVD) of the channel transfer function matrix can realize the capacity of the MIMO channel. More recently, research has shown that the use of arbitrary orthonormal channel excitation vectors can maximize expected capacity over fully incoherent Rayleigh fading MIMO channels. Partially coherent channels have generally been examined only in terms of their degrading influence on capacity. In this discussion, channel excitation techniques are proposed that minimize an MSEE criterion over an ensemble of MIMO channels of arbitrary coherence. The algorithms rely on only the second- and fourth-order moments of the channel transfer function. Two experiments were conducted to examine the new strategies. Using measured MIMO channel transfer function ensembles, one from an underwater acoustic channel and others from RF wireless channels, the performance of the strategies is compared. The new techniques outperform orthonormal signaling based on SINR or capacity metrics while requiring substantially less channel feedback than needed by a channel decomposition approach.

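The channel-decomposition baseline this abstract contrasts against can be illustrated with a short numpy sketch. This is not the paper's MSEE-based excitation method; it is only the generic SVD signaling idea for a known channel matrix, with all dimensions and values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # 4x4 MIMO channel

# SVD channel decomposition: transmitting along the right singular vectors
# (columns of V) and rotating the receiver by U^H turns the matrix channel
# into parallel scalar subchannels with gains equal to the singular values.
U, s, Vh = np.linalg.svd(H)

x = rng.standard_normal(4)       # symbols for the parallel subchannels
tx = Vh.conj().T @ x             # precode: excite the channel with V @ x
rx = U.conj().T @ (H @ tx)       # receive-side rotation by U^H

# The effective channel U^H H V is diagonal, so rx[i] == s[i] * x[i]
effective = U.conj().T @ H @ Vh.conj().T
```

The feedback cost the abstract alludes to comes from the transmitter needing the full matrix V, which motivates the lower-feedback moment-based strategies proposed in the paper.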
  • A fundamental theorem of algebra, spectral factorization, and stability of two-dimensional systems

    Page(s): 853 - 863

    In his doctoral dissertation in 1797, Gauss proved the fundamental theorem of algebra, which states that any one-dimensional (1-D) polynomial of degree n with complex coefficients can be factored into a product of n polynomials of degree 1. Since then, it has been an open problem to factorize a two-dimensional (2-D) polynomial into a product of basic polynomials. Particularly for the last three decades, this problem has become more important in a wide range of signal and image processing applications, such as 2-D filter design and 2-D wavelet analysis. In this paper, a fundamental theorem of algebra for 2-D polynomials is presented. In applications such as 2-D signal and image processing, it is often necessary to find a 2-D spectral factor from a given 2-D autocorrelation function. In this paper, a 2-D spectral factorization method is presented through cepstral analysis. In addition, some algorithms are proposed to factorize a 2-D spectral factor further. These are applied to deriving stability criteria of 2-D filters and nonseparable 2-D wavelets and to solving partial difference equations and partial differential equations.

  • Oversampled linear-phase perfect reconstruction filterbanks: theory, lattice structure and parameterization

    Page(s): 744 - 759

    The paper presents the theory, lattice structure, and parameterization for a general class of P-channel oversampled linear-phase perfect reconstruction filterbanks (OLPPRFBs) - systems with sampling factor M (P≥M) in which each filter has length L=KM (K≥1). For these OLPPRFBs, the necessary existence conditions on the number of symmetric filters, nβ, and antisymmetric filters, nα, (i.e., symmetry polarity) are first investigated. VLSI-friendly lattice structures are then developed for two types of OLPPRFBs, type I systems (nβ=nα) and type II systems (nβ≠nα). The completeness and minimality of each type of lattice are also analyzed. Compared with existing work, the proposed lattices are the most general and efficient ones for OLPPRFBs. In addition, the sufficiency of the existence conditions is verified through the lattice structures. Next, lifting-based structures are proposed to parameterize a left invertible matrix and all of its left inverses, which leads to unconstrained optimization as well as robust implementation of OLPPRFBs. Finally, several design examples are presented to confirm the validity of the theory and demonstrate the versatility of synthesis filterbanks in the oversampled system.

  • Peak value estimation of bandlimited signals from their samples, noise enhancement, and a local characterization in the neighborhood of an extremum

    Page(s): 771 - 780

    The paper addresses the problem of estimating the peak value of bandlimited signals from their samples with and without oversampling. This problem has significant relevance to orthogonal frequency-division multiplexing (OFDM) signal processing and system design. In particular, an upper bound on the peak value is established given the peak value of the samples and the oversampling rate. Moreover, it is shown that the bound is sharp for all practical oversampling rates by constructing bandlimited signals that attain it. The proof also provides a local characterization of bandlimited signals in the neighborhood of an extremum. A different analysis examines the effect of small errors in the samples. It is shown that oversampling can provide robust recovery in the sense that small errors in the samples lead to small errors in the reconstructed signal. Again, an upper bound is derived relating the peak error in the samples to the peak error in the signal. Furthermore, both problems are shown to be coupled and put in a unifying context. The bounds are compared and applied to problems concerning OFDM.

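The core phenomenon, that critically spaced samples can underestimate the true peak of a bandlimited signal while an oversampled grid approaches it, is easy to demonstrate on an OFDM waveform. A hedged sketch (the subcarrier count, QPSK mapping, and oversampling factor are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                  # number of OFDM subcarriers
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)  # QPSK

def ofdm_time_samples(X, L):
    """Time-domain OFDM samples at oversampling factor L via zero-padded IFFT."""
    N = len(X)
    Xpad = np.zeros(N * L, dtype=complex)
    Xpad[:N // 2] = X[:N // 2]          # zero-pad in the middle of the spectrum
    Xpad[-N // 2:] = X[N // 2:]
    return np.fft.ifft(Xpad) * L        # rescale so amplitudes match L = 1

peak1 = np.max(np.abs(ofdm_time_samples(X, 1)))   # critically sampled peak
peak8 = np.max(np.abs(ofdm_time_samples(X, 8)))   # oversampled, near-continuous peak
```

Because the oversampled grid contains the critically spaced instants, `peak8 >= peak1` always; the paper's contribution is the converse direction, bounding the continuous-time peak from the sampled one.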
  • High-speed and low-power split-radix FFT

    Page(s): 864 - 874

    This paper presents a novel split-radix fast Fourier transform (SRFFT) pipeline architecture design. A mapping methodology has been developed to obtain a regular and modular pipeline for the split-radix algorithm. The pipeline is repartitioned to balance the latency between complex multiplication and butterfly operation by using carry-save addition. The number of complex multipliers is minimized via a bit-inverse and bit-reverse data scheduling scheme. One can also apply the design methodology described here to obtain regular and modular pipelines for the other Cooley-Tukey-based algorithms. For an N (= 2^n)-point FFT, the requirements are log_4 N - 1 multipliers, 4 log_4 N complex adders, and memory of size N - 1 complex words for data reordering. The initial latency is N + 2·log_2 N clock cycles. On the average, it completes an N-point FFT in N clock cycles. From post-layout simulations, the maximum clock rate is 150 MHz (75 MHz) at 3.3 V (2.7 V), 25°C (100°C) using a 0.35-μm cell library from Avant!. A 64-point SRFFT pipeline design has been implemented and consumes 507 mW at 100 MHz, 3.3 V, and 25°C. Compared with a radix-2^2 FFT implementation, the power consumption is reduced by 15%, whereas the speed is improved by 14.5%.

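The split-radix decomposition the architecture pipelines (one half-size DFT on the even samples plus two quarter-size DFTs on the odd samples) can be sketched in software. This recursive numpy version shows the algorithm only, not the paper's pipeline hardware:

```python
import numpy as np

def srfft(x):
    """Recursive split-radix FFT sketch; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x.copy()
    if N == 2:
        return np.array([x[0] + x[1], x[0] - x[1]])
    # One half-size DFT (even samples) plus two quarter-size DFTs (odd
    # samples): the "split" that cuts the multiplication count below radix-2.
    u = srfft(x[0::2])
    z = srfft(x[1::4])
    zp = srfft(x[3::4])
    k = np.arange(N // 4)
    w = np.exp(-2j * np.pi * k / N)       # twiddles W^k
    w3 = np.exp(-6j * np.pi * k / N)      # twiddles W^(3k)
    t = w * z + w3 * zp
    s = -1j * (w * z - w3 * zp)
    X = np.empty(N, dtype=complex)
    X[k] = u[k] + t
    X[k + N // 2] = u[k] - t
    X[k + N // 4] = u[k + N // 4] + s
    X[k + 3 * N // 4] = u[k + N // 4] - s
    return X

rng = np.random.default_rng(0)
data = rng.standard_normal(64) + 1j * rng.standard_normal(64)
err = np.max(np.abs(srfft(data) - np.fft.fft(data)))
```

Only the N/4 twiddle pairs (W^k, W^(3k)) require general complex multiplications, which is the irregular structure the paper's mapping methodology turns into a regular pipeline.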
  • On the equivalence of set-theoretic and maxent MAP estimation

    Page(s): 698 - 713

    We establish an equivalence between two conceptually different methods of signal estimation under modeling uncertainty, viz., set-theoretic (ST) estimation and maximum entropy (maxent) MAP estimation. The first method assumes constraints on the signal to be estimated, and the second assumes constraints on a probability distribution for the signal. We provide broad conditions under which these two estimation paradigms produce the same signal estimate. We also show how the maxent formalism can be used to provide solutions to three important problems: how to select sizes of constraint sets in ST estimation (the analysis highlights the role of shrinkage); how to choose the values of parameters in regularized restoration when using multiple regularization functionals; how to trade off model complexity and goodness of fit in a model selection problem.

  • Coherent 3-D echo detection for ultrasonic imaging

    Page(s): 592 - 601

    This paper presents an ultrasonic processing set-up by which three-dimensional (3-D) echo location can be computed more efficiently than by one-dimensional (1-D) methods. This set-up contains three successive tasks. The first one deals with a model for representing echoes. This model is based on a generic wavelet, which is a cosine function with variable amplitude and phase. To estimate the wavelet, we propose to use a spline representation of its complex envelope in order to reduce the amplitude and phase dimension. The second task deals with 1-D detection and is conducted within a Bayesian framework. Using an A-scan decomposition on a family of wavelets resulting from the first task, we propose a specific procedure to carry out constrained least-squares in order to alleviate the bias inherent in this criterion. The third task deals with the spatial regularization of the detected echo location field resulting from the second task. We propose a Bayes-Markov model for removing isolated wrong detections and simultaneously improving, under a regularization constraint, the spatial location of the detected echoes. In fact, this model deals with the general problem of nonorganized point approximation. All the proposed techniques are illustrated on real ultrasonic data.

  • Subset selection in noise based on diversity measure minimization

    Page(s): 760 - 770

    We develop robust methods for subset selection based on the minimization of diversity measures. A Bayesian framework is used to account for noise in the data, and a maximum a posteriori (MAP) estimation procedure leads to an iterative algorithm that is a regularized version of the focal underdetermined system solver (FOCUSS) algorithm. The convergence of the regularized FOCUSS algorithm is established, and it is shown that the stable fixed points of the algorithm are sparse. We investigate three different criteria for choosing the regularization parameter: quality of fit, a sparsity criterion, and the L-curve. The L-curve method, as applied to the problem of subset selection, is found not to be robust, and we propose a novel modified L-curve procedure that solves this problem. Each of the regularized FOCUSS algorithms is evaluated through simulation of a detection problem, and the results are compared with those obtained using a sequential forward selection algorithm termed orthogonal matching pursuit (OMP). In each case, the regularized FOCUSS algorithm is shown to be superior to the OMP in noisy environments.

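A common form of the regularized FOCUSS recursion (reweighted minimum-norm steps with Tikhonov damping) can be sketched as follows. This is a hedged illustration, not necessarily the paper's exact update or its regularization-parameter selection rules; the dictionary, sparsity level, and damping value are arbitrary demo choices:

```python
import numpy as np

def regularized_focuss(A, y, lam=1e-3, iters=30):
    """Sketch of a regularized FOCUSS iteration: x <- W (A W)^+ y with
    affine-scaling weights W = diag(|x|) and Tikhonov damping lam."""
    m, n = A.shape
    x = np.ones(n)                       # neutral initial weighting
    for _ in range(iters):
        W = np.diag(np.abs(x))           # reweighting from the current iterate
        AW = A @ W
        # Damped minimum-norm step: lam limits noise amplification
        x = W @ AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(m), y)
    return x

# Noisy sparse-recovery demo with a 20 x 50 random dictionary
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[5, 17, 40]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = regularized_focuss(A, y)
n_large = int(np.sum(np.abs(x_hat) > 0.5))   # iterates are driven toward sparsity
```

The reweighting shrinks small coefficients geometrically, which is why the stable fixed points are sparse; the damping term is what the paper's three parameter-selection criteria tune.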
  • Locally stationary covariance and signal estimation with macrotiles

    Page(s): 614 - 627

    A macrotile estimation algorithm is introduced to estimate the covariance of locally stationary processes. A macrotile algorithm uses a penalized method to optimize the partition of the space in orthogonal subspaces, and the estimation is computed with a projection operator. It is implemented by searching for a best basis among a dictionary of orthogonal bases and by constructing an adaptive segmentation of this basis to estimate the covariance coefficients. The macrotile algorithm provides a consistent estimation of the covariance of locally stationary processes, using a dictionary of local cosine bases. This estimation is computed with a fast algorithm. Macrotile algorithms apply to other estimation problems such as the removal of additive noise in signals. This simpler problem is used as an intuitive guide to better understand the case of covariance estimation. Examples of removal of white noise from sounds illustrate the results.

  • Adaptive transmitter optimization for blind and group-blind multiuser detection

    Page(s): 825 - 838

    The recently developed linear subspace-based blind and group-blind multiuser detectors represent a robust and efficient adaptive multiuser detection technique for code-division multiple-access (CDMA) systems. In this paper, we consider adaptive transmitter optimization strategies for CDMA systems operating in fading multipath environments in which these detectors are employed. We make use of more recent results on the analytical performance of these blind and group-blind receivers in the design and analysis of the transmitter optimization techniques. In particular, we develop a maximum-eigenvector-based method of optimizing spreading codes for given channel conditions and a utility-based power control algorithm for CDMA systems with blind or group-blind multiuser detection. We also design a receiver incorporating joint optimization of spreading codes and transmitter power by combining these algorithms in an iterative configuration. We will see that the utility-based power control algorithm allows us to efficiently set performance goals through utility functions for users in heterogeneous traffic environments and that spreading code optimization allows us to achieve these goals with lower transmit power. The signal processing algorithms presented here maintain the blind (or group-blind) nature of the receiver and are distributed, i.e., all power and spreading code adjustments can be made using only locally available information.

  • Coordination failure as a source of congestion in information networks

    Page(s): 875 - 885

    Coordination failure, or agents' uncertainty about the action of other agents, may be an important source of congestion in large decentralized systems. The El Farol problem provides a simple paradigm for congestion and coordination problems that may arise with overutilization of the Internet. This paper reviews the El Farol problem and surveys previous approaches, which typically involve complex deterministic learning algorithms that exhibit chaotic-like trajectories. This paper recasts the problem in a stochastic framework and derives a simple adaptive strategy that has intriguing optimization properties: a large collection of decentralized decision makers, each acting in their own best interests and with limited knowledge, converge to a solution that (optimally) solves a complex congestion and social coordination problem. A variation in which agents are allowed access to full information is not nearly as successful. The algorithm, which can be viewed as a kind of habit formation, is analyzed using a weak convergence approach, and simulations illustrate the major results.

  • Using the matrix pencil method to solve phase unwrapping

    Page(s): 886 - 888

    This correspondence presents a phase unwrapping (PU) algorithm based on the matrix pencil (MP) method. A brief discussion is given on the relationship between PU and instantaneous frequency estimation. The PU algorithm and its properties are presented in detail. Results show a significant improvement in PU with respect to previous algorithms.

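For context, the classical sample-to-sample unwrapper that such algorithms improve upon can be sketched in a few lines (this is the conventional baseline, not the correspondence's matrix pencil method):

```python
import numpy as np

def unwrap_phase(phi):
    """Classical 1-D phase unwrapping: add multiples of 2*pi so that
    consecutive samples never jump by more than pi."""
    out = np.array(phi, dtype=float)
    for k in range(1, len(out)):
        d = out[k] - out[k - 1]
        out[k] -= 2 * np.pi * np.round(d / (2 * np.pi))
    return out

# A linearly increasing phase wrapped into (-pi, pi] is recovered exactly,
# because the true per-sample increment (0.3 rad) is below pi
true_phase = 0.3 * np.arange(200)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_phase(wrapped)
```

This rule fails whenever the true phase increment exceeds pi (fast instantaneous frequency), which is the regime where model-based approaches such as the matrix pencil method help.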
  • Second-order analysis of improper complex random vectors and processes

    Page(s): 714 - 725

    We present a comprehensive treatment of the second-order theory of complex random vectors and wide-sense stationary (WSS) signals. The main focus is on the improper case, in which the complementary covariance does not vanish. Accounting for the information present in the complementary covariance requires the use of widely linear transformations. Based on these, we present the eigenanalysis of complex vectors and apply it to the problem of rank reduction through principal components. We also investigate joint properties of two complex vectors by introducing canonical correlations, which paves the way for a discussion of the Wiener filter and its rank-reduced version. We link the concepts of propriety and joint propriety to eigenanalysis and canonical correlation analysis, respectively. Our treatment is extended to WSS signals. In particular, we give a result on the asymptotic distribution of eigenvalues and examine the connection between WSS, proper, and analytic signals.

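The distinction the abstract turns on, between the ordinary covariance E[zz*] and the complementary covariance E[zz], is easy to see empirically. A small sketch (the correlation structure of the improper example is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Proper (circularly symmetric) samples: independent real and imaginary parts
z_proper = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Improper samples: real and imaginary parts strongly correlated
re = rng.standard_normal(n)
z_improper = re + 1j * (0.8 * re + 0.2 * rng.standard_normal(n))

def covariances(z):
    """Ordinary covariance E[z z*] and complementary covariance E[z z]."""
    return np.mean(z * np.conj(z)), np.mean(z * z)

c1, comp1 = covariances(z_proper)     # comp1 is near zero: proper
c2, comp2 = covariances(z_improper)   # comp2 is far from zero: improper
```

When the complementary covariance is nonzero, strictly linear processing discards information, which is why the paper's treatment requires widely linear transformations (acting on both z and its conjugate).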
  • On parameter estimation of MIMO flat-fading channels with frequency offsets

    Page(s): 602 - 613

    We address the problem of estimating the frequency offsets and channel gains of a multi-input multi-output (MIMO) flat-fading channel using a training sequence. The general case where the frequency offsets are possibly different for each transmit antenna is considered. The Cramer-Rao bound (CRB) for the problem at hand is derived. Additionally, we present a simple, closed-form expression for the large-sample CRB and show that it depends in a simple way on the channel parameters. Next, the parameter estimation problem is investigated. First, the maximum likelihood estimator (MLE), which entails solving an n-dimensional maximization problem where n is the number of transmit antennas, is derived. Then, we show that the likelihood function can be written as the product of n one-dimensional (1-D) functions if a suitable choice of the training sequence is made. Based on this fact, we suggest two computationally simpler methods. Numerical examples that illustrate the performance of the estimators and compare it with the CRB are provided.

  • Blind equalization - case of an unknown symbol period

    Page(s): 781 - 793

    We address the problem of blindly estimating a linearly modulated sequence of unknown symbol rate transmitted over an unknown frequency-selective channel. We achieve the goal by extending the concept of deconvolution to a cyclostationary context and present a generic class of functionals, the minimization of which achieves the equalization. This defines estimates of the symbol rate: by construction, they are insensitive to a lack of excess bandwidth, which gives them a clear advantage over existing estimates in the literature.

  • A method for the discrete fractional Fourier transform computation

    Page(s): 889 - 891

    This paper gives a new method for computing the discrete fractional Fourier transform (DFRFT). With this method, the DFRFT at any angle can be computed as a weighted summation of DFRFTs at special angles.

  • The PDF projection theorem and the class-specific method

    Page(s): 672 - 685

    We present the theoretical foundation for optimal classification using class-specific features and provide examples of its use. A new probability density function (PDF) projection theorem makes it possible to project probability density functions from a low-dimensional feature space back to the raw data space. An M-ary classifier is constructed by estimating the PDFs of class-specific features, then transforming each PDF back to the raw data space where they can be fairly compared. Although statistical sufficiency is not a requirement, the classifier thus constructed becomes equivalent to the optimal Bayes classifier if the features meet sufficiency requirements individually for each class. This classifier is completely modular and avoids the curse of dimensionality associated with large complex problems. By recursive application of the projection theorem, it is possible to analyze complex signal processing chains. We apply the method to feature sets, including linear functions of independent random variables, cepstrum, and Mel cepstrum. In addition, we demonstrate how it is possible to automate the feature and model selection process by direct comparison of log-likelihood values on the common raw data domain.

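As a reading aid, the projection underlying the theorem can be stated compactly. With feature z = T(x) and a reference hypothesis H_0 for which both the raw-data PDF and the feature PDF are known, the class-specific literature writes the projected density roughly as (notation ours; a paraphrase, not necessarily the paper's exact statement):

```latex
\hat{p}_X(x) \;=\; \frac{p_X(x \mid H_0)}{p_Z\bigl(T(x) \mid H_0\bigr)}\, p_Z\bigl(T(x)\bigr)
```

The ratio acts as a change-of-variables correction, yielding a valid PDF on the raw data space, so likelihoods from different class-specific feature sets can be compared on a common domain.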
  • Nested Newton's method for ICA and post factor analysis

    Page(s): 839 - 852

    This paper deals with two distinct topics. First, a new method for independent component analysis (ICA) is constructed that exploits the invariance of criteria under component-wise scaling, which is intrinsic to ICA. This practical and simple ICA method is called the nested Newton's method. When the number of observation channels is below a certain level, factor analysis (FA) is ineffective (the bound for FA); this paper targets these cases. Three of the many concrete advantages of the nested Newton's method are addressed. i) It is robust against Gaussian noise and outperforms existing methods, such as JADE and FastICA, especially under Gaussian noise conditions. ii) It is highly stable globally. iii) Each step resolves itself into two-dimensional (2-D) matrix problems, so there is no need to deal with gigantic matrices, which means that fewer computational resources are required. Second, a method called "post factor analysis (post-FA)" is described that is intended as post-processing for ICA. Although it is functionally similar to conventional FA, post-FA is a completely new method and is more powerful than conventional FA, at the cost of a stronger assumption: that there are mutually independent sources behind the observations. By fully exploiting this assumption, post-FA is capable of estimating the noise variance beyond the known limit for FA. Furthermore, it improves the accuracy of ICA to a considerable extent. Any ICA algorithm without prewhitening (pre-WH) or pre-factor-analysis (pre-FA) can be used for preprocessing, although the nested method is a good candidate.

  • Covariance shaping least-squares estimation

    Page(s): 686 - 697

    A new linear estimator is proposed, which we refer to as the covariance shaping least-squares (CSLS) estimator, for estimating a set of unknown deterministic parameters, x, observed through a known linear transformation H and corrupted by additive noise. The CSLS estimator is a biased estimator directed at improving the performance of the traditional least-squares (LS) estimator. It chooses the estimate of x to minimize the (weighted) total error variance in the observations subject to a constraint on the covariance of the estimation error, thereby controlling the dynamic range and spectral shape of that covariance. The presented CSLS estimator is shown to achieve the Cramer-Rao lower bound for biased estimators. Furthermore, analysis of the mean-squared error (MSE) of both the CSLS estimator and the LS estimator demonstrates that the covariance of the estimation error can be chosen such that there is a threshold SNR below which the CSLS estimator yields a lower MSE than the LS estimator for all values of x. As we show, some of the well-known modifications of the LS estimator can be formulated as CSLS estimators. This allows us to interpret these estimators as the estimators that minimize the total error variance in the observations, among all linear estimators with the same covariance.

  • Optimal mean-square prediction of the mobile-radio fading envelope

    Page(s): 819 - 824

    Long-range prediction of the mobile-radio fading envelope is an enabling technology for many fading compensation approaches. Because the fading envelope is well modeled as a bandlimited process, it has special predictability properties. In this paper, we find a linear predictor that is optimal in the mean-square sense when the predictor impulse response is energy constrained. This solution may be used to determine the minimum mean squared error of a prediction based on past values that are corrupted with estimation errors.

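The predictability of a bandlimited fading process can be illustrated with a plain Wiener (normal-equations) predictor. This is a hedged sketch, not the paper's energy-constrained optimal solution: the Jakes-like sum-of-sinusoids model, Doppler value, predictor order, and horizon are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Jakes-like bandlimited fading proxy: random-phase sinusoids whose
# frequencies all lie below a maximum Doppler of 0.02 cycles/sample
t = np.arange(20_000)
f = 0.02 * np.cos(np.pi * (rng.random(32) - 0.5))     # 32 frequencies in (0, 0.02]
phi = 2 * np.pi * rng.random(32)
x = np.sum(np.cos(2 * np.pi * f[:, None] * t + phi[:, None]), axis=0)
x /= np.std(x)

P, D = 20, 5      # predictor order, prediction horizon in samples

def acf(x, lag):
    return np.mean(x[:len(x) - lag] * x[lag:])

# Minimum mean-square D-step-ahead linear predictor: solve the normal
# equations R a = r_D built from the sample autocorrelation
R = np.array([[acf(x, abs(i - j)) for j in range(P)] for i in range(P)])
rD = np.array([acf(x, D + i) for i in range(P)])
a = np.linalg.lstsq(R, rD, rcond=1e-6)[0]   # truncated solve: R is near singular

pred = np.array([a @ x[k - D:k - D - P:-1] for k in range(P + D, len(x))])
mse = float(np.mean((x[P + D:] - pred) ** 2))
```

The near-singularity of R is itself a symptom of the bandlimited structure that makes long-range prediction possible; the paper's energy constraint on the predictor impulse response addresses exactly this ill-conditioning in a principled way.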
  • An improved statistical analysis of the least mean fourth (LMF) adaptive algorithm

    Page(s): 664 - 671

    The paper presents an improved statistical analysis of the least mean fourth (LMF) adaptive algorithm behavior for a stationary Gaussian input. The analysis improves previous results in that higher order moments of the weight error vector are not neglected and in that it is not restricted to a specific noise distribution. The analysis is based on the independence theory and assumes reasonably slow learning and a large number of adaptive filter coefficients. A new analytical model is derived, which is able to predict the algorithm behavior accurately, both during the transient phase and in steady state, for small step sizes and long impulse responses. The new model is valid for any zero-mean symmetric noise density function and for any signal-to-noise ratio (SNR). Computer simulations illustrate the accuracy of the new model in predicting the algorithm behavior in several different situations.

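The LMF recursion being analyzed is a one-line variation on LMS: the cost is E[e^4], so the stochastic-gradient update uses the cubed error. A minimal system-identification sketch (filter, step size, and noise level are arbitrary demo values, not from the paper):

```python
import numpy as np

def lmf_identify(x, d, n_taps, mu):
    """Least mean fourth (LMF) adaptation: minimizes E[e^4], so the weight
    update uses e(k)^3 where LMS would use e(k)."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # regressor, most recent sample first
        e = d[k] - w @ u
        w += mu * (e ** 3) * u              # cubed-error stochastic-gradient step
    return w

# Recover a short FIR response from its noisy output
rng = np.random.default_rng(5)
h = np.array([0.5, 1.0, -0.3])
x = rng.standard_normal(50_000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = lmf_identify(x, d, n_taps=3, mu=1e-3)
```

Because the effective step scales with the instantaneous error power, LMF adapts quickly when the error is large and slows down near convergence, which is precisely the behavior whose higher-order moments the paper's analysis refuses to neglect.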
  • Blind constant modulus equalization via convex optimization

    Page(s): 805 - 818

    In this paper, we formulate the problem of blind equalization of constant modulus (CM) signals as a convex optimization problem. The convex formulation is obtained by performing an algebraic transformation on the direct formulation of the CM equalization problem. Using this transformation, the original nonconvex CM equalization formulation is turned into a convex semidefinite program (SDP) that can be efficiently solved using interior point methods. Our SDP formulation is applicable to baud spaced equalization as well as fractionally spaced equalization. Performance analysis shows that the expected distance between the equalizer obtained by the SDP approach and the optimal equalizer in the noise-free case converges to zero exponentially as the signal-to-noise ratio (SNR) increases. In addition, simulations suggest that our method performs better than standard methods while requiring significantly fewer data samples. View full abstract»

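The nonconvex baseline the paper convexifies is the classical constant modulus algorithm (CMA), a stochastic gradient descent on the dispersion cost (|y|^2 - R2)^2. A hedged sketch of that baseline, not the paper's SDP method; channel, equalizer length, and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20_000
# Unit-modulus QPSK through a mildly dispersive complex channel
s = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
x = np.convolve(s, [1.0, 0.25 + 0.1j, 0.1], mode="full")[:n]

L, mu, R2 = 11, 1e-3, 1.0          # equalizer length, step, CM dispersion constant
w = np.zeros(L, dtype=complex)
w[L // 2] = 1.0                    # center-spike initialization

disp = []
for k in range(L, n):
    u = x[k - L:k][::-1]
    y = np.conj(w) @ u                      # equalizer output
    grad_err = y * (np.abs(y) ** 2 - R2)    # gradient factor of (|y|^2 - R2)^2
    w -= mu * u * np.conj(grad_err)         # stochastic-gradient (CMA) step
    disp.append((np.abs(y) ** 2 - R2) ** 2)

early = float(np.mean(disp[:1000]))         # dispersion before convergence
late = float(np.mean(disp[-1000:]))         # dispersion after convergence
```

Gradient descent on this cost can stall in local minima and converges only up to a phase rotation; recasting the problem as an SDP is what lets the paper sidestep the nonconvexity.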
  • Transient analysis of data-normalized adaptive filters

    Page(s): 639 - 652

    This paper develops an approach to the transient analysis of adaptive filters with data normalization. Among other results, the derivation characterizes the transient behavior of such filters in terms of a linear time-invariant state-space model. The stability of the model then translates into the mean-square stability of the adaptive filters. Likewise, the steady-state operation of the model provides information about the mean-square deviation and mean-square error performance of the filters. In addition to deriving earlier results in a unified manner, the approach leads to stability and performance results without restricting the regression data to being Gaussian or white. The framework is based on energy-conservation arguments and does not require an explicit recursion for the covariance matrix of the weight-error vector.

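The canonical data-normalized filter covered by such analyses is NLMS, where the step is scaled by the regressor energy. A short sketch of the recursion itself (the identification setup is an arbitrary demo, not the paper's):

```python
import numpy as np

def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-6):
    """Normalized LMS: the update is divided by the regressor energy
    ||u||^2, which is the data normalization being analyzed."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # regressor, most recent sample first
        e = d[k] - w @ u
        w += mu * e * u / (eps + u @ u)     # energy-normalized step
    return w

# Identify a short FIR system from its noisy output
rng = np.random.default_rng(7)
h = np.array([1.0, -0.5, 0.25, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.001 * rng.standard_normal(len(x))
w = nlms_identify(x, d, n_taps=4)
```

The normalization makes the convergence rate largely insensitive to the input power, which is why a single state-space transient model can cover a whole family of such filters.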
  • Design of minimum-phase digital filters as the sum of two allpass functions using the cepstrum technique

    Page(s): 726 - 731

    A new and practical approach using the cepstrum technique is proposed in the design of minimum-phase digital filters as the sum of two allpass functions. The desired magnitude response is specified in the frequency domain. Its corresponding minimum-phase response is then obtained from the desired magnitude response. The desired phases for the two allpass filters are obtained from the magnitude and phase responses. To ensure that both filters are stable, the corresponding denominator polynomials are made minimum phase. The filter coefficients are obtained from the desired phases using the cepstrum technique. Design examples show that the method works well for both classical filter specification and general magnitude specification in the frequency domain.

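The core cepstral step, recovering the minimum-phase response that corresponds to a given magnitude response, is a standard homomorphic computation and can be sketched directly (this shows only that step, not the paper's allpass-sum decomposition):

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Recover the minimum-phase frequency response whose magnitude is `mag`
    (sampled on a full FFT grid) via real-cepstrum folding."""
    N = len(mag)
    cep = np.fft.ifft(np.log(mag)).real   # real cepstrum of the log magnitude
    fold = np.zeros(N)
    fold[0] = cep[0]
    fold[1:N // 2] = 2 * cep[1:N // 2]    # fold the anticausal part onto the causal
    fold[N // 2] = cep[N // 2]
    return np.exp(np.fft.fft(fold))       # exponentiate back: minimum-phase response

# Check on a known minimum-phase FIR (both zeros inside the unit circle)
h = np.array([1.0, 0.5, 0.2])
N = 512
H = np.fft.fft(h, N)
H_min = minimum_phase_from_magnitude(np.abs(H))
h_rec = np.fft.ifft(H_min).real
```

Folding the cepstrum zeroes its anticausal part, which forces all singularities of the reconstructed response inside the unit circle; the grid length N must be large enough to keep cepstral aliasing negligible.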
  • Transient analysis of adaptive filters with error nonlinearities

    Page(s): 653 - 663

    The paper develops a unified approach to the transient analysis of adaptive filters with error nonlinearities. In addition to deriving earlier results in a unified manner, the approach also leads to new performance results without restricting the regression data to being Gaussian or white. The framework is based on energy-conservation arguments and avoids the need for explicit recursions for the covariance matrix of the weight-error vector.

Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses, and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.

Meet Our Editors

Editor-in-Chief
Zhi-Quan (Tom) Luo
University of Minnesota