Neural implementation of unconstrained minimum L1-norm optimization-least absolute deviation model and its application to time delay estimation

Authors: Zhishun Wang (Lab. of GI Res., Univ. of Texas Med. Branch, Galveston, TX, USA); J.Y. Cheung; Y.S. Xia; J.D.Z. Chen

Abstract: The least absolute deviation (LAD) optimization model, also called the unconstrained minimum L1-norm optimization model, has found extensive application in linear parameter estimation. The L1-norm model is superior to Lp-norm (p>1) models in non-Gaussian noise environments and even under chaos, especially for signals that contain sharp transitions (such as biomedical signals with spiky series or motion artifacts) or chaotic dynamic processes. However, it is more difficult to implement than the least-squares (L2-norm) model because its objective has discontinuous derivatives. In this paper, a neural implementation of the LAD optimization model is presented: a new neural network is constructed and its performance on LAD optimization is evaluated theoretically and experimentally. The proposed LAD neural network (LADNN) is then applied to time delay estimation (TDE). In TDE, a given signal is modeled with a moving average (MA) model; the MA parameters are estimated using the LADNN, and the time delay corresponds to the time index at which the MA coefficients peak. Compared with higher-order spectra (HOS)-based TDE methods, the LADNN-based method is free of the assumption that the signal is non-Gaussian and the noise is Gaussian, which is closer to real situations. Experiments under three noise environments, Gaussian, non-Gaussian, and chaotic, are conducted to compare the proposed TDE method with an existing HOS-based method.
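The core computation is easy to sketch outside the neural-network setting: fitting the MA coefficients h under an L1 loss is the problem min_h ||Ah - y||_1, where A is the convolution (design) matrix built from the reference signal, and the delay estimate is the lag of the dominant coefficient. The snippet below is a minimal illustration of that idea, not the paper's LADNN; it solves the equivalent linear program with an off-the-shelf LP solver, and the helper name lad_tde and all parameter choices are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import linprog

def lad_tde(x, y, max_lag):
    """LAD (L1) fit of an MA model y[i] ~ sum_k h[k] * x[i-k].

    Solves min_h ||A h - y||_1 as the LP
        min sum(t)  s.t.  -t <= A h - y <= t,
    then returns the lag of the largest |h[k]| as the delay estimate.
    """
    n, p = len(y), max_lag + 1
    A = np.zeros((n, p))                  # design matrix: A[i, k] = x[i - k]
    for k in range(p):
        A[k:, k] = x[:n - k]
    c = np.concatenate([np.zeros(p), np.ones(n)])   # variables z = [h; t]
    I = np.eye(n)
    A_ub = np.block([[A, -I], [-A, -I]])  # A h - t <= y  and  -A h - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n   # h free, slacks t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    h = res.x[:p]
    return int(np.argmax(np.abs(h))), h

# Example: x delayed by 7 samples plus heavy-tailed (Laplacian) noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
y = np.concatenate([np.zeros(7), x[:-7]]) + rng.laplace(scale=0.3, size=400)
d_hat, _ = lad_tde(x, y, max_lag=20)
print("estimated delay:", d_hat)          # expected: 7
```

The LP recasting (minimizing the slack sum t subject to -t <= Ah - y <= t) sidesteps the non-differentiability of the absolute value, which is the same obstacle the paper's network is designed to handle directly.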

Published in: IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing (Volume: 47, Issue: 11)