Sparse vector linear prediction with near-optimal matrix structures

2 Author(s)
D. Petrinovic; Fac. of Electr. Eng. & Comput., Zagreb Univ., Croatia

Vector linear prediction (VLP) is frequently used in speech and image coding. This paper addresses a technique for reducing the complexity of VLP, called sparse VLP (sVLP), which decreases the number of nonzero elements in the prediction matrices used for prediction. The pattern of zero and nonzero elements in a matrix, i.e., the matrix structure, is not restricted in the design procedure but is instead a result of the correlation properties of the input vector process. Mathematical formulations of several criteria for obtaining near-optimal matrix structures are given. The consequent decrease in sVLP performance compared to the full-predictor case can be kept as low as possible by re-optimizing the values of the nonzero matrix elements for the resulting sparse structure. The effectiveness of sVLP is illustrated on vector prediction of line spectrum frequency (LSF) vectors and compared to full-predictor VLP.
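The sVLP design loop described in the abstract can be sketched numerically: solve the full least-squares predictor, fix a sparse structure, then re-optimize only the surviving entries on that structure. The sketch below uses a simple magnitude-based selection of nonzeros as a stand-in for the paper's near-optimal structure criteria, and a synthetic first-order vector process in place of real LSF data; all variable names and parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated vector process x_t = B x_{t-1} + noise,
# standing in for a sequence of LSF vectors (assumption for illustration).
dim, n = 6, 5000
B = 0.5 * np.eye(dim) + 0.1 * np.diag(np.ones(dim - 1), 1)
x = np.zeros((n, dim))
for t in range(1, n):
    x[t] = x[t - 1] @ B.T + 0.1 * rng.standard_normal(dim)

X_prev, X_cur = x[:-1], x[1:]

# Full VLP: least-squares prediction matrix A minimizing ||x_t - A x_{t-1}||^2.
A_full, *_ = np.linalg.lstsq(X_prev, X_cur, rcond=None)
A_full = A_full.T  # prediction is A_full @ x_{t-1}

# Sparse structure: keep the k largest-magnitude entries of A_full
# (a crude surrogate for the paper's near-optimal structure criteria).
k = 2 * dim
mask = np.zeros_like(A_full, dtype=bool)
top = np.argsort(np.abs(A_full), axis=None)[::-1][:k]
mask[np.unravel_index(top, A_full.shape)] = True

# Re-optimize the nonzero entries row by row on the fixed sparse support,
# which limits the performance loss relative to the full predictor.
A_sparse = np.zeros_like(A_full)
for i in range(dim):
    cols = np.where(mask[i])[0]
    if cols.size:
        coef, *_ = np.linalg.lstsq(X_prev[:, cols], X_cur[:, i], rcond=None)
        A_sparse[i, cols] = coef

def pred_gain_db(A):
    """Prediction gain: signal energy over residual energy, in dB."""
    err = X_cur - X_prev @ A.T
    return 10 * np.log10(np.sum(X_cur**2) / np.sum(err**2))

print(f"full VLP gain:   {pred_gain_db(A_full):.2f} dB ({dim * dim} nonzeros)")
print(f"sparse VLP gain: {pred_gain_db(A_sparse):.2f} dB ({k} nonzeros)")
```

Because the sparse predictor is a least-squares fit restricted to a subset of the full predictor's support, its prediction gain can only be less than or equal to the full predictor's on the design data; the point of the structure-selection criteria is to keep that gap small while cutting the multiply count per predicted vector.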

Published in:

Proceedings of the First International Workshop on Image and Signal Processing and Analysis (IWISPA 2000)

Date of Conference: