On Context-Tree Prediction of Individual Sequences

Authors: Jacob Ziv (Dept. of Electrical Engineering, Technion–Israel Institute of Technology, Haifa); Neri Merhav

Abstract:
Motivated by the evident success of context-tree based methods in lossless data compression, we explore, in this correspondence, methods of the same spirit in universal prediction of individual sequences. By context-tree prediction, we refer to a family of prediction schemes where, at each time instant t, after having observed all outcomes of the data sequence x1,...,xt-1, but not yet xt, the prediction is based on a "context" (or a state) that consists of the k most recent past outcomes xt-k,...,xt-1, where the choice of k may depend on the contents of a possibly longer, though limited, portion of the observed past, xt-kmax,...,xt-1. This is different from the study reported in the paper by Feder, Merhav, and Gutman (1992), where general finite-state predictors, as well as "Markov" (finite-memory) predictors of fixed order, were studied in the regime of individual sequences. Another important difference between this study and the work of Feder et al. is the asymptotic regime. While in their work the resources of the predictor (i.e., the number of states or the memory size) were kept fixed regardless of the length N of the data sequence, here we investigate situations where the number of contexts, or states, is allowed to grow concurrently with N. We are primarily interested in the following fundamental question: What is the critical growth rate of the number of contexts, below which the performance of the best context-tree predictor is still universally achievable, but above which it is not? We show that this critical growth rate is linear in N. In particular, we propose a universal context-tree algorithm that essentially achieves optimum performance as long as the growth rate is sublinear, and show that, on the other hand, this is impossible in the linear case.
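The abstract does not spell out the authors' algorithm, but the notion of a variable-order context can be illustrated with a minimal sketch. The code below is a hypothetical toy predictor (not the scheme proposed in the correspondence): for each time t it selects the longest context of length at most kmax that has been observed at least `min_count` times so far, and predicts the majority symbol that followed that context. The names `context_tree_predict`, `kmax`, and `min_count` are illustrative assumptions.

```python
# Hypothetical toy sketch of a variable-order (context-tree style) predictor
# for a binary sequence; NOT the algorithm of Ziv and Merhav.
from collections import defaultdict

def context_tree_predict(seq, kmax=3, min_count=2):
    # counts[context][symbol] = how often `symbol` has followed `context`
    counts = defaultdict(lambda: defaultdict(int))
    predictions = []
    for t, x in enumerate(seq):
        # Pick the deepest context (length kmax down to 0) seen often enough.
        chosen = ()
        for k in range(kmax, -1, -1):
            if k <= t:
                ctx = tuple(seq[t - k:t])
                if sum(counts[ctx].values()) >= min_count:
                    chosen = ctx
                    break
        stats = counts[chosen]
        # Predict the majority follower of the chosen context (default 0).
        pred = max((0, 1), key=lambda s: stats[s]) if stats else 0
        predictions.append(pred)
        # Update follower counts for every context length up to kmax.
        for k in range(min(kmax, t) + 1):
            counts[tuple(seq[t - k:t])][x] += 1
    return predictions

seq = [0, 1, 0, 1, 0, 1, 0, 1]
preds = context_tree_predict(seq)
errors = sum(p != x for p, x in zip(preds, seq))
```

On this alternating sequence the predictor initially errs while its context statistics are sparse, then locks onto the period-2 pattern once contexts of length 1 and longer have been seen `min_count` times, which mirrors the trade-off the abstract raises: richer (deeper) contexts predict better, but only once the data have populated them.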

Published in:

IEEE Transactions on Information Theory (Volume: 53, Issue: 5)