Given training sequences generated by two distinct, but unknown, distributions on a common alphabet, we study the problem of determining whether a third sequence was generated according to the first or second distribution. To model sources such as natural language, for which the underlying distributions are difficult to learn from realistic amounts of data, we allow the alphabet size to grow and therefore the probability distributions to change with the block length. Our primary focus is the situation in which the underlying probabilities are all of the same order, and in this regime, we show that consistent classification is possible if and only if the alphabet grows subquadratically with the block length. We also show that some commonly used statistical tests are suboptimal in that they are consistent only if the alphabet grows sublinearly.
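The paper's specific tests are not reproduced here, but the classification problem it studies can be illustrated with a generic baseline: assign the test sequence to whichever training sequence's empirical distribution gives it the higher smoothed log-likelihood. This is a hypothetical sketch, not the paper's method, and the function names and the add-one smoothing choice are assumptions made for the illustration.

```python
from collections import Counter
import math

def classify(train1, train2, test, alphabet_size):
    """Hypothetical baseline classifier: return 1 if `test` is more
    likely under the empirical distribution of `train1`, else 2.
    Uses add-one (Laplace) smoothing so symbols unseen in training
    still receive nonzero probability mass."""
    def smoothed_loglik(train, seq):
        counts = Counter(train)
        n = len(train)
        # smoothed probability of symbol s is (count(s) + 1) / (n + alphabet_size)
        return sum(
            math.log((counts[s] + 1) / (n + alphabet_size)) for s in seq
        )
    ll1 = smoothed_loglik(train1, test)
    ll2 = smoothed_loglik(train2, test)
    return 1 if ll1 >= ll2 else 2

# Toy example on the two-symbol alphabet {'a', 'b'}:
# the test sequence "aaba" is closer to the a-heavy source.
print(classify("aaab", "abbb", "aaba", alphabet_size=2))  # → 1
```

Note that in the large-alphabet regime the paper studies, where the alphabet size grows with the block length, empirical distributions are poor estimates of the true distributions, which is exactly why the consistency of such plug-in style tests becomes a nontrivial question.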