Universally achievable error exponents pertaining to certain families of channels (most notably, discrete memoryless channels (DMCs)) and various ensembles of random codes are studied by combining the competitive minimax approach, proposed by Feder and Merhav, with Gallager's techniques for the analysis of error exponents. In particular, we derive a single-letter expression for a lower bound to the largest universally achievable fraction ξ of the optimum error exponent pertaining to optimum maximum-likelihood (ML) decoding. To demonstrate the tightness of this lower bound, we show that ξ = 1 for the binary symmetric channel (BSC) when the random coding distribution is uniform over: (i) all codes (of a given rate), and (ii) all linear codes, in agreement with well-known results. We also show that ξ = 1 for the uniform ensemble of systematic linear codes, and for that of time-varying convolutional codes in the bit-error-rate sense. For the latter case, we also show how the corresponding universal decoder can be efficiently implemented using a slightly modified version of the Viterbi algorithm that employs two trellises.