Information theory obtains efficient codes by encoding messages in large blocks. The code design requires block probabilities that are often hard to measure accurately. This paper studies the effect of inaccuracies in the block probabilities and gives coding procedures that anticipate some of the worst errors. For an efficient code, the mean number d of digits per letter must be kept small. In some cases the expected value of d can be related to the size of the sample on which the probability estimates are based. Badly underestimating the probability of a common letter or block is usually a serious error. To insure against this possibility, some coding procedures are given that avoid extremely long codewords. These codes provide worthwhile insurance yet remain very efficient if the probability estimates happen to be correct.
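The penalty described above can be illustrated with a small sketch (not from the paper; the probabilities below are hypothetical). A binary Huffman code is built from estimated letter probabilities, and its mean length d is then evaluated under the true probabilities. When the estimates badly underestimate the most common letter, the mismatched code assigns it a long codeword and d grows:

```python
import heapq

def huffman_lengths(probs):
    """Return codeword lengths of a binary Huffman code for the given
    probability list (illustrative; ties broken by insertion order)."""
    # Each heap item: (probability, tie-breaker, list of symbol indices).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        # Merge the two least probable groups; every symbol in them
        # gains one more digit in its codeword.
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

# Hypothetical example: true source probabilities vs. estimates that
# badly underestimate the most common letter (0.6 guessed as 0.1).
true_p = [0.6, 0.2, 0.1, 0.1]
est_p  = [0.1, 0.3, 0.3, 0.3]

len_true = huffman_lengths(true_p)  # code designed from the true probabilities
len_est  = huffman_lengths(est_p)   # code designed from the bad estimates

# Mean number of digits per letter, evaluated under the TRUE probabilities.
d_true = sum(p * l for p, l in zip(true_p, len_true))
d_est  = sum(p * l for p, l in zip(true_p, len_est))
print(f"matched code: d = {d_true:.2f}, mismatched code: d = {d_est:.2f}")
```

Here the matched code achieves d = 1.6 digits per letter, while the code designed from the bad estimates costs d = 2.0; the length-limiting procedures the paper proposes are meant to cap exactly this kind of loss.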