An explanation is provided for the prevalence of apparently convergent relative frequencies in random sequences. The explanation is based upon the computational-complexity characterization of a random sequence. Apparent convergence is shown to be a surprising consequence of the selectivity with which relative-frequency arguments are applied: it results from data handling rather than from an underlying law or good fortune. The consequences of this understanding for probability and its applications are indicated.
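The phenomenon discussed can be made concrete with a minimal sketch, not drawn from the paper itself: tracking the running relative frequency of 1s in a pseudorandom 0/1 sequence, which appears to settle near 1/2 as the sample grows. The function name and seed are illustrative choices, not notation from the paper.

```python
import random

def running_relative_frequencies(n, seed=0):
    """Return the running relative frequency of 1s over n pseudorandom bits."""
    rng = random.Random(seed)  # fixed seed for reproducibility (illustrative choice)
    ones = 0
    freqs = []
    for i in range(1, n + 1):
        ones += rng.randint(0, 1)  # draw a 0/1 outcome
        freqs.append(ones / i)     # relative frequency after i trials
    return freqs

freqs = running_relative_frequencies(100_000)
# Early values fluctuate widely; late values hover near 1/2,
# giving the appearance of convergence.
print(freqs[99], freqs[9_999], freqs[99_999])
```

Note that such a plot of apparent convergence is exactly the kind of evidence the abstract says is selected for: sequences whose frequencies wander are typically not presented as instances of a relative-frequency argument.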