We have entered an era where chip yields are decreasing with scaling. A new concept called intelligible testing has been previously proposed with the goal of reversing this trend for classes of systems that do not require completely error-free operation. Such error-tolerant applications include audio, speech, graphics, video, and digital communications. Analyses of such applications have identified error rate as one of the key metrics of error severity, where error rate is defined as the percentage of input vectors for which the values at the outputs deviate from the corresponding error-free values. In error-rate testing, every fault with an error rate below a threshold specified by the application is called acceptable; all other faults are called unacceptable. The objective of error-rate testing is to detect every unacceptable fault while detecting none of the acceptable faults. In this paper we develop a theory of error-rate testing. First, we study fanout-free circuits composed of primitive gates, identify new relationships between error rates and fault equivalence and dominance, develop a new test generation procedure, and prove that in such circuits it is possible to detect every unacceptable fault without detecting any acceptable fault. We then analyze more general circuits, including those containing complex gates and fanouts, and show that the above result may not hold for such circuits. Finally, we use a modified version of a classical test generator and a classical fault simulator to obtain empirical data showing that, even in arbitrary circuits, it is possible to detect every unacceptable fault while detecting only a fraction of the acceptable faults.
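The error-rate definition above can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not from the paper: the toy three-input circuit, the particular stuck-at fault, and the 30% acceptability threshold are all made up for demonstration; the sketch simply counts, exhaustively over all input vectors, how often the faulty circuit's output deviates from the fault-free one.

```python
from itertools import product

def good_circuit(a, b, c):
    # Fault-free toy circuit (illustrative): y = (a AND b) OR c.
    return (a & b) | c

def faulty_circuit(a, b, c):
    # Same circuit with the AND gate's output stuck-at-0 (illustrative fault).
    return 0 | c

def error_rate(good, faulty, n_inputs):
    """Error rate = fraction of input vectors for which the output
    deviates from the fault-free value (exhaustive over 2**n_inputs)."""
    deviating = sum(
        good(*v) != faulty(*v) for v in product((0, 1), repeat=n_inputs)
    )
    return deviating / 2 ** n_inputs

rate = error_rate(good_circuit, faulty_circuit, 3)
threshold = 0.30  # hypothetical application-specified threshold
acceptable = rate < threshold  # below-threshold faults are "acceptable"
```

Here the outputs differ only on the single vector (a, b, c) = (1, 1, 0), so the error rate is 1/8 = 0.125, and under the assumed 30% threshold this fault would be classified as acceptable. Real circuits would of course use fault simulation rather than exhaustive enumeration.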