Adaptive algorithms for designing two-category linear pattern classifiers have been widely used on nonseparable pattern sets, even though they do not directly minimize the number of classification errors and their optimality for pattern classification is not completely known. Many of these algorithms have been shown to be asymptotically optimal for patterns from Gaussian distributions with equal covariance matrices. However, their relative efficiencies when designing with a finite number of patterns have not been known. This paper uses truncated Taylor series expansions to evaluate the misadjustment, or extra probability of error, that results when these algorithms are used to design a linear classifier with a finite number of patterns. The expressions have been evaluated for three algorithms: the fixed-increment error-correction algorithm, the relaxation error-correction algorithm, and the least-mean-square (LMS) algorithm, each used with patterns from Gaussian distributions with equal covariance matrices.
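As a concrete illustration of the setting the abstract describes, the sketch below trains a two-category linear classifier with the LMS (Widrow-Hoff) rule on patterns drawn from two Gaussian distributions with equal (identity) covariance. The sample sizes, step size, and class means are illustrative assumptions, not values from the paper; the paper's analysis concerns the misadjustment of such finite-sample designs, which this sketch does not reproduce.

```python
import random

random.seed(0)

def lms_train(samples, labels, mu=0.01, epochs=20):
    """LMS (Widrow-Hoff) rule: w <- w + mu * (d - w.x) * x,
    where d = +1 or -1 is the desired class output."""
    dim = len(samples[0])
    w = [0.0] * (dim + 1)          # last entry is the bias weight
    for _ in range(epochs):
        for x, d in zip(samples, labels):
            xa = list(x) + [1.0]   # augment with a constant input for the bias
            y = sum(wi * xi for wi, xi in zip(w, xa))
            err = d - y            # error between desired and actual output
            w = [wi + mu * err * xi for wi, xi in zip(w, xa)]
    return w

def classify(w, x):
    """Linear classifier: sign of the weighted sum decides the category."""
    xa = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xa)) >= 0 else -1

# Two Gaussian classes in 2-D with equal (identity) covariance,
# means at (+m, +m) and (-m, -m); m = 1.5 is an arbitrary choice.
m = 1.5
samples, labels = [], []
for _ in range(400):
    for sign in (1, -1):
        samples.append((random.gauss(sign * m, 1.0),
                        random.gauss(sign * m, 1.0)))
        labels.append(sign)

w = lms_train(samples, labels)
errors = sum(classify(w, x) != d for x, d in zip(samples, labels))
print("training error rate:", errors / len(samples))
```

Because the pattern set is drawn from overlapping Gaussians, it is generally nonseparable, so the classifier ends with a small but nonzero error rate; LMS minimizes the mean-square error of the linear output rather than the number of classification errors, which is exactly the gap the paper's misadjustment analysis quantifies.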