A decision-directed error-correcting scheme for non-parametric imperfectly supervised learning is proposed. The procedure is based on a nearest neighbor rule with a reject option and combines the knowledge accumulated by the learning scheme with that provided by the supervision. An asymptotic analysis of the error-correction scheme shows that relabeling of training measurements can have a lower probability of error than the supervision, provided the Bayes probability of error is less than the supervision's probability of error. Computer simulations compare the performance of the proposed scheme with that of the k-nearest-neighbor rule without error correction.
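The core mechanism can be illustrated with a minimal sketch: each training point is reclassified by its k nearest neighbors under a reject option, and its supervised label is overwritten only when the neighbors agree strongly enough. The function name, the vote threshold, and the toy data below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def knn_relabel(X, y, k=5, accept_threshold=4):
    """Decision-directed relabeling via a k-NN rule with a reject option.

    Each training point is classified from its k nearest neighbors
    (excluding itself). If at least `accept_threshold` of the k votes
    agree, the decision is accepted and the point is relabeled;
    otherwise the decision is rejected and the (possibly erroneous)
    supervised label is kept.
    """
    y_new = y.copy()
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        nn = np.argsort(d)[:k]               # indices of k nearest neighbors
        labels, counts = np.unique(y[nn], return_counts=True)
        if counts.max() >= accept_threshold:
            y_new[i] = labels[counts.argmax()]   # confident vote: relabel
        # else: reject -- keep the supervised label unchanged
    return y_new

# Toy example: two well-separated clusters with a few flipped labels,
# standing in for imperfect supervision.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y_true = np.array([0] * 20 + [1] * 20)
y_noisy = y_true.copy()
y_noisy[[0, 1, 25]] = 1 - y_noisy[[0, 1, 25]]    # supervision errors
y_corrected = knn_relabel(X, y_noisy, k=5, accept_threshold=4)
print("noisy accuracy:    ", (y_noisy == y_true).mean())
print("corrected accuracy:", (y_corrected == y_true).mean())
```

In this setting the Bayes error is near zero while the supervision errs on 3 of 40 points, so, consistent with the asymptotic result stated above, relabeling can only help: accepted decisions are driven by mostly correct neighbors, and ambiguous points are rejected rather than relabeled.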