An outstanding problem in classification and recognition is that of dealing with random errors on one or more features of a feature vector. Such errors make it difficult to train supervised learners such as neural networks (NNs) and support vector machines (SVMs), which are trained on input-output pairs: they fit the noise and thus become unreliable. One way to obtain training pairs from the input data is to cluster the feature vectors and assign an output codeword to each cluster, but the same error problem appears there when a distance is used as the similarity measure. We devise a way to deal with the noise problem by letting each feature of an input vector vote for the class it most resembles; the class with the most votes wins. Unbounded noise on a minority of features therefore does not affect the outcome. We make comparisons on the notoriously difficult iris dataset and analyze why other methods fail. The results are quite good.
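The per-feature voting scheme described above can be sketched as follows. This is a minimal illustration under assumed details the abstract does not specify: here each class is represented by the per-feature mean of its training vectors, and a feature votes for the class whose mean is nearest on that feature alone. The data, function names, and prototype choice are all hypothetical.

```python
import numpy as np

def fit_feature_means(X, y):
    """Assumed class prototypes: the per-class mean of each feature."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def predict_by_voting(X, classes, means):
    """Each feature votes for the class whose prototype value is
    nearest on that feature alone; the majority of votes wins."""
    preds = []
    for x in X:
        # votes[j] = index of the class closest to x[j] on feature j
        votes = np.abs(means - x).argmin(axis=0)
        counts = np.bincount(votes, minlength=len(classes))
        preds.append(classes[counts.argmax()])
    return np.array(preds)

# Toy data: 2 classes, 3 features.
X = np.array([[1.0, 1.0, 1.0],
              [1.1, 0.9, 1.0],
              [5.0, 5.0, 5.0],
              [4.9, 5.1, 5.0]])
y = np.array([0, 0, 1, 1])
classes, means = fit_feature_means(X, y)

# Unbounded noise on one feature: the other two features still
# outvote it, so the prediction is unaffected.
noisy = np.array([[1.0, 1.0, 100.0]])
print(predict_by_voting(noisy, classes, means))  # → [0]
```

Because the corrupted feature casts only one of three votes, arbitrarily large noise on it cannot flip the majority, which is the robustness property the method claims over distance-based similarity measures.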