In this paper, a method of classification referred to as the Bayesian data reduction algorithm (BDRA) is developed. The algorithm is based on the assumption that the discrete symbol probabilities of each class are a priori uniformly Dirichlet distributed, and it employs a "greedy" approach (similar to a backward sequential feature search) for removing irrelevant features from the training data of each class. Note that removing irrelevant features is synonymous here with selecting those features that provide the best classification performance; the metric for making data-reducing decisions is an analytic formula for the probability of error conditioned on the training data. To illustrate its performance, the BDRA is applied to both simulated and real data, and it is compared to other classification methods. Further, the algorithm is extended to deal with the problem of missing features in the data. Results demonstrate that the BDRA performs well despite its relative simplicity. This is significant because the BDRA differs from many other classifiers: instead of adjusting the model to obtain a "best fit" to the data, the data themselves are adjusted through their quantization.
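The greedy backward search described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract's analytic conditional probability of error is replaced here by a placeholder `error_metric` callable (a hypothetical toy scoring function stands in for it), so only the search loop itself reflects the text.

```python
def greedy_backward_reduction(features, error_metric):
    """Greedy backward search: repeatedly drop the single feature whose
    removal most lowers the estimated error, stopping when no removal
    helps.  `error_metric` maps a frozenset of kept features to an
    estimated probability of error (a placeholder for the BDRA's
    analytic error conditioned on training data)."""
    kept = set(features)
    best = error_metric(frozenset(kept))
    improved = True
    while improved and len(kept) > 1:
        improved = False
        for f in sorted(kept):           # try removing each feature in turn
            err = error_metric(frozenset(kept - {f}))
            if err < best:               # keep the single best removal
                best, drop, improved = err, f, True
        if improved:
            kept.discard(drop)
    return kept, best

# Hypothetical toy metric: 'a' and 'b' are informative, 'c' and 'd' add noise.
def toy_metric(kept):
    informative = {'a', 'b'}
    miss = len(informative - kept)       # penalty for dropping signal
    noise = len(kept - informative)      # penalty for keeping noise
    return 0.05 + 0.20 * miss + 0.03 * noise

kept, err = greedy_backward_reduction(['a', 'b', 'c', 'd'], toy_metric)
# The noisy features are pruned, leaving {'a', 'b'}.
```

The search is greedy in the same sense as backward sequential feature selection: each pass commits to the single most beneficial removal, so it explores O(d^2) candidate subsets rather than all 2^d.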