Abstract:
We consider the trainable fusion rule design problem when the expert classifiers provide crisp outputs and the behavior knowledge space method is used to fuse the local experts' decisions. If the training set is used to design both the experts and the fusion rule, the experts' outputs become too self-assured. In small sample situations, these "optimistically biased" experts' outputs bluff the fusion rule designer. If the experts differ in complexity and in classification performance, this experts' boasting effect can severely degrade the performance of a multiple classification system. Theoretically based and experimental procedures are suggested to reduce the experts' boasting effect.
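For readers unfamiliar with the fusion scheme discussed in the abstract, below is a minimal Python sketch of behavior-knowledge-space (BKS) fusion of crisp expert outputs, assuming the standard table-lookup formulation; the function names, the toy data, and the majority-vote fallback for empty cells are illustrative choices, not taken from the paper. If the same training set were used both to train the experts and to fill the table, the per-cell statistics would be exactly the "optimistically biased" estimates the abstract refers to; the paper's correction procedures are not shown here.

```python
from collections import Counter, defaultdict

def train_bks(expert_labels, true_labels):
    """Build a behavior-knowledge-space table: for each tuple of crisp
    expert decisions (a BKS cell), count how often each true class occurs."""
    table = defaultdict(Counter)
    for decisions, y in zip(expert_labels, true_labels):
        table[tuple(decisions)][y] += 1
    # Keep only the majority true class observed in each cell.
    return {cell: counts.most_common(1)[0][0] for cell, counts in table.items()}

def fuse_bks(table, decisions, fallback):
    """Fuse crisp expert outputs: look up the BKS cell; if the cell was never
    populated during training, defer to a fallback rule (here, majority voting)."""
    cell = tuple(decisions)
    return table[cell] if cell in table else fallback(decisions)

def majority_vote(decisions):
    return Counter(decisions).most_common(1)[0][0]

# Toy usage: three experts, two classes (0/1), five training vectors.
expert_labels = [(0, 0, 1), (0, 1, 1), (1, 1, 1), (0, 0, 0), (1, 0, 1)]
true_labels   = [0, 1, 1, 0, 1]
table = train_bks(expert_labels, true_labels)
print(fuse_bks(table, (0, 0, 1), majority_vote))  # cell seen in training -> 0
print(fuse_bks(table, (1, 1, 0), majority_vote))  # unseen cell -> majority vote -> 1
```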
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence ( Volume: 25, Issue: 9, September 2003)
IEEE Keywords / Index Terms:
- Fusion Rule
- Training Set
- Estimation Error
- Data Model
- Pattern Recognition
- Maximum Likelihood Estimation
- Gaussian Noise
- Feature Space
- Multilayer Perceptron
- Base Classifiers
- Classification Error
- Majority Voting
- Fusion Method
- Mahalanobis Distance
- Increase In Error
- Training Set Size
- Generalization Error
- Artificial Data
- Statistical Independence
- Training Vectors
- Noise Injection
- Gaussian Data
- Unbiased
- Biased Estimates
- Probability Estimates
- Learning Settings