Ensemble methods, which train multiple learners and combine their predictions, have proven very effective in supervised learning. However, bagging does not work well in some cases, such as k-nearest neighbor (kNN); likewise, query learning strategies based on bagging perform poorly there. From the feature perspective, we introduce active learning with bagging features (ALBF) for kNN and apply this method to ML-kNN. Experiments on UCI data sets show that ALBF significantly improves prediction accuracy.
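The abstract does not detail the ALBF algorithm, but the core idea of feature-level bagging for kNN can be illustrated with a minimal sketch: each base kNN learner sees a random subset of features rather than a bootstrap sample of instances, and the ensemble combines their votes. All function names, parameters, and the majority-vote combination below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain kNN: majority label among the k nearest training points."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]       # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

def feature_bagging_knn(X_train, y_train, X_test,
                        n_estimators=10, subset_frac=0.6, k=3, seed=0):
    """Sketch of feature-level bagging: each base kNN uses a random
    feature subset; predictions are combined by majority vote.
    (Illustrative only; not the paper's ALBF procedure.)"""
    rng = np.random.default_rng(seed)
    n_features = X_train.shape[1]
    m = max(1, int(subset_frac * n_features))
    votes = []
    for _ in range(n_estimators):
        cols = rng.choice(n_features, size=m, replace=False)
        votes.append(knn_predict(X_train[:, cols], y_train,
                                 X_test[:, cols], k))
    votes = np.stack(votes)  # shape: (n_estimators, n_test)
    final = []
    for j in range(votes.shape[1]):  # majority vote per test point
        vals, counts = np.unique(votes[:, j], return_counts=True)
        final.append(vals[np.argmax(counts)])
    return np.array(final)
```

Randomizing over features rather than instances preserves each base learner's full training set, which matters for kNN because subsampling instances barely changes the nearest-neighbor decision, whereas different feature subsets yield genuinely diverse learners.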