Recently, bias-variance decomposition of error has been used as a tool to study the behavior of learning algorithms and to develop new ensemble methods well suited to the bias-variance characteristics of base learners. We propose methods and procedures, based on Domingos' unified bias-variance theory, to evaluate and quantitatively measure the bias-variance decomposition of error in ensembles of learning machines. We apply these methods to study and compare the bias-variance characteristics of single support vector machines (SVMs) and of ensembles of SVMs based on resampling techniques, and their relationships with the cardinality of the training samples. In particular, we present an experimental bias-variance analysis of bagged and random aggregated ensembles of SVMs in order to verify their theoretical variance-reduction properties. The experimental bias-variance analysis quantitatively characterizes the relationships between bagging and random aggregating, and explains why ensembles built on small subsamples of the data work well with large databases. Our analysis also suggests new directions for research to improve on classical bagging.
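To make the measurement procedure concrete, the following is a minimal sketch of how Domingos' decomposition for 0-1 loss can be estimated from the predictions of an ensemble's base learners, each trained on a different resampled training set. The function name, its signature, and the noise-free two-class setting are illustrative assumptions, not the paper's actual implementation; the decomposition returns the bias and the net variance (unbiased variance minus biased variance), whose sum equals the average error in this setting.

```python
from collections import Counter

def domingos_decomposition(predictions, y_true):
    """Estimate Domingos' bias-variance decomposition for 0-1 loss.

    Illustrative sketch (noise-free, two-class setting assumed):
    predictions[m][i] is the label predicted for test point i by the
    m-th model, each trained on a different resampled training set;
    y_true[i] is the true label of test point i.
    Returns (avg_error, bias, net_variance), where
    avg_error == bias + net_variance and
    net_variance == unbiased_variance - biased_variance.
    """
    n_models = len(predictions)
    bias = var_unbiased = var_biased = err = 0.0
    for i, true_label in enumerate(y_true):
        votes = [predictions[m][i] for m in range(n_models)]
        # Main prediction: the most frequent label across models.
        main = Counter(votes).most_common(1)[0][0]
        # Bias at x_i is 0/1: does the main prediction miss the truth?
        b = 1.0 if main != true_label else 0.0
        # Variance at x_i: fraction of models deviating from the main
        # prediction.
        v = sum(1 for y in votes if y != main) / n_models
        bias += b
        if b:
            var_biased += v    # on biased points, variance reduces error
        else:
            var_unbiased += v  # on unbiased points, variance adds error
        err += sum(1 for y in votes if y != true_label) / n_models
    n = float(len(y_true))
    return err / n, bias / n, (var_unbiased - var_biased) / n
```

For example, with three models and two test points, `domingos_decomposition([[0, 1], [0, 1], [1, 0]], [0, 0])` yields an average error of 0.5 that splits into bias 0.5 and net variance 0.0, since the unbiased and biased variance contributions cancel.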