Meta-learning templates are data-tailored algorithms that produce supervised models. When a template is evolved on a particular dataset, it should generate good models not only on that dataset but also on similar data. In this paper, we investigate one possible way of measuring the similarity of datasets and whether it can be used to estimate whether meta-learning templates will produce good models. We performed experiments on several well-known datasets from the UCI machine learning repository and analyzed the similarity of both datasets and templates in the space of performance meta-features (landmarking). Our results show that the most universal algorithms (in terms of average performance) for supervised learning are the complex hierarchical templates evolved by our SpecGen approach.
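To make the landmarking idea concrete, the following is a minimal sketch (not the paper's actual method) of measuring dataset similarity in a space of performance meta-features: each dataset is described by the cross-validated accuracies of a few simple learners, and datasets are compared by the distance between these vectors. The choice of landmarkers, the distance metric, and the function names here are illustrative assumptions.

```python
# Hypothetical landmarking-based dataset similarity (illustrative only;
# the landmarkers and distance metric are assumptions, not from the paper).
import numpy as np
from sklearn.datasets import load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Simple, fast learners used as landmarkers.
LANDMARKERS = [
    DecisionTreeClassifier(max_depth=1, random_state=0),  # decision stump
    GaussianNB(),                                         # naive Bayes
    KNeighborsClassifier(n_neighbors=1),                  # 1-NN
]

def landmark_vector(X, y):
    """Performance meta-features: mean 5-fold CV accuracy of each landmarker."""
    return np.array([cross_val_score(m, X, y, cv=5).mean() for m in LANDMARKERS])

def dataset_distance(data_a, data_b):
    """Euclidean distance between the landmark vectors of two datasets."""
    va = landmark_vector(*data_a)
    vb = landmark_vector(*data_b)
    return float(np.linalg.norm(va - vb))

if __name__ == "__main__":
    iris = load_iris(return_X_y=True)
    wine = load_wine(return_X_y=True)
    print("landmark distance (iris vs wine):", dataset_distance(iris, wine))
```

A small distance would suggest the two datasets are "similar" in this sense, and hence that a template evolved on one might transfer to the other.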