Most existing algorithms for ordinal regression seek an orientation along which the projected samples are well separated, and partition that orientation into consecutive intervals to represent the ranks. However, these algorithms exploit only one dimension of the sample space and therefore discard useful information in its complementary subspace. As a remedy, we propose an algorithm framework for ordinal regression that consists of two phases: recursively extracting features from a sequence of shrinking subspaces, and learning a ranking rule from the examples represented by the new features. In this framework, any algorithm that projects samples onto a line can serve as a feature extractor, and features of decreasing ranking ability are extracted one by one to make the best use of the information contained in the training samples. Experiments on synthetic and benchmark datasets verify the usefulness of our framework.
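The recursive extraction phase might be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a deflation scheme in which each round fits a projection direction (here a least-squares fit of the rank labels, standing in for any line-projection learner), records the projected samples as a new feature, and then deflates the data onto the orthogonal complement of that direction so the next round works in the remaining subspace.

```python
import numpy as np

def extract_features(X, y, n_features):
    """Recursive feature extraction by deflation (illustrative sketch).

    X : (n, d) array of samples, y : (n,) array of rank labels.
    Each round fits a unit direction w, appends the projection X @ w
    as a new feature, then removes the w-component from X so later
    rounds operate in the complementary subspace.
    """
    X = X.astype(float).copy()
    feats = []
    for _ in range(n_features):
        # Fit a direction; least squares on the ranks is only a stand-in
        # for whichever line-projection learner the framework plugs in.
        w, *_ = np.linalg.lstsq(X, y.astype(float), rcond=None)
        norm = np.linalg.norm(w)
        if norm < 1e-12:          # remaining subspace carries no signal
            break
        w /= norm
        feats.append(X @ w)                # new feature: projection onto w
        X = X - np.outer(X @ w, w)         # deflate onto w's complement
    return np.column_stack(feats)
```

A ranking rule (the framework's second phase) would then be trained on the matrix returned by `extract_features` instead of on the raw samples.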