For a fixed sample size, the error of a designed classifier commonly decreases and then increases as the number of features grows. Historically, this peaking phenomenon has been studied without taking feature selection into account, even though feature selection is commonplace in high-dimensional settings. This paper revisits the peaking phenomenon in the presence of feature selection. The resulting error curves tend to fall into three categories: peaking, settling into a plateau, or falling very slowly over a long range of feature-set sizes. One should therefore be wary of applying peaking results obtained in the absence of feature selection to settings in which feature selection is employed.
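The following is a minimal simulation sketch, not the paper's experimental setup: it illustrates the peaking regime (small sample, many features) and contrasts classification error with and without feature selection as the feature-set size grows. The synthetic data model, the choice of linear discriminant analysis as the classifier, and the use of an F-test ranking for selection are all illustrative assumptions made here, not details taken from the paper.

```python
# Illustrative sketch of the peaking phenomenon (assumptions noted above).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

# Small sample size with many features: the regime where peaking appears.
# 10 of the 200 features are informative; the rest are noise (hypothetical setup).
X, y = make_classification(n_samples=60, n_features=200, n_informative=10,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

for d in (2, 5, 10, 20, 50, 100):
    # Without selection: use the first d features (an arbitrary fixed order).
    clf = LinearDiscriminantAnalysis().fit(X_train[:, :d], y_train)
    err_plain = 1 - clf.score(X_test[:, :d], y_test)

    # With selection: keep the d features ranked best on the training set.
    sel = SelectKBest(f_classif, k=d).fit(X_train, y_train)
    clf = LinearDiscriminantAnalysis().fit(sel.transform(X_train), y_train)
    err_sel = 1 - clf.score(sel.transform(X_test), y_test)

    print(f"d={d:3d}  error(no selection)={err_plain:.3f}  "
          f"error(selection)={err_sel:.3f}")
```

In runs of this kind, the no-selection curve tends to show the classical peak (error falls, then rises as uninformative features dilute the small sample), while the selection-based curve can flatten into a plateau or decline slowly, consistent with the categories described above. Exact behavior depends on the data model and classifier.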