Data preprocessing is an indispensable step in effective data analysis: it prepares data for data mining and machine learning, which aim to turn data into business intelligence or knowledge. Feature selection is a preprocessing technique commonly applied to high-dimensional data. It studies how to select a subset of the attributes, or variables, used to construct models that describe the data. Its purposes include reducing dimensionality, removing irrelevant and redundant features, reducing the amount of data needed for learning, improving the predictive accuracy of algorithms, and increasing the comprehensibility of the constructed models. This article considers feature-selection overfitting in small-sample classifier design; feature selection for unlabeled data; variable selection using ensemble methods; minimum-redundancy-maximum-relevance feature selection; and biological relevance in feature selection for microarray data.
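To make one of the surveyed techniques concrete, the following is a minimal sketch of greedy minimum-redundancy-maximum-relevance (mRMR) selection for discrete features. It assumes NumPy is available; the function names (`mutual_info`, `mrmr`) and the plain empirical mutual-information estimator are illustrative choices, not taken from any particular library or from the article itself:

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (in nats) between two discrete vectors."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))  # joint probability
            px = np.mean(x == xv)                 # marginal of x
            py = np.mean(y == yv)                 # marginal of y
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def mrmr(X, y, k):
    """Greedy mRMR: repeatedly add the feature whose relevance to the
    label, minus its average redundancy with the already-selected
    features, is largest.

    X: (n_samples, n_features) discrete feature matrix
    y: (n_samples,) discrete class labels
    k: number of features to select
    Returns the list of selected column indices, in selection order.
    """
    n_features = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]  # start with the most relevant feature
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

For example, given a feature that predicts the label, an exact duplicate of it, and a third feature uncorrelated with both, mRMR selects the predictive feature first and then prefers the uncorrelated one over the duplicate, because the duplicate's redundancy penalty cancels its relevance.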