In this paper, we address two aspects that influence the performance of multilayer perceptrons (MLPs): 1) dimensionality reduction with PCA, where the number of principal components is optimized; and 2) complexity control, where we investigate three different methods: model order selection, early stopping, and regularization. We consider two electronic nose datasets of different size and learning difficulty. Measurements were performed with the pico electronic nose based on thin-film gas sensors. It turns out that: 1) test-set performance depends strongly on the number of principal components, and even components carrying less than 1% of the global variance enhance classification; 2) if complexity control is performed with early stopping or regularization, overfitting is avoided regardless of the number of hidden units (and hence of network weights).
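The pipeline described above can be sketched as follows. This is a minimal illustration assuming scikit-learn, using a stand-in dataset (`load_digits`) and arbitrary parameter choices rather than the paper's electronic-nose data or actual settings:

```python
# Hedged sketch: PCA dimensionality reduction followed by an MLP with
# early stopping, mirroring the approach outlined in the abstract.
# Dataset and hyperparameters are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keep a generous number of components: per the abstract, even components
# explaining <1% of the global variance can improve classification.
pipe = make_pipeline(
    PCA(n_components=30),
    # early_stopping=True holds out a validation split and stops training
    # when validation score stops improving, controlling model complexity.
    MLPClassifier(hidden_layer_sizes=(64,), early_stopping=True,
                  max_iter=500, random_state=0),
)
pipe.fit(X_train, y_train)
acc = pipe.score(X_test, y_test)
print(f"test accuracy: {acc:.3f}")
```

Swapping `early_stopping=True` for a nonzero `alpha` (L2 penalty) in `MLPClassifier` would instead illustrate the regularization variant of complexity control.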