When learning a support vector machine (SVM) from a set of labeled development patterns, the ultimate goal is to obtain a classifier that attains a low error rate on new patterns. This generalization ability obviously depends on the choice of the parameters that control the learning process; model selection is the task of identifying appropriate values for these parameters. In this paper, a novel model selection method for SVMs with a Gaussian kernel is proposed. Its aim is to find suitable values for the kernel parameter γ and the cost parameter C with a minimum of central processing unit (CPU) time. The determination of the kernel parameter rests on the argument that, for most patterns, the decision function of the SVM should consist of a sufficiently large number of significant contributions. A distinctive property of the proposed method is that it obtains the kernel parameter as a simple analytical function of the dimensionality of the feature space and the dispersion of the classes in that space. An experimental evaluation on a test bed of 17 classification problems shows that the new method compares favorably with two recently published methods: the classification of new patterns is equally good, while the computational effort needed to identify the learning parameters is substantially lower.
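To make the general idea concrete, the sketch below sets γ analytically from the dimensionality d of the feature space and the overall dispersion of the data, then trains a Gaussian-kernel SVM. The exact analytical expression of the proposed method is not reproduced here; the formula used, γ = 1/(d · Var(X)), is the widely used heuristic that scikit-learn implements as `gamma='scale'`, shown purely as an illustration of fixing γ in closed form instead of by grid search. The dataset and the helper `analytic_gamma` are assumptions introduced for this example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def analytic_gamma(X):
    # Illustrative closed-form rule: gamma from the feature-space
    # dimensionality and the overall dispersion (variance) of the
    # patterns -- NOT the paper's exact expression.
    d = X.shape[1]          # dimensionality of the feature space
    dispersion = X.var()    # overall variance of the patterns
    return 1.0 / (d * dispersion)

# Synthetic two-class problem standing in for one of the benchmark sets.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No search over gamma: a single analytical evaluation fixes it,
# so only C (held at 1.0 here) would remain to be tuned.
gamma = analytic_gamma(X_train)
clf = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

Because γ is obtained in a single evaluation rather than over a two-dimensional (γ, C) grid, the number of SVM trainings drops roughly by the size of the γ grid, which is the source of the CPU-time savings the abstract claims.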