The scoring model used in automated speaking assessment systems is critical for scoring speaking skills accurately and robustly. In the automated speaking assessment research field, using a single classifier model is still the dominant approach. However, ensemble learning, in which a committee of classifiers predicts jointly so that the weaknesses of each individual classifier are compensated by the others, has been actively advocated by machine learning researchers and is widely used in many machine learning tasks. In this paper, we investigated applying a particular ensemble learning method, feature bagging, to the task of automatically scoring non-native spontaneous speech. Our experiments show that this method is superior to using a single classifier in both scoring accuracy and robustness to possible feature variations.
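To make the idea concrete, the following is a minimal sketch of feature bagging: each committee member is trained on a random subset of the feature dimensions, and the ensemble predicts by majority vote. This is an illustration only, not the system described in the paper; the base learner here is a simple nearest-centroid classifier chosen purely for brevity, and all function names and parameters are hypothetical.

```python
import random
from collections import Counter

def train_centroid(X, y, feats):
    """Fit a nearest-centroid classifier on the selected feature indices."""
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        counts[yi] = counts.get(yi, 0) + 1
        s = sums.setdefault(yi, [0.0] * len(feats))
        for j, f in enumerate(feats):
            s[j] += xi[f]
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict_centroid(centroids, feats, x):
    """Predict the class whose centroid is closest in the feature subspace."""
    best, best_dist = None, float("inf")
    for c, cen in centroids.items():
        d = sum((x[f] - cen[j]) ** 2 for j, f in enumerate(feats))
        if d < best_dist:
            best, best_dist = c, d
    return best

def feature_bagging_fit(X, y, n_models=11, subset_frac=0.5, seed=0):
    """Train n_models base classifiers, each on a random feature subset."""
    rng = random.Random(seed)
    n_feats = len(X[0])
    k = max(1, int(subset_frac * n_feats))
    models = []
    for _ in range(n_models):
        feats = rng.sample(range(n_feats), k)  # random feature subset
        models.append((feats, train_centroid(X, y, feats)))
    return models

def feature_bagging_predict(models, x):
    """Combine the committee's predictions by majority vote."""
    votes = Counter(predict_centroid(cen, feats, x) for feats, cen in models)
    return votes.most_common(1)[0][0]
```

Because each member sees only part of the feature space, a corrupted or unreliable feature degrades only some committee members rather than the whole model, which is one intuition behind the robustness to feature variations reported above.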