The computational cost of training support vector regression is O(N^3), which is prohibitively expensive for large-scale problems. Moreover, the solution of support vector regression is sparse: it depends on only a subset of the whole training data set. It is therefore reasonable to reduce the training data set before training. Building on the existing scheme that reduces the training data set with k-nearest neighbors at a computational complexity of O(kMN^2), an improved scheme is proposed to accelerate the reduction phase, cutting the computational complexity from O(kMN^2) to O(MN^2). Finally, experimental results on benchmark data sets validate the effectiveness of the improved scheme.
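To make the overall idea concrete, the following is a minimal sketch of k-nearest-neighbor-based training-set reduction followed by SVR fitting, written with scikit-learn. The selection rule used here (keeping points whose targets deviate most from the mean target of their k neighbors, on the assumption that such points are more likely to become support vectors) and the names knn_reduce, k, and keep_ratio are illustrative assumptions, not the specific scheme proposed in this work.

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def knn_reduce(X, y, k=5, keep_ratio=0.3):
    """Illustrative k-NN reduction: keep points whose target deviates most
    from the mean target of their k nearest neighbors (an assumed criterion,
    not necessarily the rule used in the paper)."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)                 # idx[:, 0] is the point itself
    local_mean = y[idx[:, 1:]].mean(axis=1)     # mean target over the k neighbors
    deviation = np.abs(y - local_mean)          # local deviation score
    n_keep = max(1, int(keep_ratio * len(y)))
    keep = np.argsort(deviation)[-n_keep:]      # retain the highest-scoring points
    return X[keep], y[keep]

# Usage: fit SVR on the reduced set instead of the full one.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(2000)
X_red, y_red = knn_reduce(X, y, k=5, keep_ratio=0.25)
model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_red, y_red)

Because the reduced set contains far fewer than N points, the subsequent O(N^3) training step operates on a much smaller problem; the cost of the reduction step itself is what the improved scheme lowers from O(kMN^2) to O(MN^2).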