It has been argued on empirical grounds that ensemble classification by bagging cannot improve the performance of stable classification rules, such as linear discriminant analysis. We have proved that this is indeed the case: the expected classification error of the bagged linear discriminant is always larger than or equal to that of the original linear discriminant, for all sample sizes. This result was proved for the univariate case, under a general Gaussian assumption. In the multivariate case, we provide an exact expression for the expected error of the bagged classifier, which is compared to the exact expected error of the original classifier for several different model parameters. For all models and sample sizes considered, bagging produced a larger expected classification error than the original classifier. We believe this is the first time such results have been established for bagging in continuous feature spaces.
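The phenomenon described above can be illustrated with a minimal Monte Carlo sketch. This is not the paper's derivation: it assumes a hypothetical univariate Gaussian model with equal unit variances (so the LDA rule reduces to thresholding at the midpoint of the sample means), and it uses majority-vote bagging over bootstrap resamples; all parameter values below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_threshold(x0, x1):
    # With equal class variances, univariate LDA reduces to
    # thresholding at the midpoint of the two sample means.
    return (x0.mean() + x1.mean()) / 2.0

def classify(x, t):
    # Predict class 1 when x exceeds the threshold (assumes mu1 > mu0).
    return (x > t).astype(int)

def error_rate(preds, labels):
    return np.mean(preds != labels)

# Illustrative model: mu0 = 0, mu1 = 1, sigma = 1, small training samples.
mu0, mu1, sigma = 0.0, 1.0, 1.0
n, n_test, n_boot, n_mc = 10, 2000, 25, 200

err_lda, err_bag = [], []
for _ in range(n_mc):
    # Training and test data drawn from the two Gaussian classes.
    x0 = rng.normal(mu0, sigma, n)
    x1 = rng.normal(mu1, sigma, n)
    xt = np.concatenate([rng.normal(mu0, sigma, n_test),
                         rng.normal(mu1, sigma, n_test)])
    yt = np.concatenate([np.zeros(n_test), np.ones(n_test)]).astype(int)

    # Original (unbagged) linear discriminant.
    err_lda.append(error_rate(classify(xt, lda_threshold(x0, x1)), yt))

    # Bagged discriminant: majority vote over bootstrap-resampled fits.
    votes = np.zeros(len(xt))
    for _ in range(n_boot):
        b0 = rng.choice(x0, size=n, replace=True)
        b1 = rng.choice(x1, size=n, replace=True)
        votes += classify(xt, lda_threshold(b0, b1))
    err_bag.append(error_rate((votes > n_boot / 2).astype(int), yt))

print(f"LDA error:    {np.mean(err_lda):.4f}")
print(f"Bagged error: {np.mean(err_bag):.4f}")
```

Under this setup the Bayes error is about 0.31, and on average the bagged classifier's estimated error is no smaller than that of the plain discriminant, consistent with the result stated above (any single run of the sketch is of course subject to Monte Carlo noise).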