Lossy compression of hyperspectral data has not yet achieved widespread acceptance in the remote sensing community, mainly because of the perception that using compressed images may degrade the results of subsequent processing stages. This possible negative effect, however, has not been accurately characterized so far. In this letter, we quantify the impact of lossy compression on two standard approaches for hyperspectral data exploitation: spectral unmixing and supervised classification using support vector machines. Our experimental assessment reveals that different stages of the linear spectral unmixing chain exhibit different sensitivities to lossy data compression. We have also observed that, for certain compression techniques, a higher compression ratio may lead to more accurate classification results. Although these results may seem counterintuitive, this work explains them in light of the spatial regularization and/or whitening that most compression techniques perform, and it further provides recommendations on best practices when applying lossy compression prior to hyperspectral data classification and/or unmixing.
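The counterintuitive effect described above can be illustrated with a small experiment. The sketch below is hypothetical and is not the letter's actual experimental setup: it simulates pixels with a linear mixing model over two illustrative endmember spectra, approximates lossy compression by truncating a PCA basis and reconstructing (a stand-in for the denoising/regularization real codecs perform), and compares support vector machine accuracy on the original versus the "compressed" data.

```python
# Hypothetical sketch: PCA truncation as a proxy for lossy compression,
# SVM classification on original vs. reconstructed synthetic pixels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
bands, n_per_class = 50, 200

# Linear mixing model: each pixel is a noisy mixture dominated by one of
# two endmembers (illustrative spectra, not a real spectral library).
e1 = np.abs(np.sin(np.linspace(0, 3, bands)))
e2 = np.abs(np.cos(np.linspace(0, 3, bands)))
X, y = [], []
for label, end in enumerate((e1, e2)):
    a = rng.uniform(0.6, 1.0, n_per_class)          # dominant abundance
    mix = a[:, None] * end + (1 - a)[:, None] * (e1 + e2) / 2
    X.append(mix + rng.normal(0, 0.05, mix.shape))  # additive sensor noise
    y.append(np.full(n_per_class, label))
X, y = np.vstack(X), np.concatenate(y)

# "Lossy compression": keep only a few principal components, reconstruct.
# Discarding low-variance components also discards part of the noise.
pca = PCA(n_components=5).fit(X)
X_rec = pca.inverse_transform(pca.transform(X))

for name, data in (("original", X), ("compressed", X_rec)):
    Xtr, Xte, ytr, yte = train_test_split(data, y, random_state=0)
    acc = SVC().fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: accuracy = {acc:.3f}")
```

Because the PCA reconstruction suppresses part of the per-band noise, the classifier trained on the "compressed" data can match or exceed the accuracy obtained on the original data, consistent with the regularization explanation offered in the letter.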