In the traditional signal model, the signal is assumed to be deterministic, while the noise is assumed to be random, additive, and uncorrelated with the signal component. A hyperspectral image exhibits high spatial and spectral correlation, so a pixel can be well predicted from its spatial and/or spectral neighbors; the prediction error can then be attributed to noise. Based on this concept, several algorithms have been developed for noise estimation in hyperspectral images. However, these algorithms have not been rigorously analyzed under a unified scheme. In this paper, we conduct a comparative study of such linear regression-based algorithms using simulated images with different signal-to-noise ratios (SNRs) and real images with different land cover types. Based on the experimental results, we provide instructive guidance for their practical application.
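To illustrate the underlying idea, the following is a minimal sketch of linear regression-based noise estimation on a synthetic cube: each interior band is regressed on its two spectral neighbors, and the residual standard deviation serves as the per-band noise estimate. All data, sizes, and coefficients here are illustrative assumptions, not the specifics of any particular published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small hyperspectral cube: a smooth spectral shape scaled by a
# per-pixel abundance, plus additive Gaussian noise (illustrative data only).
rows, cols, bands = 20, 20, 30
wavelengths = np.linspace(0.0, 1.0, bands)
base = np.exp(-((wavelengths - 0.5) ** 2) / 0.05)        # smooth spectrum
abundance = rng.uniform(0.5, 1.5, size=(rows, cols, 1))  # spatial variation
clean = abundance * base                                  # (rows, cols, bands)
sigma = 0.02
noisy = clean + rng.normal(0.0, sigma, clean.shape)

# Predict each interior band from its two spectral neighbors by least squares;
# the residual std approximates the per-band noise standard deviation.
pixels = noisy.reshape(-1, bands)
est = []
for b in range(1, bands - 1):
    X = np.column_stack([np.ones(pixels.shape[0]),
                         pixels[:, b - 1], pixels[:, b + 1]])
    y = pixels[:, b]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coef
    est.append(residual.std(ddof=X.shape[1]))

print(f"true sigma = {sigma:.4f}, estimated (mean over bands) = {np.mean(est):.4f}")
```

Because the spectral neighbors used as regressors are themselves noisy, the raw residual std slightly overestimates the noise level; the algorithms compared in this paper differ precisely in how they choose predictors and correct such biases.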