Generalized likelihood ratio tests (GLRTs) are derived for the problem of detecting targets in hyperspectral images. These detectors are derived under the assumptions that the signals from the materials in the image mix linearly and that the noise in the system is Gaussian. It is also assumed that the abundances of the signals from the various materials in a pixel sum to one. This constraint reflects the fact that the abundances are simply the fractions of the pixel occupied by the corresponding materials. Under these assumptions, the resulting detectors outperform their counterparts derived without the sum-to-one constraint.
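The linear mixing model and the sum-to-one abundance constraint described above can be sketched numerically. The snippet below is an illustrative example, not the paper's detectors: it simulates a pixel mixed from hypothetical endmember signatures with Gaussian noise, then recovers abundances by sum-to-one constrained least squares (a standard closed-form Lagrange-multiplier solution). All dimensions, signatures, and the noise level are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: B spectral bands, P endmember materials.
B, P = 50, 3
M = rng.random((B, P))            # endmember signature matrix (columns = material spectra)

# True abundances: fractions of the pixel occupied by each material, summing to one.
a_true = np.array([0.6, 0.3, 0.1])

# Linear mixing model with additive Gaussian noise, as assumed in the text.
x = M @ a_true + 0.01 * rng.standard_normal(B)

# Unconstrained least-squares abundance estimate.
G = np.linalg.inv(M.T @ M)
a_ls = G @ M.T @ x

# Sum-to-one constrained least squares: project the unconstrained estimate
# onto the hyperplane 1' a = 1 (closed form from the Lagrange multiplier).
ones = np.ones(P)
a_scls = a_ls - G @ ones * (ones @ a_ls - 1.0) / (ones @ G @ ones)

print(np.isclose(a_scls.sum(), 1.0))   # prints True: the constraint holds exactly
print(a_scls)                          # estimate close to a_true
```

Enforcing the constraint shrinks the feasible parameter space under both hypotheses, which is why the constrained GLRT can outperform the unconstrained one.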