Most current algorithm evaluation protocols use large image databases but give little consideration to the imaging characteristics under which those data sets were created. This paper evaluates the effects of camera shutter speed and voltage gain under simultaneous changes in illumination, and demonstrates significant differences in the sensitivities of popular vision algorithms to variable illumination, shutter speed, and gain. These results show that offline data sets used to evaluate vision algorithms typically carry a significant sensor-specific bias, which can prevent many common experimental methodologies from producing results that generalize to less controlled environments. We show that for typical indoor scenes the saturation levels of the individual color filters are easily reached, leading to localized saturation that depends not only on scene radiance but also on the spectral density of the colors present in the scene. Even under constant illumination, foreshortening effects due to surface orientation can affect feature detection and saliency. Finally, we demonstrate that active, purposive control of shutter speed and gain leads to significantly more reliable feature detection under varying illumination and nonconstant viewpoints.
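The abstract does not specify a control law for the active shutter/gain control it refers to. As a purely illustrative sketch, one simple realization is a proportional controller that drives the mean image intensity toward a target while capping the fraction of saturated pixels, adjusting shutter first and touching gain only at the shutter limits (all parameter names, units, and thresholds below are assumptions, not the paper's method):

```python
import numpy as np

def update_exposure(frame, shutter, gain,
                    target_mean=110.0, sat_thresh=250,
                    max_sat_frac=0.01, kp=0.005,
                    shutter_range=(0.1, 33.0), gain_range=(0.0, 24.0)):
    """Illustrative proportional exposure controller (not the paper's method).

    frame   : 8-bit grayscale image as a NumPy array
    shutter : current shutter time in ms (hypothetical units)
    gain    : current gain in dB (hypothetical units)
    Returns the updated (shutter, gain) pair.
    """
    mean = frame.mean()
    sat_frac = np.mean(frame >= sat_thresh)

    # If too many pixels saturate, force exposure down regardless of the mean,
    # since saturated regions destroy local image structure for feature detectors.
    if sat_frac > max_sat_frac:
        error = -sat_frac * 255.0
    else:
        error = target_mean - mean

    # Adjust shutter proportionally and clamp it to the valid range.
    new_shutter = float(np.clip(shutter * (1.0 + kp * error), *shutter_range))
    new_gain = gain
    # Only move gain once shutter has hit a limit, because gain amplifies noise.
    if new_shutter <= shutter_range[0] or new_shutter >= shutter_range[1]:
        new_gain = float(np.clip(gain + np.sign(error) * 0.5, *gain_range))
    return new_shutter, new_gain
```

In practice such a loop would run once per frame, with the returned values written back to the camera driver; preferring shutter over gain is a common design choice because gain amplifies sensor noise along with the signal.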