Many computer vision applications must cope with large dynamic range and changing illumination in the environment. Any attempt to deal with these conditions at the algorithmic level alone is inherently difficult for three reasons: (1) conventional image sensors cannot capture wide-dynamic-range radiances without saturation or underexposure; (2) quantization destroys small signal variations, especially in shadows; and (3) all possible illumination conditions cannot be accounted for in advance. This paper proposes a computational model of brightness perception that addresses both dynamic range and noise. The model can be implemented on-chip in the analog domain, before the signal is saturated or degraded by quantization. The model is "unified" in that a single mathematical formulation handles shot and thermal noise and normalizes the signal range, simultaneously compressing the dynamic range, minimizing appearance variations due to changing illumination, and minimizing quantization noise. The model closely mimics brightness perception processes in early biological vision.
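The abstract does not give the model's equations, but the kind of range normalization it describes is often summarized in early-vision literature by Naka-Rushton-style divisive normalization, which maps an unbounded radiance onto a bounded response before quantization. The sketch below is illustrative only; the semi-saturation constant `sigma` and its adaptation to the local mean are assumptions, not the paper's actual formulation.

```python
import numpy as np

def naka_rushton(radiance, sigma):
    """Divisive normalization: map nonnegative radiance to a response in [0, 1).

    Small signals near zero are amplified relative to large ones, so shadow
    detail survives a coarse quantizer; very large radiances compress toward 1
    instead of saturating.
    """
    radiance = np.asarray(radiance, dtype=float)
    return radiance / (radiance + sigma)

# Tying sigma to the mean scene radiance (an assumed adaptation rule) centers
# the response around mid-range, reducing appearance variation when the
# overall illumination level changes.
scene = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])  # four decades of radiance
response = naka_rushton(scene, sigma=scene.mean())
```

Because the response is bounded and monotone in the input, a fixed-step quantizer applied after this stage allocates its levels across the whole compressed range rather than clipping at the extremes.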