This work presents an approach for measuring the influence of input pattern resolution on the classification performance of appearance-based object detection algorithms. Signal theory is used to determine a reasonable pattern (image) resolution before the time-consuming training process begins: the energy of a given low-resolution image is assessed relative to the optimal high-resolution case. The approach is validated with an AdaBoost algorithm using Haar-like features in the context of vehicle detection, and the transfer function of a Haar-like feature is examined within this framework. Classifiers trained at different resolutions are tested, and the results show that a reasonable trade-off between computational load and classification performance can be made.
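The abstract does not specify how the energy comparison is carried out; as a rough illustration only, the sketch below (an assumption, not the paper's method) uses Parseval's theorem to estimate how much of an image's spectral energy a version downsampled by a given factor can still represent, by measuring the energy inside the central low-frequency band of the 2-D spectrum:

```python
import numpy as np

def band_energy_fraction(img, factor):
    """Fraction of spectral energy retained at 1/factor resolution.

    By Parseval's theorem, total pixel energy equals total spectral
    energy; a downsampled image can only represent the low-frequency
    band, so the retained fraction is the energy inside the central
    1/factor band of the 2-D spectrum relative to the full spectrum.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    energy = np.abs(spectrum) ** 2
    h, w = img.shape
    bh, bw = h // (2 * factor), w // (2 * factor)  # half-widths of band
    ch, cw = h // 2, w // 2                        # spectrum center
    band = energy[ch - bh:ch + bh, cw - bw:cw + bw]
    return band.sum() / energy.sum()

# Synthetic test pattern: smooth low-frequency structure plus noise.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
img = np.sin(x / 10.0) + np.cos(y / 7.0) + 0.2 * rng.standard_normal((64, 64))

for f in (2, 4, 8):
    print(f"1/{f} resolution retains {band_energy_fraction(img, f):.1%} of the energy")
```

Because the low-frequency bands are nested, the retained fraction decreases monotonically with the downsampling factor, which is the kind of resolution-versus-information trade-off the paper quantifies before committing to a training resolution.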