We describe a novel architecture for automotive vision organized into five levels of abstraction: the sensor, data, semantic, reasoning, and resource-allocation levels. Although we implement and evaluate processes that detect and classify other traffic participants in the immediate environment of a moving vehicle, our main emphasis is on the allocation of computational resources and on attentive processing by the sensor suite. To that end, we formalize and implement an efficient multiobjective resource-allocation method. It includes a decision-making process that depends on the environment, the current goal, the available sensors and computational resources, and the time remaining to make a decision. We evaluate our approach on road-traffic test sequences acquired with a test vehicle provided by Audi. The vehicle is equipped with lidar, video, radar, and sonar sensors, in addition to conventional Global Positioning System (GPS) navigation, but our evaluation is confined to lidar and video data alone.
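The decision-making process described above, which weighs the current goal against the available computational resources and the time remaining, can be illustrated with a minimal sketch. This is not the paper's method: the `Action` fields, the linear weighted score, and the budget/deadline feasibility test are all illustrative assumptions standing in for one simple way to trade off competing objectives in such an allocation step.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate processing configuration (hypothetical attributes)."""
    name: str
    quality: float   # expected detection/classification quality in [0, 1]
    cost: float      # computational cost, e.g. CPU-ms consumed
    latency: float   # time until a result is available, ms

def select_action(actions, budget, deadline, w_quality=1.0, w_cost=0.2):
    """Pick the feasible action maximizing a weighted multiobjective score.

    Feasibility: the action must fit the remaining compute budget and
    finish before the decision deadline. Among feasible actions, a
    weighted sum rewards quality and penalizes relative cost.
    """
    feasible = [a for a in actions
                if a.cost <= budget and a.latency <= deadline]
    if not feasible:
        return None  # no configuration fits; caller must degrade gracefully
    return max(feasible,
               key=lambda a: w_quality * a.quality - w_cost * (a.cost / budget))

# Example: choose between lidar-only, video-only, and fused processing
# when the fused pipeline exceeds the current compute budget.
candidates = [
    Action("lidar_only", quality=0.70, cost=30.0, latency=20.0),
    Action("video_only", quality=0.80, cost=60.0, latency=40.0),
    Action("lidar_video_fusion", quality=0.95, cost=120.0, latency=80.0),
]
chosen = select_action(candidates, budget=100.0, deadline=50.0)
print(chosen.name)  # fusion is infeasible; video_only scores highest
```

A weighted-sum scalarization is the simplest multiobjective scheme; a fuller treatment would also update the weights as the environment and goal change, as the abstract indicates the architecture does.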