Using multiple sensors for environment perception in autonomous vehicles is now common. Data from these sensors can be fused at different levels: before object detection, after object detection, or after tracking the moving objects. In this paper we detail our object-detection-level fusion of laser and stereo vision sensors, as opposed to pre-detection or track-level fusion. Our laser processing produces a list of objects with position and dynamics properties for each object; similarly, the stereo vision output of another team consists of a list of detected objects with position and classification properties for each object. We apply a Bayesian fusion technique to the objects of these two lists to obtain a new list of fused objects, which is then used in the tracking phase to track moving objects in an intersection-like scenario. Results on data sets from the INTERSAFE-2 demonstrator vehicle show that this fusion improves the data association and track management steps.
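To make the detection-level fusion step concrete, the sketch below shows one standard form of Bayesian fusion of two independent Gaussian position estimates (inverse-variance weighting) for a matched laser/stereo object pair. This is an illustrative assumption, not the paper's exact formulation: the function name, the diagonal-covariance model, and the numeric values are all hypothetical.

```python
import numpy as np

def bayes_fuse(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of an object's position.

    Assumes diagonal covariances (per-axis variances). This is the
    classic inverse-variance-weighted Bayesian fusion; the paper's
    own formulation may differ.
    """
    mu_a, var_a = np.asarray(mu_a, float), np.asarray(var_a, float)
    mu_b, var_b = np.asarray(mu_b, float), np.asarray(var_b, float)
    # Fused precision is the sum of the individual precisions.
    var_f = 1.0 / (1.0 / var_a + 1.0 / var_b)
    # Fused mean weights each sensor by its precision per axis.
    mu_f = var_f * (mu_a / var_a + mu_b / var_b)
    return mu_f, var_f

# Hypothetical example: the laser range is more precise (smaller
# variance), so the fused position leans toward the laser estimate.
laser_pos, laser_var = [10.0, 2.0], [0.04, 0.25]
stereo_pos, stereo_var = [10.6, 2.2], [0.36, 0.36]
mu, var = bayes_fuse(laser_pos, laser_var, stereo_pos, stereo_var)
```

A useful property of this scheme is that the fused variance is always smaller than either input variance, so each matched pair yields a tighter position estimate for the subsequent tracking phase.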