Abstract:
In this paper, we investigate the link between machine perception and human perception for highly/fully automated driving. We compare the classification results of a camera-based frame-by-frame semantic segmentation model (referred to as Machine) with those of a well-established visual saliency model (referred to as Human) on the Cityscapes dataset. The results show that Machine classifies foreground objects better when they are more salient, indicating a similarity with the human visual system. For background objects, accuracy drops as saliency increases, supporting the assumption that Machine has an implicit concept of saliency.
Published in: 2018 IEEE/ACM 1st International Workshop on Software Engineering for AI in Autonomous Systems (SEFAIAS)
Date of Conference: 28 May 2018
Date Added to IEEE Xplore: 02 September 2018
Conference Location: Gothenburg, Sweden