Abstract:
Understanding decision-making in autonomous driving models is essential for real-world applications. Attribution explanation is a primary research direction for interpreting neural network decisions. However, in the context of autonomous driving, purely numerical attributions fail to capture complex semantic information and often yield explanations that are difficult to understand. This paper introduces a novel semantic attribution approach that both identifies where important features appear and provides intuitive information about what they represent. To establish semantic correspondences for attributions, we propose an interpretation framework that integrates unsupervised differentiable semantic representations with the attribution computational model. To further enhance the accuracy of the attribution computation while ensuring strong semantic correspondence, we design a Semantic-Informed Aumann-Shapley (SIAS) method, which defines a novel integration-path solution using constraints from semantic scores and discrete gradients. Extensive experiments confirm that our method outperforms state-of-the-art explanation techniques both qualitatively and quantitatively in autonomous driving scenarios.
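For orientation, the Aumann-Shapley attribution the paper builds on is, for differentiable models, computed as a path integral of gradients from a baseline to the input (Integrated Gradients uses the straight-line path). The sketch below is an illustrative toy, not the paper's SIAS: the model, weights, and straight-line path are assumptions for demonstration, whereas SIAS replaces the path with one constrained by semantic scores and discrete gradients.

```python
# Illustrative sketch of path-based (Aumann-Shapley style) attribution.
# Toy model and straight-line path are assumptions; the paper's SIAS
# instead derives the integration path from semantic constraints.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    # Toy differentiable scalar model standing in for a driving network.
    return sigmoid(np.dot(w, x))

def model_grad(x, w):
    # Analytic gradient of the toy model w.r.t. its input.
    s = model(x, w)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, w, steps=200):
    # Midpoint Riemann approximation of the gradient path integral
    # along the straight line from `baseline` to `x`.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array(
        [model_grad(baseline + a * (x - baseline), w) for a in alphas]
    )
    return (x - baseline) * grads.mean(axis=0)

w = np.array([0.8, -1.2, 0.5])
x = np.array([1.0, 0.5, -0.3])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline, w)
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr.sum(), model(x, w) - model(baseline, w))
```

The printed pair illustrates the completeness property that any Aumann-Shapley style method satisfies: the per-feature attributions sum to the change in model output between baseline and input, regardless of which valid path is chosen.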
Published in: IEEE Transactions on Intelligent Transportation Systems (Volume: 26, Issue: 1, January 2025)