GSAN: Graph Self-Attention Network for Learning Spatial–Temporal Interaction Representation in Autonomous Driving


Abstract:

Modeling interactions among vehicles is critical to improving the efficiency and safety of autonomous driving, since complex interactions are ubiquitous in many traffic scenarios. To model interactions under different traffic scenarios, most existing works consider interaction information implicitly within their specific tasks, relying on hand-crafted features and predefined maneuvers. Extracting an interaction representation that can be shared across different downstream tasks remains unexplored. In this article, we propose a general and novel graph self-attention network (GSAN) that learns the spatial–temporal interaction representation among vehicles through a framework consisting of pretraining and fine-tuning. Specifically, in the pretraining step, we construct the GSAN module from a graph self-attention layer and a gated recurrent unit (GRU) layer, and use trajectory autoregression to learn the interaction information among vehicles. In the fine-tuning step, we propose two different adaptation schemes to utilize the learned interaction information in various downstream tasks and fine-tune the entire model in only a few steps. To illustrate the effectiveness and generality of our spatial–temporal interaction model, we conduct extensive experiments on two typical interaction-related tasks, namely, lane-changing classification and trajectory prediction. The experimental results demonstrate that our approach significantly outperforms state-of-the-art solutions on both tasks. We also visualize the impact of surrounding vehicles on the ego vehicle in different interaction scenes. The visualization offers an intuitive explanation of how our model captures dynamically changing interactions among vehicles and makes good predictions in various interaction-related tasks.
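The pipeline the abstract describes (per-timestep graph self-attention over vehicle features for spatial interaction, a GRU over time for temporal aggregation, and an autoregressive next-position head) can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the hidden size, single-head attention, random placeholder weights, and the `graph_self_attention` / `gru_step` helpers are all assumptions for exposition.

```python
# Toy sketch of the GSAN idea from the abstract: spatial self-attention over
# vehicles at each timestep, then a GRU over time, then a next-position head.
# All sizes and weights below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden feature size (assumed)

# Random projection weights for one self-attention head (placeholders).
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))

def graph_self_attention(X):
    """X: (num_vehicles, D) features at one timestep -> (attended, weights)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(D)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)  # each row: attention over other vehicles
    return A @ V, A

# Standard GRU-cell equations with random placeholder weights.
Wz, Uz, Wr, Ur, Wh, Uh = (rng.standard_normal((D, D)) * 0.1 for _ in range(6))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x):
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde

# Toy rollout: 3 vehicles observed for 5 timesteps.
T, N = 5, 3
h = np.zeros((N, D))
for t in range(T):
    x_t = rng.standard_normal((N, D))         # per-vehicle input features
    attended, A = graph_self_attention(x_t)   # spatial interaction
    h = gru_step(h, attended)                 # temporal aggregation

W_out = rng.standard_normal((D, 2)) * 0.1
next_xy = h @ W_out                           # autoregressive next (x, y) head
print(next_xy.shape)  # (3, 2)
```

In a trained model, the attention matrix `A` is what the paper visualizes: each row shows how strongly one vehicle attends to its neighbors, and the autoregression loss on `next_xy` is what drives those weights to encode interaction during pretraining.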
Published in: IEEE Internet of Things Journal ( Volume: 9, Issue: 12, 15 June 2022)
Page(s): 9190 - 9204
Date of Publication: 05 July 2021
