Abstract:
The rapid growth of vehicles as countries become more developed has brought great challenges to traffic prediction. Recent works model only local or global spatial-temporal features via graph neural networks (GNNs). Furthermore, the explicit graph structure may contain bias, in particular, missing connections among nodes that are in fact interdependent. This hinders information interaction and leads to the underutilization of high-quality information. In this article, we design adaptive spatial-temporal graph convolution networks (ASTGCNs) to collaboratively learn local-global spatial-temporal information for traffic prediction. Specifically, we obtain the local spatial-temporal information of each temporal point by dividing the global spatial-temporal information along the temporal dimension. For local spatial-temporal information, we establish an adaptive graph convolution to enhance the ability of graph convolution networks (GCNs) to manage bias in the explicit graph structure. We then employ an attention mechanism to learn local summarizations of dynamic node neighborhoods and thereby obtain high-quality information. For global spatial-temporal information, a temporal convolution network (TCN) block and an ordinary differential equation (ODE) are utilized in our model. In essence, the proposed ASTGCNs integrates the adaptive graph convolution, the attention mechanism, the TCN block, and the ODE to collaboratively learn local-global spatial-temporal information. Experimental results on four real-world datasets show that ASTGCNs outperforms state-of-the-art (SOTA) methods.
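The abstract does not give the exact formulation of the adaptive graph convolution; the minimal PyTorch sketch below only illustrates one common way such a layer can be realized, by inferring an adjacency matrix from learnable node embeddings and blending it with the explicit graph before a one-hop propagation. The class name AdaptiveGraphConv, the embedding-based adjacency, and the equal blending weight are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGraphConv(nn.Module):
    """Sketch of a graph convolution with a learnable (adaptive) adjacency.

    Assumption: the adaptive adjacency is built from learnable node
    embeddings and mixed with the explicit adjacency, so that missing
    connections in the explicit graph can be compensated.
    """

    def __init__(self, num_nodes: int, in_dim: int, out_dim: int, emb_dim: int = 16):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(num_nodes, emb_dim))  # node embeddings
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, a_explicit: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim); a_explicit: (num_nodes, num_nodes)
        a_adaptive = F.softmax(F.relu(self.emb @ self.emb.T), dim=-1)
        a = 0.5 * (a_explicit + a_adaptive)  # blend explicit and learned structure (assumed weighting)
        return F.relu(self.lin(a @ x))       # one-hop propagation followed by a linear projection


if __name__ == "__main__":
    x = torch.randn(8, 207, 2)        # e.g., 207 sensors with 2 input features per time step
    a = torch.eye(207)                # placeholder explicit adjacency
    layer = AdaptiveGraphConv(num_nodes=207, in_dim=2, out_dim=32)
    print(layer(x, a).shape)          # torch.Size([8, 207, 32])
```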
Published in: IEEE Transactions on Vehicular Technology (Volume: 72, Issue: 10, October 2023)