ACT: Adversarial Convolutional Transformer for Time Series Forecasting


Abstract:

Time series forecasting is an important problem in many fields, including extreme weather early warning, electricity consumption planning, and long-term traffic congestion prediction. Compared with one-step-ahead prediction, multi-horizon forecasting demands high prediction capacity from the model. Recent studies have shown the great potential of the Transformer to improve prediction accuracy. However, three problems restrict its performance: error accumulation, and poor modeling of short-term and long-term dependencies. First, under the teacher forcing strategy, the ground-truth target values are provided during training but are replaced by the previous step's output during testing. This discrepancy between training and testing can lead to error accumulation. Second, time series data depend strongly on their local temporal context, but in the classical Transformer architecture the dot-product self-attention is computed from point-wise values, which are insensitive to local context. Thus, it may fail to distinguish between a turning point, an outlier, and part of a pattern. Third, most methods optimize only a single objective function and do not model the data distribution, which makes it difficult to capture the intricate long-term patterns of time series. To address these issues, we propose a Transformer-based time series forecasting model named the Adversarial Convolutional Transformer (ACT). First, we change the decoding mode from step-by-step generation to one-step generation, predicting the entire sequence in a single forward pass to alleviate error accumulation. Next, we propose a convolutional attention block, which incorporates local context into the self-attention mechanism and captures the short-term dependencies of the data. Then, we introduce adversarial training to capture long-term repeating patterns. Experiments on five challenging datasets demonstrate that ACT delivers solid improvements in accuracy.
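The abstract only names the convolutional attention block without specifying its design, so the following is a minimal single-head sketch of the general idea: queries and keys are produced by causal 1-D convolutions (kernel size greater than 1) rather than point-wise projections, so attention scores compare local shapes instead of isolated points. All module and parameter names below are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAttentionSketch(nn.Module):
    """Hypothetical single-head convolutional self-attention sketch.

    Queries and keys come from causal 1-D convolutions over the sequence,
    injecting local context; values stay point-wise, as in standard attention.
    """

    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # Padding is applied manually in forward() so that each position
        # attends using only itself and its past neighbours (causal).
        self.query_conv = nn.Conv1d(d_model, d_model, kernel_size)
        self.key_conv = nn.Conv1d(d_model, d_model, kernel_size)
        self.value_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); Conv1d expects (batch, d_model, seq_len)
        x_t = x.transpose(1, 2)
        pad = self.kernel_size - 1  # left-pad only => causal convolution
        q = self.query_conv(F.pad(x_t, (pad, 0))).transpose(1, 2)
        k = self.key_conv(F.pad(x_t, (pad, 0))).transpose(1, 2)
        v = self.value_proj(x)
        # Standard scaled dot-product attention over the conv-derived Q and K.
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v

# Usage: scores now reflect local patterns, not single time points.
block = ConvAttentionSketch(d_model=64, kernel_size=3)
out = block(torch.randn(8, 96, 64))  # (batch=8, seq_len=96, d_model=64)
print(out.shape)                     # torch.Size([8, 96, 64])
```

Note that with kernel_size = 1 this reduces to ordinary point-wise self-attention, which is exactly the baseline behaviour the abstract argues is insensitive to local context.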
Date of Conference: 18-23 July 2022
Date Added to IEEE Xplore: 30 September 2022
Conference Location: Padua, Italy
