Abstract:
Time series are ubiquitous in the real world, and many of them are long time series, such as weather records and industrial production records. The long-range dependencies inherent in long time series place high demands on a model's feature-extraction ability, and the sheer sequence length drives up computational cost, so the model must also be efficient. This paper proposes Concatenation-Informer, which contains a Pre-distilling operation and a Concatenation-Attention operation, to forecast long time series. The Pre-distilling operation shortens the series while extracting context-related features. The Concatenation-Attention operation concatenates the attention mechanism's input and output to improve parameter efficiency. The total space complexity of Concatenation-Informer is lower than that of the Informer.
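The abstract gives no implementation details, so the following is a minimal PyTorch sketch of how the two described operations might look. The module names (PreDistilling, ConcatenationAttention), the strided convolution/pooling used to shorten the sequence, and the linear projection after concatenation are all assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code) of the two operations named in
# the abstract. Layer choices and dimensions are assumptions.
import torch
import torch.nn as nn


class PreDistilling(nn.Module):
    """Shortens the sequence while extracting local context features
    (assumed here to be a convolution followed by strided pooling,
    in the spirit of Informer-style distilling layers)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.activation = nn.ELU()
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> (batch, ~seq_len / 2, d_model)
        x = self.conv(x.transpose(1, 2))
        x = self.pool(self.activation(x))
        return x.transpose(1, 2)


class ConcatenationAttention(nn.Module):
    """Concatenates the attention input with the attention output along the
    feature dimension and projects back to d_model -- one plausible reading
    of 'concatenates the attention mechanism's input and output'."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)           # self-attention
        return self.proj(torch.cat([x, attn_out], dim=-1))


if __name__ == "__main__":
    x = torch.randn(8, 96, 64)                     # (batch, seq_len, d_model)
    x = PreDistilling(64)(x)                       # -> (8, 48, 64)
    y = ConcatenationAttention(64, 8)(x)           # -> (8, 48, 64)
    print(y.shape)
```

Under these assumptions, feeding both the raw input and the attention output through a single 2·d_model → d_model projection lets one layer reuse its input features directly, which is one way the concatenation could "improve parameter efficiency" as the abstract claims.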
Date of Conference: 30 August 2023 - 01 September 2023
Date Added to IEEE Xplore: 16 October 2023
References:
1. B. Lim and S. Zohren, "Time-series forecasting with deep learning: a survey," Philosophical Transactions of the Royal Society A, vol. 379, no. 2194, p. 20200209, 2021.
2. R. Hyndman, A. B. Koehler, J. K. Ord, and R. D. Snyder, Forecasting with Exponential Smoothing: The State Space Approach. Springer Science & Business Media, 2008.
3. G. E. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, Time Series Analysis: Forecasting and Control. John Wiley & Sons, 2015.
4. J. M. P. Menezes Jr and G. A. Barreto, "Long-term time series prediction with the NARX network: An empirical evaluation," Neurocomputing, vol. 71, no. 16–18, pp. 3335–3343, 2008.
5. S. Du, T. Li, Y. Yang, and S.-J. Horng, "Multivariate time series forecasting via attention-based encoder–decoder framework," Neurocomputing, vol. 388, pp. 269–279, 2020.
6. K. Wang, K. Li, L. Zhou, Y. Hu, Z. Cheng, J. Liu, and C. Chen, "Multiple convolutional neural networks for multivariate time series prediction," Neurocomputing, vol. 360, pp. 107–119, 2019.
7. M. Khashei and M. Bijari, "A novel hybridization of artificial neural networks and ARIMA models for time series forecasting," Applied Soft Computing, vol. 11, no. 2, pp. 2664–2675, 2011.
8. H. H. Nguyen and C. W. Chan, "Multiple neural networks for a long term time series forecast," Neural Computing & Applications, vol. 13, no. 1, pp. 90–98, 2004.
9. A. Sagheer and M. Kotb, "Time series forecasting of petroleum production using deep LSTM recurrent networks," Neurocomputing, vol. 323, pp. 203–213, 2019.
10. J. Cao, Z. Li, and J. Li, "Financial time series forecasting model based on CEEMDAN and LSTM," Physica A: Statistical Mechanics and its Applications, vol. 519, pp. 127–139, 2019.
11. A. Sagheer and M. Kotb, "Unsupervised pre-training of a deep LSTM-based stacked autoencoder for multivariate time series forecasting problems," Scientific Reports, vol. 9, no. 1, pp. 1–16, 2019.
12. A. Sagheer and M. Kotb, "Time series forecasting of petroleum production using deep LSTM recurrent networks," Neurocomputing, vol. 323, pp. 203–213, 2019.
13. J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, no. 2, pp. 179–211, 1990.
14. S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
15. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
16. R. Mohammadi Farsani and E. Pazouki, "A transformer self-attention model for time series forecasting," Journal of Electrical and Computer Engineering Innovations (JECEI), vol. 9, no. 1, pp. 1–10, 2020.
17. H. Zhang, Y. Zou, X. Yang, and H. Yang, "A temporal fusion transformer for short-term freeway traffic speed multistep prediction," Neurocomputing, 2022.
18. Y. Jin, L. Hou, and Y. Chen, "A time series transformer based method for the rotating machinery fault diagnosis," Neurocomputing, vol. 494, pp. 379–395, 2022.
19. L. Shen and Y. Wang, "TCCT: Tightly-coupled convolutional transformer on time series forecasting," Neurocomputing, vol. 480, pp. 131–145, 2022.
20. N. Wu, B. Green, X. Ben, and S. O'Banion, "Deep transformer models for time series forecasting: The influenza prevalence case," arXiv preprint arXiv:2001.08317, 2020.
21. L. Cai, K. Janowicz, G. Mai, B. Yan, and R. Zhu, "Traffic transformer: Capturing the continuity and periodicity of time series for traffic forecasting," Transactions in GIS, vol. 24, no. 3, pp. 736–755, 2020.
22. B. Lim, S. Ö. Arık, N. Loeff, and T. Pfister, "Temporal fusion transformers for interpretable multi-horizon time series forecasting," International Journal of Forecasting, vol. 37, no. 4, pp. 1748–1764, 2021.
23. N. Kitaev, L. Kaiser, and A. Levskaya, "Reformer: The efficient transformer," arXiv preprint arXiv:2001.04451, 2020.
24. S. Wang, B. Z. Li, M. Khabsa, H. Fang, and H. Ma, "Linformer: Self-attention with linear complexity," arXiv preprint arXiv:2006.04768, 2020.
25. S. Li, X. Jin, Y. Xuan, X. Zhou, W. Chen, Y.-X. Wang, and X. Yan, "Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting," Advances in Neural Information Processing Systems, vol. 32, 2019.
26. I. Beltagy, M. E. Peters, and A. Cohan, "Longformer: The long-document transformer," arXiv preprint arXiv:2004.05150, 2020.
27. H. Zhou, S. Zhang, J. Peng, S. Zhang, J. Li, H. Xiong, and W. Zhang, "Informer: Beyond efficient transformer for long sequence time-series forecasting," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 12, 2021, pp. 11106–11115.
28. A. A. Ariyo, A. O. Adewumi, and C. K. Ayo, "Stock price prediction using the ARIMA model," in 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation. IEEE, 2014, pp. 106–112.
29. S. J. Taylor and B. Letham, "Forecasting at scale," The American Statistician, vol. 72, no. 1, pp. 37–45, 2018.
30. D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.