Abstract:
Transformers have demonstrated remarkable efficacy in forecasting time series. However, their dependence on self-attention mechanisms demands significant computational resources, thereby limiting their applicability across diverse tasks. Here, we propose the perceiver-CDF for modeling cumulative distribution functions (CDFs) of time series. Our model combines the perceiver architecture with copula-based attention for multimodal time series prediction. By leveraging the perceiver, our model transforms multimodal data into a compact latent space, thereby significantly reducing computational demands. We implement copula-based attention to construct the joint distribution of missing data for future prediction. To mitigate error propagation and enhance efficiency, we introduce output variance testing and midpoint inference for the local attention mechanism. This enables the model to efficiently capture dependencies among nearby imputed samples without attending to all previous samples. Experiments on various benchmarks demonstrate a consistent improvement over other methods while utilizing only half of the resources.
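The copula idea behind the attention mechanism can be illustrated in isolation. A copula joins per-variable marginal CDFs into one joint distribution, which is how missing future values can be modeled jointly rather than independently. The sketch below is a minimal, generic Gaussian-copula sampler, not the paper's implementation; all function and variable names here are illustrative.

```python
import numpy as np
from scipy.stats import norm, expon, gamma

def gaussian_copula_sample(corr, n_samples, marginal_ppfs, seed=None):
    """Draw joint samples whose dependence follows a Gaussian copula.

    corr: (d, d) correlation matrix of the latent Gaussian.
    marginal_ppfs: list of d inverse-CDF (quantile) callables, one per variable.
    """
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    # Sample correlated Gaussians, then push them through the Gaussian CDF
    # to obtain correlated uniforms (the copula).
    z = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u = norm.cdf(z)
    # Map each uniform column through its own marginal inverse CDF.
    cols = [ppf(u[:, i]) for i, ppf in enumerate(marginal_ppfs)]
    return np.stack(cols, axis=1)

# Two variables with different marginals but strong positive dependence.
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
samples = gaussian_copula_sample(
    corr, 10_000,
    [expon(scale=2.0).ppf, gamma(a=3.0).ppf],
    seed=0,
)
print(samples.shape)
```

The key property is that the marginal distributions (here exponential and gamma) are preserved exactly, while the dependence structure comes entirely from the latent correlation matrix.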
Published in: 2024 Winter Simulation Conference (WSC)
Date of Conference: 15-18 December 2024
Date Added to IEEE Xplore: 20 January 2025