Probabilistic Load Forecasting With Reservoir Computing

Some applications of deep learning require not only accurate results but also a quantification of the confidence in their predictions. The management of an electric power grid is one of these cases: to avoid risky scenarios, decision-makers need both precise and reliable forecasts of, for example, power loads. For this reason, point forecasts are not enough, and it is necessary to adopt methods that provide an uncertainty quantification. This work focuses on reservoir computing as the core time series forecasting method, due to its computational efficiency and effectiveness in predicting time series. While the reservoir computing literature has mostly focused on point forecasting, this work explores the compatibility of some popular uncertainty quantification methods with the reservoir setting. Both Bayesian and deterministic approaches to uncertainty assessment are evaluated and compared in terms of their prediction accuracy, computational resource efficiency and reliability of the estimated uncertainty, based on a set of carefully chosen performance metrics.


Introduction
Models for time series forecasting are crucial tools in many fields, such as climate modelling, finance, meteorology and energy analytics [39]. For example, an accurate prediction of the energy demand is fundamental to managing a power grid and preventing disruptions in the energy supply, which might have serious and costly consequences [19]. Especially with the growing penetration of intermittent energy resources [3,55], load forecasting plays a key role in the efficient planning of electricity supplies [40].
Beyond producing a forecast, it is important to quantify the uncertainty in the predictions, since this determines how reliable the forecast is for critical decision-making [22]. For example, if the model is fed with an input that lies far from the distribution of the training data, it should report high uncertainty about its prediction, to alert the user that the prediction may be unreliable. In this regard, a method that computes point estimates is not enough. Probabilistic forecasting methods, on the other hand, provide the whole distribution of the forecast or, at least, some confidence intervals [34,17].
Recurrent Neural Networks (RNN) are among the most effective models for processing data with causal dependencies, such as time series, but they come with some drawbacks. Those include the difficulty of modelling long-term dependencies when performing gradient descent through time, and long training times due to a sequential structure that prevents parallelisation on modern hardware [7]. Reservoir computing overcomes these problems by leaving the recurrent part of the model untrained [42], which bypasses the propagation of the gradient through time and greatly reduces the training time. Remarkably, reservoir computing models perform as well as deep learning methods on several tasks [26]. This work uses an Echo State Network (ESN) as the core forecasting model for power demand time series. An ESN is a reservoir computing (RC) model particularly suited for systems that display complex and chaotic behaviour [50]. As explained in more detail later, the ESN embeds the input in a high-dimensional state vector, which can then be fitted to the desired output.
So far, most existing forecasting approaches based on ESNs have focused on computing point estimates, limiting their use in real-world applications that require estimating confidence intervals for the predictions. To fill this gap, we couple the ESN with popular uncertainty quantification methods and adapt them to the reservoir computing paradigm, in particular by studying how to deal with the high-dimensional embeddings generated by the reservoir. We provide an extensive comparison in terms of forecasting accuracy, computational cost and quality of the produced uncertainty, using multiple metrics. In particular, we carried out our experiments on two real-world datasets, representing the electric load of two power grids. Our findings suggest that popular Bayesian methods, such as Markov chain Monte Carlo and variational inference, suffer considerably from the high dimensionality of the reservoir embeddings, while quantile regression proves to be accurate and lightweight. To the best of our knowledge, a comprehensive study that thoroughly evaluates uncertainty quantification in reservoir computing does not exist, and it can provide valuable insights to practitioners looking for models with good trade-offs between performance, training time and uncertainty estimation.
The paper is organised as follows. Section 2 presents a summary of the most recent literature on the subject; a background section then introduces reservoir computing (Section 3.1) and the problem of quantifying uncertainties (Section 3.2). All the methods we compare are presented in Section 4; finally, in Sections 5 and 6, we describe the experiments and provide an evaluation of the results.

Related works
In the last few decades, machine learning methods have earned increasing popularity for time series forecasting. In particular, RNNs became one of the most popular options when working with data characterised by time dependencies [12]. Gated RNNs, such as the LSTM [30], were introduced to address some limitations of vanilla RNNs [21]. The reader can find comparisons of machine learning methods for time series forecasting in [12,39]. Reservoir computing, introduced in [32], has proved to perform better than other machine learning approaches on time series stemming from complex dynamical systems [50].
The deep learning models mentioned so far were originally designed to perform point forecasting, i.e. the prediction of the most likely value given a particular choice of the loss function. The simplest approach to model uncertainty is to train an ensemble of RNNs and build confidence intervals from the statistics of the predictions in the ensemble [54]. This approach makes strong assumptions about the underlying data distribution and often generates overconfident intervals if the models in the ensemble are not different enough [36]. A more elegant approach to model the uncertainty is quantile regression, first introduced in [35] and later further developed in [25]. Quantile regression predicts pre-defined quantiles of the output distribution and can be easily implemented in a neural network trained by minimising a particular loss function [56]. Another approach, called conformal prediction, generates prediction intervals from the statistics obtained by a trained model on a calibration set. Originally designed for independent data [2], conformal prediction has recently been extended to handle time series data [52] and paired with RNNs [57,34].
The interest in probabilistic forecasting with deep learning has recently increased, as evidenced by open-source libraries such as GluonTS [1] and PyTorch Forecasting that provide several models out of the box. One of the deep learning models for probabilistic forecasting that has gained popularity in recent years is DeepAR [48]. DeepAR uses an LSTM that produces an embedding of the input time series and is paired with a probabilistic model chosen by the user, which forecasts future values in the form of Monte Carlo samples. A similar approach is followed by DeepTCN [15], which uses temporal convolutional networks and outputs the parameters of a chosen probability distribution from which the forecast can be sampled. In [46], the authors use a Bayesian model trained with Markov chain Monte Carlo and a sparsity-inducing spike-and-slab prior over the regression parameters. A recent overview of probabilistic forecasting based on deep learning models can be found in the related works section of [20].
Despite the prolific research on probabilistic forecasting with deep learning, only a few efforts have been devoted to implementing probabilistic forecasting with RC approaches. ESGP [14] combines the ESN with Gaussian processes; in [38], Bayesian regression with Gaussian and Laplace priors is used to train the ESN, while [13] employs stochastic variational inference. Another work in this direction is [44], where the authors introduce a Bayesian deep ESN [33] and use MCMC to account for the prediction uncertainty. To make the model lighter, they also use stochastic search variable selection priors, and they compare their framework against ensemble forecast methods. Each of these four works considers a single Bayesian approach to model the uncertainty in the prediction of an ESN. However, a systematic analysis and comparison of popular Bayesian and non-Bayesian uncertainty quantification techniques within an RC framework is still missing. The aim of the present work is to fill this gap.

Reservoir computing
The most popular RC model in machine learning is the ESN; the terms RC and ESN are often used interchangeably [32]. An RC model is composed of three elements: the input layer and the recurrent layer, called reservoir, which are randomly initialised and remain untrained, and a linear output layer, which is the only part that is trained, usually by linear regression. Denoting with x_t ∈ R^K, y_t ∈ R^L and s_t ∈ R^N respectively the input, output and state of the reservoir at time t, the ESN dynamics is governed by the following equations:

s_t = f(W_in x_t + W s_{t-1}),   (1)
y_t = g(s_t; R),   (2)

where W_in ∈ R^{N×K} and W ∈ R^{N×N} are fixed and R ∈ R^{L×N} is the only trainable part; the two functions f and g introduce non-linearities. In general, an ESN could also include feedback from the output in the equation that updates the reservoir state, but we do not consider it here.
In order for the ESN to produce meaningful embeddings, the random matrix W must be initialised so that the echo state property is satisfied [24,58]. Loosely speaking, this requires that the state of the reservoir s_t asymptotically does not depend on its initial state, but only on the input sequence.
A widely used rule of thumb is to initialise the reservoir so that its spectral radius is less than one. However, since the ESN is a non-autonomous system, it has been proved that this criterion is neither necessary nor sufficient to ensure the echo state property, and it can also be far from the optimal choice [58]. For this reason, more advanced methods have been proposed to encourage the reservoir to generate rich dynamics of the input [5,6]. Nevertheless, it has been proven that limiting the spectral radius to be less than one is "sufficient in practice", since a reservoir built in this way satisfies the echo state property with high probability [59]. Therefore, even if potentially sub-optimal, in this work we adopt this criterion for the sake of simplicity and practicality.
Finally, while the vanilla ESN implements g(s_t; R) as a linear readout, more recent works showed that a non-linear readout implemented by a multi-layer perceptron (MLP) usually achieves better performance [8]. In order to train g to accurately map each state s_t into an output y_t = g(s_t; R), where R represents the model's trainable parameters, a training dataset must be constructed. The input time series {x_t}_{t=1}^T, with x_t ∈ R, is used by the reservoir f to update its inner states S = {s_t}_{t=1}^T; hence, the training dataset is defined as {(s_t, y_t)}_{t=1}^T, where y_t = x_{t+h}, with h the forecast horizon. As it stands, g is a deterministic function that returns a single point for each input, so it cannot produce an estimate of the uncertainty.
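To make the construction concrete, here is a minimal NumPy sketch of an ESN with the spectral-radius rule of thumb, a washout of the initial transient and a vanilla linear readout fitted by ridge regression. The reservoir size, washout length, toy sine input and ridge penalty are illustrative assumptions, not the configuration used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 200, 1      # reservoir size and input dimension (illustrative)
rho = 0.9          # target spectral radius < 1 (rule of thumb)
washout = 50       # initial transient states to discard

# Random, untrained input and reservoir matrices.
W_in = rng.uniform(-1.0, 1.0, size=(N, K))
W = rng.normal(size=(N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius

def run_reservoir(x):
    """Drive the reservoir with an input series x of shape (T, K)."""
    s = np.zeros(N)
    states = []
    for x_t in x:
        s = np.tanh(W_in @ x_t + W @ s)  # f = tanh, no output feedback
        states.append(s)
    return np.array(states)[washout:]

# Build the supervised dataset {(s_t, y_t)} with y_t = x_{t+h}.
T, h = 500, 24
x = np.sin(0.1 * np.arange(T + h)).reshape(-1, 1)  # toy input series
S = run_reservoir(x[:T])                           # states for t = washout..T-1
Y = x[washout + h : T + h, 0]                      # aligned targets x_{t+h}

# Vanilla linear readout R fitted by ridge regression.
R = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ Y)
pred = S @ R
```

Replacing the last two lines with an MLP trained on the pairs (s_t, y_t) gives the non-linear readout discussed above.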

Bayesian approach
In the present work, we focus on the readout g(s_t; R) and on how it can be modified to provide uncertainty estimates. One approach is to change the above setting so that the model outputs y together with information on its uncertainty. This can be done, for example, with quantile regression, introduced in Section 4.4. Alternatively, one can resort to Bayesian Neural Networks (BNN), which naturally embody information about uncertainty by generating entire distributions over the output domain.
In a BNN, R is described by a distribution representing which values of R most likely generated the data. In this probabilistic setting, we are interested in the posterior distribution p(R|S, Y), where S and Y are random variables representing respectively the reservoir state and the output. The posterior can then be used to sample an output y* for a previously unseen state s* from

p(y*|s*, S, Y) = ∫_Ω p(y*|s*, R) p(R|S, Y) dR,

where Ω is the domain of R and p(y|s, R) is the likelihood, which is given by the model itself. Bayes' theorem gives the explicit form of the posterior as

p(R|S, Y) = p(Y|S, R) p(R) / p(Y|S),   (4)

where at the numerator we have the likelihood and the prior p(R), which are known and selected by us. A standard option for the likelihood, which we use in our work, is a Gaussian with variance Σ,

p(y|s, R) = N(g(s; R), Σ).

As priors for the parameters R, it is common to use a Gaussian or uniform distribution, following the standard approach of using semi-informative priors [27,49]. A more specialised choice for p(R) is the horseshoe distribution, discussed in Section 4.2.
The problem with (4) lies in the normalisation constant at the denominator,

p(Y|S) = ∫_Ω p(Y|S, R) p(R) dR,   (6)

which is often intractable and cannot be computed analytically. This issue can be overcome by approximating the posterior with variational inference (see Section 4.1) or by using numerical routines, such as Markov chain Monte Carlo (see Section 4.2). Another way to create a BNN and avoid computing (6) is to transform a deterministic model into a probabilistic one using dropout (see Section 4.3). This latter approach sacrifices the fine control over the priors, which naturally allows embedding previous knowledge and acts as a regulariser when dealing with small datasets [27].

Methods
What follows is a description of the methods we compare in this work to produce an uncertainty estimate. We start with the three approaches for training a BNN, then we present QR (Section 4.4), and we conclude by discussing the problem of calibration in Section 4.5. Refer to Figure 1 for a schematic view of the framework. Each point x_t is fed into the reservoir and used to update its internal state s_t. The series of states is then used to train the readout g(s_t; R) using one of the represented methods. On the right, we have the outcome, with ground truth values in orange and a prediction represented by a solid blue line, with its uncertainty pictured as a shaded band around it.

Variational inference
Variational Inference (VI) offers a way to tackle the intractability of (4). The idea is to substitute the true posterior p(R|S, Y) with an approximation q_θ(R), called the variational distribution, which depends on some parameters θ. In order to identify, among all the possible q_θ(R), the one that best approximates the posterior, one has to minimise the Kullback-Leibler divergence between the approximate and the true posterior,

KL(q_θ(R) ∥ p(R|S, Y)) = ∫_Ω q_θ(R) log [q_θ(R) / p(R|S, Y)] dR.

One can prove that

KL(q_θ(R) ∥ p(R|S, Y)) = log p(Y|S) − L_LL + KL(q_θ(R) ∥ p(R)),

where L_LL := ∫_Ω q_θ(R) log p(Y|S, R) dR is the expected log-likelihood. Hence, an equivalent approach to minimising the KL on the left is to minimise the right-hand side (which no longer depends on p(R|S, Y)), i.e. maximising the Evidence Lower Bound (ELBO), defined as

ELBO(θ) := L_LL − KL(q_θ(R) ∥ p(R)),   (8)

where, by maximising the first term, we push q_θ(R) to better approximate the data distribution, while the second term keeps it close to the prior p(R). VI, in other words, translates the task of solving (4) into the optimisation problem (8), which requires computing derivatives rather than integrals.
As variational distribution q_θ(R), we use a multivariate Gaussian over the whole parameter space, parameterised through its Cholesky decomposition, without assuming any independence between parameters.
In our case, the parameters R mainly stem from the MLP g(s; R) used as readout of the ESN, so their number N increases with the number of units within each layer and with the number of layers. For example, for a linear layer of size 500 × 100, the dimension of R for that layer alone is N = 50 000. A full multivariate Gaussian as variational distribution would then be defined by a covariance matrix with N² entries, which requires a large amount of memory and can easily saturate it.
To avoid this issue, we adopt a sparsity-inducing distribution, the low-rank multivariate Gaussian, as variational distribution q_θ(R). If we assume that R ∈ R^N depends on a smaller number of latent parameters ϕ ∈ R^r, we have R = Cϕ + ϵ, where C is an N × r matrix with r < N, ϕ is an r-dimensional random variable from a Gaussian distribution with zero mean and covariance I_r, and ϵ is an N-dimensional vector of independently distributed error terms with zero mean and finite variance var(ϵ) = Ψ. Then one can prove that

cov(R) = CC^T + Ψ,   (10)

so the covariance matrix can be expressed as the sum of a low-rank matrix CC^T of rank r and a diagonal matrix Ψ. A common choice is r = √N. Notice that when C = 0 the covariance matrix becomes diagonal, so the mean-field approximation, where all parameters R are assumed independent, is recovered as a special case.
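A minimal sketch of the low-rank parameterisation cov(R) = CC^T + Ψ, with all sizes illustrative: it shows the memory saving (N·r + N numbers instead of N²) and that samples R = Cϕ + ϵ can be drawn without ever materialising the N × N covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

n_params = 1000                # dimension N of the readout parameters R
r = int(np.sqrt(n_params))     # common choice r = sqrt(N)

C = rng.normal(size=(n_params, r))           # low-rank factor
psi = rng.uniform(0.1, 1.0, size=n_params)   # diagonal variances of the noise

# cov(R) = C C^T + Psi is stored with N*r + N numbers instead of N^2.
full_cov = C @ C.T + np.diag(psi)

def sample_R(mean):
    """Draw R = mean + C phi + eps without building the N x N covariance."""
    phi = rng.normal(size=r)
    eps = rng.normal(size=n_params) * np.sqrt(psi)
    return mean + C @ phi + eps

# Mean-field approximation (all parameters independent) is the case C = 0.
mean_field_cov = np.diag(psi)
```

With N = 50 000 as in the layer example above, the factorised storage is N·r + N ≈ 1.1 × 10⁷ numbers against N² = 2.5 × 10⁹ for the full covariance.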

Markov chain Monte Carlo
Another way to tackle the posterior in (4) is to use Markov chain Monte Carlo (MCMC), which allows sampling from the posterior without the need to compute the integral in (6). Indeed, under certain assumptions (see Appendix A.1 for details), it is possible to define a stochastic process {R_0, R_1, ...}, where R_i is a discrete random variable describing the model's parameters R, that converges to the true posterior distribution p(R|S, Y), knowing only that p(R|S, Y) ∝ p(Y|S, R) p(R). MCMC provides different routines to simulate such a stochastic process, allowing the collection of samples from the posterior without knowing its normalisation explicitly. In particular, in this work we use the No-U-Turn Sampler (NUTS) [31], a variant of Hamiltonian Monte Carlo (HMC) [18].
One of the most important issues in MCMC is understanding whether or not the chain has converged to the posterior distribution [43]. There are a few methods to evaluate convergence, but none of them is perfect: without knowing the target distribution in advance, it is impossible to guarantee that convergence has been reached. Even assuming that convergence is reached, it can take many iterations, depending on a number of factors, among which are the dimension of the parameter space and the complexity of the target distribution. A large number of parameters also negatively affects the use of MCMC from a memory perspective, since inferring properties of the target distribution requires collecting a relatively large number of samples. Given the high dimensionality of the reservoir space, a straightforward application of MCMC is therefore difficult.
We explored two possible solutions to overcome this limitation. The first one is to apply principal component analysis (PCA) to reduce the dimension of the reservoir state space before running MCMC. Besides reducing the computational burden of training the readout, this procedure also offers a regularisation that can improve the overall performance on the downstream task [41].
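A sketch of this first solution, assuming PCA is computed via an SVD of the centred state matrix; the state dimensions and reduced rank are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reservoir states: T time steps in an N-dimensional state space.
T, N, d = 400, 500, 30    # d is the reduced dimension (illustrative)
S = rng.normal(size=(T, N)) @ rng.normal(size=(N, N)) * 0.1  # correlated states

# PCA via SVD of the centred state matrix.
mean = S.mean(axis=0)
U, sigma, Vt = np.linalg.svd(S - mean, full_matrices=False)
S_reduced = (S - mean) @ Vt[:d].T  # projection onto the d leading components

# The readout (and the MCMC over its parameters) now works in R^d, not R^N.
```

New states at inference time are projected with the same mean and components Vt[:d] fitted on the training states.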
The second solution is to implement Stochastic Search Variable Selection (SSVS) [28], which was introduced to speed up convergence, following the intuition that more promising parts of the parameter space should receive more probability mass, so that MCMC converges there faster. In a Bayesian regression problem, we consider a likelihood of the form p(Y|S, β, σ) = N(S^T β, σ²), with Y real-valued, S and β R^p-valued random variables, and we are interested in the posterior distribution of σ and of the regression parameters β that best fit the inputs S to Y. The idea of SSVS is that some of the β_i's might be negligible. For this reason, the prior over all parameters is p(β, σ) = p(β|σ)p(σ), where we chose p(σ) to be a uniform distribution and p(β|σ) a shrinking distribution [53], inducing sparsity over β. As the shrinking distribution we chose the horseshoe distribution [11], which introduces a local shrinkage parameter λ_i and a global one τ, distributed as follows:

β_i | λ_i, τ, σ ~ N(0, λ_i² τ² σ²),
λ_i ~ C⁺(0, 1),   τ ~ C⁺(0, 1),

where C⁺ denotes the half-Cauchy distribution.

Dropout
The dropout technique [51] can be used to easily turn the readout g, in our case an MLP, into a Bayesian model. Consider, for simplicity, an MLP with a single hidden layer and no bias terms. Denote by Q the input dimension, K the number of hidden units and D the output dimension. Dropout is applied by sampling two vectors b_1 and b_2, of dimensions Q and K, from a Bernoulli distribution of parameter p and by changing the action of the MLP as follows:

y = W_2 (b_2 ∘ σ(W_1 (b_1 ∘ x))),   (13)

where W_1 ∈ R^{K×Q} and W_2 ∈ R^{D×K} are the weight matrices, ∘ stands for the Hadamard product and σ is the activation function.
Dropout is commonly used during training as a regularisation technique [51]; however, when kept active during inference, it turns the MLP in (13) into a fully Bayesian model [23] that can be applied repeatedly to build an ensemble of values {y_i}, allowing uncertainties to be estimated without the need to solve (6).
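A minimal NumPy sketch of Monte Carlo dropout on a one-hidden-layer MLP as described above; the dimensions, keep probability p and tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

Q, K, D = 10, 50, 1   # input, hidden and output dimensions
p = 0.8               # Bernoulli parameter (keep probability), assumed value

W1 = 0.1 * rng.normal(size=(K, Q))
W2 = 0.1 * rng.normal(size=(D, K))

def forward_with_dropout(x):
    """One stochastic pass: dropout masks are sampled at inference time too."""
    b1 = rng.binomial(1, p, size=Q)   # mask on the input
    b2 = rng.binomial(1, p, size=K)   # mask on the hidden units
    return W2 @ (b2 * np.tanh(W1 @ (b1 * x)))

# Repeated stochastic passes build an ensemble of outputs {y_i};
# its spread provides the uncertainty estimate.
x = rng.normal(size=Q)
samples = np.array([forward_with_dropout(x) for _ in range(1000)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
```

Quantiles of `samples` then play the role of the sampled posterior quantiles used in the calibration procedure of Section 4.5.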

Quantile Regression
Another approach, alternative to BNNs, is to change the training of g so that for each reservoir state s_t it outputs both the forecast y_t and a measure q of its uncertainty. In Quantile Regression (QR) [35], such q consists of one or more quantiles of choice. Given a probability distribution P(Y|S), the quantile of level τ is the value q_τ(Y|S) such that Pr(Y ≤ q_τ) = τ. By simply redefining the loss function of a regression task, QR directly infers the quantile we are interested in.
In a standard regression problem, we find the conditional mean E[Y|S] = g(S; R) by minimising the sum of squared residuals,

R = argmin_R Σ_t (y_t − g(s_t; R))².

In QR, the optimisation problem is modified as follows:

R(τ) = argmin_R Σ_t ρ_τ(y_t − g(s_t; R)),

where ρ_τ, called the check loss (or pinball loss), is defined as

ρ_τ(u) = τ u if u ≥ 0,   ρ_τ(u) = (τ − 1) u if u < 0,

and the regression parameters R(τ) depend on the chosen quantile. By performing QR for multiple values of τ, we can infer a more detailed view of the distribution P(Y|S) than just its mean value.
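The check loss is straightforward to implement; the following sketch (with illustrative data) also verifies numerically that minimising it over a constant predictor recovers the empirical τ-quantile of the data, which is what makes QR work.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Check (pinball) loss: tau*u for u >= 0, (tau - 1)*u for u < 0."""
    u = y_true - y_pred
    return float(np.mean(np.where(u >= 0, tau * u, (tau - 1) * u)))

# Minimising the pinball loss over a constant predictor recovers the
# empirical tau-quantile of the data.
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
tau = 0.9
grid = np.linspace(-3.0, 3.0, 601)
best_q = grid[int(np.argmin([pinball_loss(y, q, tau) for q in grid]))]
# best_q lands close to np.quantile(y, 0.9)
```

In a neural readout, this loss simply replaces the squared error, with one output (or one head) per quantile level τ.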

Calibration
In all the above scenarios, the model does not yield a simple real value, but a more structured output containing information about uncertainty. For this reason, we cannot evaluate its performance by an accuracy measure alone. Indeed, the mean or the median of the output can accurately reproduce y_t while the output distribution is still far off from the values spanned by the dataset.
The idea behind calibration is that if the model outputs a 90% confidence interval I ⊂ R for ỹ_t, then y_t should fall in I roughly 90% of the time [37,29]. See Figure 2a for an example. Ideally, the condition above should hold for every possible confidence interval. In practice, we pick a set of K quantile levels {τ_i}_{i=1}^K spanning the interval [0, 1] and check the calibration on each of them. To do so, we first compute for each time step the quantiles q_τ,t, that is, the values satisfying

F_t(q_τ,t) = τ,   (17)

where F_t is the cumulative distribution function (CDF). Notice that the quantiles need to be computed from the sampled values of ỹ_t for VI, MCMC and dropout, while they are provided directly as output by QR. Ideally, for each τ_i, y_t should be smaller than q_τi,t with a frequency of approximately τ_i, that is,

τ̂_i := |{y_t : y_t ≤ q_τi,t, t = 1, ..., T}| / T ≈ τ_i,   (18)

where |·| denotes the cardinality of a set. The left-hand side of (18), which we call τ̂_i, is the empirical cumulative distribution, while on the right-hand side we have the theoretical one.
We can quantify how uncalibrated the model is by computing the calibration error

cal = Σ_{i=1}^K w_i (τ_i − τ̂_i)²,   (19)

where the w_i are weights that can be used to give less importance to some quantiles; in our case, we set w_i = 1.
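The calibration diagnostic above can be sketched as follows, computing the empirical frequencies τ̂_i and the weighted calibration error from per-step quantile forecasts; the toy forecaster and quantile grid are illustrative.

```python
import numpy as np

def calibration(y, quantile_preds, taus, weights=None):
    """Empirical frequencies tau_hat and weighted calibration error.

    y: observations, shape (T,); quantile_preds: shape (K, T), row i holding
    the predicted tau_i-quantile at each time step; taus: the K levels.
    """
    taus = np.asarray(taus)
    weights = np.ones_like(taus) if weights is None else np.asarray(weights)
    tau_hat = (y[None, :] <= quantile_preds).mean(axis=1)  # empirical CDF values
    cal = float(np.sum(weights * (taus - tau_hat) ** 2))
    return cal, tau_hat

# Toy check: a forecaster issuing the true quantiles of the data-generating
# distribution is (almost) perfectly calibrated, so cal is close to zero.
rng = np.random.default_rng(0)
T = 20_000
y = rng.normal(size=T)
taus = np.linspace(0.05, 0.95, 19)
q = np.tile(np.quantile(y, taus)[:, None], (1, T))  # constant quantile forecasts
cal, tau_hat = calibration(y, q, taus)
```

The pairs (τ_i, τ̂_i) produced here are exactly the points used to fit the recalibration model µ described next.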
In order to calibrate a model, we use the method described in [37]: with a separate portion of the dataset, we create a recalibration dataset {(τ_i, τ̂_i)}_{i=1}^K and use it to train a calibration model µ : [0, 1] → [0, 1], which is then used to correct the uncalibrated quantiles. Consider, for example, Figure 2b: the horizontal axis represents the predicted CDF τ, while the vertical axis is the empirical CDF τ̂. The purple points are the pairs {(τ_i, τ̂_i)}_{i=1}^K obtained from the test set before calibration, which mostly lie far from the dashed line where τ̂ = τ. After applying the calibration model µ, trained on the calibration set so that µ(τ) approximates τ̂, the results are much better calibrated, as shown in red in Figure 2b.

Experimental framework
Our experiments aim at comparing, on a time series forecasting task, the performance of a probabilistic readout implemented with different uncertainty quantification techniques. The methods we employed are VI, MCMC (with and without the use of PCA to reduce the dimensionality of the reservoir state), SSVS, dropout and quantile regression. Our software implementation makes use of PyTorch and the probabilistic programming library Pyro [9]. The different methods are compared according to common criteria quantifying the accuracy of the prediction, the quality of the uncertainty and the computational complexity. For each method, we also discuss the simplicity of usage in terms of the expertise required by the practitioner.
All experiments were run on NVIDIA RTX A6000 graphics cards.

Datasets
To carry out the experiments, two datasets of univariate time series have been used. The first time series represents the electric load measured by Azienda Comunale Energia e Ambiente (ACEA), the company managing the electricity distribution in Rome. Data were collected every 10 minutes over 3 years of activity, from 2009 to 2011 [4], with a precision up to the second decimal digit. From the original dataset, we kept one point every six, so as to obtain an hourly resolution, and removed 11 weeks (being careful not to disrupt the pattern of the time series) due to a strong anomaly that would disproportionately affect the statistics of the training set compared to the calibration and test sets. The final dataset consists of 21048 time steps.
The second dataset (Spain) represents the daily electric demand in the Spanish market between 2014 and 2018, with a precision up to the fourth decimal digit. The dataset has a length of 1825 time steps.
In both cases, the training dataset is formed by the first 70% of the time series, while the test and validation datasets each take half of the remaining 30%. We normalised the data by subtracting the mean and dividing by the standard deviation of the training set.
When working on time series forecasting, a crucial element to take into consideration is the presence of seasonalities. As expected, both the ACEA and Spain datasets have a strong weekly seasonality, i.e. s = 168 for ACEA and s = 7 for Spain. While seasonalities with longer lags are of course present, they are much less prominent.
The dataset is of the form {(x_t, y_t)}_{t=1}^T, with y_t = x_{t+h}, where h is the forecast horizon. The input x_t is first processed by the reservoir to produce the states s_t, so the readout g is trained on a dataset {(s_t, y_t)}_{t=1}^T. The choice of the forecast horizon h is constrained by the particular application at hand, but it is also limited by the removed seasonality s: h must be smaller than or equal to s or, if multiple seasonalities are removed, smaller than the smallest one. Indeed, after removing the seasonality, each data point in the dataset becomes

(x̃_t, ỹ_t),   with x̃_t = x_t − x_{t−s} and ỹ_t = x̃_{t+h} = x_{t+h} − x_{t−s+h}.

To reconstruct the output, it is necessary to reintroduce the seasonality by undoing the differencing. In particular, the reconstructed value is x_{t+h} = x̃_{t+h} + x_{t−s+h}. If h > s, then t − s + h > t, so x_{t−s+h} is itself part of the forecast. The uncertainty of the forecaster then affects x_{t+h} twice, through both terms: this is an artefact stemming from the seasonal differencing procedure that inflates the uncertainty of the predictions. If, instead, h ≤ s, then x_{t−s+h} is part of the input, so its value is known with certainty and does not contribute to the uncertainty of the output. For this reason, we chose h = 24 for ACEA and h = 1 for Spain, i.e. one day in both cases.
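The seasonal differencing and its inversion can be sketched as follows; the seasonality, horizon and toy series are illustrative (the ACEA values s = 168 and h = 24 are used).

```python
import numpy as np

rng = np.random.default_rng(0)

s, h = 168, 24    # weekly seasonality and forecast horizon (ACEA values)
T = 1000
x = np.sin(2 * np.pi * np.arange(T) / s) + 0.1 * rng.normal(size=T)  # toy load

# Seasonal differencing: x_tilde_t = x_t - x_{t-s}.
x_tilde = x[s:] - x[:-s]

# Undoing the differencing: x_{t+h} = x_tilde_{t+h} + x_{t-s+h}. With h <= s,
# the second term is an observed value, so it adds no forecast uncertainty;
# round-tripping the differencing recovers the original series exactly.
x_rec = x_tilde + x[:-s]
```

The same addition of the lagged observed value is applied to every predicted quantile when mapping the forecast of x̃_{t+h} back to the original scale.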

Metrics
To perform our analyses, we need metrics that
• measure the capability of each method to produce accurate point forecasts, and
• ascertain the quality of the produced uncertainty.
To perform a comparison, these metrics must be applicable to every method. As a measure of forecasting performance, we use the mean squared error (MSE) between the true value y_t and the median of the forecast g(s_t; R). Evaluating the uncertainty would not be very meaningful if the underlying forecast were inaccurate, so the MSE is the main metric considered for hyperparameter selection.
Referring to the median, instead of the average, allows the MSE to be computed homogeneously for all methods (in particular, with QR it is not possible to compute the average), so it can be used to compare them. Another important gauge of performance is the training time of the different methods. It must be noted that, strictly speaking, MCMC does not undergo a training process. The closest notion would be the convergence time of the chains, but assessing when they converge is a difficult problem and an open research question [16,47]. For MCMC, we therefore measure the total time elapsed while running the single chain.
To analyse the quality of the uncertainty estimate, we use four different metrics. The first one is the calibration error (cal), see equation (19), which quantifies how much the empirical cumulative distribution deviates from the theoretical one (ideally, it is zero). Then, we consider the width and the coverage of the 95% confidence interval, which measure, respectively, how wide that confidence interval is and whether it actually covers 95% of the points. Namely, a smaller width reflects a less uncertain forecast, while the coverage should be close to 0.95. We notice that the coverage is redundant, since the information is already contained in the calibration error on each quantile. Nevertheless, we also report the coverage of the 95% confidence interval, as it is a popular performance measure adopted in many applications. The last metric is the mean continuous ranked probability score (mCRPS), which measures the average distance between the cumulative distribution and the Heaviside function θ,

mCRPS = (1/T) Σ_{t=1}^T ∫ (F_t(z) − θ(z − y_t))² dz.   (21)

The cumulative distribution F in (21) is known exactly when working with VI or MCMC, while it is known only discretely with QR, through the predicted quantiles. To compute the CRPS homogeneously for all methods, we therefore approximate F with a step function defined using the quantiles, which can easily be computed in all cases. Referring to (17), the step function takes the value τ_i on the interval [q_τi,t, q_τi+1,t), and the integral in (21) is approximated by a sum over these intervals. CRPS can be interpreted as a combined measure of both accuracy (MSE) and calibration (cal).
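The step-function approximation of the CRPS can be sketched as follows; the exact placement of the steps is an assumption of this sketch, checked here against the closed-form CRPS of a standard Gaussian, CRPS(N(0,1), y) = y(2Φ(y) − 1) + 2φ(y) − 1/√π.

```python
import numpy as np

def crps_from_quantiles(y, qs, taus):
    """Approximate CRPS(F, y) = int (F(z) - H(z - y))^2 dz with a step CDF.

    qs: sorted predicted quantile values; taus: their levels in (0, 1).
    The tails beyond the outermost quantiles are truncated.
    """
    z = np.sort(np.concatenate([qs, [y]]))        # breakpoints of the integrand
    total = 0.0
    for a, b in zip(z[:-1], z[1:]):
        i = np.searchsorted(qs, a, side="right")  # quantiles with value <= a
        F = taus[i - 1] if i > 0 else 0.0         # step CDF on [a, b)
        H = 1.0 if a >= y else 0.0                # Heaviside theta(z - y)
        total += (F - H) ** 2 * (b - a)
    return total

# Dense empirical quantiles of a standard Gaussian as the toy forecast.
rng = np.random.default_rng(0)
taus = np.linspace(0.005, 0.995, 199)
qs = np.quantile(rng.normal(size=200_000), taus)
y_obs = 0.3
approx = crps_from_quantiles(y_obs, qs, taus)
```

Averaging `crps_from_quantiles` over all time steps gives the mCRPS reported in the experiments.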

Results
To ensure a fair comparison, the reservoir is generated once, configured with hyperparameters that satisfy the echo state property, and then used for all methods. In addition, for each method, we optimised the hyperparameters using cross-validation based on grid search. The optimised hyperparameters involve both the structure of the MLP used for the readout g and some method-specific features. Table 1 contains a list of all the possible values for each hyperparameter, while Table 2 reports the optimal hyperparameter configuration for each method on the two analysed datasets.

Table 2: Optimal hyperparameters used, for each method, to produce the results in Table 3. The optimal values are found by performing a grid search among all the combinations reported in Table 1 and selecting the best-performing one in terms of forecast accuracy (MSE) on the validation set. Blue refers to the ACEA dataset, red to the Spain dataset; values in black are common to both. A hyperparameter is marked as not applicable (N/A) if it does not apply to that particular method.
Once the best-performing hyperparameters have been determined, we perform 10 runs for each experiment and collect the resulting means and standard deviations in Table 3, which shows the performance of each method in terms of computational burden (training time), forecast accuracy (MSE) and robustness of the uncertainty (cal, width, coverage and mCRPS). The plots in Figures 3 and 4 present the same results in compact form, together with the effect of calibration on each metric. Specifically, a blue band traces the metrics before calibration, with a solid line representing the mean and the shaded area the standard deviation, while a red band depicts the same metrics after calibration.
Figures 3 and 4 show that the effect of calibration is different for each method. For example, Figure 3a shows that, for the ACEA dataset, VI benefits from the calibration process: the MSE becomes slightly smaller, although not in a statistically significant way (from 0.394 ± 0.010 to 0.391 ± 0.009), but the calibration error drops almost to zero (from 0.568 ± 0.049 to 0.017 ± 0.004), with better coverage of the 95% confidence interval (from 0.967 ± 0.001 to 0.961 ± 0.003).
Post-calibration bands for QR are not reported in Figures 3 and 4. Since QR encompasses a discrete number of quantiles (K = 42) and the predicted CDF is discontinuous, the calibration map µ could point to quantiles that were never computed. Indeed, the 95% CI is defined by (q_τ′, q_τ″), where τ′ = 0.025 and τ″ = 0.975, but, after calibrating, µ(τ′) and µ(τ″) will in general be different, so the corresponding CI will no longer refer to 95%. All the other methods provide a distribution as output, so it is possible to compute new quantiles q_τ′ and q_τ″ corresponding to the calibrated CI. QR instead outputs the quantiles directly, so if the calibrated quantiles were not computed beforehand, they cannot be obtained afterwards (it would be necessary to re-train the model anew, and the new model would again be uncalibrated).
Figure 5 shows some examples of forecasts produced by each method on both datasets, after calibration (except for QR, for which calibration is not performed because the predicted CDF is discontinuous). By looking at Table 3 and the plots in Figures 3, 4 and 5, it is difficult, at first glance, to identify the best-performing method, since different methods achieve the best performance, according to some metric, on each dataset. This underlines the complexity and possible ambiguities in evaluating performance in probabilistic forecasting. Nevertheless, some important considerations can be made, which are reported in the following.
• The most pronounced difference regards training times: methods based on MCMC show times from 2 to 4 orders of magnitude longer than the other methods. The difference is further stressed in Figure 6a. Although it is improper to speak of training time for MCMC, this does not change the fact that the MCMC-based methods are extremely expensive and impractical to use in this scenario. Even compressing the dimensionality of the reservoir states via PCA, or using SSVS, does not decrease the computing times.
• Overall, QR is the method that achieves the best performance, especially on the ACEA dataset. Even though post hoc calibration cannot be applied, for the reasons discussed above, QR still achieves the lowest calibration error.
• All methods based on MCMC generally behave worse according to all metrics. In particular, the MSE of MCMC is higher than that of the other methods. The standard deviation of the results is also larger: this may be due to poor convergence of the chains.
• The fastest technique is dropout, which also achieves an MSE comparable with the other approaches. However, the out-of-the-box calibration error is markedly higher, due to the extremely narrow confidence intervals that yield poor coverage. Even after calibration, the uncertainty estimation does not improve by much, as we can see from Figures 3e and 4e. The poor performance in terms of uncertainty quantification is apparent from Figures 5i and 5j: on both datasets, the uncertainty is generally very low, meaning that the randomness induced by dropout is not strong enough. With a higher value of p the predicted uncertainty improves, but the MSE gets worse.

Table 4: Results for DeepAR are obtained by averaging 10 runs after selecting the best-performing hyperparameters for each dataset. On the other hand, since there is no stochasticity in its fitting procedure, ARIMA is executed only once and we do not report a standard deviation for its results.
• Figure 6b depicts the relationship between uncertainty (cal) and forecast accuracy (MSE), with the datasets shown in different colours. The better-performing methods are QR and VI: QR achieves the best performance on the Spain dataset, while VI is the best-performing method on ACEA and the second-best on Spain.
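The Monte Carlo dropout scheme whose behaviour is discussed in the bullets above can be sketched as follows (an illustrative linear readout on a reservoir state, not the MLP readout used in the experiments): dropout is kept active at prediction time, and the spread of many stochastic forward passes provides the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(state, W, p=0.1, n_samples=400):
    """Sample n_samples stochastic forward passes with dropout probability p;
    `state` is a reservoir state vector, `W` an illustrative linear readout.
    Returns the median prediction and the empirical 95% interval."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(state.shape) >= p           # drop units with prob. p
        preds.append((W @ (state * mask)) / (1 - p))  # inverted-dropout scaling
    preds = np.array(preds)
    return np.median(preds), np.quantile(preds, 0.025), np.quantile(preds, 0.975)
```

Small values of p give narrow intervals, which matches the narrow confidence bands and poor coverage observed for dropout above.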
Overall, the approach that prevails when jointly considering training time, MSE and calibration error is QR. Its prediction accuracy and uncertainty estimation are comparable to those of VI, but its computational cost is significantly lower. Another important advantage of QR is the simplicity of its implementation: it is sufficient to replace the standard MSE loss used for training a regression model with the pinball loss, making QR a straightforward choice in machine learning applications. On the other hand, Bayesian approaches such as MCMC and VI rely on additional software libraries for probabilistic programming, which add an extra layer of implementation complexity. Additionally, while Bayesian inference offers more flexibility, thanks to the possibility of specifying the data distribution and introducing prior information into the model, it also poses additional challenges. Indeed, the user is given the extra task of identifying the optimal distributions and the most meaningful priors, which is often not straightforward in real-world applications, in addition to practical challenges such as checking the convergence of the chains in MCMC.
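The pinball loss mentioned above, which turns a standard regression objective into a quantile-regression one, can be written compactly (a minimal sketch, not the paper's training code):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss at level tau: over-predictions are weighted
    by (1 - tau) and under-predictions by tau. Replacing the MSE with this
    loss makes the readout estimate the tau-th conditional quantile."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))
```

Minimising this loss over the predictions, for a fixed τ, yields the τ-th quantile of the target distribution, which is why a single loss swap suffices to obtain probabilistic forecasts.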
Comparison with baselines. To understand how the reservoir-based methods perform with respect to other baselines, we compare them against two popular methods for probabilistic time-series forecasting, ARIMA [10] and DeepAR [48]. To determine the ARIMA model we relied on Auto-ARIMA from the pmdarima library, which identified ARIMA(3, 0, 1) and ARIMA(9, 0, 0) as the best-performing models for ACEA and Spain, respectively. For DeepAR we followed the same hyperparameter search procedure as for the reservoir-based models. The best-performing DeepAR model has 2 RNN layers with a hidden size of 32 for ACEA, and 8 RNN layers with a hidden size of 16 for Spain. Table 4 presents the same metrics obtained by ARIMA and DeepAR on the two datasets. As expected, the performance of ARIMA according to the different metrics is worse than that of most reservoir-based methods. The accuracy of DeepAR is comparable with the other methods, but with a much longer training time. Finally, we note that for both ARIMA and DeepAR the calibration error is aligned with those of the other methods, and that their training times are much more sensitive to the size of the dataset than those of the reservoir methods.

Conclusion
In the present paper, we delved into the task of time series forecasting in an energy analytics setting. As the core model, we deployed an Echo State Network, which, via its reservoir dynamics, is able to unravel the possibly complex inner dependencies of the input time series and embed them in its state space. We argued that a point estimate, even if accurate, is often insufficient for policymakers in high-risk contexts like power market management, so it needs to be accompanied by a measure of the confidence in the forecast. For this reason, we combined the Echo State Network with popular methods for uncertainty quantification. Specifically, we considered deterministic methods, like quantile regression, and Bayesian approaches, like dropout, variational inference and Markov chain Monte Carlo. The goal of our work was first to adapt them to handle the high-dimensional states of the reservoir, and then to compare them, identifying their respective advantages and disadvantages.
Our experiments show that advanced methodologies like VI and MCMC, although they provide fine control over the parameters' priors, do not yield a significant advantage in terms of forecasting accuracy or calibration, while proving demanding in terms of computational resources. Quantile regression, on the other hand, does not seem to be affected by the high dimensionality of the reservoir states, emerging as fast to train while achieving performance on par with the other methods on the remaining indicators, even without a post hoc calibration step.
In this case, we can construct the whole trajectory of the chain states by iteratively applying Q, and the chain may saturate at a stationary distribution S* satisfying S*_i = Σ_j S*_j Q_ji, regardless of the initial state. A sufficient condition for a Markov chain with distribution S to reach a stationary distribution is that S satisfies detailed balance, S_i Q_ij = S_j Q_ji. The idea behind MCMC is to define a Markov chain that satisfies detailed balance and whose stationary distribution of states is exactly the posterior p(ω|X, Y) that we need, by only knowing that p(ω|X, Y) ∝ p(Y|X, ω)p(ω).
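The saturation to a stationary distribution can be checked numerically on a toy chain (the two-state transition matrix below is purely illustrative):

```python
import numpy as np

# Transition matrix Q with rows summing to 1; iterating S -> S Q computes
# S'_i = sum_j S_j Q_ji until S stops changing, i.e. S* = S* Q.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
S = np.array([1.0, 0.0])   # arbitrary initial distribution
for _ in range(1000):
    S = S @ Q              # one Markov-chain step
# For this Q the stationary distribution is (2/3, 1/3), and it also
# satisfies detailed balance: S_0 Q_01 = S_1 Q_10.
```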
The first and simplest algorithm to achieve this is the Metropolis-Hastings algorithm [45], but in the present work we use the No-U-Turn Sampler (NUTS) [31], a variant of Hamiltonian Monte Carlo (HMC) [18].
In HMC the sampling process is represented by the dynamics of a Hamiltonian system, where: • the position q is the state of the Markov chain itself, i.e., ω; • an artificial momentum variable p is introduced; • the Hamiltonian of the system is H(q, p) = K(p) + U(q), where a possible choice for the kinetic energy is K(p) = pᵀM⁻¹p/2 (with M a symmetric, positive-definite "mass" matrix), and the potential energy is U(q) = −log p(ω|X, Y) = −log p(Y|X, ω)p(ω) + const, where the normalisation integral (6) can be dropped because it does not affect the dynamics of the fictitious system.
The equations of motion are then dq/dt = ∂H/∂p = M⁻¹p and dp/dt = −∂H/∂q = −∇U(q), which can be solved numerically with the leapfrog method. The algorithm proceeds as follows: 1. a momentum p is sampled from its Gaussian distribution f(p) = e^(−K(p)/T)/Z, where T is a temperature, Z is the normalisation constant and M is the covariance matrix; 2. a new state is proposed using Hamiltonian dynamics: from the current state (q, p) we take L steps of the leapfrog method with step size ε, reaching (q*, p*); 3. the new state is accepted or rejected with probability min[1, exp(−H(q*, p*) + H(q, p))].
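A minimal sketch of this sampling loop, on a one-dimensional standard-normal target with U(q) = q²/2 and M = 1 (an illustration of plain HMC, not the NUTS sampler used in the experiments):

```python
import math
import random

def leapfrog(q, p, grad_U, eps, L):
    """L leapfrog steps for H(q, p) = U(q) + p^2 / 2."""
    p -= 0.5 * eps * grad_U(q)          # initial half-step for momentum
    for _ in range(L - 1):
        q += eps * p                    # full step for position
        p -= eps * grad_U(q)            # full step for momentum
    q += eps * p
    p -= 0.5 * eps * grad_U(q)          # final half-step for momentum
    return q, -p                        # momentum flip keeps the proposal reversible

def hmc_sample(U, grad_U, q0, n=5000, eps=0.2, L=10, seed=0):
    rng = random.Random(seed)
    q, samples = q0, []
    for _ in range(n):
        p = rng.gauss(0.0, 1.0)                        # 1. sample momentum
        q_new, p_new = leapfrog(q, p, grad_U, eps, L)  # 2. propose via dynamics
        dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
        if dH < 0 or rng.random() < math.exp(-dH):     # 3. Metropolis accept/reject
            q = q_new
        samples.append(q)
    return samples

# Target: standard normal, so U(q) = q^2 / 2 and grad_U(q) = q.
samples = hmc_sample(U=lambda q: 0.5 * q * q, grad_U=lambda q: q, q0=0.0)
```

The collected samples should approximate a standard normal, i.e., have mean near 0 and variance near 1.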
One can prove that HMC is ergodic and the stationary distribution it converges to is exactly p(ω|X, Y ).

Figure 1 :
Figure 1: Overview of the framework used. On the left, in green, is the input time series used for training. Each point x_t is fed into the reservoir and used to update its internal state s_t. The series of states is then used to train the readout g(s_t; R) with one of the presented methods. On the right is the outcome, with ground-truth values in orange and the prediction represented by a solid blue line, with its uncertainty pictured as a shaded band around it.
(a) In the upper plot, the 90% confidence interval fails to contain 90% of the data points. Below, in red, is the corresponding calibrated interval. (b) Example of the relationship between the predicted τ and the empirical CDF before calibration (in purple), after calibration (in red), and in the ideal case (dashed grey line).

Figure 3 :
Figure 3: Visual representation of the results in Table 3 for the ACEA dataset, before (in blue) and after calibration (in red). Notice that calibration is not performed on QR because the predicted CDF is discontinuous.

Figure 4 :
Figure 4: Visual representation of the results in Table 3 for the Spain dataset, before (in blue) and after calibration (in red). Notice that calibration is not performed on QR because the predicted CDF is discontinuous.

Figure 5 :
Figure 5: Examples of forecast trajectories for both datasets using all the discussed methods, after calibration. The black line represents the true value, the solid blue line is the median of the prediction, and the shaded area is the 95% confidence interval.

Figure 6: (a) Comparison of the training times on the different datasets, using a logarithmic scale. (b) Calibration error against MSE; bottom-left indicates the best performance, top-right the worst.

Table 1 :
Schematic view of the possible values and intervals searched for each hyperparameter. The number of layers, units per layer and activation functions refer to the readout's configuration, so they are common to all methods. On the other hand, the prior refers only to VI, MCMC and SSVS; the dropout probability only to the dropout method; and the learning rate to VI, dropout and QR.

Table 3 :
Results are obtained by averaging 10 runs after selecting the best-performing hyperparameters of each method.The best values are in bold, with the colours encoding the datasets.