Predicting the Future is Like Completing a Painting: Towards a Novel Method for Time-Series Forecasting

This article is an introductory work towards a larger research framework on Scientific Prediction. It is a mix of science and philosophy of science; we can therefore speak of Experimental Philosophy of Science. As a first result, we introduce a new forecasting method based on image completion, named Forecasting Method by Image Inpainting (FM2I). Time series forecasting is thereby transformed into fully image- and signal-based processing procedures. After transforming a time series into its corresponding image, the problem of data forecasting becomes essentially a problem of image inpainting, i.e., completing missing data in the image. Extensive experimental evaluation was conducted using the shortest series of the dataset proposed by the well-known M3 competition. Results show that FM2I, despite still being in its infancy, represents an efficient and robust tool for short-term time series forecasting. It achieves prominent accuracy and outperforms the best M3 methods. We have also investigated the effectiveness of FM2I against the Smyl method, the winner of the M4 competition. Using the same category of shortest series, results show accuracy close to that of the Smyl method. FM2I is also able to generate ensemble forecasts, which contain the most accurate forecast compared to the existing methods considered.


Introduction
It is well known that good forecasts are of utmost importance in all scientific disciplines [70]. However, the relevance of accurate forecasts has become increasingly palpable since the emergence of Big Data as a new paradigm of scientific activity for many researchers [10]. One of the suggested tracks is due to the distinguished statistician Leo Breiman, who wrote with reference to predictions: "If our goal as a field is to use data to solve problems, then we need to move away from exclusive dependence on data models and adopt a more diverse set of tools." [18]. Therefore, we believe that understanding the nature of prediction can help create novel forecasting methods. First of all, let us specify that forecasting is considered as a process of predicting the future according to past and present data. We emphasize that this terminology may vary across scientific disciplines (see [19]). This paper is not mainly concerned with philosophy of science, but we want to share with the research community the methodology we have undertaken, starting with deeper philosophical considerations, with the aim of elaborating a new forecasting method. More precisely, we first raise philosophical questions related to the nature of prediction itself, and then translate these questions, following a scientific methodology, so that they can be applied to real-world forecasting scenarios. We name this way of thinking Experimental Philosophy of Science. Thus, in this article, we will switch between science and philosophy of science as much as required in order to better describe our approach and highlight the obtained results.
As a first result, we introduce a new time series forecasting method based on image completion, which we name Forecasting Method by Image Inpainting (FM2I). It represents a high-performing and robust tool for Time Series (TS) forecasting. Basically, the main idea behind the FM2I method is as follows. We first transform a TS into an image and then complete it using adapted image inpainting techniques. The completed image is converted back to a TS in order to obtain forecast values. Our aim is to perform these tasks without the usual pre-processing (e.g., seasonality, trends, stationarity) of the original TS. The process of transforming the TS into an image, and the inpainted image back into the original TS including forecast values, must be possible.
In summary, the main contributions of this work are threefold: i) interlinking philosophy of science and science, named experimental philosophy of science, ii) an extensivity testing framework, and iii) a novel forecasting method by image inpainting (FM2I). The remainder of this paper is structured as follows. Section 2 presents the formulation of the scientific prediction and philosophical problem. The scientific transformation of the problem is introduced in Section 3. In Section 4, materials and methods are described. The proposed FM2I algorithm is detailed in Section 5. Experimental results and comparisons are described in Section 6. Scientific and philosophical discussions, together with potential directions, are given at the end of this document. An appendix is included in order to describe some notions and fundamentals related to signal theory.

Scientific prediction and philosophical problem formulation
It is well known in the philosophy of science field that Scientific Explanation and Scientific Prediction are among the main goals of science [1], [5] and [6]. Nevertheless, philosophers of science have focused much more on developing a theory of explanation than on introducing a theory of prediction [7], [3] and [2]. Indeed, several models of explanation arose during the past half century. The pioneering and most famous models of explanation, the so-called covering-law models due to Carl Gustav Hempel and Paul Oppenheim, are DN (Deductive-Nomological), DS (Deductive-Statistical) and IS (Inductive-Statistical) [8], [9]. According to these models, we get a scientific explanation of an event/phenomenon E which occurred (called the Explanandum) when E can be deduced, induced or inferred from the Explanans (I, L), i.e., a set of initial conditions I and a set of laws L. In other words, the explanandum E must be a logical consequence of the explanans (I, L), which meets certain conditions [8]. There are other subsequent alternative models of explanation; we can mention, for instance, the Causal [13] or Unificationist [14] models. However, despite the extensive work in this area, a logical prediction model is still required to formalize the structure of predictive reasoning, see [3] and [2]. One of the consequences is the weakening of the explanation theory itself [7]. In order to overcome the difficulty of constructing a theory of prediction, early renowned works of Hempel and Oppenheim [8] and [9] introduced the controversial ([11] and [12]) Structural Identity Thesis (also called the Symmetry Thesis) between explanation and prediction. The structural identity thesis assumes that explanation and prediction have the same logical form and are, in a way, equivalent. In other words, every adequate explanation is a potential prediction and every adequate prediction is a potential explanation. In particular, as recalled in [9], in the Hempel-Oppenheim model, predictions concern
exclusively phenomena that occur in the future and, therefore, it seems that the difference between explanation and prediction is purely chronological or temporal. This structural identity thesis has been heavily criticized by many philosophers [12], [11], [17].
Nowadays, there is no clear consensus concerning the definition of scientific prediction, see [3] and [2]. Most particularly, the notion of scientific prediction itself is still ambiguous. Finding an adequate and unified definition of prediction is, however, challenging. It must be restrictive enough to exclude Divination, which is based on occult and metaphysical processes of predicting, as in astrology for instance. Moreover, it must be sufficiently broad to include all the forms arising from different scientific disciplines. Indeed, according to the definition of the Hempel-Oppenheim model, evolutionary biology, palaeontology and geology are automatically excluded, since these disciplines deal with the past, which would mean that they are exclusively explicative sciences. Furthermore, their model also excludes any prediction not obtained through scientific laws (see a counter-example in [20]).
Actually, in the current literature, a better understanding of the notion of scientific prediction appears in two major areas of the philosophy of science, Confirmation Theory and the Scientific Realism debate, see [4] and [2]. Briefly, the two questions asked in these two fields are, respectively: should a hypothesis or a theory be better confirmed when it generates accurate predictions [23]? Are accurate predictions a reasonable criterion to assess a theory's maturity in order to affirm the reality of its assumed structures [22]? These two philosophical domains reach the same problem related to scientific prediction, and the need to redefine this fundamental notion arises. The article [24] was the pioneering work which explicitly makes the connection between these two philosophical areas and provides the current form of the issue [4]. We wish to emphasize that [24] marks a certain break from the existing literature since it addresses a realistic predictive process from applied mathematics. Indeed, the authors in [24] analyze in a concrete way how existing regression analysis, from statistical theory, proceeds to predict, in order to make a powerful contribution to the philosophical debate.
In this work, we want to take the research a step further by philosophically formulating one of our questions concerning the nature of predictions. We will turn it into a scientific issue, then formalize our own methodology, test it, and thereby contribute to both the philosophical and the scientific debate related to prediction. That is what we mean by experimental philosophy of science. We must stress that, unlike the above-mentioned confirmation theory and scientific realism, our investigation is motivated by prediction itself and does not primarily serve another theory. Since we are not yet able to formalize a definition of prediction, we nevertheless need a framework to better clarify our approach. Thus, we consider the definition given in [25] or its extended version in [2]. More precisely, a prediction is the result of a predictive process. The latter is considered as a series of inferences allowing one to set, without measures or observations, the value of one or several variables (or the relationships between variables), where variables are used here in a broad sense. However, this definition remains somewhat ambiguous, since it is difficult to characterize a scientific predictive process [2]. Thus, we start by asking the following simple question, which we call the nature-of-predictions question: given the definition above, predictions are the result of a predictive process; does this process then possess a signature related to the very nature of the phenomenon, the event or the type of data we want to predict? In other words, if we have or establish a predictive process in a particular framework, can this process potentially be a predictive process in a completely different setting? More precisely, assume that we have an accurate predictive process arising from a theory or a model in a specific scientific field, able to predict a phenomenon A; is this process capable of predicting, in another scientific field, a phenomenon B?
This question may seem trivial for theories that are specifically concerned with prediction. For example, statistical theory is able to predict a variety of different phenomena, since we can model them according to known tools or classes of statistics-based models. In our research program, since the aim is to exhibit new prediction methods, we want to go further by investigating "unthinkable" links between phenomena.
After careful consideration, it appeared suitable to reformulate this question in relation to the notion of Predictive Capacity. First of all, we want to specify the difference between the latter and Predictive Power, see [3] for more details. The predictive power refers to the ability to derive, from a theory, at least one testable prediction. It can be assessed a priori by ensuring that the theory might have empirical consequences. While the predictive capacity can be used to evaluate the variety and the accuracy of the predictions generated by a theory, there is, however, no way to assess it a priori, since the validity of these predictions has to be checked after their future realization (or not), in relation to confirmation theory, as stated by [3] and [4]. The same author gives two dimensions, or two epistemic virtues, of predictive capacity: Intensivity, which refers to the accuracy of predictions with regard to a particular phenomenon, and Extensivity, which refers to the ability to predict across a large variety of phenomena. Accordingly, we can formulate the following question: how can we test the extensivity of a theory's or a model's predictive capacity?

Scientific transformation of the problem
As described in the previous section, the problem is formulated in a very general way, but the power of this type of formulation is to generate potential research directions in the field of scientific prediction. More precisely, through an adequate correspondence procedure, the purpose is to test the extensivity of a theory's or a model's predictive capacity from a specific scientific setting or domain A. Thus, we have to choose a predictive process which is designed, established and used in A, then test it in the frame of another specific domain B. This correspondence must enable switching, back and forth, between the two domains or sub-domains. The results of this procedure are assessed in terms of accuracy against real and/or existing predictions, which are eventually obtained by a predictive process from B. This procedure, named the extensivity testing framework (ETF), can be formalized as depicted in Fig. 1.

Figure 1: Extensivity testing framework
While this extensivity testing framework could certainly bring a first answer to the previous question, it obviously does not tackle the whole above-mentioned philosophical problem. Nevertheless, obtaining a conclusive test means that we have found a new, unknown and "unthinkable" prediction method, thus reaching the objective announced in the introduction. Therefore, it is crucial to narrow the scope by focusing, for instance, on prediction by Extrapolation/Interpolation. Let us recall that, in practice, extrapolation and interpolation are operations mainly used for predicting the values of one or several variables, denoted Y, according to the known values of one or several variables, denoted X. The sole difference between extrapolation and interpolation is as follows: the latter concerns variables situated inside the experimental or studied sampling area, while the former relates to those outside this area [3]. However, since the coming trend is not known, extrapolation is a more difficult task than interpolation. Nonetheless, both extrapolation and interpolation can clearly be considered as predictive processes according to the definition given in the previous section.
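To make the distinction concrete, here is a minimal sketch in plain Python (hypothetical names, linear case only) where the same rule serves for interpolation inside the sampled interval and for extrapolation beyond it; only the position of the query point differs:

```python
def linear_predict(x0, y0, x1, y1, x):
    """Predict y at x from two known samples (x0, y0) and (x1, y1).

    If x lies inside [x0, x1] this is interpolation; outside, it is
    extrapolation. The formula is identical in both cases.
    """
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Known samples (1, 2) and (2, 4), i.e. y = 2x:
inside = linear_predict(1, 2, 2, 4, 1.5)   # interpolation -> 3.0
outside = linear_predict(1, 2, 2, 4, 3.0)  # extrapolation -> 6.0
```

The asymmetry the text describes is visible here: the interpolated value is bracketed by observed data, while the extrapolated one relies on the trend continuing unchanged.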
After careful consideration, we have focused on extrapolation by testing the predictive capacity of Image Completion for performing Time Series Forecasts. Basically, an image is a distributed amplitude of colors [26]; it is a two-dimensional spatial organization of pixels. Furthermore, an image possesses a specific structure [27], constituted, for example, of edges and texels, while a time series is a one-dimensional temporal sequence of successive values. As a matter of fact, there is a significant difference between these two data structures. Moreover, the image-based predictive process arises from image restoration techniques, which are designed for spatial pattern analysis and are also governed by Visual Logic assessments, such as image quality [28] and psychovisual quality metrics [29]. Thus, there is no reason to expect that this image-based predictive process could work for time series forecasting, since the latter should be based on a Chronological Logic analysing past/present patterns. In other words, finding a way to draw past/present events and completing the painting, obtained by a purely pictorial reasoning, allows one to 'see' the future! Following the above-mentioned considerations, we have gathered the ingredients required to develop our scientific and technical framework, which is considered as an instantiation of the proposed extensivity testing framework. It is basically a fully integrated modeling and processing framework for time-series forecasting, named FM2I (forecasting method by image inpainting), as depicted in Fig. 2.

Materials and methods
The methodology presented in this section covers the entire process, from the transformation of a time series into a 2D image structure, to image-based forecasting (inpainting), and back to the 1D sequence representing the original time series including the forecast data. As shown in Fig. 2, the FM2I approach is composed of three main processing phases: transformation of the 1D sequence representing the time series into a 2D image-based representation, image extrapolation by adapting inpainting methods, and reverse transformation techniques to turn the 2D image back into a 1D sequence representing the original time series together with the forecast data.
Regarding the first phase, the main question is how to transform a 1D TS into a 2D image-based structure while enhancing the initial information. We have investigated three increasingly enhanced approaches. The first, basic one, named the naive approach, considers the plot of the TS as the input image to the adapted inpainting method. In the second one, named the enhanced approach, we directly transform the TS into an augmented TS-based matrix using signal processing principles and tools. Based on this enhanced matrix, more adequate transformations are proposed in the deeper and final approach.
The second phase is dedicated to image-based forecasting using adapted image inpainting methods. It is worth noting that these methods were mainly proposed for filling in parts of an image (restoration of damaged regions). Existing inpainting methods can be classified into two main categories [62]: diffusion-based approaches and exemplar-based approaches. Diffusion-based approaches have mainly been inspired by techniques from computational fluid dynamics. The pioneering work in this area was performed by Bertalmio et al. in [63, 64] using the Navier-Stokes equations. The authors built an approach at pixel level by drawing an analogy between the stream function in a 2D incompressible fluid and the image intensity. As stated in [62], these approaches have provided excellent results for small inpainted regions, but they tend to introduce smoothing effects in large regions. Exemplar-based approaches, in contrast, operate at patch level by propagating information from known regions into missing ones. Unlike diffusion-based approaches, exemplar-based approaches are better suited to inpainting large missing regions [62].
In this work, we have adapted the exemplar-based approach, in particular the patch-based method, which was originally proposed for completing missing (or deteriorated) parts of images. In fact, for TS forecasting, after transforming the TS into an image, the missing part (i.e., the unknown region) is considered as our forecast area. More precisely, the patch-based method is diverted from its basic function and used for completing the area related to the forecast TS values. It is worth noting that we are also developing a modified Navier-Stokes-based method; however, in this work, we choose to focus only on the adapted patch-based method.
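As an illustration only (a toy sketch, not the method used in the experiments), the exemplar-based principle can be mimicked on a 2D grid whose last columns are missing: for each missing column, the window of columns just before it is matched against the known region, and the column following the best-matching window is copied in:

```python
def patch_fill_columns(img, n_missing, w=3):
    """Toy exemplar-style fill: complete the last n_missing columns of a
    2D grid (a list of rows). For each missing column, match the w
    preceding columns against windows in the known region and copy the
    successor column of the best match."""
    rows, cols = len(img), len(img[0])
    known = cols - n_missing
    for c in range(known, cols):
        # Query patch: the w columns just before column c.
        query = [[img[r][c - w + j] for j in range(w)] for r in range(rows)]
        best_s, best_err = 0, float("inf")
        for s in range(known - w):  # candidate windows with a known successor
            err = sum((img[r][s + j] - query[r][j]) ** 2
                      for r in range(rows) for j in range(w))
            if err < best_err:
                best_err, best_s = err, s
        for r in range(rows):  # copy the column that follows the best match
            img[r][c] = img[r][best_s + w]
    return img
```

On a strictly periodic grid this toy rule reproduces the pattern exactly; real patch-based inpainting methods add patch priorities, overlapping patches and blending.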
The rest of this section first presents the three TS transformation techniques (i.e., Naive, Enhanced, and Deeper), followed by the detailed FM2I algorithm together with the issues we have tackled in order to maximize the transformation precision while ensuring a back-and-forth TS-image correspondence.

Naive approach
In this approach, we have considered the plotted TS as a 2D image (the original image), as depicted in Fig. 3. The plotted TS represents CO2 values from the dataset described in [46]. As shown in this figure, the forecast area (extrapolated zone) is added to the original image as a deteriorated part to be inpainted by the patch-based method. The latter considers the extrapolated area as a regular hole to be reconstructed, i.e., the missing pixels are filled in. The adapted patch-based method is able to fill it in; however, it generates forecasts with high errors compared to the original TS (i.e., the real plot). In fact, it is difficult, even impossible, for the patch-based method to infer the forecast area. This is due to the fact that the plotted TS 2D image carries little information: there are far more white pixels than pixels representing TS values. Moreover, we have noticed from the obtained experimental results that the patch-based method does not deal well with such representations. This constraint compelled us to find suitable and richer 2D representations of TS, as described in the next section. Our aim is to extend the original TS with richer information, which could lead to more image pattern structures and features.
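For reference, the plot-as-image idea can be sketched with a hypothetical minimal rasterizer (the actual experiments used ordinary plotting): each time step becomes a column and a single pixel marks the scaled value, which makes the sparsity problem visible, with one marked pixel per column and everything else background:

```python
def ts_to_plot_image(ts, height=16):
    """Rasterize a time series into a binary 2D grid: one column per
    time step, one 'ink' pixel per column at the scaled value height."""
    lo, hi = min(ts), max(ts)
    img = [[0] * len(ts) for _ in range(height)]
    for x, v in enumerate(ts):
        y = round((v - lo) / (hi - lo) * (height - 1)) if hi > lo else 0
        img[height - 1 - y][x] = 1  # row 0 is the top of the image
    return img

img = ts_to_plot_image([1, 2, 3, 2, 1], height=5)
# Exactly one marked pixel per column; the rest of the grid is empty.
```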

Enhanced approach
The aim of this enhanced approach is to generate the TS's corresponding matrix, which could increase the information gain over the 1D TS representation. Basically, a time series TS, having a set of values indexed in time order, can be represented by TS = {s_0, s_1, s_2, s_3, ..., s_n} = (s_i)_{i=0,1,...,n}, where s_0 is the first value obtained at t = 0 while s_n represents the value obtained at t = n. We have investigated existing techniques from signal processing theory in order to transform a 1D signal into a 2D matrix. More precisely, we have used the Time Autocorrelation Function as a tool for 1D-to-2D transformations. Let us consider a TS as a random signal with a finite average power; the aim is to represent the majority of real-life time series. We can consider a TS as a digital signal resulting from the sampling of a random, permanent, bounded analog signal x(t) having a finite average power. In fact, for a signal x(t) having a finite average power, we can define the temporal autocorrelation of this signal by the following formula (more details are presented in Appendix A):

Γ(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t − τ) dt.

It is worth noting that the function Γ(τ) is well defined. Indeed, by applying the Cauchy-Schwarz inequality and taking into consideration that x(t) is bounded, we obtain Γ(τ) ≤ P(x(t)) < +∞. Actually, the maximum of the function Γ is attained at the origin; in fact, we have Γ(0) = P(x(t)).
This autocorrelation function measures the degree of similarity between the signal and its shifted version, so it is natural that this function is maximal for τ = 0. Γ(τ) compares the signal to itself; it is, therefore, the temporal average of the product of the signal with itself shifted by a time τ. Moreover, Γ(τ) measures the degree of similarity, or the internal dependency, of the signal. Mathematically, this function can be seen in several ways, namely as a scalar product between the signal and its shifted version. Thus, if Γ(τ) = 0, the signal and its shift by a step τ are completely uncorrelated; they are two orthogonal signals. We can also see Γ as a sort of convolution of the signal with itself. This autocorrelation function not only contains important information about the signal, but also about its Fourier transform and hence its spectrum. Indeed, this relationship manifests itself through the power spectral density (PSD). Thus, this autocorrelation function contains valuable information regarding the properties of the signal x(t). More details regarding the relationship between the PSD and the autocorrelation function are presented in Appendix B.
For a digital signal (x_n)_{n∈Z}, the discrete version of Γ(τ) can be written as follows:

Γ(k) = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} x_n x_{n−k}.

So, the autocorrelation formula for a TS (s_i)_{i=0,...,n} is, for 0 ≤ k ≤ n:

γ(k) = (1/(n+1)) Σ_{i=0}^{n−k} s_i s_{i+k}.

This allows defining a symmetric temporal autocorrelation matrix of the form:

STAM = [γ(|i − j|)]_{i,j=0,...,n}.

It is worth noting that not only the symmetric temporal autocorrelation matrix (STAM) can be generated, but also a version of the STAM excluding the average. Indeed, any signal x(t) can be written in the form x(t) = x̄ + y(t), where x̄ is a constant, the mean value of the signal x(t), and y(t) is a signal of zero mean. We thus get the relation dx(t)/dt = dy(t)/dt. This therefore allows using the STAM of the zero-mean signal y(t). More specifically, for a TS, differencing it by computing s_{i+1} − s_i provides a new TS in which the mean is eliminated before generating the STAM. Therefore, it is easy to return to the original TS afterwards, including the forecast area. More details about particular signals, which are stationary and ergodic, are given in Appendix C.
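Under these definitions, the STAM construction can be sketched in a few lines of plain Python (a minimal, biased-estimate version; the normalization convention may differ from the exact implementation used in the experiments):

```python
def gamma(ts, k):
    """Biased sample autocorrelation of a time series at lag k,
    normalized by the series length so that gamma(ts, 0) equals the
    average power of the series."""
    n = len(ts)
    return sum(ts[i] * ts[i + k] for i in range(n - k)) / n

def stam(ts):
    """Symmetric temporal autocorrelation matrix: entry (i, j) holds
    gamma at lag |i - j|, giving a symmetric Toeplitz structure."""
    n = len(ts)
    return [[gamma(ts, abs(i - j)) for j in range(n)] for i in range(n)]

m = stam([1.0, 2.0, 3.0])
# The diagonal holds gamma(0) = (1 + 4 + 9) / 3, the average power.
```

The Toeplitz structure (constant diagonals) is what later produces the homogeneous diagonal-line patterns observed in the generated images.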
The autocorrelation function, which is used to generate the STAM, is in turn used for TS forecasting. In fact, it is applied to the original TS in order to generate its corresponding matrix, as depicted in Fig. 4 (image generation). Then, the image, including the area to be predicted, is generated (delimited forecast area generation). The size of this area corresponds to the size of the forecast horizon. The adapted patch-based approach is then applied to it in order to extrapolate the forecast area (image inpainting). The final phase is the backtracking from the original image, including the forecast area (i.e., short, medium and long horizon), to its corresponding TS (forecast time series). All these transformation techniques must take into consideration two major objective functions: maximizing the transformation precision (i.e., minimizing the information loss induced by the transformation) and minimizing the execution time, while ensuring a back-and-forth TS-image correspondence. These techniques are described in Section 5.
In order to assess the effectiveness of this enhanced method, we have used the most common metrics: the MSE (Mean Squared Error), RMSE (Root Mean Squared Error), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error) and sMAPE (Symmetric Mean Absolute Percentage Error). The MSE is used to assess the quality of the estimation by computing the average squared difference between actual and estimated values. It is always non-negative, and the closer it is to zero, the higher the quality of the estimation. Like the MSE, the RMSE is used to measure the deviation of predicted values from actual values; it is computed as the square root of the average squared deviation. The MAE measures the average magnitude of the errors between forecast values and their corresponding actual ones. It is worth noting that lower values of RMSE and MAE are better, i.e., the smaller the RMSE/MAE, the closer the predicted values are to the actual ones. The MAPE measures the mean absolute percentage error, while the sMAPE measures the symmetric mean absolute percentage error (%). The metrics are formally described as follows:

MSE = (1/n) Σ_{i=1}^{n} (A_i − F_i)²,
RMSE = √MSE,
MAE = (1/n) Σ_{i=1}^{n} |A_i − F_i|,
MAPE = (100/n) Σ_{i=1}^{n} |A_i − F_i| / |A_i|,
sMAPE = (100/n) Σ_{i=1}^{n} 2 |A_i − F_i| / (|A_i| + |F_i|),

where n is the forecast horizon (n-step-ahead), F_i are the forecast values produced at time i by the forecasting algorithm, and A_i is the value actually observed at time i.
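These five metrics are straightforward to implement; the sketch below follows the formulas above, using the M3-style symmetric percentage convention for the sMAPE:

```python
import math

def mse(actual, forecast):
    """Mean squared error between actual and forecast values."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error: the square root of the MSE."""
    return math.sqrt(mse(actual, forecast))

def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error (%)."""
    n = len(actual)
    return 100.0 / n * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast))

def smape(actual, forecast):
    """Symmetric mean absolute percentage error (%), M3 convention."""
    n = len(actual)
    return 100.0 / n * sum(2 * abs(a - f) / (abs(a) + abs(f))
                           for a, f in zip(actual, forecast))

# Example: actual = [100, 200], forecast = [110, 190]
# gives mse = 100, rmse = 10, mae = 10, mape = 7.5
```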
Table 1 shows the MSE, RMSE, MAE, MAPE and sMAPE for short (6-step-ahead), medium (8-step-ahead) and long (18-step-ahead) forecasting. The obtained results show patterns similar to the naive approach, but with high errors for all horizons: short, medium and long. More precisely, this enhanced method is able to generate only constant forecast values, which is an expected behaviour since the matrices show homogeneous patterns (i.e., diagonal line forms). In fact, the adapted patch-based method performs well at forecasting data, but the autocorrelation matrix is not suitable in this case. Moreover, despite the fact that we have used one of the best patch-based methods, its behaviour is mainly driven by the image's patterns, which depend on the relevance of the selected TS-image transformation. This boils down to the fact that the information is too concentrated and needs to be broken down. The next section introduces deeper transformation techniques, including new representations of the original TS.

Deeper approach
We have first investigated a new matrix representation, with the aim of enriching the data through the self-signal autocorrelation, called the modified autocorrelation (MAC), by combining all possible offsets. This matrix can be obtained by transforming the autocorrelation matrix into another one (MAC), for instance by separating the sums represented on the diagonal and the sub-diagonals. We then obtain the matrix depicted in Fig. 5a, in which the sum of the main diagonal and of each sub-diagonal equals γ(i) × (N + 1). This structure leads to a Gramian matrix (GM).
It is worth noting that in [58] the authors reached similar results (Gramian Angular Field, GAF) by encoding a TS (1D Cartesian coordinates) as an image (2D polar coordinates). In fact, images are represented by a GAF in which each element is computed as a trigonometric sum between different time intervals. Furthermore, they presented other techniques allowing the transformation of 1D time series data into image-based representations for TS classification using traditional machine learning approaches. In fact, after transforming time series data into images, conventional neural approaches are used for classification purposes, e.g., learning features and identifying structure in the TS. More precisely, instead of using TS data as input, these approaches use its corresponding encoded images (e.g., traditional image recognition). For instance, we can cite the Gramian Angular Summation Field (GASF), the Gramian Angular Difference Field (GADF) and the Markov Transition Field (MTF). Similarly, the authors in [60] present another technique, called the Relative Position Matrix (RPM), for transforming a 1D TS into an image-based representation (see Fig. 5c). They mainly combined the RPM and a CNN (Convolutional Neural Network) in a unified framework in order to enhance the accuracy of time series classification. Reported results show the effectiveness of the proposed approaches against state-of-the-art classification techniques.
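Following [58], the GASF can be sketched as follows: the series is min-max rescaled, each value is mapped to a polar angle φ_i = arccos(x_i), and the matrix entry (i, j) is cos(φ_i + φ_j). This is a minimal sketch rescaling to [−1, 1]; the rescaling range used in our pipeline is discussed in the next section:

```python
import math

def gasf(ts):
    """Gramian Angular Summation Field of a time series (sketch).

    The series is min-max rescaled to [-1, 1], each value is encoded as
    a polar angle phi = arccos(x), and entry (i, j) is
    cos(phi_i + phi_j) = x_i * x_j - sqrt(1 - x_i^2) * sqrt(1 - x_j^2).
    """
    lo, hi = min(ts), max(ts)
    if hi > lo:
        x = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in ts]
    else:
        x = [0.0] * len(ts)
    phi = [math.acos(max(-1.0, min(1.0, v))) for v in x]  # clamp for safety
    n = len(x)
    return [[math.cos(phi[i] + phi[j]) for j in range(n)] for i in range(n)]

g = gasf([0.0, 1.0, 2.0])  # rescaled values: x = [-1, 0, 1]
# Diagonal entries equal cos(2 * phi_i) = 2 * x_i**2 - 1.
```

Like the MAC, the result is a symmetric Gramian-style matrix in which every pair of time steps contributes one entry, spreading the series' information over the whole image.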
To the best of our knowledge, almost all research works that use image-based representations of TS are dedicated to classification; only the work presented in [61] and [65] dealt with time series forecasting. However, in [61] and [65] the authors used the same principle adapted for classification purposes, i.e., from a 1D vector to a 2D image-based representation, then to data classification using conventional neural approaches. Furthermore, they used an image-based representation similar to our above-mentioned naive representation. For instance, as described in [61], the TS (i.e., the daily load data) is transformed into a graphical representation instead of a matrix-based representation. A dual-branch deep convolutional network is then fed with the generated graphical (i.e., plotted) representation in order to extract features for clustering purposes.
In order to evaluate the efficiency of these matrix representations for TS forecasting, we have computed their accuracy using a set of TS (3003 time series) from the M3 competition [47]. Fig. 6 represents an example of a given TS using the above-mentioned image-based representations. Extensive experiments have been conducted in order to compute the MSE, RMSE, MAE, MAPE and sMAPE of selected TS (i.e., TS1, TS3, TS4, TS11, TS20, TS24). The obtained results are reported in Table 2 to show how these matrices affect the forecasting accuracy. As shown in this table, none of them outperforms the others for all considered TS.

FM2I algorithm
The FM2I algorithm is structured into four main phases, as depicted in Fig. 7. The first phase includes all the steps required for preparing the TS. The first step splits the TS into two parts: training (i.e., parameter tuning) and testing. The testing part represents the forecast values according to the horizon sizes. It is mainly used once the best forecasting model is specified. It is worth noting that, since the above-mentioned matrix transformations can be considered as a correlation between every two values of the TS, both TS parts need to be rescaled in order to have the same data encoding (i.e., within the same data range or the same data magnitude). However, rescaling the TS values, for instance to [−1, 1], is an issue for some matrices (e.g., GASF). This arises mainly when bringing the matrix values back to the original TS. As a solution, we paid careful attention to each transformation in order to select the most suitable rescaling. For instance, for the GASF, we have rescaled the original TS to [0, 1] instead of [−1, 1], in order to allow the transformation back and forth between the original TS and its corresponding image.
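The rescaling step and its inverse can be sketched as follows (a minimal min-max version with hypothetical names; the exact target range depends on the chosen matrix transformation, e.g. [0, 1] for the GASF as noted above):

```python
def rescale(ts, lo_target=0.0, hi_target=1.0):
    """Min-max rescale a time series into [lo_target, hi_target].
    Returns the scaled series and the (min, max) pair needed to invert."""
    lo, hi = min(ts), max(ts)
    span = (hi - lo) or 1.0  # avoid division by zero for constant series
    scaled = [lo_target + (v - lo) / span * (hi_target - lo_target) for v in ts]
    return scaled, (lo, hi)

def unscale(scaled, params, lo_target=0.0, hi_target=1.0):
    """Invert rescale(), mapping values (including new forecast values
    lying in the same target range) back to the original scale."""
    lo, hi = params
    span = (hi - lo) or 1.0
    return [lo + (v - lo_target) / (hi_target - lo_target) * span for v in scaled]

s, p = rescale([5.0, 10.0, 20.0])
back = unscale(s, p)  # the round trip recovers the original series
```

Keeping the (min, max) parameters is what makes the back-and-forth TS-image correspondence possible once the forecast area has been inpainted.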
Figure 7: FM2I Algorithm

The second phase consists of exhaustively testing various forecasting models through a grid-search approach. In fact, we test all possible combinations, enriched by a progressive exploration of the studied TS. Consequently, the matrices (2D representations) of the TS are built incrementally, so as to select the best models for each sub-part of the TS. The 2D representation of each TS has to be generated as an image, so image-encoding techniques (RGB) must be applied to the generated matrix. We first started with a technique proposed in [58], which uses an RGB scale-based solution to transform TS values into image encoding values. This solution, despite its encoding capabilities, shows its limits when converting back to the original TS. To tackle this issue, we first introduced a dictionary-based technique: an RGB dictionary establishing a static mapping from each numerical value of the scaled matrix to a unique triple of (RGB) values, as required for colored pixel creation. This solution allows image encoding without precision loss; it is, however, time-consuming, as it requires numerous lookups in a high-dimensional dictionary of 256^3 entries. The second technique generates an RGB dictionary with a minimal set of entries, representing only the numerical values present in the scaled matrix. Though this solution does improve the execution time, it restricts the domain of the forecast values to values already present in the scaled matrix and hence hinders the generation of new values, which represents a major information loss. The third technique generates a gray-shaded image based on a static dictionary of only 256 values, representing the possible gray levels.
This is quite similar to the second technique; it drastically reduces the image generation time but induces an important information loss. The fourth technique, named dynamic image encoding, uses a bijective function that encodes each scaled matrix value into a triple of RGB values (256^3 possible codes). Moreover, it supports the generation of unique RGB codes for new forecast values while reducing the image generation time.
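A minimal sketch of such a bijective encoding, assuming values are first scaled to [0, 1] and quantized onto 256^3 levels (the exact mapping used by FM2I is not specified here, so this is illustrative only):

```python
LEVELS = 256 ** 3  # 16,777,216 distinct RGB codes

def value_to_rgb(v):
    """Encode a value in [0, 1] as an (R, G, B) triple by quantizing it
    onto 256**3 levels; bijective up to the quantization step.
    Illustrative sketch of 'dynamic image encoding', not the FM2I code."""
    code = min(int(round(v * (LEVELS - 1))), LEVELS - 1)
    return (code >> 16) & 0xFF, (code >> 8) & 0xFF, code & 0xFF

def rgb_to_value(r, g, b):
    """Inverse mapping, used when extracting forecasts from the image."""
    code = (r << 16) | (g << 8) | b
    return code / (LEVELS - 1)

r, g, b = value_to_rgb(0.5)
assert abs(rgb_to_value(r, g, b) - 0.5) < 1e-7  # round trip within quantization error
```

Because the mapping is computed rather than looked up, it avoids both the large static dictionary and the restriction to previously seen values.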
The generated images are then extrapolated and inpainted using the adapted patch-based algorithm. Once the images are inpainted, the reverse process is applied in order to extract the forecast values from the image and undo all scaling steps, in accordance with the applied matrices. The generated models are sorted by their accuracy (i.e., sMAPE) and saved into a log file.
The third phase mines the log file as the basic dataset from which to extract the best model for TS forecasting. For this purpose, two strategies are proposed. The first sorts all models according to their performance and then applies frequent-item search to extract the most frequent configuration across the progressive model generations. The second, named short memory, reduces the TS size used for image generation and extracts the frequent items from the last exploration of the TS. Both strategies were tested and can be alternated on a case-by-case basis.
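The first strategy can be sketched as follows, with a hypothetical log of (matrix, patch size, sMAPE) entries; all names and values are illustrative:

```python
from collections import Counter

# Hypothetical log of candidate models from the progressive exploration:
# each entry is (matrix, patch_size, sMAPE).
log = [
    ("GASF", 3, 4.1), ("GADF", 5, 5.0), ("GASF", 3, 3.8),
    ("MTF", 3, 6.2), ("GASF", 3, 4.4), ("GADF", 5, 4.9),
]

def most_frequent_config(log, top_k=3):
    """Strategy-1 sketch: sort the logged models by sMAPE, keep the
    top_k best, and return the most frequent configuration among them."""
    best = sorted(log, key=lambda e: e[2])[:top_k]
    counts = Counter((m, p) for m, p, _ in best)
    return counts.most_common(1)[0][0]

print(most_frequent_config(log))  # -> ('GASF', 3)
```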
The fourth phase generates forecasts using the best model configuration identified in the previous phase. The forecast is produced by generating an image according to the best matrix; this image is extrapolated and inpainted according to the best patch size. The forecasts are then extracted from the image and rescaled to undo the scaling and differencing transformations. Once generated, the forecasts are compared to the TS testing part, extracted during the parameter initialization step, which holds the real values to forecast.
In summary, the original TS is first scaled to [-1, 1] to equalize its initial values, taking into consideration the possible upper and lower bounds of the TS. The scaled TS is then used to generate its corresponding matrix, possibly rescaling its values again into the interval [0, 1]; this step depends on the matrix used and aims to ease the back-and-forth (B&F) transformation process. The rescaled matrix is used to generate its corresponding image using the above-mentioned techniques (e.g., dynamic image encoding). For image extrapolation, the patch-based algorithm, originally proposed for inpainting damaged images, is adapted: it is extended to extrapolate from the borders using a partial neighborhood, which does not surround the area to be completed as is conventional in image inpainting. The reverse techniques are then applied to obtain the forecast values and compute the accuracy metrics.
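A deliberately simplified, greedy sketch of border extrapolation with a partial neighborhood (the actual FM2I patch-based algorithm is richer; here each new column is simply copied from the continuation of the best-matching known window):

```python
import numpy as np

def extrapolate_columns(img, n_new, patch_w=3):
    """Greedy sketch of patch-based extrapolation from the right border:
    each new column is predicted by finding, inside the known region, the
    window that best matches the last patch_w columns and copying the
    column that follows it. Assumes img has more than patch_w columns."""
    img = np.asarray(img, dtype=float)
    for _ in range(n_new):
        target = img[:, -patch_w:]                  # partial neighborhood
        best_err, best_next = np.inf, None
        for j in range(img.shape[1] - patch_w):     # scan candidate windows
            window = img[:, j:j + patch_w]
            err = np.sum((window - target) ** 2)
            if err < best_err:
                best_err, best_next = err, img[:, j + patch_w]
        img = np.column_stack([img, best_next])     # append predicted column
    return img
```

On a perfectly periodic image this simply continues the cycle; real patch-based inpainting additionally manages priorities and blending, which are omitted here.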

Experimental results
In order to assess the effectiveness of FM2I, we used the dataset from the M3 competition, which is considered the main driver for developing and comparing forecasting methods. The M3 dataset includes 3003 TS and involves more than 30 forecasting methods. The TS are categorized according to their types (e.g., industry, finance) and time intervals (e.g., monthly, quarterly, yearly). In other words, 645 yearly, 756 quarterly, and 1428 monthly TS are used for short-, medium-, and long-term forecasting, respectively. The TS classified as "Other" are an additional set dedicated to medium-term forecasting. Table 3 summarizes the number of TS from each category and period (e.g., monthly, quarterly, yearly). So far, two forecasting horizons are considered: 8 for quarterly and 6 for yearly TS. All forecasts are available, making it possible to assess the accuracy of new forecasting methods against state-of-the-art ones. It is worth noting that we are still conducting experiments with FM2I for long horizons using monthly data (forecasting horizon of 18). Preliminary results are promising; the complete results will appear in the published version of this work.
In our experiments, we used the forecasts made available by the participants of the M3 competition. We selected the methods featured in the competition that achieved the best performance. In fact, we considered the following top methods from the M3 competition [47] in order to assess the performance of FM2I for TS forecasting: ARARMA, Theta, and ForecastPro. As stated in [47], the Theta method performs extremely well on almost all considered TS and forecasting horizons. ForecastPro was also considered as a newer method that performs comparably to Theta. ARARMA is a variant of the ARIMA method, which is considered the most sophisticated statistical forecasting method. We also included the Naive method as a baseline, as described in the M3 competition, in order to assess the performance of FM2I against this benchmark. The obtained accuracy results are reported for these TS categories (Tables 4 and 5), as well as for the TS classified as "Other" (Table 6). Similarly, we computed the percentage and the number of times these methods are ranked best, as depicted in Table 7. As shown, FM2I outperforms the Naive, Theta, ForecastPro, ARIMA, and ANN approaches by a large margin. Fig. 8 clearly shows the number of TS for which each method is ranked best; FM2I performs exceptionally well and is ranked best for the majority of TS. We have thus reached our objective of introducing a new, accurate TS forecasting method. We began by testing extensivity and, to our pleasant surprise, reached intensivity as well. Nevertheless, after careful consideration, it is logical that in order to increase intensivity we have to find completely new methods, rather than elaborating further methods by merely combining or hybridizing existing ones, as is usually done in practice.

Remark 2 (M-Competition)
We also compared the proposed FM2I against the top methods using the dataset from the M4 competition. This dataset contains 100,000 time series, categorized into the demographics, economics, industry, and finance domains [71]. We selected a significant sample from the available M4 forecasting results.
The obtained preliminary results show that the FM2I is ranked the best in terms of accuracy (see Fig. 9 and Table 8).
Final testing results will be included in the published version of this work. It is worth noting that the M5 competition has already finished, but no results are available so far; FM2I will also be evaluated against the top M5 methods. Furthermore, we are willing to participate, with FM2I, in forthcoming competitions.

The rapid growth and deployment of IoT (Internet of Things) and wireless sensor technologies have shown a great potential for collecting high-volume, high-velocity, and high-variety time series data. In fact, a myriad of sensors can be deployed to gather contextual data that can be integrated with other data, such as location, weather, and social media data [48]. The processing and analysis of these time series allows the development of context-aware applications and services in many application domains, such as e-health [68], transportation [67], and energy management [49]. For instance, short-term forecasting of solar power production and utility demand could allow dynamic and predictive control of micro-grid energy systems [50].
There are two main goals in analyzing and processing time series data: classification and regression. Time series classification reveals patterns and features (e.g., anomalous values), while regression forecasts future values from past ones. Time series forecasting and analysis are key elements in many applications. Their aim is to analyze time series trends by building a forecasting model to be used either for classification or for predicting n-step-ahead values.
Predicting the Future is like Completing a Painting!

Several approaches have been proposed in the past decades for time series data analysis and forecasting. They can be classified into three main categories, as depicted in Fig. 10: model-driven, data-driven, and data-structure-driven approaches. In the model-based approach, time series are transformed into state-space representations. The forecasting process is then performed by simply assuming that the observations of the forecast period are missing and applying existing methods to complete them [51]. The Kalman filter and the particle filter [52] are among the model-based methods that can be applied to reveal repeated patterns and to forecast time series data [53, 54]. While these approaches provide very good accuracy, they remain insufficient for capturing the complexity and dynamics of complex systems.
Data-driven approaches have been proposed to circumvent this issue by exploiting large volumes of measured data. The aim is to establish a more complete data model that can be used for prediction and forecasting [18]. These approaches can in turn be classified into two main categories: statistical approaches and neural-network-based approaches.
Statistical approaches predict future values as a function of several past observations; examples are ARIMA [55] and exponential smoothing (ES) [51]. Artificial-neural-network-based approaches have been proposed as information processing models for time series data analysis and prediction [56].
Examples are multi-layer feed-forward neural networks and LSTM (Long Short-Term Memory) networks. While these models do provide accurate forecasting results, they typically require large training datasets in order to learn accurate models [54].
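As an illustration of the statistical family, simple exponential smoothing reduces to a one-line recursion (a sketch, not tied to any particular library):

```python
def exponential_smoothing(ts, alpha=0.3):
    """Simple exponential smoothing: each level is a weighted average of
    the newest observation and the previous level; the one-step-ahead
    forecast is the last level."""
    level = ts[0]
    for x in ts[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

print(exponential_smoothing([10, 12, 11, 13], alpha=0.5))  # -> 12.0
```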
Data-structure-based approaches have also been proposed for time series data analysis and forecasting. They can be classified into three main categories: tree-based, network-based, and image-based approaches. Classification and regression trees, random forests, and gradient-boosted trees are the main tree-based methods proposed for both classification and regression problems [56]. Network-based concepts have been proposed recently for studying and analyzing time series, with the aim of extracting and revealing dynamically relevant statistical properties [57]. The authors of [61] propose an approach that transforms time series data into graphical representations (in the form of 2D plotted images) and then applies deep learning methods for data forecasting. Similarly, the authors of [65] introduce an approach for extracting time series features using computer vision techniques (e.g., spatial bag-of-features [69], deep CNNs). Like the approach proposed in [61], they first transform the time series into graphical representations (recurrence plots) and then use computer vision techniques to extract the features and characteristics that feed a CNN framework for time series forecasting.
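The recurrence-plot representation mentioned above can be sketched in a few lines (`eps` is a similarity threshold of our choosing):

```python
import numpy as np

def recurrence_plot(ts, eps=0.5):
    """Recurrence plot of a series: R[i, j] = 1 when |x_i - x_j| < eps,
    i.e., a binary image marking which pairs of time points are similar."""
    x = np.asarray(ts, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

print(recurrence_plot([0.0, 0.1, 1.0], eps=0.5))
```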
However, although combining these image-based structures with state-of-the-art deep learning methods provides the best accuracy, the relationship between the extracted topological features and the time series data remains unclear, since no inverse operation has been proposed to transform the produced and enhanced 2D images back into a new 1D sequence of time series data [58]. Furthermore, most of the approaches proposed so far use only partially integrated image-based modeling and processing techniques, combined with conventional neural approaches, as depicted in Fig. 10.
The proposed FM2I is, by contrast, a fully integrated modeling and processing framework, as depicted in Fig. 10: time series forecasting is transformed into fully image- and signal-based processing procedures. In fact, forecasting time series data is turned into the problem of completing a picture (i.e., without falling back on machine learning techniques).
After transforming a time series into its corresponding image, the problem of data forecasting becomes essentially a problem of image inpainting, i.e., completing the missing pixels in the image. In summary, the main contributions of this work with respect to the above-mentioned approaches are fivefold:
• New matrices for 2D image-based representations of time series data,
• New transformation techniques, from time series data to image-based representations,
• Image-based forecasting techniques based on inpainting methods,
• Reverse transformation techniques, from the 2D representation back to the 1D vector representing the original time series data,
• Performance evaluation using datasets from M3 competition together with extensive comparison of the FM2I against the top M3 approaches.
It is worth noting that the work presented in this paper initiates a new branch of TS forecasting (see Fig. 10). Instead of learning TS patterns and features directly in the temporal dimension, a spatial representation is used along with image-based processing techniques and tools. More precisely, a 2D image representation can reveal interesting and richer patterns and features that could not be explicitly represented in a 1D TS.

Philosophical discussion and other potential directions
In this work, we started from philosophical considerations and reached unexpected and astonishing results: completing a painting enables us to see the future in color! A 'visual' 2D predictive procedure works for 1D chronological data! Even after completing this work, we are not yet able to fully answer the central question concerning the nature of predictions developed in Section 2. Nevertheless, we would like to draw some philosophical conclusions from this fruitful endeavor. It is obvious that we need more than a discussion section, and certainly more than an article, to present all our reflections in detail, which we will surely do in forthcoming works. But we want at least to summarize some of our queries about the implications of this work. More precisely, we have chosen to focus on the two following aspects.
The value of prediction vs. explanation: the new emerging role of prediction in bridging disciplines. As mentioned in [7], in contrast with scientific explanation, the value or role of scientific prediction seems to have always been clear. Indeed, common sense has it that predictions are used for their own sake: predict what will happen in order to make better decisions. At the same time, prediction helps test the strength of our ways of understanding the surrounding world. In addition, as indicated in Section 2, following explicit or tacit scientific practices, philosophers of science formalize other interests of prediction, such as validating or invalidating theories and models and thereby assessing the maturity of theories ([4] and [2]). The value of explanation, by contrast, is unclear, as indicated in [7] with reference to the work [6] entitled "Why Ask Why?"; see also [30]. More precisely, the argument is: suppose that we possessed a perfect power of predicting everything in this world (like Laplace's demon); then explanation would become unnecessary, or would serve at most to assuage our thirst for understanding, and should therefore be seen as simple 'psychological satisfaction', which makes it epistemically suspect [31]. The conclusion of this thought experiment is corroborated by the actual success of data science based on statistics and machine learning, which are black boxes producing explanation-free knowledge. Now suppose we reverse the argument: if we had a perfect power of explanation, making us capable of explaining everything, would we still need predictions? Certainly, yes. We would need them in order to make the most adequate decisions. At this stage, we point out a certain dissymmetry between predictions and explanations, as the value of the former appears greater than that of the latter. There is a bias in this reasoning, though: can we consider an explanation without prediction, or the reverse? A development concerning the first part of this question can be found in [7] and [31]. More precisely, the authors highlight that we can talk about a scientific explanation if and only if it is able to generate new testable predictions in order to explore a theory's implications, which would probably justify why scientists appreciate explanations. This ties in, in some way, with Karl Popper's famous idea of falsifiability or refutability. Since predictions can arise from explanations but do not necessarily need them (see the definition of prediction considered in Section 2), the dissymmetry persists. In fact, in the absence of perfect predictive processes, explanations are still needed to properly assess the accuracy of predictions and for scientists to accept them [32]. All in all, we think that as our predictive power grows, the need for explanation decreases.
In light of the conclusions of this work, we can point out new perspectives on the role of scientific prediction. Indeed, as depicted in the general framework (Fig. 1), the bottom line of our idea is to test the extensivity of theories' or models' predictive capacity. We can conclude that, on the one hand, succeeding at our first, difficult test proves that it is doable, and without necessarily being experts in the fields studied. On the other hand, it is worth trying, since this sort of modus operandi offers scientists the opportunity to create bridges between different scientific fields/subfields or different scientific theories by following predictive success. At that point in our experiments, we wondered whether there are other examples of such bridges in science. After careful and extensive research, we found some examples of tacit applications of Fig. 1. We chose to present a significant example from theoretical physics, one that has also attracted the attention of philosophers of science ([36] and [37]), though never from the angle developed here. Let us mention the famous AdS/CFT correspondence, first introduced by Juan Maldacena in the most cited article of high-energy physics (20,000 citations) [33] and the most significant achievement of string theory in the last twenty years. From our perspective, this example illustrates our conclusion perfectly.
Indeed, in theoretical physics, the anti-de Sitter/conformal field theory correspondence claims the equivalence between a strongly coupled four-dimensional gauge theory and a gravitational theory in five dimensions. In particular, it can be seen as a bridge between two completely different fields of theoretical physics, since it is an unexpected link between a theory of gravity in five dimensions and a quantum field theory in four dimensions. This correspondence deserves to be explained in detail; for the sake of conciseness, we refer the interested reader to [35] and [33]. Nonetheless, we must point out that there is no formal mathematical proof of this correspondence, even if there exists an AdS/CFT dictionary permitting one to switch between the two theories. An example of applying the previous process can be clearly identified in nuclear physics. Briefly, in order to study the quark-gluon plasma, instead of using its insoluble mathematical formalism, the authors of [34] predict, via the AdS/CFT correspondence, the value of the ratio of shear viscosity to entropy density associated with the quark-gluon plasma, which is close to experimental results. This leads to a new understanding, and then new explanations, concerning the quark-gluon plasma through this correspondence. Here we reach a new benefit of prediction, since using this process we may find, where possible, new explanations by reversing the conventional closed loop depicted in Fig. 11a.
Starting by testing the extensivity of theories' or models' predictive capacity, we select the theory or model that provides the most accurate predictions (i.e., there is a bridge), and only afterwards do we go back to the previous process by trying to obtain explanations, and so on (see Fig. 11b). In fact, by systematizing the approach described in this work, and by identifying possible bridges according to their predictive accuracy, we will surely find more correspondences, without depending on the genius of scientists like Maldacena. In the next subsection, we will try to go further by investigating the deeper reason for the existence of these bridges. Nevertheless, this interesting gain relies on overcoming the major difficulty of switching between scientific disciplines/sub-disciplines. More precisely, it is worth experimenting with how we can transform the initial conditions or data of one field/subfield to become the input of a theory or model of another. In other words, we wish to develop in forthcoming works a more philosophical/mathematical framework, called Correspondence, in which we hope to formalize this idea in detail (we have very promising leads). In our example (FM2I), we moved from a 1D temporal series data structure to a suitable 2D image data structure. Let us begin by designating the benefits of these bridges through three essential aspects, which can also be considered novel roles of scientific prediction:
• New models and methods: prediction has always helped scientists select between models and methods inside a field/subfield. But, in light of this work, it appears that this selection can be generalized to models and methods from other fields/subfields, once we find a correspondence permitting us to switch between them. In our example, we found new forecasting models and methods arising from image inpainting methods.
• New correspondences and analogies: we will develop our perspective on these notions in detail in a forthcoming work. So far, we have presented above an example of a correspondence based on a dictionary permitting one to switch between two theories. We can also obtain other types of correspondences through other transformations, like those developed in this paper. Of course, the different types of correspondences should be chosen according to their predictive accuracy; here we meet another new role of prediction. Analogies are also a type of correspondence, and a very fruitful process used by many of the greatest scientists, like J.C. Maxwell and H. Poincaré (see [38] and [39]). We therefore hope to find new analogies following this procedure.
• Towards new theories: the correspondences selected by predictions could lead to new theories; a discussion concerning an example related to analogies between heat and electricity can be found in [40].

Why do these bridges exist? Extensive Structural Realism
Wondering about the reason for these bridges' existence, we naturally converged towards a modified version of Poincaré's structural realism thesis. Let us begin by briefly recalling some modes of thinking in philosophy of science in order to clarify our position. Concretely, scientific realism posits that mature theories with undisputed predictive success are true or approximately true, in the sense that the structures assumed in these theories reflect actual reality. The main argument in support of scientific realism is called the miracle argument, introduced by Hilary Putnam when he wrote: "The positive argument for realism is that it is the only philosophy that doesn't make the success of science a miracle" [41]. This means that it is virtually impossible for our theories to succeed in describing observable phenomena while being completely false. The principal argument against scientific realism is called pessimistic induction. Based on the history of science, it claims that since most of our past theories, even strongly confirmed ones, have turned out to be incorrect, it is very improbable that our current theories are true.
According to this argument, the present theories are ephemeral and are likely to change; it is just a matter of time.
In an article entitled "Structural Realism: The Best of Both Worlds?" [42], John Worrall elaborated a position inspired by Poincaré's works. More precisely, structural realism takes into account the pessimistic induction concerning the theoretical entities involved in current science, while at the same time maintaining the validity of the miracle argument at the level of mathematical structures. Indeed, the famous example proposed in [42], and also analyzed by Poincaré in [43], is very relevant. Specifically, Fresnel's wave theory is based on the wrong postulate of the existence of an entity, called the luminiferous aether, as the medium for the propagation of light. Augustin-Jean Fresnel developed a mathematical formula, called the Fresnel diffraction integral, which was validated when it succeeded in predicting the famous Fresnel spot, subsequently observed. This example proves that there is some truth in Fresnel's theory, namely its mathematical structure and not the existence of an entity permitting the propagation of light. In fact, Fresnel's equations can be seen as a particular case of Maxwell's equations, which do not assume the existence of any aether. Therefore, Fresnel's mathematical structure can be viewed as approximately true, since it is conserved as a particular case in the later theories dealing with light propagation, such as electromagnetism and quantum theory.
In this respect, we think that we should not confine ourselves to mathematical structures conserved in later theories treating the same phenomenon. We have to distinguish between intensive and extensive structural realism. Indeed, we can posit that this property of conserved mathematical structures is an intensive structural realism, while, in light of the developments of this article, we are able to introduce what an extensive structural realism can be. Specifically, in most cases, testing the extensivity of a theory's predictive capacity means testing the extensivity of the predictive capacity of its mathematical structures. Indeed, the above example of the prediction concerning the quark-gluon plasma via the AdS/CFT correspondence is specifically due to the mathematical language of string theory. More accurately, we can conclude that the capacity of certain mathematical structures to work in several domains implies a kind of universality of these structures. This allows us to invoke the miracle argument at the level of the extensivity of mathematical structures, in the sense that it would be a miracle for a mathematical equation or formula to appear in many completely different domains without touching any reality. We can find many examples (see [38]) of physical analogies in which an equation or formula works in different fields. In this sense, extensive structural realism is the capacity of some mathematical structures to arise in domains other than those for which they were initially elaborated. Let us provide another example in relation with this work. We are currently developing a promising forecasting method based on partial differential equations, such as the heat and Navier-Stokes equations, through adequate bridges (images). The heat equation, for example, was developed by Joseph Fourier to model heat diffusion; nevertheless, this equation (or its modified versions) appears in many other domains, such as electricity, image restoration, Brownian motion, finance, quantum mechanics, and forecasting, which means that it possesses something universal, at least to first order. Thus, it would be a miracle for this equation to be completely divorced from a certain reality. That is what we call extensive structural realism. Hence, as one of the objectives of this work is to highlight the interest of creating new bridges between theories and models, we naturally raised the question of their existence. After careful consideration, we strongly think that this existence is due to the existence of recurring mathematical structures, like the heat equation; in other words, the miracle argument at the level of the recurrence or persistence of mathematical structures. The basic idea, then, is that since these types of mathematical structures certainly exist (we possess many examples), we have to strengthen the existing bridges and find new ones by testing bridges according to their predictive accuracy.
We must point out that the discussion above suggests a significant link with some modes of thinking from the philosophy of mathematics. For example, it would be very interesting to further analyze these ideas in light of mathematical realism and the question of the objectivity of mathematics and mathematical objects. We will focus on such links in a forthcoming work. Let us conclude this subsection with the following reflection. Elaborating a theory of everything, capable of describing and explaining all phenomena in the universe, is a precious hope of physicists. One of the main impediments is the difficulty of finding a theory unifying gravitation and the standard model of particle physics. If this unification is really possible, it is reasonable to think that it should be realized through a mathematical structure; this structure must then be able to explain and predict everything. We think it would be more suitable to search for this structure through its capacity to predict everything, and only afterwards check (or not) its capacity to explain and describe everything, rather than the reverse, which seems much more difficult. Indeed, if we are able to remove the major difficulty of bridging theories and models through adequate correspondences, we will be able to test this procedure. Who knows? Perhaps the equation of everything already exists in the literature; it just needs to be tested! Let us conclude this section with the following remark.
Remark 3 (Philosophy is not dead) In [44], the famous theoretical physicists Stephen Hawking and Leonard Mlodinow wrote: "Philosophy is dead". This statement naturally made many philosophers react. Without wishing to engage in polemics, this article is a direct refutation of this declaration. Indeed, it is well known in logic that a counterexample is better than any argument, and this article is clearly a counterexample, since it proves the power of philosophy. More precisely, we are neither forecasting specialists nor signal processing specialists; nevertheless, starting from philosophical considerations and using our scientific background, we have created a new, accurate forecasting method by bridging two domains. In light of this work, we strongly think that experimental philosophy of science is an interesting and promising way of combining philosophy and science, capable of creating new and innovative research fields.
9 Scientific discussion and other potential directions

The ideas developed in this article are clearly the beginning of a large series of works, since there is an important variety of other potential directions. We briefly focus on the following questions: is FM2I improvable? Why does it work? Should it be adapted for real-time data streaming?

Parameter tuning and optimization

The parameter tuning of the FM2I forecasting algorithm is based on a grid search testing numerous combinations of matrices (conventional and differenced ones), min-max TS scaling, and patch size. It generates a set of best candidate forecasting models during the progressive TS exploration. The aim is to extract the most frequent and most suitable configuration. Generally, this model provides accurate forecasts; however, our exhaustive forecasting experiments show that this model is not necessarily the best-fitting one. Fig. 12 and Table 9 show, as an initial proof of this statement, the improvement of FM2I when using the best-fitting model. The accuracy differences show that adequate improvement could be obtained by selecting the best model (i.e., Forecast-S1 in the current case, closest to the real TS). This could be achieved through adapted and well-known optimization approaches.
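As a rough illustration, such a grid search could be organized as follows. This is a minimal sketch: the parameter names and value ranges below, as well as the stand-in `naive_forecast`, are assumptions for illustration, not the actual FM2I configuration space.

```python
from itertools import product

def smape(actual, forecast):
    """Symmetric MAPE, as used in the M3/M4 competitions (in %)."""
    return 100.0 * sum(
        2.0 * abs(f - a) / (abs(a) + abs(f))
        for a, f in zip(actual, forecast)
    ) / len(actual)

def grid_search(train, valid, forecast_fn):
    """Exhaustively test combinations of hypothetical FM2I parameters:
    matrix representation, min-max scaling range, and patch size."""
    grid = product(
        ["conventional", "differenced"],  # matrix representation
        [(0, 1), (-1, 1)],                # min-max scaling range
        [3, 5, 7],                        # inpainting patch size
    )
    best = None
    for matrix, scale, patch in grid:
        fc = forecast_fn(train, len(valid),
                         matrix=matrix, scale=scale, patch=patch)
        err = smape(valid, fc)
        if best is None or err < best[0]:
            best = (err, {"matrix": matrix, "scale": scale, "patch": patch})
    return best

# Toy stand-in for the FM2I forecaster: repeat the last observed value.
def naive_forecast(train, h, **params):
    return [train[-1]] * h

err, cfg = grid_search([1, 2, 3, 4], [4, 4], naive_forecast)
```

In a real setting, `valid` would be a held-out tail of the series, and the winning configuration would be the one retained for out-of-sample forecasting.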

Towards ensemble data autocorrelation forecasting
Several methods, as depicted in Fig. 10, have been proposed for TS forecasting. Many recent studies have shown that none of these forecasting methods alone is able to model real-world TS. Hybrid ensemble models, also called ensemble learning models, have been proposed for accurate TS forecasting, combining the strengths of each forecasting method. For instance, they have been heavily used in past decades for weather forecasting [81]. These models aim to combine several forecasting models in order to increase accuracy over the usage of a single model [72], [74]. In fact, an ensemble of independent (i.e., multiple and diverse) models can be established for use in a collective prediction process [75]. For instance, the authors of [73] propose a hybrid model, combining ARIMA and an Elman artificial neural network (EANN), for TS forecasting. Reported results show that the proposed hybrid model outperforms both ARIMA and EANN considered separately. Similarly, the authors of [77] propose an ensemble learning model composed of decision tree, gradient boosted trees, and random forest models, and their results show the effectiveness of this hybrid approach. Most work to date shows that combining multiple forecasts generated by an ensemble of models generally provides better forecasts with higher accuracy [78]. Their efficiency was notably highlighted in the M4 competition by Smyl's approach, which won the competition. The latter, named an ensemble of specialists, combines an LSTM (long short-term memory) neural network with an ES (exponential smoothing) model for TS forecasting.

Figure 12: Forecasting values: Forecast-S1 represents the best fitting model
Unlike ensemble learning models, our proposed FM2I is able to forecast O(n^h) possibilities, where n is the TS size and h is the considered horizon. Thus, FM2I can be seen as ensemble data autocorrelation forecasting (EDAF), in which, instead of generating an ensemble of learning models, several ensembles of data (i.e., forecast scenarios) are generated. The latter are then used to select the most suitable forecast, i.e., the one that maximizes accuracy.
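The selection step over generated scenarios can be sketched as follows. The `mae` criterion, the function names, and the toy scenarios are illustrative assumptions, not the paper's actual procedure:

```python
def mae(actual, forecast):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def select_best_forecast(scenarios, holdout):
    """EDAF-style selection: from many generated forecast scenarios,
    keep the one that best matches held-out observations."""
    return min(scenarios, key=lambda s: mae(holdout, s))

# Three hypothetical forecast scenarios for a horizon of 3 steps.
scenarios = [[10, 11, 12], [9, 9, 9], [10, 12, 14]]
best = select_best_forecast(scenarios, holdout=[10, 12, 13])
# `best` is the scenario closest to the held-out values.
```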

Augmented dimension prediction and entropy reduction
Revisiting this work's outcomes, we naturally put more emphasis on the following question: why does this bridge work so well? The first idea we have highlighted is that there should be a link with dimension augmentation. Indeed, we start by wishing to predict forthcoming values of a 1D TS, transform it into a richer 2D structure, and then predict a region related to these values. It turns out that this forecasting method is very accurate, so we then test the effect of augmenting the dimension. We will explain this link in detail in a forthcoming work. The second idea we are investigating concerns entropy. Indeed, it is well known that entropy is related to a TS's complexity and thus to forecasting performance [79], [80]. Thus, given the accuracy of FM2I, we have effectively augmented the forecastability of our initial TS. This indicates that we have (somehow) reduced the initial entropy of the TS by augmenting its information through our TS-image transformations. In other words, the idea is to measure the amount of information gained by each TS-image transformation.
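One standard proxy for such TS complexity/forecastability is permutation entropy (Bandt-Pompe). The sketch below is an illustrative measure that could be compared before and after a TS-image transformation; it is not necessarily the entropy measure the authors have in mind:

```python
from collections import Counter
from math import log2, factorial

def permutation_entropy(series, order=3):
    """Normalized permutation entropy of a 1D series:
    0 = perfectly regular (e.g., monotone), 1 = maximally complex.
    Counts ordinal patterns of consecutive windows of length `order`."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: series[i + k]))
        for i in range(len(series) - order + 1)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * log2(c / n) for c in patterns.values())
    return h / log2(factorial(order))  # normalize by log2(order!)
```

A monotone series yields entropy 0 (a single ordinal pattern), while an irregular series yields a value closer to 1, giving a simple way to quantify the "information gained" claim empirically.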

Real-time TS forecasting
Real-time TS stream processing induces two main challenges: limited resources and frequent data distribution changes. The first challenge requires distributed computing platforms to handle the high volume and velocity of data streams. Regarding the second challenge, time series are usually received from data sources with several imperfections, such as noise, redundancies, missing values, and inconsistencies. Consequently, predictive algorithms perform poorly on low-quality data. Data preprocessing (or data preparation) techniques, such as cleaning, integration, normalization, and transformation, are highly required to deal with this issue. In addition, the amount and dimension of received data are growing considerably with the emergence of IoT devices connected to and embedded in our environments (e.g., buildings, vehicles), so reduction techniques (e.g., feature selection) become mandatory for dimension reduction and simplification. These techniques help provide high-quality data of reduced dimension, which is necessary for faster training and better interpretability of results. Another important characteristic of data is its continuous and high-speed arrival (i.e., data streams). Many emerging real-world applications generate data streams, and the resulting datasets therefore grow continually. Techniques are thus required to cope with the time and memory constraints imposed by the high arrival rate of data streams. These streams may exhibit non-stationary behavior leading to concept drift, in which the distribution of the data streams changes frequently [66]. Learning techniques must therefore be adaptive and learn from newly arriving data (i.e., adapt to changes in the processes that generate the streams), especially for context-driven applications requiring real-time or near-real-time decisions (e.g., road accident avoidance, patient monitoring). In this direction, FM2I is being deployed in real-setting scenarios in order to study its effectiveness in dealing with real-time TS streams. We are also adapting it to deal with multivariate TS for both batch and stream processing.
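A minimal sketch of window-based drift detection, one of the simplest ways to flag such distribution changes, is shown below. The class name, window size, and threshold are illustrative assumptions, not part of FM2I:

```python
from collections import deque

def _mean(values):
    return sum(values) / len(values)

class DriftDetector:
    """Flags drift when the mean of a recent window deviates from the
    mean of an initial reference window by more than `threshold`."""

    def __init__(self, window=50, threshold=1.0):
        self.ref = deque(maxlen=window)     # first `window` observations
        self.recent = deque(maxlen=window)  # sliding recent window
        self.threshold = threshold

    def update(self, x):
        """Feed one observation; return True once drift is detected."""
        if len(self.ref) < self.ref.maxlen:
            self.ref.append(x)   # still filling the reference window
            return False
        self.recent.append(x)
        if len(self.recent) < self.recent.maxlen:
            return False         # not enough recent data yet
        return abs(_mean(self.recent) - _mean(self.ref)) > self.threshold
```

On drift, a streaming forecaster would typically reset or retrain its model on the post-drift window; in practice one would prefer a tested detector such as ADWIN or DDM over this toy comparison of means.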
Thus, if our process is (wide-sense) stationary and ergodic, we have an equality between the statistical autocorrelation function and the temporal autocorrelation function: Γ(τ) = R(τ). Hence our STAM, defined for any ST, is equal to the statistical autocorrelation matrix of an ergodic stationary stochastic process. Stationarity is testable; ergodicity, however, cannot be tested and is only assumed.
Remark 4 By the Wiener-Khinchin-Einstein theorem, the PSD is the Fourier transform of Γ and of R. Thus, in the cases where the Fourier transform is bijective, we have the following, without any assumption on the ergodicity of the stochastic process:
F^{-1}(S)(τ) = Γ(τ) = R(τ).
This holds in particular for Γ and R belonging to the Schwartz space or to L²(ℝ). Thus, if we assume that the signals are fairly regular, we can assume that they are ergodic.
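For reference, the Wiener-Khinchin relation invoked in Remark 4 can be written explicitly for a wide-sense stationary process with autocorrelation Γ and power spectral density S (frequency convention chosen here for illustration):

```latex
S(f) = \int_{-\infty}^{\infty} \Gamma(\tau)\, e^{-2\pi i f \tau}\, \mathrm{d}\tau,
\qquad
\Gamma(\tau) = \mathcal{F}^{-1}(S)(\tau)
             = \int_{-\infty}^{\infty} S(f)\, e^{2\pi i f \tau}\, \mathrm{d}f .
```

When Γ lies in the Schwartz space or in L²(ℝ), the Fourier transform is invertible on that class, which justifies writing Γ = F^{-1}(S) as in the remark.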

Figure 2: A schematic view of the proposed fully integrated framework FM2I

Figure 3: Actual TS with a) real data, b) predicted data

Figure 4: Forecasting using the auto-correlation matrix

Figure 5: Matrix representations of TS

Figure 6: Image-based representation: without and with forecast area

Figure 8: The number of TS in which the methods show the highest accuracy

Figure 9: The number of TS in which the methods show the highest accuracy: short-term horizon (6) for 100 TS

Figure 10: Classification of approaches for time series classification and regression

Table 1: Performance metrics for short-, medium- and long-horizon forecasting

Table 2: Matrices ranking for various time series: short-term forecasting

Table 4 presents the comparison, in terms of forecasting accuracy, of FM2I against the above-mentioned methods. The accuracy is assessed in terms of MAE, MSE, RMSE, MASE and sMAPE. As shown in this table, FM2I outperforms all methods on all considered metrics. Similar behaviour is observed for the medium-term horizon (Table 5).

Table 3: M3-Competition TS: the dataset used for FM2I evaluation

Table 4: sMAPE and ranks of error for short-term horizon: 645 yearly TS

Table 5: sMAPE and ranks of error for medium-term horizon: 756 quarterly TS

Table 6: sMAPE and ranks of error for medium-term horizon: other 174 TS
Table 7: FM2I rank against other methods

Remark 1 (From extensivity to intensivity) We started this work wishing to find a new forecasting method by testing the extensivity of image inpainting methods. Our aim was to create new methods for TS forecasting.

Table 9: Metrics for the best scenario via the best-fitting model: Forecast-S1