A Patchwork Method to Improve the Performance of Current Methods for Solving the Inverse Problem of Electrocardiography

<italic>Objective:</italic> Noninvasive electrocardiographic imaging (ECGI) reconstructs cardiac electrical activity from body surface potential measurements. However, current methods have demonstrated inaccuracies in reconstructing sinus rhythm, and in particular breakthrough sites. This study aims to combine existing inverse algorithms, making the most of their advantages while minimizing their limitations. <italic>Method:</italic> The “patchwork method” (PM) combines two classical numerical methods for ECGI: the method of fundamental solutions (MFS) and the finite-element method (FEM). We assume that the method with the smallest residual in the predicted torso potentials, computed using the boundary element method (BEM), provides the most accurate solution. The PM selects for each heart node and time step the method whose estimated reconstruction error is smallest. The performance of the PM was evaluated using simulated ectopic and normal ventricular beats. <italic>Results:</italic> Cardiac potentials and activation maps obtained with the PM (CC = 0.63 <inline-formula><tex-math notation="LaTeX">$\pm$</tex-math></inline-formula> 0.01 and 0.61 <inline-formula><tex-math notation="LaTeX">$\pm$</tex-math></inline-formula> 0.05 respectively) were more accurate than MFS (CC = 0.61 <inline-formula><tex-math notation="LaTeX">$\pm$</tex-math></inline-formula> 0.01 and 0.48 <inline-formula><tex-math notation="LaTeX">$\pm$</tex-math></inline-formula> 0.05 respectively), FEM (CC = 0.58 <inline-formula><tex-math notation="LaTeX">$\pm$</tex-math></inline-formula> 0.01 and 0.51 <inline-formula><tex-math notation="LaTeX">$\pm$</tex-math></inline-formula> 0.02 respectively) or BEM (CC = 0.57 <inline-formula><tex-math notation="LaTeX">$\pm$</tex-math></inline-formula> 0.02 and 0.49 <inline-formula><tex-math notation="LaTeX">$\pm$</tex-math></inline-formula> 0.02 respectively). 
The PM also located all epicardial breakthrough sites, whereas the traditional numerical methods usually missed one. Furthermore, the PM showed its robustness and stability in the presence of Gaussian noise added to the torso potentials. <italic>Conclusion:</italic> The PM overcomes some of the limitations of classical numerical methods, improving the accuracy of mapping important features of activation during sinus rhythm and paced beats. <italic>Significance:</italic> This novel method for optimizing ECGI solutions opens a new avenue for improving not only ECGI but also other inverse problems.


I. INTRODUCTION
THE inverse problem of electrocardiography, also known as noninvasive electrocardiographic imaging (ECGI), combines electrical potential measurements on the torso surface with a geometric description of the heart and torso to reconstruct the underlying cardiac source. Most ECGI approaches describe this cardiac source as the electrical potentials on the epicardial surface of the heart.
ECGI can help to diagnose and treat cardiac conditions. For example, ECGI has been demonstrated to help predict the benefit of cardiac resynchronization therapy [1], [2] and to detect ablation targets for atrial fibrillation [3]. However, recent clinical and experimental validation studies have shown that current clinical implementations of ECGI are inaccurate in reconstructing electrical activity in the presence of i) complex conduction patterns, such as in sinus rhythm or in the presence of conduction block [4], [5], and ii) structural abnormalities [6]. That is, focal activity and epicardial breakthrough sites can be missed or misplaced. The problem is seen not only with clinical data [5] but also in torso tank experiments [1], [4] and in simulated data [7], [8], where forward models are well defined, suggesting that the issue lies with the inverse methods themselves. New inverse methods are therefore needed to improve the performance and the accuracy of ECGI in the reconstruction of focal activity and epicardial breakthrough sites.
The inverse problem of electrocardiography can be mathematically represented by the following Cauchy problem for the Laplace equation, which relates the potential field on the heart surface Γ_h to the potential field on the torso surface Γ_T:

$$\begin{cases} \nabla \cdot (\sigma_T \nabla \Phi_T) = 0 & \text{in the torso volume,} \\ \sigma_T \nabla \Phi_T \cdot n_T = 0 & \text{on } \Gamma_T, \\ \Phi_T = \Phi_h & \text{on } \Gamma_h, \end{cases} \tag{1}$$

where Φ_T is the potential field in the torso, Φ_h the potential field on the heart surface, σ_T the torso conductivity tensor field and n_T the outward normal direction. Several classic numerical methods have previously been proposed to solve this problem, including the boundary-element method (BEM) [9], the finite-element method (FEM) [10] and the mesh-less method of fundamental solutions (MFS) [11]. The three methods are based on three different formulations of the same problem: the FEM solves the problem in variational form in the whole volume, the BEM solves an elliptic problem with an integral equation on the boundary of the domain, and the MFS uses fundamental solutions. As each method starts from a different way of expressing the solution, each gives slightly different results. Each of these methods reduces the governing equation (1) to a matrix-vector system describing the forward problem of electrocardiography:

$$A x = b, \tag{2}$$

where A is a transfer matrix whose form differs according to the numerical approach used. For the FEM and the BEM, the vectors x and b represent respectively the epicardial potentials on the surface of the heart and the body surface potentials (BSPs). For the MFS, x is the vector of weighting coefficients from which it is possible to reconstruct the epicardial potentials, and b is a concatenation of the BSPs and a null vector representing the no-flux boundary condition.
To recover the values of x from the known vector b, we need to solve the linear system (2). This linear problem has a unique solution only if the number of measurements on the torso surface is larger than the number of values sought on the heart surface, which is often not the case. Moreover, the inverse problem is ill-posed in the sense of Hadamard [12], which means that even low levels of error in the model or noise in the potentials can cause large errors in the solution. One solution to both issues is to regularize the problem. The regularized problem is well-posed, that is, it has a solution whose value is less influenced by the level of noise. Several regularization methods are available (e.g., L1-norm [13], IRN-MLSQR [14], physiology-based regularization (PBR) [15], Kalman filter-based [16], rank-deficient [17], [18]), of which Tikhonov regularization [19] is the most commonly used for this inverse problem. It minimizes the functional

$$J(x) = \| A x - b \|^2 + \lambda_t^2 \| R x \|^2, \tag{3}$$

where λ_t is a constant weighting term at time t controlling the level of regularization and R the regularization operator. The first term is the least-squares misfit for equation (2) and the second term defines a spatial constraint on the solution through the choice of R.
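As an illustrative sketch (not the study's implementation), minimizing the functional (3) with the identity as regularization operator (zero-order Tikhonov, R = I) leads to regularized normal equations that can be solved directly; the matrices below are arbitrary placeholders:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam^2 ||x||^2 (zero-order Tikhonov, R = I)
    via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# A tiny ill-conditioned example: the second singular value is small, so
# without regularization the second component is very noise-sensitive;
# lam damps it toward zero.
A = np.array([[1.0, 0.0],
              [0.0, 1e-3]])
b = np.array([1.0, 1e-3])
x = tikhonov_solve(A, b, lam=0.1)
```

A criterion such as CRESO would then select λ_t from a sweep of such solutions.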
Minimizing the functional (3) yields the regularized solution of equation (2):

$$x = \left( A^T A + \lambda_t^2 R^T R \right)^{-1} A^T b. \tag{4}$$

Despite the large number of possible methods to solve the inverse problem of electrocardiography, with more being developed every day [14], [16], [18], [20]-[24], no single algorithm consistently outperforms the rest. Previous studies comparing different methods have found that the optimal algorithm can depend on the data set [25], the level of noise [26], [27] and the geometric error present in the data [27], among other factors. We hypothesize that by combining current inverse algorithms, we can optimize reconstruction accuracy and produce an inverse solution that is more accurate than each individual method. The objective of this study was therefore to develop a novel method to combine existing inverse algorithms, making the most of their advantages while minimizing their limitations. This Patchwork Method (PM) locally chooses the algorithm that provides the optimal solution, defined as the method that minimizes the estimated reconstruction error. The algorithm was evaluated using simulated data in the reconstruction of pacing and sinus rhythm activation sequences.

II. METHODS

A. Current Numerical Methods
In this study, three classic numerical methods were used to solve (1) to obtain the transfer matrix A (equation (2)): the FEM [10], the BEM [9] and the MFS [11], each using the same epicardial potential-layer source model. The MFS approximates the epicardial potentials using a linear combination of fundamental solutions of the Laplacian. The weighting coefficients (x), used to reconstruct the epicardial potentials, are associated with a set of virtual source points over an auxiliary surface Γ̂ of the auxiliary domain, which contains the study domain and is determined by inflation and deflation. The MFS can be applied to discretize the Dirichlet boundary condition

$$\Phi_T(v_i) = x_0 + \sum_{j=1}^{M} f(\|v_i - w_j\|)\, x_j, \quad v_i \in \Gamma_T, \tag{5}$$

and the Neumann boundary condition

$$0 = \sum_{j=1}^{M} \frac{\partial f}{\partial n}(\|v_i - w_j\|)\, x_j, \quad v_i \in \Gamma_T, \tag{6}$$

and the discretization can be reduced to a matrix-vector system of the form (2), where the fundamental solution of Laplace's equation in 3D is f(d) = 1/(4πd), d = ‖v − w‖ is the Euclidean distance between v and w, Γ_h and Γ̂ are respectively the heart surface and the outer boundary of the auxiliary domain, x_0 is the constant component of Φ_T(v), x_j is the coefficient of a virtual source at location w_j, and M is the number of virtual source points. The epicardial potentials Φ_h are then calculated by solving a forward problem from the virtual points [11] with the formula

$$\Phi_h(v) = x_0 + \sum_{j=1}^{M} f(\|v - w_j\|)\, x_j, \quad v \in \Gamma_h. \tag{7}$$

The advantage of using fundamental solutions and virtual source points is that standard quadrature rules can be used to approximate the surface potential and its normal gradient on the boundary, as the torso and heart boundaries do not contain singularity points [28]. To determine the normal directions required for the MFS computation, we used a vertex-normal approximation, a mesh-based method for calculating normal directions.
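For illustration only (placeholder geometry and function names, not the study's code), the MFS system of equations (5)-(6) can be assembled as follows, with Dirichlet rows for the measured potentials and Neumann rows for the zero-flux condition:

```python
import numpy as np

def mfs_matrix(torso_pts, torso_normals, sources):
    """Assemble the MFS matrix for Dirichlet (5) and Neumann (6) rows.

    torso_pts:     (N, 3) electrode locations v_i on the torso surface
    torso_normals: (N, 3) outward unit normals n_i at those locations
    sources:       (M, 3) virtual source locations w_j
    Returns A of shape (2N, M+1); the unknowns are [x_0, x_1, ..., x_M].
    """
    diff = torso_pts[:, None, :] - sources[None, :, :]   # v_i - w_j, (N, M, 3)
    d = np.linalg.norm(diff, axis=2)                     # distances d_ij
    f = 1.0 / (4.0 * np.pi * d)                          # f(d) = 1/(4*pi*d)
    # normal derivative: grad_v f = -(v - w)/(4*pi*d^3), dotted with n_i
    dfdn = -np.einsum('ijk,ik->ij', diff, torso_normals) / (4.0 * np.pi * d**3)

    N, M = d.shape
    A = np.zeros((2 * N, M + 1))
    A[:N, 0] = 1.0       # constant component x_0 (Dirichlet rows only)
    A[:N, 1:] = f        # Dirichlet rows: potential values
    A[N:, 1:] = dfdn     # Neumann rows: zero normal flux
    return A
```

The right-hand side b pairing this matrix would be the BSPs stacked on a null vector, as described above.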
The FEM and the BEM used the same epicardial mesh, shown in Fig. 1, to solve the inverse problem (5132 nodes, inter-node spacing 4.1 ± 1.3 mm). As a meshless method, the MFS used only the node locations of this mesh. The auxiliary surface for the heart, on which the virtual source points are placed, was obtained by deflating these node locations by a factor of 0.8 relative to the geometrical center of the heart. For visualization, all three methods used the epicardial mesh described above.
Fig. 1. Two views of the heart and torso geometry, from the front (A) and from the left side (B). The heart mesh consisted of 5132 nodes and 10260 elements, and the torso surface mesh of 1234 nodes and 2464 triangular elements.
At the torso surface, a linear triangular surface mesh with 252 nodes located at typical electrode locations for clinical ECGI systems was used to provide BSPs (inter-node spacing 54.8 ± 36.5 mm) (Fig. 2). For the MFS, the torso virtual source points were obtained by inflating these electrode node locations by a factor of 1.2 relative to the geometrical center of the heart. For the BEM, a more refined linear triangular torso surface mesh was used to create the transfer matrix (1234 vertices, inter-node spacing 27.1 ± 3.63 mm). The FEM used the same mesh for the torso surface as the BEM, with the volume between the torso and heart surfaces discretized using tetrahedral elements to form a volumetric mesh (15788 nodes, inter-node spacing 9.4 ± 10.7 mm) (Fig. 1). For both the BEM and the FEM, to avoid interpolation of the BSPs onto the refined mesh, which can introduce error into the problem, only the columns corresponding to electrode node locations are used to solve the inverse problem, as previously described [29].

B. Patchwork Method
The linear relationship between the cardiac source and the torso surface potentials after discretization with any of the above-described methods can be written in the matrix form of equation (2), A x = b, where A is the transfer matrix obtained using the different numerical methods, b the torso measurements of BSPs (BEM and FEM) or a concatenation of the BSPs and a null vector representing the no-flux boundary condition (MFS), and x either the potentials on the heart surface (Φ_h) for the FEM or the BEM, or the coefficients of the linear combination of fundamental solutions in the case of the MFS. We denote by A_F, A_B and A_M the transfer matrices for the FEM, the BEM and the MFS respectively.
Let Φ_ex be the exact solution. We assume that all numerical methods used for the computation of the transfer matrix are consistent with the forward problem of ECGI, meaning that the residual of Φ_ex tends to zero as the distance between the nodes tends to zero. The residuals

$$R_B(\Phi_h) = A_B \Phi_h - \Phi_T$$

represent the difference between the numerical solution on the torso obtained with the reconstructed epicardial potentials (Φ_h) and the measurements on the torso (Φ_T). We propose to use this property as a criterion to select locally, among the inverse solutions obtained, the one that is closest to the exact solution, without knowing what the exact solution is. Thus we computed the inverse solutions with the FEM and the MFS using zero-order Tikhonov regularization [19], with the regularization parameter determined by the CRESO method [30]. Their residuals were then obtained using a BEM transfer matrix.
The BEM is used as an "independent arbiter" to compare the FEM and MFS residuals at the body surface. Using the BEM for the forward problem ensures that neither the FEM nor the MFS is favored.
The PM algorithm is as follows. For each time step n:
- Compute the approximate potential values Φ^n_{h,F} and Φ^n_{h,M} obtained with the FEM and the MFS, using the standard FEM/MFS inverse methods with zero-order Tikhonov regularization.
- Use these values to compute the forward solution and the associated residuals R_B(Φ^n_{h,F}) and R_B(Φ^n_{h,M}) on the torso surface using the BEM.
- Project the residuals from the torso onto the heart surface using an orthogonal projection: the residual value of each torso node is assigned to the heart node closest to the projected torso node in the normal direction.
- For each epicardial node, select the method whose residual is smallest at the corresponding torso point, and define a coefficient α^n = 0 if the MFS has the smallest residual, and α^n = 1 otherwise.
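This per-time-step selection can be sketched as follows. The code is illustrative: the torso-to-heart projection is represented by a precomputed index map (`heart_to_torso`), which is an assumption about how the correspondence is stored, not the authors' implementation:

```python
import numpy as np

def patchwork_select(phi_F, phi_M, A_B, phi_T, heart_to_torso):
    """One time step of the patchwork selection (illustrative sketch).

    phi_F, phi_M:   FEM and MFS epicardial reconstructions, shape (n_heart,)
    A_B:            BEM forward transfer matrix, shape (n_torso, n_heart)
    phi_T:          measured body surface potentials, shape (n_torso,)
    heart_to_torso: for each heart node, the index of its corresponding
                    torso node (normal-direction projection, precomputed)
    Returns alpha, shape (n_heart,): 1 where the FEM residual is smaller,
    0 where the MFS residual is smaller.
    """
    r_F = np.abs(A_B @ phi_F - phi_T)   # per-torso-node residual, FEM
    r_M = np.abs(A_B @ phi_M - phi_T)   # per-torso-node residual, MFS
    return (r_F[heart_to_torso] < r_M[heart_to_torso]).astype(float)

def patchwork_combine(phi_F, phi_M, alpha):
    """Blend the two solutions node-wise: alpha = 1 -> FEM, alpha = 0 -> MFS."""
    return alpha * phi_F + (1.0 - alpha) * phi_M
```

In the full method, alpha is temporally smoothed before the two solutions are blended.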
As the coefficient α chooses between the MFS and the FEM solution at each time step n, sudden artificial variations in the epicardial potentials can occur between successive time steps. To avoid these abrupt variations, a temporal smoothing of the coefficient α is performed, as a weighted moving average over the time steps n − 4 to n + 4 (with weight 0.025 on the α^{n+4} term). For each time step n, the new approximate solution is then computed as

$$\Phi^n_h = \alpha^n \Phi^n_{h,F} + (1 - \alpha^n)\, \Phi^n_{h,M},$$

where α^n denotes the smoothed coefficient.

C. Simulated Data

The methods were evaluated using data generated with a detailed heart-torso model that was previously tailored to a patient [31].
The electrocardiogram (ECG) is caused by the electrical activation and subsequent relaxation of the heart muscle. This activation is carried by changes in the electrical potential across the membranes of the muscle cells themselves. The cells have a mechanism to amplify this activation, and electrically conductive links between the cells allow it to be carried from one cell to another. In a normal heart the activation spreads like a wave over the entire heart muscle, from a single site known as the sinus node, in which so-called pacemaker cells ensure the repetitive beating of the heart. The electrical behaviour of the cells has previously been captured in mathematical models by others. To simulate propagating activation we used a "monodomain" reaction-diffusion equation,

$$\beta C_m \frac{\partial V_m}{\partial t} = \nabla \cdot (G \nabla V_m) - \beta I_{\mathrm{ion}}(V_m, y), \qquad \frac{\partial y}{\partial t} = F(V_m, y),$$

where C_m is the membrane capacitance, β the amount of membrane surface per unit volume, G an effective conductivity tensor, V_m the transmembrane potential, and y a set of variables representing the membrane state [32]. Furthermore, F and I_ion are the two functions that constitute the membrane model, here the Ten Tusscher-Noble-Noble-Panfilov model [33] for the human ventricular myocyte: I_ion represents the ionic currents that the cell membrane generates and F the evolution of y. The simulations were performed on a hexahedral mesh of the ventricles with 0.2-mm resolution.
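For intuition about the structure of this reaction-diffusion system, here is a minimal 1-D explicit-Euler analogue with a simple cubic (Nagumo-type) reaction term standing in for the far more detailed Ten Tusscher-Noble-Noble-Panfilov membrane model; parameters and units are illustrative only:

```python
import numpy as np

def monodomain_1d(nx=200, nt=2000, dx=0.02, dt=0.01, D=0.001, a=0.1):
    """1-D monodomain sketch: dv/dt = D * v_xx + v (v - a) (1 - v),
    explicit Euler in time, zero-flux (mirrored) boundaries.

    v is a normalized transmembrane potential (0 = rest, 1 = excited);
    D plays the role of G / (beta * C_m) and a is the excitation threshold.
    """
    v = np.zeros(nx)
    v[:10] = 1.0                  # stimulate the left end of the fiber
    for _ in range(nt):
        lap = np.empty(nx)
        lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
        lap[0] = (v[1] - v[0]) / dx**2       # zero-flux boundary
        lap[-1] = (v[-2] - v[-1]) / dx**2    # zero-flux boundary
        v = v + dt * (D * lap + v * (v - a) * (1.0 - v))
    return v

v = monodomain_1d()
# the excitation wave has advanced along the fiber but not yet reached the end
```

The bistable cubic term produces a propagating activation front, the same qualitative behaviour the full model exhibits on the 3-D ventricular mesh.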
Monodomain reaction-diffusion models are an efficient and well-accepted method to simulate cardiac electrical activity, but they render the electrical potential only across the cell membranes. The torso potential field Φ_T, measured outside the cells, was computed by solving, in a whole-torso model at 1-mm resolution, the equation

$$\nabla \cdot \big( (G_i + G_e) \nabla \Phi_T \big) = - \nabla \cdot ( G_i \nabla V_m ),$$

where G_i and G_e are the effective conductivity tensors for the coupled myocytes and the extracellular space, respectively [32]. Zero-flux boundary conditions on the torso surface ensured that this equation had a unique solution, up to an arbitrary offset potential.
Damage patterns consisted of sheets of connective tissue aligned with the 3 coordinate axes [34]. The sheets were 0.4 mm thick and were placed at 1-mm intervals. 20% of the surface of the sheets consisted of randomly placed holes. The sheets extended through the entire myocardium but not through the thin layer that represents the cardiac Purkinje system in the model. The stability and robustness of the inverse approaches were tested by adding Gaussian measurement noise to the torso surface potentials at noise levels of 0.5, 1 and 2 mV. We added to the torso signal a noise matrix formed as the product of the noise level (0.5, 1 or 2 mV) and a matrix randomly generated with the function "randn" in Matlab, where the number of rows is the number of torso electrodes and the number of columns is the length of the signal.
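This noise construction amounts to the following (a sketch using numpy's equivalent of Matlab's `randn`):

```python
import numpy as np

def add_measurement_noise(bsp, level_mv, rng=None):
    """Add zero-mean Gaussian noise to body surface potentials.

    bsp:      (n_electrodes, n_samples) torso signal matrix
    level_mv: noise level in mV (e.g. 0.5, 1 or 2)
    """
    rng = np.random.default_rng(rng)
    # Matlab equivalent: bsp + level_mv * randn(size(bsp))
    noise = level_mv * rng.standard_normal(bsp.shape)
    return bsp + noise

noisy = add_measurement_noise(np.zeros((4, 1000)), 1.0, rng=0)
```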
As testing data for the inverse methods we used a set of 10 simulations of ventricular paced beats, and 8 simulations in which the ventricles were stimulated as if the activation were coming from the sinus node. Although the atria and sinus node were not included in the model, we will refer to these beats as sinus rhythm from here on.
The mesh used for the inverse problem was a structured triangular/tetrahedral mesh (Fig. 1), while the meshes for the forward problem used to create the simulated data were Cartesian grids with a much finer mesh size (Fig. 2). Moreover, organs in the torso volume were taken into account for the creation of the simulated data, while they were not in the inverse problem.

D. Evaluation Metrics
PM inverse solutions were compared to inverse solutions computed with the BEM, the FEM and the MFS using zero-order Tikhonov regularization and the CRESO criterion to compute the regularization parameter. To quantify the accuracy of the inverse solutions, we computed the spatial relative error (RE) and the correlation coefficient (CC) between the simulated "ground truth" and the reconstructed potentials as

$$\mathrm{RE} = \sqrt{ \frac{ \sum_{i=1}^{N} \left( \Phi^C_i - \Phi^M_i \right)^2 }{ \sum_{i=1}^{N} \left( \Phi^M_i \right)^2 } }$$

and

$$\mathrm{CC} = \frac{ \sum_{i=1}^{N} \left( \Phi^C_i - \bar{\Phi}^C \right) \left( \Phi^M_i - \bar{\Phi}^M \right) }{ \sqrt{ \sum_{i=1}^{N} \left( \Phi^C_i - \bar{\Phi}^C \right)^2 } \sqrt{ \sum_{i=1}^{N} \left( \Phi^M_i - \bar{\Phi}^M \right)^2 } },$$

where N is the number of epicardial nodes at which the cardiac potentials are simulated and reconstructed, Φ^C_i and Φ^M_i are the inverse-computed potential and the corresponding simulated potential at epicardial node i, and Φ̄^C and Φ̄^M are the spatial means of the inverse-computed and simulated potentials, respectively.
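These two metrics translate directly into code; a minimal sketch:

```python
import math

def relative_error(phi_c, phi_m):
    """Spatial relative error between computed (phi_c) and simulated
    (phi_m) potentials over the epicardial nodes."""
    num = sum((c - m) ** 2 for c, m in zip(phi_c, phi_m))
    den = sum(m ** 2 for m in phi_m)
    return math.sqrt(num / den)

def correlation(phi_c, phi_m):
    """Pearson correlation coefficient between computed and simulated
    potentials over the epicardial nodes."""
    n = len(phi_c)
    mc = sum(phi_c) / n
    mm = sum(phi_m) / n
    num = sum((c - mc) * (m - mm) for c, m in zip(phi_c, phi_m))
    den = math.sqrt(sum((c - mc) ** 2 for c in phi_c)) * \
          math.sqrt(sum((m - mm) ** 2 for m in phi_m))
    return num / den
```

Note that RE is sensitive to amplitude errors (a doubled reconstruction gives RE = 1) while CC only reflects the shape of the potential pattern.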
Activation times were determined using a spatio-temporal algorithm developed for ECGI potentials [35]. For activation maps, the RE and CC were also computed between simulated and inverse-computed values.
Epicardial breakthrough sites were identified from activation maps as sites with locally early activation. When activation was near-simultaneous (< 2 ms variation) over a large zone of tissue, the center of this area was taken as the breakthrough site. The localization error (LE) of these breakthrough sites was calculated using the geodesic distance, and the time difference (breakthrough offset) between the ECGI-detected and the nearest simulated breakthrough was calculated.
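Geodesic distance on a triangulated surface is commonly approximated by shortest paths along mesh edges; a minimal Dijkstra sketch (illustrative, not necessarily the study's implementation):

```python
import heapq
import math

def geodesic_distance(vertices, edges, src, dst):
    """Approximate geodesic distance between two mesh vertices using
    Dijkstra's algorithm on the edge graph.

    vertices: list of (x, y, z) coordinates
    edges:    list of (i, j) vertex-index pairs
    """
    # build an adjacency list weighted by Euclidean edge lengths
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        d = math.dist(vertices[i], vertices[j])
        adj[i].append((j, d))
        adj[j].append((i, d))

    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue            # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf             # dst unreachable
```

Edge-path distances slightly overestimate the true surface geodesic, but on meshes as dense as the one used here the difference is small relative to the reported LEs.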
Graphing and statistical analysis were performed using the statistics software GraphPad Prism 8.3. A test for normality was performed using the Shapiro-Wilk test. For each metric, the significance of the differences was tested using a repeated-measures one-way ANOVA with inverse method as the independent variable, or a two-way ANOVA with noise level and inverse method as the independent variables. p < 0.05 was defined as significant. Data are expressed as median and interquartile range.

III. RESULTS

Fig. 3 shows examples of simulated and reconstructed electrograms during a paced beat, captured at (A) late, (B) intermediate and (C) early activation sites. At each site, the optimal method selected by the PM is seen where the electrogram traces overlap. For these nodes the MFS was selected most often. However, for a few key time steps the FEM was selected as optimal, as seen during the steep downslope in panel A. This resulted in a higher correlation of electrograms reconstructed with the PM compared to electrograms reconstructed with either the MFS or the FEM alone. Fig. 4 shows a spatial map of the percentage of the electrogram that the PM selected from each method (FEM or MFS) during a paced beat. Most nodes followed the same trend as the example electrograms shown in Fig. 3: for 80% of heart nodes, the PM selected the MFS for more than 70% of the electrogram.

A. Paced Beats
For each simulated pacing sequence, the mean CC and the mean RE for potentials were computed over all nodes. Fig. 5 presents a comparison of the mean CC and mean RE obtained with the different methods across the 10 pacing cases. Potentials reconstructed with the PM had a higher mean CC and a lower mean RE than those reconstructed by the MFS, the FEM or the BEM alone (p < 0.01). The difference in performance between the PM and the MFS was smaller than between the PM and the FEM, reflecting the fact that the MFS was selected more often by the PM.
Figs. 6 and 7 show two examples of epicardial activation maps during a paced beat, corresponding to potentials directly obtained from the simulation data, or those reconstructed with the MFS, the FEM, the BEM or the PM. The black sphere marks the true pacing site location and the white sphere marks the pacing site detected by each of the reconstruction methods. In these examples, though the global pattern of activation is similar between the methods in both cases, key differences exist. The FEM outperformed the BEM and the MFS in localizing pacing sites (e.g., LE = 5.1, 6.4 and 9.1 mm respectively in Fig. 6), although the activation maps were not as smooth as with the MFS, particularly in Fig. 7. The PM localized the epicardial pacing sites more accurately than the FEM, the BEM or the MFS (LE = 1.5 mm and 4.1 mm in Figs. 6 and 7 respectively), and had smoother activation maps than the FEM and the BEM, though patchier than the MFS (CC = 0.92 and RE = 0.14 in Fig. 6; CC = 0.84 and RE = 0.19 in Fig. 7). Fig. 8 shows the CC and RE between activation maps, and the LE of pacing sites, for each method over the 10 paced beats. Activation maps reconstructed by the PM showed a higher correlation to the ground truth activation maps than those reconstructed by the MFS, the FEM or the BEM alone (p = 0.04). The PM also reduced the RE for activation maps (p = 0.006). Though all methods succeeded in capturing the general location of the pacing sites, the LE obtained by the PM was substantially smaller (p = 0.02) than that obtained by the standard ECGI methods (Fig. 8).

B. Sinus Rhythm

Fig. 9 shows examples of simulated and ECGI electrograms during sinus rhythm, captured at (A) late, (B) intermediate and (C) early activation sites. As with the pacing sequences, the PM chose the MFS more often than the FEM for the selected electrograms: for 82% of heart nodes the MFS was selected for more than 70% of the electrogram (Fig. 10).
For each simulated sinus beat, the mean CC and the mean RE for potentials were computed over all nodes. Fig. 11 shows a comparison of the mean CC and mean RE of the different methods across all sinus rhythm data. Potentials reconstructed by the PM were more correlated to the simulated potentials than those reconstructed by the MFS or the FEM (p < 0.0001). The PM also reduced the RE for electrograms (p < 0.0001). Fig. 12 presents an example of epicardial activation maps in normal sinus rhythm obtained directly from the simulation data, as well as those reconstructed with the MFS, the FEM, the BEM and the PM. The black spheres mark the true breakthrough sites and the white spheres mark the breakthrough sites detected using the classic ECGI methods. In this example, the FEM localized breakthrough sites better than the MFS and the BEM although, as with the pacing data, the activation pattern was less smooth. The PM provided a more accurate activation map pattern (LE1 = 18.1, LE2 = 3.3 and LE3 = 4.6 mm) than the FEM, the BEM and the MFS, localizing the epicardial breakthrough sites more accurately and producing a relatively smooth propagation pattern similar to the simulated one (CC = 0.50 and RE = 0.32). Fig. 13 presents the CC and RE for activation maps with the different methods in sinus rhythm. The PM substantially improved the CC and reduced the RE of activation maps compared to the MFS, the FEM and the BEM (p < 0.001). Table I gives a summary of numerical comparisons between ECGI-detected and the nearest true breakthrough sites. For all sinus rhythm cases, the MFS captured only 2 of the 3 breakthrough sites, while the FEM and the BEM sometimes missed one breakthrough. The PM, however, always succeeded in detecting all 3 breakthrough sites.
The LE obtained by the PM was smaller than that obtained by the FEM, the BEM and the MFS (p < 0.0001), and the time differences between ECGI-detected breakthroughs and the nearest actual breakthrough were also smaller with the PM (p = 0.04).

C. Robustness to Noise

1) Pacing: Fig. 14 shows a comparison of the mean CC and mean RE values for potentials (panels A and B respectively), as well as the CC and RE values for activation times (panels C and D respectively), obtained with the different reconstruction methods across the entire pacing data set at different noise levels. All of the methods are sensitive to noise, with CC decreasing (panel A) and RE increasing (panel B) at higher noise levels. Despite this sensitivity, potentials reconstructed with the PM showed a higher CC and lower RE with respect to the true potentials than those reconstructed by the MFS, the FEM and the BEM at all noise levels (p < 0.0001). Furthermore, the reduction in CC and increase in RE with each noise level was smaller for the PM than for the MFS, the FEM and the BEM, demonstrating that it was less sensitive to the addition of noise. For activation maps, as was seen with the reconstructed potentials, accuracy is reduced with increasing noise levels. Likewise, activation maps reconstructed by the PM show a higher CC (panel C) and lower RE (panel D) with respect to the ground truth activation maps than those reconstructed by the MFS, the FEM and the BEM at all noise levels (p < 0.0001). The PM again showed the smallest increase in RE and decrease in CC of all the methods.

2) Sinus Rhythm: Fig. 15 presents a comparison of the mean CC and mean RE values for potentials (panels A and B respectively), as well as the CC and RE values for activation times (panels C and D respectively), computed for each sinus rhythm simulation for the different reconstruction methods at different noise levels added to the body surface potential distribution. Despite the sensitivity of the methods to noise, potentials reconstructed with the PM showed a higher CC (panel A) and lower RE (panel B) than those reconstructed by the MFS, the FEM and the BEM. Furthermore, the degradation in RE with the PM was small compared to the FEM, the BEM and the MFS. For the activation maps in sinus rhythm, as with the potentials, the PM reconstructed maps with a higher CC (panel C) and lower RE (panel D) than the other methods at all noise levels. Moreover, the reduction in CC and increase in RE with increasing noise level was larger for the MFS, the FEM and the BEM than for the PM.

IV. DISCUSSION
In this study we present an innovative ECGI method to improve the accuracy of reconstructed epicardial electrical activity. This novel approach, called the Patchwork Method (PM), combines the solutions obtained with two classic ECGI numerical methods (FEM and MFS), using another classic numerical approach, the BEM, to compare their residuals in order to select the most accurate method.
In our numerical study, globally the FEM performed better than the MFS and the BEM for localizing pacing sites and epicardial breakthroughs in sinus rhythm, while the MFS performed better for the accuracy of the potential reconstruction. The differences between the methods arise from 1) the resolution of the meshes used by the FEM and the BEM, and 2) the use of virtual sources in the MFS computation of the transfer matrix. These two factors also impact the choice of regularization parameter selected for each method. As has previously been observed in the literature [11], in this study we found the MFS to be more stable with respect to the level of regularization than the FEM. Inverse solutions computed over the same range of regularization parameters (0.001 to 0.5) showed that the MFS solution accuracy remained relatively constant regardless of the λ value (CC = 0.62 ± 0.05 for the MFS vs 0.53 ± 0.07 for the FEM).
By combining the FEM and the MFS solutions into the PM solution we obtained better results than for either method alone, outperforming both the FEM and the MFS where they each perform best. Interestingly, the improvement obtained with the PM for activation times was most significant in sinus rhythm, which is more difficult to reconstruct than paced beats. This study is to our knowledge the first one to discuss results using different numerical methods in the case of sinus rhythm. Finally, we demonstrated the stability and robustness of the PM to Gaussian measurement noise added to the body surface potential distribution, with the PM consistently out-performing the individual methods.
By analysing the methods selected at each moment in time and space, this new approach may also be useful in determining the sources of the inaccuracies seen with different inverse methods. For example, in this study during paced beats, for 80% of heart nodes the PM selected the MFS for more than 70% of the electrogram. However, the PM predominantly selected the FEM for nodes near the stimulation site. As the spatial map (Figs. 4 and 10) changes according to the site of stimulation, this suggests that the selection of either the MFS or the FEM depends on the activation wavefront. We hypothesize that this may be related to the level of regularization computed for each method. The MFS uses more regularization (λ = 0.0997 ± 0.002), which likely flattens activity occurring at the start of the activation sequence and near the stimulation site. The FEM is less regularized (λ = 0.0542 ± 0.001), and so is potentially more capable of capturing these early activity sites (although it also produces more fractionated EGMs). However, this hypothesis does not hold in sinus rhythm, where electrical activation is more complex and the MFS is selected more often for nodes near epicardial breakthrough sites.
Since the inverse problem of electrocardiography was first described over 50 years ago, many novel methods to solve it have been developed, including numerical methods for the forward problem [9]-[11], regularization techniques [19], [36], [37] and methodologies to compute the regularization parameter [17], [26], [30]. Several studies have aimed to compare methods in order to determine an optimal approach [25]-[27]. But as the number of new methods grows, the number of possible combinations of the different methods also expands. For example, in one of the most recent studies [25], two different numerical approaches were combined with two different regularization methods and five different methods to compute the regularization parameter, resulting in fifteen different inverse problem pipelines. Furthermore, there appears to be no single best approach. Rather, the optimal method depends not only on the data set or activation sequence [25], but also on the level of noise [26], [27] and the geometric error present in the data [27], among other factors. One advantage of the PM is therefore its versatility: its potential to combine the results of any ECGI methods, including the MFS and the FEM with various regularization parameters, to provide the optimal solution regardless of the data used or the noise/geometric error present.
The idea of combining numerical methods to solve the inverse problem of electrocardiography has previously been suggested. Bradley et al. [38] built a new model by coupling the FEM and BEM (F-BEM) in order to construct the transfer matrix. F-BEM models use the BEM for isotropic and homogeneous regions of the torso (e.g., the lungs and the torso cavity), and the FEM for anisotropic zones (e.g., the skeletal muscle). The main advantage of this coupled model is that it uses the appropriate modeling technique in the appropriate region. This reduces the overall size of the problem while still allowing the model to be refined in areas of high gradients (i.e., around the heart), without the need to propagate the same mesh density to areas with low gradients (e.g., the outer torso surface). Though this overcomes several limitations of the individual numerical methods, the model still yields a single inverse solution whose performance depends on the factors presented above.
Optimizing the inverse solution by minimizing the torso residual has also previously been proposed. Bergquist et al. [39] proposed improving ECGI accuracy with a geometric correction method that minimizes the cardiac geometry localization error by reducing errors due to respiratory movement. To improve the localization of the heart and the ECGI accuracy, they used an iterative coordinate descent optimization. In each iteration, a single inverse solution was calculated with the current per-beat estimates of cardiac location, using BSPs from multiple beats. This solution was then used to estimate new cardiac positions for each beat by minimizing the forward solution error.
One limitation of our work is the use of simulated data. Although we added Gaussian noise, other real-world sources of error were not evaluated, and validation using experimental or clinical data is required. We expect that the accuracy of the PM on clinical data will be lower than on simulated data, owing to the presence of geometric error and other uncertainties. However, because the PM takes the best method in each zone and at each time step, we expect that even on experimental or clinical data it will improve the reconstruction of electrical potentials and activation times on the heart compared to classic ECGI approaches. A final limitation of the PM is the computational time required. Solving two (or more) inverse and forward problems makes the PM a computationally demanding inverse method, which may limit its applicability in the clinical field compared to the FEM, the BEM or the MFS individually. However, in certain cases the improvement in reconstruction accuracy may be worth the computational demand of the method.
The PM implementation presented uses a single regularization method (zero-order Tikhonov) and a single method to define the regularization parameter (CRESO). Previous studies have demonstrated that reconstruction accuracy depends on both the regularization method and the parameter selection method, with the optimal choice depending on the numerical method used. For example, Karoui et al. [25] demonstrated a variability in CC and RE of up to 30% for a single numerical method, depending on the regularization parameter selection method used. An advantage of the PM approach is that the number of methods that can be combined and optimized is, in theory, unlimited. In future work we will therefore incorporate different regularization and parameter selection methods into the PM workflow.
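For reference, the zero-order Tikhonov solution mentioned above has a standard closed form. The sketch below is illustrative: the regularization parameter λ is supplied directly, whereas in this study it would be selected per method by the CRESO criterion (not reproduced here).

```python
import numpy as np

def tikhonov_zero_order(A, y, lam):
    """Zero-order Tikhonov regularization of the ill-posed system A x ≈ y:
    minimize ||A x - y||^2 + lam^2 ||x||^2.  The closed-form solution via
    the normal equations is x = (A^T A + lam^2 I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ y)
```

For a well-conditioned A and small λ this approaches the least-squares solution; larger λ shrinks the solution norm, which is the flattening effect discussed for the MFS above.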
In this study, we chose a simple method to temporally smooth the parameter α, in order to avoid abrupt variations in the epicardial potentials between time instances caused by jumps between the selected MFS and FEM solutions. In the future, we aim to evaluate alternative methods to smooth or select α that could further improve the PM.
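As an illustration of this kind of temporal smoothing, a centered moving average over the per-time-step parameter α already damps abrupt method switches. This is a stand-in sketch, not the exact scheme used in this study; the function name and window choice are ours.

```python
import numpy as np

def smooth_alpha(alpha, window=5):
    """Illustrative temporal smoothing of a per-time-step selection
    parameter alpha with a centered moving average, to damp abrupt
    jumps between the MFS and FEM solutions.  'same' mode keeps the
    signal length; edges are implicitly zero-padded."""
    kernel = np.ones(window) / window
    return np.convolve(alpha, kernel, mode="same")
```

Any low-pass scheme (e.g., exponential smoothing) would serve the same purpose; the point is only that the blended solution varies smoothly in time.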
In future work, we will also test the PM using more complex volume conductor models, in order to further study the impact of the lungs and other organs on reconstruction accuracy.

V. CONCLUSION
We have presented a novel ECGI approach to reconstruct cardiac electrical activity that optimizes reconstruction accuracy by selecting the best reconstruction method for each node and time step. This new approach, called the Patchwork Method, helps overcome some of the limitations of current numerical methods, improving the accuracy of reconstructed potential and activation maps and the localization of epicardial breakthroughs during pacing and sinus rhythm. By ensuring the optimal solution is always reconstructed, this new approach could be useful in determining the limits in accuracy of previously developed ECGI approaches based on epicardial sources. Furthermore, this novel method for improving ECGI solutions could be applicable not only to the inverse electrocardiography problem but also to other inverse problems.