A Unifying Mathematical Definition of Particle Methods

Particle methods are a widely used class of algorithms for computer simulation of complex phenomena in various fields, such as fluid dynamics, plasma physics, molecular chemistry, and granular flows, using diverse simulation methods, including Smoothed Particle Hydrodynamics (SPH), Particle-in-Cell (PIC) methods, Molecular Dynamics (MD), and Discrete Element Methods (DEM). Despite the increasing use of particle methods driven by improved computing performance, the relation between these algorithms remains formally unclear, and a unifying formal definition of particle methods is lacking. Here, we present a rigorous mathematical definition of particle methods and demonstrate its importance by applying it to various canonical and non-canonical algorithms, using it to prove a theorem about multi-core parallelizability, and designing principled scientific computing software based on it. We anticipate that our formal definition will facilitate the solution of complex computational problems and the implementation of understandable and maintainable software frameworks for computer simulation.


I. INTRODUCTION
Particle methods are a classic and widespread class of algorithms for computer simulation, with applications ranging from computational plasma physics [19] to computational fluid dynamics [11]. Historically, some of the first computer simulations in these domains used particle methods [8], [37], and the field is still under active development [6], [9]. A key advantage of particle methods is their versatility, as they can simulate both discrete and continuous phenomena stochastically or deterministically.
In simulations of discrete models, particles naturally represent the discrete entities of the model, such as atoms in molecular dynamics simulations [2], cars in simulations of road traffic [28], or grains of sand in discrete-element simulations of granular flows [38]. When simulating continuous models or numerically solving differential equations, particles represent mathematical collocation or Lagrangian tracer points of the discretization of the continuous fields [10], [32]. The evaluation of differential operators on these fields can directly be approximated on the particles using numerical methods such as Smoothed Particle Hydrodynamics (SPH) [16], [30], Reproducing Kernel Particle Methods (RKPM) [27], Particle Strength Exchange (PSE) [12], [14], and Discretization-Corrected PSE (DC-PSE) [4], [36]. Also, simulations of hybrid discrete-continuous models are possible, as often done in plasma physics, where discrete point charges are coupled with continuous electric and magnetic fields [19]. In addition to their versatility, particle methods can efficiently be parallelized on shared- and distributed-memory computers [20], [21], [22], [33], [35]. Furthermore, they simplify simulations in complex [34] and time-varying [3] geometries, as no computational mesh needs to be generated and maintained. Beyond the field of simulations, structurally similar algorithms have been developed, such as particle-based image processing methods [1], [7], point-based computer graphics [17], and computational optimization algorithms using point samples [18], [31].
Even though all these methods share structural similarities, a formal description of those similarities is missing. The current understanding of particle methods mainly relies on qualitative and loose notions, such as "particles" and their "interaction" and "evolution," which have not yet been rigorously described and rationalized as standalone concepts or together to formally define particle methods as an algorithmic class.
Formal mathematical descriptions, models, or definitions provide a rigorous way of formulating concepts and form the basis for well-founded scientific discussions in computer science. They allow more profound insights into the described subject based on mathematical theorems that are incontrovertible under the given definition. Mathematical theorems in computer science are extremely valuable, for example, to represent and query knowledge with the rigorous mathematical structure of answer set programming [15], [25], or for runtime complexity studies of algorithms in automata theory [5].
Despite their broad use, a formal mathematical definition of particle methods that covers all structural similarities of these algorithms from different fields is lacking. Hence, what constitutes a particle method, and what does not, remains unclear, hindering potential development in various research areas.
Here, we fill this knowledge gap by presenting a formal definition that unifies all particle methods and enables the design of novel particle methods also for non-canonical problems. We showcase this by formulating various algorithms as particle methods, ranging from classic SPH over molecular dynamics to Gaussian elimination. Furthermore, we use our definition for a theoretical analysis of the parallelizability of particle methods on multi-core computer systems and for designing and implementing a principled software framework for particle methods. The presented definition of particle methods and its applications open up a new way of investigating and developing computer simulation algorithms and software systems based on these principles.

II. NOMENCLATURE AND NOTATION
We introduce the notation and nomenclature used and define the underlying mathematical concepts. We use bold symbols for tuples of arbitrary length, e.g., p ∈ P*; regular symbols with subscript indices for the elements of these tuples, e.g., p = (p_1, ..., p_n); vertical bars around a tuple for the number of elements it contains, e.g., |p| := n ∈ N; regular symbols for tuples of determined length with specific element names, e.g., p = (a, b, c) ∈ A × B × C; an indexed tuple of determined length with named elements, e.g., p_j = (a_j, b_j, c_j); underline for vectors, e.g., v ∈ A^n; and, if A = R, |v| := sqrt(v_1^2 + ... + v_n^2) for the Euclidean length of a vector.

Definition 1: The Kleene star A* is the set of all tuples of elements of a set A, including the empty tuple (). It is defined using the Cartesian product as

A* := ∪_{n ∈ N_0} A^n,  with A^0 := {()}. (1)

Definition 2: The composition operator *_h of a binary function h : A × B → A applies h successively to an element a ∈ A and the elements of a tuple (b_1, ..., b_n) ∈ B*:

a *_h () := a, (2)
a *_h (b_1, ..., b_n) := h(a *_h (b_1, ..., b_{n-1}), b_n), (3)

so that

a *_h (b_1, ..., b_n) = h(... h(h(a, b_1), b_2) ..., b_n). (4)

Definition 3: The concatenation ∘ : A* × A* → A* of tuples (a_1, ..., a_n), (b_1, ..., b_m) ∈ A* is defined as

(a_1, ..., a_n) ∘ (b_1, ..., b_m) := (a_1, ..., a_n, b_1, ..., b_m). (5)

Definition 4: We define the construction of a subtuple b ∈ A* of a ∈ A*. Let f : A* × N → {⊤, ⊥} (⊤ = true, ⊥ = false) be the condition for an element a_j of the tuple a to be in b. Then

b = (a_j ∈ a : f(a, j)) (6)

is the subtuple of a containing exactly those elements a_j for which f(a, j) = ⊤, in the order in which they appear in a.

Definition 5: Let α = (a_1, ..., a_n) ∈ A_1 × ... × A_n be a potentially inhomogeneous tuple of length n, and let j_1, ..., j_m ∈ {1, 2, ..., n}. Then

α_(j_1, j_2, ..., j_m) := (a_{j_1}, a_{j_2}, ..., a_{j_m}). (7)
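The tuple operations above translate directly into code. The following Python sketch (Python is used for illustration throughout; the paper's software prototype is written in C++) implements the composition operator, concatenation, subtuple construction, and projection. The function names are ours, not the paper's:

```python
from functools import reduce

def compose(h, a, bs):
    """Composition operator *_h (Definition 2): fold the binary function h
    over the tuple bs, i.e., h(...h(h(a, b1), b2)..., bn)."""
    return reduce(h, bs, a)

def concat(a, b):
    """Concatenation of two tuples (Definition 3)."""
    return a + b

def subtuple(a, f):
    """Subtuple of a keeping the elements a_j for which f(a, j) is true
    (Definition 4); the order of elements is preserved."""
    return tuple(a[j] for j in range(len(a)) if f(a, j))

def project(alpha, indices):
    """Projection onto selected components (Definition 5); indices are
    1-based, as in the paper's notation."""
    return tuple(alpha[j - 1] for j in indices)
```

Note that `compose(h, a, ())` returns `a`, matching the base case of the recursive definition.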

III. DEFINITION OF PARTICLE METHODS
To develop a formal definition of the algorithmic class of particle methods, we choose a formulation that naturally relates to the structural elements of practical implementations, i.e., to the methods and subroutines often found in the particle methods literature and software. More specifically, our definition is based on observing and analyzing many existing particle methods and distilling their structural commonalities. These commonalities are also reflected in the terminology. "Particles" or "points" are the basic objects that are manipulated throughout the algorithms [1], [11], [17], [19], [31], [32], [37], [38]. "Particles" carry algorithm-specific attributes or properties. "Particles" can "interact" [17], [28], [32], [37], [38] with each other within a "neighborhood," "surrounding," or "sampling radius" [1], [17], [19], [28], [31], [32], [37], [38]. In addition, "particles" can "evolve" [1], [17], [31], [32], [38]. The term "evolve" is used inconsistently: it refers either to the change of a single "particle" depending only on its own properties, or to the change of the whole system of "particles." The change of the whole system is more often called a "step," "time step," or "state transition" [11], [17], [19], [28], [31], [32], [37], [38]. Using some of these terms and their underlying concepts, we subdivide our definition of particle methods into three parts: first, the definition of the general particle method algorithm structure, including its structural components, namely data structures and functions; second, the definition of a particle method instance, which describes a specific problem or setting that can then be solved or simulated using the particle method algorithm; third, the definition of the particle state transition function.
The state transition function describes how a particle method instance proceeds from the first state to the last using the data structures and functions from the particle method algorithm. In summary, we present a definition of particle methods in the most general, sequential form. For deeper explanations and examples, the reader is referred to the supplementary material SM1.

A. PARTICLE METHOD ALGORITHM
The definition of a particle method algorithm encapsulates the structural elements of its implementation in a small set of data structures and functions that need to be specified at the onset. This approach follows a similar logic as some definitions of Turing machines [24]. Both concepts describe state-transition systems working on discrete objects.

Definition 6: A particle method algorithm is a 7-tuple (P, G, u, f, i, e, e̊), consisting of the two data structures

G := A_1 × ... × A_|G| the global variable space, (8)
P := B_1 × ... × B_|P| the particle space, (9)

such that [G × P*] is the state space of the particle method, and five functions:

u : G × P* × N → N* the neighborhood function, (10)
f : G → {⊤, ⊥} the stopping condition, (11)
i : G × P × P → P × P the interact function, (12)
e : G × P → G × P* the evolve function, (13)
e̊ : G → G the evolve function of the global variable. (14)
These are the only objects to be defined by the user to specify a particle method algorithm.
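Since Definition 6 reduces a particle method algorithm to five user-defined functions over two data structures, it can be captured in code as a record of callables. Below is a minimal Python sketch; the class name and the toy algorithm (particles are numbers that double every step, with no interactions) are purely illustrative and not from the paper:

```python
from typing import Callable, NamedTuple

class ParticleMethodAlgorithm(NamedTuple):
    """Sketch of the 7-tuple (P, G, u, f, i, e, e̊) of Definition 6.
    P and G stay implicit: particles and the global variable are plain
    Python values."""
    u: Callable    # neighborhood function: (g, p, j) -> tuple of indices
    f: Callable    # stopping condition:    g -> bool (True = halt)
    i: Callable    # interact function:     (g, p_j, p_k) -> (p_j', p_k')
    e: Callable    # evolve function:       (g, p_j) -> (g', tuple of particles)
    e_g: Callable  # evolve of the global variable: g -> g'

# toy algorithm: no interactions; each particle doubles per step;
# the global variable is the remaining number of steps
toy = ParticleMethodAlgorithm(
    u=lambda g, p, j: (),
    f=lambda g: g == 0,
    i=lambda g, pj, pk: (pj, pk),
    e=lambda g, pj: (g, (2 * pj,)),
    e_g=lambda g: g - 1,
)
```

The user supplies exactly these five functions; everything else (the state transition) is generic.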

B. PARTICLE METHOD INSTANCE
Using the above definitions of the particle method algorithm and its data structures and functions, we define an instance of a particle method as a specific realization.
Definition 7: An initial state [g_1, p_1] ∈ [G × P*] defines a particle method instance for a given particle method algorithm (P, G, u, f, i, e, e̊). The instance consists of an initial value g_1 ∈ G of the global variable and an initial tuple of particles p_1 ∈ P*.

C. PARTICLE STATE TRANSITION FUNCTION
In a specific particle method, the elements of the tuple (P, G, u, f, i, e, e̊) from (8)-(14) need to be specified. Given a specific starting point defined by an instance, the algorithm proceeds in iterations. Each iteration corresponds to one state transition step that advances the current state [g_t, p_t] of the particle method to the next state [g_{t+1}, p_{t+1}], starting at the initial state [g_1, p_1]. The state transition uses the functions u, i, e, e̊ to determine the next state. The state transition function S generates a series of these state transition steps until the stopping function f is true. The so-calculated final state is the result of the state transition function. The state transition function is the same for every particle method and does not need to be defined by the user.

Definition 8: We define the state transition function using three interact sub-functions (ι_I, ι_{I×U}, ι_{N×U}), two evolve sub-functions (ε_I, ε_N), and the state transition step s. These functions build upon each other. The interact sub-functions manipulate only a particle tuple p and ultimately compute all interactions of each particle with all its neighbors. The first interact sub-function, ι_I : G × P* × N × N → P*, calculates one interaction and therefore changes up to two particles in the particle tuple p = (p_1, ..., p_|p|). The second interact sub-function, ι_{I×U} : G × P* × N → P*, builds on ι_I and calculates for one particle the interactions with all its neighbors given by the neighborhood function u. The result is a potential change of all involved particles in the particle tuple p. The third interact sub-function, ι_{N×U} : G × P* → P*, completes the interaction. It uses ι_{I×U} to calculate the interactions of all particles with their respective neighbors, leading to a change of p in potentially every particle. The first evolve sub-function, ε_I, calculates the evolution of one particle. The result is stored in the global variable and appended to an intermediate particle tuple q. The second evolve sub-function, ε_N, calculates the evolution of all particles. The result is returned in the global variable and a new particle tuple. The state transition step s brings all sub-functions together and advances the particle method by one iteration: it first applies ι_{N×U}, then ε_N, and finally e̊. Finally, the state transition function S repeatedly applies s, starting from the initial state, and advances the particle method instance to the final state, which is reached as soon as f evaluates to ⊤. We illustrate the state transition function and its sub-functions with the Nassi-Shneiderman diagram in Fig. 1. This is the most generic form of the state transition function, without further constraints and for sequential computing. Further constraints can be imposed, leading to more specific state transition functions valid for a subset of particle methods.
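The generic sequential state transition can be sketched compactly in code. The following Python function is an illustrative reading of Definition 8, not the paper's reference implementation: the nested loops realize ι_{N×U}, the evolve loop realizes ε_N (where e may create or destroy particles), and the final assignment applies e̊. The toy example below it is ours:

```python
def state_transition(u, f, i, e, e_g, g, p):
    """Generic sequential state transition function S: repeat the
    interaction loop, the evolve loop, and the global-variable evolution
    until the stopping condition f holds."""
    while not f(g):
        q = list(p)
        # interaction loop: every particle interacts with all its neighbors
        for j in range(len(q)):
            for k in u(g, tuple(q), j):
                q[j], q[k] = i(g, q[j], q[k])
        # evolve loop: e may return zero, one, or several particles
        new_p = []
        for pj in q:
            g, created = e(g, pj)
            new_p.extend(created)
        p = tuple(new_p)
        g = e_g(g)  # evolve the global variable
    return g, p

# toy example: particles are numbers; a pull interaction averages a
# particle with its right neighbor; evolve adds 1; g = (t, t_end)
g, p = state_transition(
    u=lambda g, p, j: (j + 1,) if j + 1 < len(p) else (),
    f=lambda g: g[0] >= g[1],
    i=lambda g, pj, pk: ((pj + pk) / 2, pk),
    e=lambda g, pj: (g, (pj + 1,)),
    e_g=lambda g: (g[0] + 1, g[1]),
    g=(0, 1), p=(0.0, 4.0),
)
```

One step of the toy example first averages the first particle with its neighbor (0 becomes 2), then increments both particles, and finally advances the step counter.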

IV. APPLICATIONS OF THE DEFINITION OF PARTICLE METHODS
After having defined particle methods, a pivotal question arises: How can we leverage this mathematical definition in practice? To address this question, we focus on three main applications of the definition: First, we illustrate how several canonical and non-canonical particle methods can be formalized in the notational framework of the definition. Second, we use our definition to formally prove a theorem about the parallelization of particle methods. Third, we show the practical applicability of our definition by using it for designing and implementing scientific computing software.

A. KNOWN ALGORITHMS AS PARTICLE METHODS
We demonstrate how our definition provides a unifying notation for various canonical and non-canonical particle methods.
Here, we present a three-dimensional simulation of the continuum diffusion equation (24).
We refer the reader to the supplementary material for more examples. As canonical particle methods, we present there an n-dimensional perfectly elastic collision simulation (SM2.1), a three-dimensional Smoothed Particle Hydrodynamics (SPH) [16], [29] fluid simulation (SM2.2), and a one-dimensional Lennard-Jones [26] molecular dynamics (MD) simulation (SM2.3). As non-canonical examples, we present there how triangulation refinement (SM2.4) and Gaussian elimination (SM2.5) can be formulated as particle methods. We thereby illustrate the unifying nature of the presented definition.
Particle Strength Exchange (PSE) is a classic particle method that numerically solves partial differential equations in time and space [12], [13], [14], [36]. It provides a general framework for numerically approximating differential operators over sets of irregularly placed collocation points called particles. Here, we consider the example of using PSE to numerically solve the isotropic, homogeneous, normal diffusion equation in three dimensions,

∂w/∂t = D ∇²w, (24)

for a continuous and sufficiently smooth concentration field w(x, t), x ∈ R³, t ≥ 0, with diffusion constant D. We use the explicit Euler method for time integration and PSE for space discretization on equidistant points with spacing h. PSE approximates the Laplace operator ∇²w, a second-order differential operator in space, at location x_j using the surrounding particles at positions x_k as [13]:

∇²w(x_j) ≈ (1/ε²) Σ_{k≠j} (w(x_k) − w(x_j)) η_ε(x_j − x_k) h³. (25)

Using PSE theory, we determine the operator kernel η_ε such as to yield an approximation error that converges with the square of the kernel width ε:

η_ε(x) := (4 / (ε³ π^{3/2})) e^{−|x|²/ε²}. (26)

The kernel's support is all of R³. However, the exponential quickly drops below the machine precision of a digital computer, so it is customary to introduce a cut-off radius r_c that limits particle interactions to non-trivial computations. The approximation of the Laplace operator then is:

∇²w(x_j) ≈ (1/ε²) Σ_{0 < |x_j − x_k| ≤ r_c} (w(x_k) − w(x_j)) η_ε(x_j − x_k) h³. (27)

The explicit Euler method approximates the continuous time derivative ∂w/∂t at discrete points in time t_n, n ∈ N, with time step size Δt := t_{n+1} − t_n:

∂w/∂t (x, t_n) ≈ (w(x, t_{n+1}) − w(x, t_n)) / Δt. (28)

Hence, the above differential equation is discretized as:

w(x_j, t_{n+1}) = w(x_j, t_n) + (Δt D h³ / ε²) Σ_{0 < |x_j − x_k| ≤ r_c} (w(x_k, t_n) − w(x_j, t_n)) η_ε(x_j − x_k). (29)

To numerically solve (24), this expression is evaluated over the particles at locations x_j with concentration property w_j at time points t_n. For simplicity, we consider a free-space simulation without boundary conditions. Hence, we assume that an instance of this particle method has enough particles with no or low concentration w_j around the region of interest in the initial tuple of particles. We further assume that the particles are regularly spaced with inter-particle spacing h such that h/ε ≤ 1. This is a theoretical requirement in PSE known as the "overlap condition". Without it, the numerical method is not consistent. This defines the particle method algorithm data structures

P := R³ × R × R ∋ p = (x, w, Δw), (30)
G := R⁷ ∋ g = (D, h, ε, r_c, Δt, t_end, t), (31)

and functions

u(g, p, j) := (k ∈ (1, ..., |p|) : 0 < |x_k − x_j| ≤ r_c), (32)
f(g) := ⊤ ⟺ t > t_end, (33)
i(g, p_j, p_k) := ((x_j, w_j, Δw_j + (w_k − w_j) η_ε(x_j − x_k)), p_k), (34)
e(g, p) := (g, ((x, w + Δt D (h³/ε²) Δw, 0))), (35)
e̊(g) := (D, h, ε, r_c, Δt, t_end, t + Δt). (36)

Each particle p represents a collocation point of the numerical scheme. It is a collection of three properties, each of which is a real vector or number: the position x, the concentration w, and the accumulator variable Δw that collects the concentration change in the interact function i. An accumulator variable is required here to render the computation result independent of the indexing order of the particles. The global variable g is a collection of seven real-valued properties that are accessible throughout the whole calculation: the diffusion constant D, the spacing between particles h, the kernel width ε, the cut-off radius r_c, the time step size Δt, the end time of the simulation t_end, and the current time t.
The neighborhood function u returns the indices of the surrounding particles that are no further away than the cut-off radius r_c and different from the query particle itself. The stopping condition f is true (⊤) if the current time t exceeds the end time t_end. Then the simulation halts.
The interact function i evaluates the sum in the PSE approximation (27). Each particle p_j accumulates its concentration change in Δw_j during the interactions with the other particles. In the present example, we choose an asymmetric (pull) interact function i that changes only particle p_j. The neighborhood function u accounts for this. However, this choice is not required, and symmetric formulations of PSE are also possible.
The evolve function e uses the accumulated change Δw_j to update the concentration w_j of particle p_j using the explicit Euler method. For that, it also uses D, h, ε, and Δt from the global variable g. In addition, the evolve function e resets the accumulator Δw_j to 0. In this example, the evolve function does not change the global variable g. That is exclusively done in e̊, which updates the current time t by adding the time step size Δt.
To define a specific instance of this particle method, we need to fix the parameters and the initial condition. We choose a box of particles with w = 0 everywhere except at the center, where we place a concentration peak.
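The complete diffusion example can be sketched as a short, self-contained Python program. This is an illustrative reading of the scheme above, not the authors' implementation: it uses a brute-force O(|p|²) neighbor loop instead of fast neighbor access, illustrative parameter values, and assumes the common second-order Gaussian PSE kernel with normalization 4/π^{3/2} in 3D:

```python
import math

def make_instance(n=7, h=1.0):
    """Initial state [g_1, p_1]: an n^3 box of particles with a unit
    concentration peak at the center (parameter values are illustrative)."""
    g = {"D": 1.0, "h": h, "eps": 1.0, "rc": 3.0, "dt": 0.05,
         "t_end": 0.1, "t": 0.0}
    c = n // 2
    p = [{"x": (i * h, j * h, k * h), "w": 0.0, "dw": 0.0}
         for i in range(n) for j in range(n) for k in range(n)]
    p[(c * n + c) * n + c]["w"] = 1.0  # concentration peak at the center
    return g, p

def eta(g, r2):
    """Second-order Gaussian PSE kernel in 3D (normalization assumed):
    (1/eps^3)(4/pi^{3/2}) exp(-r^2/eps^2)."""
    e = g["eps"]
    return 4.0 / (e ** 3 * math.pi ** 1.5) * math.exp(-r2 / e ** 2)

def step(g, p):
    """One state transition step: pull interactions accumulate dw, the
    evolve loop applies the explicit Euler update, and the global-variable
    evolution advances the time."""
    rc2 = g["rc"] ** 2
    for pj in p:                      # interaction loop (pull: writes only dw)
        for pk in p:
            r2 = sum((a - b) ** 2 for a, b in zip(pj["x"], pk["x"]))
            if 0.0 < r2 <= rc2:
                pj["dw"] += (pk["w"] - pj["w"]) * eta(g, r2)
    for pj in p:                      # evolve loop: Euler update, reset dw
        pj["w"] += g["dt"] * g["D"] * g["h"] ** 3 / g["eps"] ** 2 * pj["dw"]
        pj["dw"] = 0.0
    g["t"] += g["dt"]                 # evolve the global variable

g, p = make_instance()
while g["t"] < g["t_end"] - 1e-12:    # stopping condition
    step(g, p)
```

Because the pull interaction writes only the accumulator Δw, the result is independent of the particle indexing order, and the total concentration is conserved up to floating-point round-off.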

B. THEORY ABOUT PARTICLE METHODS
In addition to formalizing algorithms, our definition can also be used to prove theorems about particle methods. We demonstrate this by proving that the state transition function can be parallelized on a shared-memory system under five conditions. After verifying that a particle method fulfills these conditions, we know directly that it is parallelizable in principle with this parallelization scheme. We exemplify this on the particle method for the simulation of three-dimensional diffusion (Section IV-A). For a shared-memory system, we assume that parallel reading from one storage location is possible, but not parallel writing. Hence, we need to ensure that the calculations can be done in parallel and that there are no race conditions while writing. This leads to the five conditions:

Condition 1 (pull interaction): only the first particle p_j is changed by an interaction, while the second particle p_k stays the same. (43)
Condition 2 (interaction independence): the result of the interact function is independent of previous interactions. (44)
Condition 3 (neighborhood independence): the neighborhood function is independent of previous interactions. (45)
Condition 4 (constant number of particles): e(g, p) = (g′, (p′)), i.e., the evolve function returns exactly one particle. (46)
Condition 5 (global variable independence): e(g, p) = (g, q′), i.e., the evolve function does not change the global variable. (47)
In the parallel scheme (Fig. 3), every particle is overwritten in the interaction part, and the subsequent evolve part (line 6) overwrites each particle again. Hence, potential writing conflicts exist only inside these parts, not between them. Within each part, every processor writes exclusively to its own particle, so no writing conflicts occur, and the two parts can be fused into a single parallel loop that, for each particle p_j, first computes all interactions of p_j with its neighbors and then evolves p_j.

Adding the evolution of the global variable yields a parallelized step of the particle method state transition function. The rest of the state transition is identical to the sequential case. Hence, it remains to prove that the Nassi-Shneiderman diagrams of the sequential and the parallelized state transition function (Figs. 1 and 3) produce identical results. The proof relies on five lemmas.

Lemma 1: If the interact function is a pull interaction, i.e., ∀ p_j, p_k ∈ P, g ∈ G : i(g, p_j, p_k) = (p̄_j, p_k), then the interactions of a particle with its neighbors change no particle besides the particle itself.

Lemma 2: If the result of the interact function is independent of one previous interaction of the second particle, then it is independent of all previous interactions of the second particle. The proof proceeds by induction, peeling off the previous interactions of the second particle one at a time using the premise.

Lemma 3: If the neighborhood function is interaction-independent, the neighborhoods computed on the different processors cannot leak a changed particle into the interaction results.

Lemma 4: Under the constraints that the interact function i is a pull interaction (43) and is independent of previous interactions (44), and that the neighborhood is independent of previous interactions (45), the outer interaction loop is parallelizable: the interactions of each particle can be computed independently from the state of the particle tuple at the beginning of the step.

Lemma 5: Under the constraints that the number of particles stays constant (46) and the global variable is independent of all particles (47), the evolve loop is parallelizable. Under these constraints, the first evolve sub-function reduces to appending the evolved particle, ε_I([g, p], q, j) = [g, q ∘ q̄] with (g, q̄) = e(g, p_j), so the evolved particles can be computed independently and in any order.

Using these interim results, we prove that the parallelization scheme in Fig. 3 produces results identical to those of the sequential state transition function.
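The essence of Lemmas 1 through 4 can be illustrated in code: with a pull interaction that reads only properties it never writes, the sequential interaction loop and a thread-parallel loop in which each worker writes only its own particle produce identical results. The following Python sketch is ours, not the paper's scheme; particles are (w, Δw) pairs, and every other particle is a neighbor:

```python
from concurrent.futures import ThreadPoolExecutor

# pull interaction (conditions 1-3): reads only w, which it never writes,
# and writes only the accumulator dw of its first argument
def interact(p_j, p_k):
    return (p_j[0], p_j[1] + (p_k[0] - p_j[0])), p_k

def neighbors(p, j):
    return tuple(k for k in range(len(p)) if k != j)

def sequential_step(p):
    """Sequential interaction loop (as in Fig. 1): updates in place."""
    p = list(p)
    for j in range(len(p)):
        for k in neighbors(p, j):
            p[j], p[k] = interact(p[j], p[k])
    return tuple(p)

def parallel_step(p):
    """Parallel interaction loop (as in Fig. 3): each worker reads the
    shared initial state and writes only its own particle, so there are
    no writing conflicts."""
    def work(j):
        p_j = p[j]
        for k in neighbors(p, j):
            p_j, _ = interact(p_j, p[k])
        return p_j
    with ThreadPoolExecutor() as ex:
        return tuple(ex.map(work, range(len(p))))
```

For pull interactions that read only never-written properties, both loops return the same tuple, which is the content of Lemma 4.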

1) TIME COMPLEXITY
The time complexity of an algorithm describes the asymptotic behavior of its run-time as a function of the input size. In the case of the state transition function, the input size is the length of the initial particle tuple p_1, assuming that the size of the global variable and of the particles is bounded by a constant. The condition that the evolve function does not change the number of particles (46) restricts the number of particles to be constant over all state transition steps. We assume a maximum time complexity τ for each function over all state transition steps, indicated by a subscript: τ_i, τ_e, τ_e̊, τ_f, τ_u. We also assume a maximum neighborhood size ς_u over all particles and all steps. We further assume that these maxima are independent of the particle method instance [g_1, p_1], except for the neighborhood. This is true for many canonical particle methods. We indicate the neighborhood function's possible dependency on p_1 with a superscript. Using these maxima, an upper bound for the time complexity of one step of the sequential state transition function is O(τ_f + τ_e̊ + |p_1| (τ_u^{p_1} + ς_u^{p_1} τ_i + τ_e)), while for the parallelized state transition function on n CPU processors with shared memory the factor |p_1| is replaced by ⌈|p_1| / n⌉.

2) APPLICATION TO THE EXAMPLES
We can now use the theorem to check whether the particle method from Section IV-A can be parallelized. The three-dimensional diffusion application, based on PSE and explicit Euler time integration, is formulated with a pull interaction. To change particle properties, the interact function uses only properties that it does not itself change. Hence, the result of an interaction cannot be transferred through another interaction to a third particle, meaning the interact function is independent of previous interactions. The same argument holds for the neighborhood function: the interact function does not change any property used by the neighborhood function, since it writes only the accumulator Δw, while the neighborhood function reads only the positions.
Note that an interaction of particle p_k with another particle does not change the position of p_k. Hence, the neighborhood function is not influenced by interactions. The remaining two conditions are directly satisfied by the evolve function: the number of particles is constant, and the global variable is independent of all particles. Therefore, the diffusion example is parallelizable with the shared-memory scheme.
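These checks can also be executed mechanically. Below is an illustrative Python version of the diffusion interact function (with the Gaussian kernel normalization assumed as before), together with assertions for the pull condition and the two independence conditions; the particle layout (x, w, Δw) follows the example:

```python
import math

def eta(r2, eps=1.0):
    # second-order Gaussian PSE kernel in 3D (normalization assumed)
    return 4.0 / (eps ** 3 * math.pi ** 1.5) * math.exp(-r2 / eps ** 2)

def interact(p_j, p_k):
    """Pull interact function of the diffusion example: reads positions and
    concentrations, writes only the accumulator of the first particle."""
    (xj, wj, dwj), (xk, wk, dwk) = p_j, p_k
    r2 = sum((a - b) ** 2 for a, b in zip(xj, xk))
    return (xj, wj, dwj + (wk - wj) * eta(r2)), p_k

pj = ((0.0, 0.0, 0.0), 1.0, 0.0)
pk = ((1.0, 0.0, 0.0), 0.5, 0.0)
pl = ((0.0, 1.0, 0.0), 0.25, 0.0)

# Condition 1 (pull): the second particle is returned unchanged.
assert interact(pj, pk)[1] == pk
# Conditions 2 and 3: a previous interaction of pk (here with pl) changes
# only pk's accumulator, which interact never reads, and never its
# position, which the neighborhood function reads.
pk_changed, _ = interact(pk, pl)
assert interact(pj, pk_changed)[0] == interact(pj, pk)[0]
assert pk_changed[0] == pk[0]
```

The assertions pass because the interact function writes only Δw and reads only x and w.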

C. A BASIS FOR SCIENTIFIC SOFTWARE ENGINEERING
Our definition can be leveraged to implement scientific computing software where the particle method algorithm and instance are used as an interface. This allows hiding all generic parts of a particle method from the user, such as finding the neighborhood and running the state transition function. Such software could potentially also encapsulate theoretical results, such as the result from Section IV-B. We showcase the hiding of the neighbor search and the state transition function by designing a software prototype where we implement fast neighbor access for arbitrary-dimensional meshes and free particles for one-sided and two-sided interactions, as well as the state transition in its most general and hence sequential form. We demonstrate the use of our prototype exemplarily for SPH (Fig. 4(a)), 3D diffusion (Fig. 4(b)), and the n-dimensional perfectly elastic collision (Fig. 4(c)) algorithms, but the prototype is not limited to these three examples. In addition, we validated the software, particularly the run-time complexity of the fast neighbor access methods and the correctness of the results. The source code is available at: https://git.mpi-cbg.de/mosaic/prototype-particlemethods-defintion-as-an-interface.
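The fast neighbor access for free particles hidden behind such an interface is typically a cell-list search. The following Python sketch (ours, not the prototype's C++ code) hashes particles into cubic cells of edge length r_c, so that each neighborhood query scans only the 3^d adjacent cells instead of all particles:

```python
import itertools
import math
from collections import defaultdict

def build_cells(positions, rc):
    """Hash each particle into a cubic cell of edge length rc. With bounded
    particle density this gives O(1) candidates per query, so one
    interaction sweep costs O(N) instead of the O(N^2) all-pairs search."""
    cells = defaultdict(list)
    for idx, x in enumerate(positions):
        cells[tuple(math.floor(c / rc) for c in x)].append(idx)
    return cells

def cell_neighbors(positions, cells, rc, j):
    """Indices of all particles within distance rc of particle j (excluding
    j itself): only the 3^d cells around j's cell are scanned."""
    home = tuple(math.floor(c / rc) for c in positions[j])
    found = []
    for off in itertools.product((-1, 0, 1), repeat=len(home)):
        for k in cells.get(tuple(c + o for c, o in zip(home, off)), ()):
            d2 = sum((a - b) ** 2
                     for a, b in zip(positions[j], positions[k]))
            if k != j and d2 <= rc * rc:
                found.append(k)
    return sorted(found)
```

The sketch works in any dimension, since the cell keys and offsets are built from the length of the position tuples.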
The fundamental design of the prototype is illustrated in Fig. 5 as a UML class diagram. We encapsulated the data structures and functions in the namespace Base. The two templated classes GlobalVariable_Method and Particle define the data structures and are mostly empty except for the position in Particle, which is necessary for the fast neighbor access algorithms. The templated class Particle_Method carries the bare bones of the five functions of the particle method algorithm and all hidden functionality. The hidden functionality is orchestrated by the run() method. It calls the appropriate fast neighbor access methods and chooses the correct state transition implementation. The neighbor search is handled differently depending on whether all particles are neighbors or only those within a cut-off radius, and on whether the particles are free or mesh particles.
Uses of this prototype are implemented in separate namespaces. As an example, we again chose three-dimensional diffusion. The user creates three new classes that inherit from the three base classes. In the data structure classes, the user adds the necessary properties. In the Particle_Method class, the user overrides the functions evolve(), interact(), evolveGlobalVariable(), neighborhood(), and stop() exactly as described in the particle method algorithm, except for neighborhood(), which differs due to the neighborhood access optimizations. If the user decides not to override certain functions, the prototype falls back to defaults: identity functions, an empty neighborhood, and stopping after a single state transition step. In addition, the user defines the instance separately and executes the particle method by calling the run() method on it. To verify that the software works as expected, we compared our prototype to native C++ implementations, performed a convergence study, and conducted two run-time evaluations.
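The described inheritance-based design can be mirrored in a few lines of Python (the actual prototype uses C++ templates; the method names follow the text, the defaults follow the fallback behavior described above, and the Decay example is ours, not one of the paper's applications):

```python
class ParticleMethodBase:
    """Python mirror of the prototype's Base::Particle_Method class.
    Defaults per the text: identity evolve/interact, empty neighborhood,
    stop after one state transition step. The global variable g is assumed
    to be a dict so run() can track the step count."""

    def evolve(self, g, p):
        return g, [p]                    # identity: particle unchanged

    def interact(self, g, p_j, p_k):
        return p_j, p_k                  # identity interaction

    def evolveGlobalVariable(self, g):
        return g                         # global variable unchanged

    def neighborhood(self, g, particles, j):
        return ()                        # empty neighborhood

    def stop(self, g):
        return g.get("steps", 0) >= 1    # default: a single step

    def run(self, g, particles):
        """Hidden functionality: the sequential state transition."""
        while not self.stop(g):
            particles = list(particles)
            for j in range(len(particles)):
                for k in self.neighborhood(g, particles, j):
                    particles[j], particles[k] = self.interact(
                        g, particles[j], particles[k])
            evolved = []
            for p in particles:
                g, created = self.evolve(g, p)
                evolved.extend(created)
            particles = evolved
            g = self.evolveGlobalVariable(g)
            g["steps"] = g.get("steps", 0) + 1
        return g, particles

class Decay(ParticleMethodBase):
    """Illustrative user algorithm: numeric particles halved per step."""

    def evolve(self, g, p):
        return g, [p * 0.5]

    def stop(self, g):
        return g.get("steps", 0) >= 3
```

A user who overrides nothing gets one identity step; the Decay subclass overrides only evolve() and stop() and inherits everything else.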
For the comparison to native C++, we implemented the perfectly elastic collision (Section SM2.1) and the 3D PSE application in native C++ and computed the difference to our particle-method implementations. The results are identical. Note that we simulated an example with three balls that do not collide simultaneously (i.e., no three-way collisions). This ensures that the result is independent of the indexing order of the particles, which is not preserved by a fast neighbor search algorithm. Since we use the native C++ implementation to validate the correctness of our implementation, we deliberately kept it as simple as possible to exclude implementation mistakes. Therefore, it does not use any fast neighbor search acceleration. The native C++ implementation without fast neighbor search acceleration also serves as a baseline for the time complexity test, showing that our fast neighbor search implementation accelerates the computation.
For the convergence study, we tested the PSE approximation (Section IV-A) of the second derivative of a sine function in 1D using a second-order accurate kernel function. The analytical solution of the second derivative of sin(x) is −sin(x). We tested different grid (or particle) spacings and plotted the error against the analytical solution. The solution converges at second order with decreasing inter-particle spacing until errors from finite-precision arithmetic start to dominate at around 10⁻⁸, as expected [36].
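This convergence study is easy to reproduce in miniature. The following Python sketch (ours; the paper's study uses the C++ prototype) approximates the second derivative of sin on a periodic grid with a second-order Gaussian PSE kernel, assuming the 1D normalization 4/√π and a kernel width ε = 2h; halving h should reduce the maximum error by roughly a factor of four:

```python
import math

def pse_second_derivative(f, n):
    """Approximate f'' of a 2*pi-periodic function f on n equidistant
    points by 1D PSE with the second-order Gaussian kernel
    eta(x) = (4/sqrt(pi)) e^{-x^2} (normalization assumed), kernel width
    eps = 2h (overlap condition h/eps <= 1), and cut-off r_c = 5*eps."""
    h = 2.0 * math.pi / n
    eps, rc = 2.0 * h, 10.0 * h
    xs = [j * h for j in range(n)]
    eta0 = 4.0 / (eps * math.sqrt(math.pi))  # includes the 1/eps scaling
    out = []
    for xj in xs:
        s = 0.0
        for xk in xs:
            r = (xk - xj + math.pi) % (2.0 * math.pi) - math.pi  # periodic
            if r != 0.0 and abs(r) <= rc:
                s += (f(xk) - f(xj)) * eta0 * math.exp(-(r / eps) ** 2)
        out.append(s * h / eps ** 2)
    return xs, out

def max_error(n):
    """Maximum deviation from the analytical solution (sin)'' = -sin."""
    xs, approx = pse_second_derivative(math.sin, n)
    return max(abs(a + math.sin(x)) for x, a in zip(xs, approx))
```

Doubling the resolution from 32 to 64 points should reduce the maximum error by approximately four, consistent with second-order convergence in ε (and hence in h at fixed h/ε).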
The run-time evaluations consider the 3D diffusion and the perfectly elastic collision implementations, both in our prototype and in native C++. The implementation in our prototype shows a run-time that scales linearly with the number of particles, while it scales quadratically for the native C++ implementation without fast neighbor search, as expected. All tests can be found in the source code.

V. DISCUSSION
Particle methods are used in a wide range of fields such as plasma physics [19], computational fluid dynamics [8], [11], image processing [1], [7], computer graphics [17], and computational optimization [18], [31]. Formulation of what constitutes a particle method demands a generalized view. Here, we presented a general definition and showed its applicability in developing a common interface for this important class of algorithms.
The proposed definition highlights the algorithmic commonalities across applications, enabling a sharp classification of particle methods. Furthermore, the presented definition unifies all particle methods and allows the formulation of particle methods for non-canonical problems. In addition, this formulation enables theoretical analyses of particle methods and provides a rigorously defined structure for implementing software frameworks for particle methods.
We formulated the presented definition in the most general way to encompass everything called a "particle method". However, most practical instances do not exploit the full generality of the definition. For example, one would frequently restrict a particle method to be order-independent, i.e., to produce results independent of the particles' indexing order.
Such restrictions are frequently needed to parallelize particle methods on multi-processor computer architectures, such as computer clusters or GPUs. We have shown how such definition restrictions can lead to a parallelizable state transition function, whereas it is sequential in its most general form in the definition.
Even though the presented definition is general and easily applicable to, e.g., SPH, MD, and PSE, a particle method could potentially have a worse time and space complexity than a non-particle algorithm, especially for non-canonical problems. Further, our definition is limited by its monolithic nature. An algorithm composed of smaller algorithms, such as a solver for the incompressible Navier-Stokes equations, would become very large and complex, with several nested cases, when explicitly formulated in our definition. Importantly, our definition is not unique. Alternative, possibly more compact, but equivalent definitions are possible. However, we chose the presented formulation for its similarity to practical implementations, as we showcased in our software framework prototype. The prototype interface is almost the same as the particle method algorithm structure. Hence, it takes advantage of the structure of the definition and hides the more complex algorithms, e.g., the state transition function and fast neighbor search, from the user. Although we investigated the parallelization of the state transition, the presented parallelization scheme is not implemented in our software prototype. Its value therefore lies not in direct applicability in software but in its theoretical significance.
Notwithstanding these limitations, the present definition establishes a rigorous algorithmic class that contrasts with the hitherto loose, empirical notion of particle methods. This rigor paves the way for future research into both the theoretical and algorithmic foundations of particle methods and the engineering of their software implementations.
Future theoretical work could define standard classes of particle methods by formulating class-specific restrictions to our definition. Such restrictions would enable classifying particle methods with respect to their parallelizability, algorithmic complexity, and computational power. The presented parallelization scheme for particle methods restricted to pull interactions on shared-memory systems requires only minor constraints. We therefore expect the presented proof for this scheme to serve as the foundation of proofs for the parallelizability of push(-pull) interaction schemes on shared and distributed memory. It seems intuitive that the presented definition of particle methods is Turing-powerful, since one could use a single particle to implement a universal Turing machine in the evolve method. However, this trivial reduction offers no insight into algorithmic structure or computational power. Studying the computational power of specific classes of particle methods could provide insights into what is achievable with different amounts of computational resources. Other possible directions for theoretical research include the derivation of complexity bounds for certain classes of particle methods.
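The essence of the shared-memory argument for pull interactions can be sketched in a few lines. The interaction below is a hypothetical toy update, and Python threads are used only to illustrate the access pattern: every particle reads the shared old state vector read-only and writes a disjoint slot of the new state, so the per-particle updates can run concurrently without races or locks:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_pull_transition(x, h=0.1, workers=4):
    """Pull-interaction state transition: particle i reads only the
    old state x (shared, read-only) and writes only new_x[i]."""
    new_x = np.empty_like(x)

    def update(i):
        new_x[i] = x[i] + h * np.sum(x - x[i])  # toy pull interaction

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(update, range(len(x))))  # data-race free
    return new_x

x = np.linspace(0.0, 1.0, 8)
seq = x + 0.1 * (np.sum(x) - len(x) * x)  # same update, sequential
assert np.allclose(parallel_pull_transition(x), seq)
```

A push interaction, by contrast, would write to other particles' slots and thus require synchronization, which is why its parallelizability proof is expected to need additional constraints.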
On the engineering side, future work can leverage the presented definition to better structure software frameworks for particle methods, such as the PPM Library [35], OpenFPM [20], POOMA [33], or FDPS [21]. This would render them more accessible and maintainable, as the formal definition provides a common vocabulary. The present definition also enables classifying and comparing software frameworks with respect to their expressiveness, their coverage of the definition, or their optimization toward specific classes of particle methods.
Future work could also develop a less monolithic definition that allows modular combinations of different particle methods. While this could lead to a formulation that can be exploited directly in software engineering or in the design of domain-specific programming languages for particle methods [22], [23], one would first need to solve some theoretical problems: How can different types of particles from different methods interact, e.g., when interpolating values stored on one set of particles to another? How can access be restricted to a subset of particles, e.g., for boundary conditions? Solving these problems might lead to additional data structures or functions in the presented definition.

VI. CONCLUSION
Mathematical definitions reveal the concepts upon which a method is founded and make it possible to reason about its fundamental characteristics. After defining what constitutes a particle method, we leveraged this definition to formalize canonical and non-canonical particle-method algorithms and to design and implement new computer software for simulating various physical systems, from fluid dynamics to elastic collisions and diffusion.
The presented formal definition of particle methods is a necessary first step toward a sound and formal understanding of what particle methods are, what they can do, and how efficient and powerful they can be. It also provides practical guidance for software implementation and enables comparative evaluation on common grounds. We therefore hope that the present work will stimulate downstream investigations in the branches of science that develop or use particle methods.
The source code of the prototype generic software framework is available at: https://git.mpi-cbg.de/mosaic/prototypeparticle-methods-defintion-as-an-interface.