Modeling in the Time of COVID-19: Statistical and Rule-based Mesoscale Models

We present a new technique for rapid modeling and construction of scientifically accurate mesoscale biological models. The resulting 3D models are based on a few 2D microscopy scans and on the latest knowledge about the biological entity, represented as a set of geometric relationships. Our technique combines statistical and rule-based modeling approaches that are rapid to author, fast to construct, and easy to revise. From a few 2D microscopy scans, we learn statistical properties of various structural aspects, such as the outer membrane shape and the spatial properties and distribution characteristics of the macromolecular elements on the membrane. This information is utilized in 3D model construction. Once all imaging evidence is incorporated in the model, additional information can be incorporated by interactively defining rules that spatially characterize the rest of the biological entity, such as mutual interactions among macromolecules and their distances and orientations to other structures. These rules are defined through an intuitive 3D interactive visualization and modeling feedback loop. We demonstrate the utility of our approach on a use case: the modeling procedure of the SARS-CoV-2 virus particle ultrastructure. Its first complete atomistic model, which we present here, can steer biological research in new promising directions in the fight against the spread of the virus.


Introduction
Living organisms on Earth share a common complex, hierarchical structure. At the lowest level of the hierarchy, biomolecules such as proteins and DNA perform all of the basic nanoscale tasks of information management, energy transformation, directed motion, etc. These biomolecules are assembled into cells, the basic units of life. Cells typically are surrounded by a lipid bilayer membrane, which encloses several thousand different types of biomolecules that choreograph the processes of finding resources, responding to environmental changes, and ultimately growing and reproducing. Most familiar organisms, such as plants and animals, add an additional level to the hierarchy, with multiple cells cooperating to form large, multi-cellular organisms.
Viruses are pared-down versions of living organisms, with just enough of this hierarchical structure to perform a targeted task: to get inside a cell and force it to create more copies of the virus. Viruses are typically composed of some form of nucleic acid (RNA or DNA) that encodes the genome, and a small collection of proteins, encoded in this genome, that form the molecular machinery for finding cells and infecting them. Some viruses also include a surrounding envelope composed of a lipid bilayer membrane that is acquired as the virus buds from an infected cell.

arXiv:2005.01804v1 [q-bio.QM] 1 May 2020

Figure 1: The first complete ultrastructure of a SARS-CoV-2 particle created with our modeling technique. The membrane shape and distribution of spike proteins are learned from microscopy image data, and the internal assembly is the result of an interactive 3D rule specification approach. Left: internal structure of the virus (the rope). Middle: RNA bound to the rope. Right: overview of the model.
Effective computational methods are available for modeling and visualizing the biomolecular components of cells and viruses. Atomic structures of over a hundred thousand biomolecules are available at the Protein Data Bank (wwpdb.org), and decades of research and development have generated a comprehensive toolbox of simulation, structure prediction, modeling, and visualization tools to utilize and extend this data [42,45]. However, modeling and visualization of the full hierarchical structure of living organisms, from atoms to cells, is a field still in its infancy, limited largely by the size and complexity of the hierarchy and its many interacting parts.
Modeling and visualization of the cellular mesoscale, the scale level bridging the nanoscale of atoms and molecules with the microscale of cells, is necessarily an integrative process, since there are no existing experimental methods for directly observing the mesoscale structure of cells [15]. Mesoscale studies integrate information from microscopy, structural biology, and bioinformatics to generate representative models consistent with the current state of knowledge. Challenges that are currently limiting the integrative modeling pipeline include (a) finding and curating disparate sources of data, and (b) intuitive 3D construction and visualization of models of this size and complexity within a reasonable user and computational effort. This latter challenge is addressed in this paper.
The central idea behind our rapid modeling approach for mesoscale models is to take advantage of the hierarchical structure of living systems. We model structural characteristics on a small representative collection of structural elements, which are then assembled into the entire cellular or viral system through a set of learned rules that guide placement and interaction of the component elements. These rules are specified directly through 3D interactive modeling, instead of indirectly through some rule-definition syntax. In this way, we can reduce the burden on the users, provide them with an intuitive modeling interface, and automatically generate instances of the full model comprising a huge number of interacting component elements. If the model needs to be further fine-tuned or new information needs to be incorporated, the construction rules are revised in 3D and new models are generated that incorporate the latest revision. If structural evidence is available in the form of EM images, our system learns basic structural properties from the images with only a few inputs from the user.
We demonstrate the rapid modeling method for integrating data from electron microscopy with structural information for the novel coronavirus SARS-CoV-2. Generated models can be used for exploring the diversity of structure and analyzing the detailed arrangement of spike glycoproteins on its surface.

Related Work
Modeling geometric representations of molecules has been driving scientific visualization and computer graphics research for several decades. In the late seventies, Richards developed a geometric representation of the molecular surface that characterized its area [48], a concept later popularized by Connolly [7]. Over the years, many geometric construction algorithms for molecular surfaces have been developed, notably the Reduced Surface [49], blobby objects [4,44], or α-shapes [10], to name a few. These algorithms are typically well parallelizable on multiprocessor systems [56] or on modern GPUs [28,19,6], and nowadays scale up to interactive rates for huge atomistic models thanks to, e.g., view-guided rendering strategies [5]. Simplified representations, such as van der Waals space-filling molecular models, can be interactively constructed and visualized to represent atomistic scenes with up to a billion atoms [32,12,26], making use of various acceleration strategies such as procedural impostors [54], adaptive level-of-detail tessellation [29,37], or hybrid particle-volumetric representations [50].
The geometric representations of molecular structures described above have mostly been concerned with modeling protein macromolecules. Recently, dedicated approaches for modeling large lipid membranes have been developed [8,3]. Another special type is fibrous macromolecules, such as the 3D genome. Halladjian et al. presented an approach to construct and visualize a multi-scale model of interphase chromosomes [18]. A typical approach to procedurally model the backbone of linear polymers like RNA or DNA is to concatenate building blocks with a process like a random walk. A random walk produces a sequence of points where the location of each generated point depends on its predecessor. While this process leads to plausible models and is able to incorporate measured characteristics like stiffness, it is hard to control and guide to specific points. Klein et al. [27] propose a parallel algorithm for constructing a 3D genome sequence, which builds on the midpoint-displacement concept. We utilize this approach in calculating the path for genetic macromolecules.
The methods above model molecular geometry based on some underlying well-defined structure. The technique presented in this paper is primarily concerned with interactive 3D modeling of the molecular assembly, which relates to methodologies developed in graphics research. A particular category is rapid 3D modeling, which can be characterized as a process where the author specifies the desired 3D model through a minimal amount of user interaction. The algorithm or learned statistical model then constructs the geometric model while preserving user-defined constraints. There are two dominant strategies for achieving rapid modeling.
One methodology, known as sketch-based modeling, allows the user to specify certain geometric details directly in the scene. A good example of sketch-based modeling is the Teddy system presented by Igarashi et al. [20]. Here, the user only specifies a 2D contour of an object, and a 3D geometry is generated using the contour inflation approach [59], which we also utilize. An interesting recent trend is to control a deep learning model using sketches, e.g., for modeling terrains [16], faces [46], or buildings [41]. Utilization in the sciences can be exemplified by modeling advanced geological concepts and phenomena [30,38] or by creating quick molecular landscapes for communicating with peers or a broader audience [14]. These works result in approximate sketches of complex scientific scenarios and make use of rules that are algorithmically defined within the system. Our approach also features sketching components; however, the proposed approach is initially rule-free, and all rules are specified by the user or determined from imaging data. Moreover, these rules can be defined at a wide range of modeling precision. Our rule design space is open: many simple rules are generated first, which can then be combined in the construction of more complex structural elements, and obsolete concepts can be revised into new rules and effortlessly reapplied. The established rules can be stored as structural templates that can be shared among users.
The second approach is known as procedural modeling; its basic idea is that the geometric structure is defined indirectly by specifying rules and the parameters of these rules. The rules are then used when executing the procedural construction of the 3D scene geometry, often without any direct geometric input from the user. The rapid modeling aspect is achieved through the quick setting of a few parameters that can serve as sufficient input for massively large scenes. Procedural modeling has a long tradition in computer graphics [9]. It is frequently used for modeling large environments that look plausible. Examples are models of vegetation [47], cloudscapes [58], roads [13], street networks [43], and buildings [36,52].
Procedural modeling has been utilized in the sciences beyond visually plausible modeling to create scientifically accurate models. Biologists are recreating the mesoscale using procedural modeling methods, based on evidence from nanoscale and microscale measurements. Johnson et al. [22] have proposed a system called cellPACK that takes as input a so-called recipe, a description of how structures should be positioned in the organism model. A packing algorithm then iteratively places the macromolecular building blocks into different compartments of the organism. Each compartment is described by a discretized distance volume, which is packed and updated in a sequential manner. Currently, on a desktop workstation, such a packing process takes several minutes up to hours to pack a representation of an HIV particle 100 nm in diameter. The specification of the recipe, however, is a human-readable textual rule definition that relies on accurate specification by the user. In our approach, instead, the rules are learned from user interaction during interactive and intuitive modeling of a few structural elements directly in the 3D environment. Such elementary 3D models of nanoscale elements can, in the spirit of procedural modeling, be assembled into much larger mesoscale complexes that can be interactively constructed and visualized.

Method
In scientific data visualization, we use procedural modeling methods that result in plausible representations not only for a broad audience, but also for scientists for hypothesis generation, testing, or even the simulation of stability and dynamics. To be able to create models that are scientifically relevant, our modeling framework needs to allow for versatile specification of structural arrangements and needs to support integration with acquired evidence from microscopy data. These requirements differ from previously introduced procedural modeling methods. For example, L-system models [31] are typically topological trees, and the procedural models are based on growing plants according to the tree structure, or on architecture where models are generated by top-down subdivision with elements in regular arrangement. Mesoscale biological models are a different case: we have many elements that are in relation to each other and interact with each other, and the arrangement is much more irregular than in architecture. Therefore, we do not go the route of domain-specific languages [21], but aim to design an interactive procedural modeling framework to tackle biological models.

Figure 2: An overview of our mesoscale modeling pipeline. First, the user specifies the segmentation of the contours and the visible membrane-embedded proteins. The contours and a histogram of protein counts within individual parts of the membrane (a) are used for statistical contour modeling; the contour is inflated into a 3D mesh and populated with membrane proteins (b). Subsequently, the user specifies rules for how invisible proteins should be placed within the 3D model. The output is the finalized model (c), which can be iteratively refined by modifying the rules.
The mesoscale biological structure is typically characterized at the nanoscale by its molecular composition, where molecular structure can be either measured or simulated. The microscale is characterized from microscopy images or tomographic volume reconstructions of the entire entity. Rough shapes of the macromolecules can often be observed in these image data, so typically several hypotheses can be formulated about the specifics of the assembly. Usually the membrane boundaries and associated proteins are more recognizable than the soluble assemblies inside the membrane. Therefore we model the membrane information based on image data and the information inside the membrane is characterized through interactive 3D modeling using structural rules.
In our work, we concentrate on the extraction of membrane contours, which are often clearly revealed in the microscopy images commonly used to report scientific observations. First, a handful of membrane contours are traced by the user. These contours are co-registered to analyze their variation. This representation is statistically captured so that many new contours, similar to the input samples, can be generated. Based on the contour information, a three-dimensional particle geometry is estimated that matches the contour shape. The resulting mesh representations of particles are populated with molecules bound to the membrane according to the observations in the images. We characterize the molecular distribution around the contour and estimate a corresponding distribution for the entire particle surface.
Once the information from the images is incorporated into the mesoscale model, further modeling of elements that are not directly observed in the image data completes the model. Several hypotheses can be generated to express what a biologist considers a valid assembly configuration. The modeling proceeds through an interactive process, where the modeler expresses certain spatial relationships on exemplary structural representatives, and an interactive 3D visualization shows how the rule is applied to the corresponding molecular population. Based on the instantaneous visual feedback, the modeler can revise previous inputs to obtain the desired assembly. In this stage, hierarchical relationships can be utilized for expressing the rules that define distance and orientation distributions among molecular instances. The scene population is corrected by collision handling so that a valid molecular scene results from the application of the rules.
The overview of the modeling process and involved steps described above are shown in Figure 2. The following sections (Section 4, Section 5) describe the technical details of our approach.

Learning Shapes and Distributions from Imaging
For rapid processing of electron microscopy images, we implemented a segmentation tool that produces the input for learning the contour of the membrane and the protein distribution on the membrane. The user creates an outline, the outer contour of the cell, and places small elliptical proxy objects representing proteins scattered over the surface of the cell. The major axis of each ellipse is aligned with the main axis of the protein. We perform this quick feature extraction for all proteins that are close to the cross-section or silhouette of the membrane, as shown in Figure 3. These proteins naturally are not exactly on the contour, but they are located close to it within a certain surface band. To characterize this band, an inner contour is specified. The user can easily specify the thickness of this band on which the marked surface proteins are located. With this, we have fully characterized the most important aspects of the input microscopy image. Next, a distribution of the surface proteins on the membrane band needs to be estimated. For this, we subdivide the band into equally sized surface patches and count the number of proteins associated with each patch. To characterize the distribution of the proteins from what we see on the membrane contour, we store the per-patch protein counts in a histogram. This gives us a distribution function of the number of proteins per patch area, which we use when we populate membrane-protein instances on the 3D model of the membrane.
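The histogram step above can be sketched as follows; this is a minimal illustration (function names and the example counts are ours, not the authors' implementation), assuming the per-patch counts have already been extracted from the segmented band:

```python
import random
from collections import Counter

def build_count_distribution(patch_counts):
    """Normalize per-patch protein counts into an empirical
    probability distribution over counts (the histogram)."""
    hist = Counter(patch_counts)
    total = sum(hist.values())
    return {count: freq / total for count, freq in hist.items()}

def sample_count(distribution, rng=random):
    """Draw a protein count for a new surface patch."""
    counts = list(distribution)
    weights = [distribution[c] for c in counts]
    return rng.choices(counts, weights=weights, k=1)[0]

# Hypothetical counts observed along the segmented membrane band.
observed = [2, 3, 3, 4, 2, 3, 5, 3, 4, 2]
dist = build_count_distribution(observed)
```

Sampling `sample_count(dist)` repeatedly then reproduces the observed count statistics when populating patches of the 3D membrane mesh.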
Once we obtain a set of membrane contours extracted from multiple virion particles, we use them for generating new distinct contours that have similar characteristics. For that, we need to register all contours into a common coordinate system. We do this by fitting an ellipse to each contour and then translating and rotating the contours such that the approximating ellipses are in canonical form. We assume that all contours can be approximated by an ellipse. To parametrize an ellipse, we use the two focal points and the semi-major length. A point p is on the ellipse if and only if:

|p − c_1| + |p − c_2| = 2a,    (1)

where c_1 = (c_1.x, c_1.y) and c_2 = (c_2.x, c_2.y) are the focal points and a is the semi-major length. To fit an ellipse to a set of data points p_i, we pose the problem as an optimization problem:

min_{c_1, c_2, a} Σ_i (|p_i − c_1| + |p_i − c_2| − 2a)².

This objective function has a global minimum at infinity: when the two focal points move to infinity and the semi-major length tends to infinity, the value of this function approaches zero. We therefore add an L2 regularizer to avoid the undesirable global minimum at infinity:

min_{c_1, c_2, a} Σ_i (|p_i − c_1| + |p_i − c_2| − 2a)² + λ (|c_1|² + |c_2|²),    (2)

where λ is a tuning parameter. For the initialization, a is set to the mean distance from the data points to their centroid p_µ, and c_1 = (−a_max/2, 0), c_2 = (a_max/2, 0), where a_max is the largest distance from a data point to p_µ. After initialization, the penalized objective function in Equation 2 is solved by gradient descent. The obtained ellipse has semi-major length a, semi-minor length b, center c_e, and angle of rotation θ_e. An example of an estimated ellipse can be seen in Figure 4, top left.
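A minimal sketch of this fitting procedure, assuming plain gradient descent on the penalized objective (the learning rate, iteration count, and regularization weight are illustrative choices, not the authors' settings):

```python
import math

import numpy as np

def fit_ellipse(points, lam=1e-3, lr=1e-2, iters=10000):
    """Fit |p - c1| + |p - c2| = 2a to 2D points by gradient descent
    on the L2-penalized least-squares objective."""
    pts = np.asarray(points, dtype=float)
    mu = pts.mean(axis=0)                       # centroid p_mu
    d = np.linalg.norm(pts - mu, axis=1)
    a = d.mean()                                # init: mean distance to centroid
    a_max = d.max()
    c1 = np.array([-a_max / 2.0, 0.0])          # init focal points
    c2 = np.array([a_max / 2.0, 0.0])
    n = len(pts)
    for _ in range(iters):
        v1, v2 = pts - c1, pts - c2
        n1 = np.linalg.norm(v1, axis=1)
        n2 = np.linalg.norm(v2, axis=1)
        r = n1 + n2 - 2.0 * a                   # per-point residual
        g_c1 = (-2.0 * r[:, None] * v1 / n1[:, None]).sum(0) + 2.0 * lam * c1
        g_c2 = (-2.0 * r[:, None] * v2 / n2[:, None]).sum(0) + 2.0 * lam * c2
        g_a = -4.0 * r.sum()
        c1 -= lr * g_c1 / n
        c2 -= lr * g_c2 / n
        a -= lr * g_a / n
    return c1, c2, a

# Example: points on a circle of radius 2 (an ellipse with coincident foci).
pts = [(2 * math.cos(2 * math.pi * i / 16),
        2 * math.sin(2 * math.pi * i / 16)) for i in range(16)]
c1, c2, a = fit_ellipse(pts)
```

For the circle example, the foci contract toward the center and the semi-major length approaches the circle radius; the regularizer keeps the foci from drifting to infinity.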
To register the contours into a common coordinate system, we estimate the translation and rotation based on the approximating ellipse. First, the segmented contour is translated to the origin O(0, 0) by translation vector t = −c e . Then, the segmented contour is rotated by angle −θ e .
Figure 4: Statistical contour modeling for particle mesh generation: The contour is approximated by an ellipse that is used for bringing all contours into a canonical form (top left). A statistical contour model is generated from the set of contours, and new contours can be generated (top middle two). A newly generated contour is rasterized for contour inflation (bottom left two). A two-dimensional mesh is generated (top right), which is then inflated into a 3D object and populated with spike protein counts per triangle (bottom right).

Each contour in the set of contours is reparameterized by N_p points p_i. Each point p_i is defined by an angle θ_i and a distance r_i = |Op_i| in a polar coordinate system. To get more data from the contours, we create three augmented contours for each contour. The first augmented contour is obtained by a rotation with angle π. The second and third augmented contours are obtained by flipping the original and the rotated contour through the x-axis. To generate a new contour from the set, we compute a per-angle one-dimensional normal distribution by casting a ray from the origin O in each direction θ_s and intersecting all contours with this ray. To the resulting intersection points p∩_i with distances r∩_i = |Op∩_i|, we fit a normal distribution with mean µ_s and standard deviation σ_s. We also truncate the normal distribution to the minimum and maximum distance values in the data. From these parameters, we perform rejection sampling of the truncated normal distribution of r∩_i for each angle θ_s. Finally, we interpolate the sampled points using Catmull-Rom splines to create a new contour. The input contours and a number of generated contours are shown in Figure 4, top middle two images. The algorithm is listed in Algorithm 1 and Algorithm 2.
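The per-angle sampling step might look like the following sketch (the Catmull-Rom interpolation of the sampled points is omitted for brevity; names are ours):

```python
import math
import random

def truncated_normal(mu, sigma, lo, hi, rng=random):
    """Rejection-sample a normal distribution truncated to [lo, hi]."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

def generate_contour(per_angle_radii, rng=random):
    """per_angle_radii[s] holds the ray-contour intersection distances
    for direction theta_s, collected from all registered and augmented
    contours. Returns one newly sampled (x, y) point sequence."""
    points = []
    n = len(per_angle_radii)
    for s, radii in enumerate(per_angle_radii):
        theta = 2 * math.pi * s / n
        mu = sum(radii) / len(radii)
        sigma = (sum((r - mu) ** 2 for r in radii) / len(radii)) ** 0.5
        r = mu if sigma == 0 else truncated_normal(mu, sigma,
                                                   min(radii), max(radii), rng)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Because of the truncation, every sampled radius stays within the range spanned by the input contours, so generated contours never stray outside the observed variation.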
From the learned contour, our next step is to generate a membrane, which is a three-dimensional ellipsoidal, potato-like object. The object should feature the same shape characteristics as observed on the particle's silhouette or cross-section (depending on the imaging method), termed here the contour. We model the object based on the three principal dimensions d_1 ≥ d_2 ≥ d_3 of an ellipsoid and characterize it by two aspect ratios, namely the elongation index EI = d_2/d_1 and the flatness index FI = d_3/d_2 [55]. EI can be determined from the contour; FI is defined by the user and is then used to determine d_3. Next, we need to extrude the contour into three dimensions. For this task, we employ the standard contour inflation method from sketch-based modeling [59]. To assign a depth (z) value to all points on the three-dimensional object, we proceed as follows. First, we make a binary mask from the contour so that 0.0 is assigned to the outside and 1.0 to the inside of the shape. Then, we apply a cascade of Gaussian filters (with radii 32, 16, 8, 4, 2, 1) to the image mask. After each smoothing pass, the resulting image is multiplied with the original image mask, so that all pixels outside the contour are again set to zero. The resulting image is used for assigning the depth values symmetrically in both subspaces partitioned by the contour plane z = 0 (see Figure 4, bottom left two images, for an example). The next step is to create the three-dimensional object represented by a triangular mesh. We create a 3D sphere with approximately equally sized triangles, whose radius is the largest distance from any contour point to the origin O. After that, we project this mesh onto the z = 0 contour plane. This projected mesh is distorted to the shape of the contour. An example 2D mesh backprojected onto the contour plane can be seen in Figure 4, top right. Finally, for each mesh point, we extrude its z-coordinate to inflate the contour.
The z-coordinate of each mesh point is computed as half of d_3 multiplied by the corresponding pixel value (at the same x, y coordinates) from the previously calculated depth image shown in Figure 4, bottom left.
In the image segmentation phase, a band around the contour was created and subdivided into equally sized segments. Each of these segments was assigned a number representing the count of membrane proteins belonging to the segment. With this construction, we obtain the distribution of the number of membrane proteins per given area. We use this distribution for populating the membrane proteins on the triangular mesh. The membrane-protein density of triangles is computed in the following way: the 3D mesh is partitioned into approximately equally sized triangular patches, where the patch size is determined by the segment area of the band from the 2D contour. Afterwards, every patch is associated with one value obtained by randomly sampling the above distribution. Then we distribute this number of membrane proteins among the triangles that belong to the current patch. Examples of generated 3D meshes with associated protein counts per triangle can be seen in Figure 4, bottom right, along with examples of membranes with membrane proteins.
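The patch-population step could be sketched like this (the integer split of a patch count across its triangles is our simplification):

```python
import random

def populate_patches(patches, observed_counts, rng=random):
    """patches: lists of triangle ids forming approximately equal-area
    patches; observed_counts: per-segment protein counts from the 2D band.
    Returns a {triangle_id: protein_count} assignment."""
    per_triangle = {}
    for patch in patches:
        total = rng.choice(observed_counts)        # sample the distribution
        base, extra = divmod(total, len(patch))
        for tri in patch:
            per_triangle[tri] = base
        for tri in rng.sample(patch, extra):       # spread the remainder
            per_triangle[tri] += 1
    return per_triangle
```

Each patch receives a count drawn from the observed distribution, so the overall surface density of membrane proteins matches the statistics of the segmented images.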

Interactive 3D Rule Specification
The second part of our approach is used to populate the model with biological elements that are placed in relation to other elements in the cell. While some of these elements cannot be clearly seen in EM images, their structural information is generally understood, or there is at least a hypothesis about the structural organization. For example, a protein can be in a spatial relation (position, rotation) to another protein. The rules encode how new elements can be placed based on the geometry of already existing elements.
We create a three-dimensional model that consists of a set of elements. Our interactive procedural modeling approach organizes the elements in a tree. An element consists of the following: 1) A name to identify the element, e.g., to select an input element to a rule. 2) A type that can be either auxiliary or instance. An auxiliary element will be invisible in the final model, while an instance will be visible. We often refer to an auxiliary element as a skeleton. 3) The element geometry, which can be either a polygonal mesh, a poly-line, or a set of points. Sometimes, the geometry is only a single polygon, line segment, or point. 4) An oriented bounding sphere that consists of a local coordinate system used to position the geometry in the world coordinate system and three scale factors that determine the size of the element.
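A possible data layout for such elements, sketched in Python (the field names are hypothetical; the paper does not prescribe an implementation):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Element:
    """One node of the derivation tree."""
    name: str
    kind: str                       # 'auxiliary' (skeleton) or 'instance'
    geometry: object                # mesh, poly-line, or point set
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    rotation: Tuple[float, float, float, float] = (0.0, 0.0, 0.0, 1.0)  # quaternion
    scale: Tuple[float, float, float] = (1.0, 1.0, 1.0)  # bounding-sphere scale
    children: List["Element"] = field(default_factory=list)

    def add_child(self, child: "Element") -> "Element":
        self.children.append(child)
        return child

# Example derivation tree: an invisible root with one visible protein instance.
root = Element("root", "auxiliary", geometry=None)
spike = root.add_child(Element("spike", "instance", geometry="spike_mesh_id"))
```

Rules then only need to look up elements by name in this tree and attach new children or siblings to them.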
We use a library of structural models, e.g. proteins, in our framework. Many of these models are freely available on the internet. The most common form in which they are distributed is a list of atoms where the type and position of each atom is specified. Conceptually, we could convert these descriptions into 3D meshes, but we typically keep them in a different representation (e.g. set of spheres) for faster rendering. We also assign an identifier G id to them. These identifiers will be used in the rules to specify the geometry of elements. We also use a library of elementary meshes, such as single polygons, a tetrahedron, or an icosahedron that proved to be useful as auxiliary geometry. See Figure 5 for an illustration of example geometries.

Definition of rule
The main function of a rule is to identify an element in the current model and to create one or multiple elements, either as children or as siblings in the derivation tree. In contrast to other popular procedural modeling systems, e.g., [36,47], our rules are not described by a script-like language; instead, they are designed and executed in an interactive editor. The user can interact with elements in a CAD-like environment, e.g., positioning and rotating elements using a virtual gizmo tool. In the following, we describe the most important concepts and parameters. We plan to release the executable and detailed UI documentation upon acceptance.

Derivation of the model
The derivation of a model starts with an auxiliary root node of the tree and an empty model. Elements are placed by processing the specified rules and rule groups in a sequence. The user has full control of the rule execution and can execute all rules at once, execute rules step by step, and perform interactive edits between the execution of rules. Further, the user can undo rules, or even partially undo them. Rules take elements currently present in the scene (identified by their name) as input and generate zero, one, or multiple new elements.
If rules r_i are placed in a group, they can either be applied in an alternating manner, or rules can be selected randomly from the rule group according to their probability r_i.probability.
Several geometric parameters can be specified as constants or as probability distributions. A probability distribution can be modeled by combining Gaussian and uniform functions as building blocks. The weights are automatically normalized so that the distribution integrates to one.
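Such a mixture can be sampled as in the following sketch, where the component weights are normalized to sum to one (the interface is our assumption):

```python
import random

def make_mixture(components):
    """components: (kind, params, weight) triples, kind 'gauss' with
    params (mu, sigma) or 'uniform' with params (lo, hi). Weights are
    normalized so they sum to one; returns a sampling function."""
    total = sum(w for _, _, w in components)
    norm = [(kind, params, w / total) for kind, params, w in components]

    def sample(rng=random):
        r, acc = rng.random(), 0.0
        for kind, params, w in norm:
            acc += w
            if r <= acc:
                return rng.gauss(*params) if kind == 'gauss' else rng.uniform(*params)
        kind, params, _ = norm[-1]                # numerical safety fallback
        return rng.gauss(*params) if kind == 'gauss' else rng.uniform(*params)

    return sample

# Example: a dominant Gaussian plus a uniform floor (illustrative weights).
dist = make_mixture([('gauss', (10.0, 1.0), 2.0), ('uniform', (0.0, 20.0), 1.0)])
```

Any rule parameter declared as a distribution can then simply call the returned sampler each time a new element is created.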
In all rules, several rotational variants can be used to specify a transformation. Currently implemented variants are: a user-defined rotation, a random rotation, an element normal vector orthogonal to the parent element's normal vector, and an element normal vector aligned to the parent's normal vector. Moreover, these rotations can be extended by user-specified yaw, pitch, and roll distributions.
An important part of our approach is collision detection. A naive collision detection algorithm turned out to be unusable due to the number of elements in the model (about 200,000 element instances), so we implemented an OctTree-accelerated method with 4 levels of subdivision. Every element in the scene is assigned a bounding sphere. This bounding sphere can be additionally scaled by the user to better approximate objects that are long but thin. Although this can lead to overlapping elements, the property can be exploited, for example, in cases where string-like elements are placed in a plane close to each other. The user can control how close elements can be before a collision is reported. Once a new element candidate is generated, all leaf octants of the tree intersecting the element's bounding sphere are fetched and used for collision detection. If there is no collision, the element is created; otherwise, it is not. To avoid rules that do not terminate because of collisions, we employ a parameter collisions_max that specifies the maximum number of consecutive detected collisions. If that number is reached, the rule is terminated.
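A simplified sketch of this collision test, using a fixed-depth octree over a cubic domain and sphere-sphere overlap tests (the flat leaf indexing is our simplification; the paper's OctTree implementation may differ):

```python
import math

class Octree:
    """Fixed-depth octree over the cube [center - half, center + half]^3
    for bounding-sphere collision queries."""
    def __init__(self, center, half, depth=4):
        self.center, self.half, self.depth = center, half, depth
        self.leaves = {}                          # leaf index -> [(pos, radius)]

    def _overlapping_leaves(self, pos, radius):
        n = 2 ** self.depth                       # leaf cells per axis
        cell = 2.0 * self.half / n
        lo, hi = [], []
        for i in range(3):
            origin = self.center[i] - self.half
            lo.append(max(0, min(n - 1, int((pos[i] - radius - origin) // cell))))
            hi.append(max(0, min(n - 1, int((pos[i] + radius - origin) // cell))))
        for x in range(lo[0], hi[0] + 1):
            for y in range(lo[1], hi[1] + 1):
                for z in range(lo[2], hi[2] + 1):
                    yield (x * n + y) * n + z

    def collides(self, pos, radius):
        for leaf in self._overlapping_leaves(pos, radius):
            for q, r in self.leaves.get(leaf, ()):
                if math.dist(pos, q) < radius + r:
                    return True
        return False

    def insert(self, pos, radius):
        """Insert the sphere only if it is collision-free."""
        if self.collides(pos, radius):
            return False
        for leaf in self._overlapping_leaves(pos, radius):
            self.leaves.setdefault(leaf, []).append((pos, radius))
        return True
```

Only the leaves overlapping the candidate's bounding sphere are inspected, so the per-insertion cost stays roughly constant even for hundreds of thousands of instances.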
The specifications of all rules are stored in a file. Thus, the user can create a library of rules that can be then re-used as templates to build other mesoscale models.

Type of rules
We identify and implement four main classes of rules: parent-child, siblings, siblings-parent, and connection rule.

Parent-child rule
With this rule, new child elements are added to a parent element whose name Name_in is given as input. We employ two types of rules, called the distance rule and the relative rule.
The main purpose of the distance rule is to create new elements at a specified distance from the parent. The distance d can be either a constant or a modeled probability distribution that is sampled each time a new element is created. To determine the position of the new element, a random point on the parent geometry is generated and translated along the normal vector according to the (sampled) distance d. Another parameter determines whether the translation happens along the positive normal direction, the negative normal direction, or a randomly selected one of the two. In Figure 6, left, we illustrate a probability function that is modeled as a combination of two Gaussians with means d_1 and d_2. The Gaussian around d_1 has a higher weight than the Gaussian around d_2. The resulting populations of elements, using a triangle and a point skeleton as parent, are presented in Figure 6, middle and right, respectively.
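The placement step of the distance rule might be sketched as follows (uniform triangle sampling via the square-root trick; names and helpers are ours):

```python
import random

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return [x / length for x in v]

def distance_rule(tri, sample_distance, side='positive', rng=random):
    """Create a new element position: a uniformly random point on
    triangle `tri` (three 3D vertices) translated along the triangle
    normal by a sampled distance d."""
    a, b, c = tri
    r1, r2 = rng.random(), rng.random()
    s = r1 ** 0.5                    # square-root trick: uniform over area
    p = [(1 - s) * a[i] + s * (1 - r2) * b[i] + s * r2 * c[i] for i in range(3)]
    n = normalize(cross(sub(b, a), sub(c, a)))
    d = sample_distance()
    if side == 'negative' or (side == 'random' and rng.random() < 0.5):
        d = -d
    return [p[i] + d * n[i] for i in range(3)]
```

Passing a mixture sampler (e.g., two Gaussians as in Figure 6) as `sample_distance` reproduces the layered element populations shown there.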
The relative rule specifies the location of new elements with respect to a vertex of a polygon of the input element. For example, in Figure 7 left, a position K is specified by the user and subsequently encoded with respect to vertex v_0. The position is computed from the parameters t, u, which specify the distances from the two edges connected to v_0, and the distance d along the normal of the polygon. The parameters t, u determine the location of S_K, the closest point on the polygon. From these rule parameters, the corresponding positions S_L and S_M can be found inside the triangle, and new elements are placed at the points L and M that lie at distance d along the normal vector from S_L and S_M, respectively. The rule created from this process can be transferred and applied to any polygon, e.g., to create a pentamer as shown in Figure 7 middle. The example in Figure 7 right shows two subsequent applications of the rule to model the structure of a viral capsid: first, the relative rule is used to create three pentagon elements as children of a triangle element; second, proteins are created as children of each of the pentagons.
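Decoding such an encoded position on an arbitrary triangle can be sketched as follows. As a simplifying assumption, t and u are treated here as coordinates along the two edges meeting at v_0 (the paper defines them as distances from those edges, which differs by a per-edge scale factor); all names are illustrative.

```python
import math

def relative_position(v0, v1, v2, t, u, d):
    """Recover the point K encoded relative to vertex v0 of a triangle:
    (t, u) locate the in-plane point S_K, and d lifts it along the
    polygon normal. A sketch under the edge-coordinate assumption."""
    # edge vectors from v0 and the polygon normal
    e1 = tuple(a - b for a, b in zip(v1, v0))
    e2 = tuple(a - b for a, b in zip(v2, v0))
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    ln = math.sqrt(sum(c * c for c in n))
    n = tuple(c / ln for c in n)
    # S_K + d * normal
    return tuple(p + t * a + u * b + d * c
                 for p, a, b, c in zip(v0, e1, e2, n))
```

Because the encoding is relative to the vertex and edges, the same (t, u, d) triple transfers to any other polygon, which is what makes the pentamer construction in Figure 7 possible.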

Siblings rule
The siblings rule creates new elements and adds them as siblings under the same parent in the tree. The most important parameter of the siblings rule is a set of transformations T_i, typically combinations of translation and rotation. Each transformation also has an associated probability T_i.prob. To apply the rule, a transformation T_i is selected according to the probabilities T_i.prob. Then, a new element with name Name_out is generated by transforming the coordinate system of the input element Name_in and setting the geometry as specified by the identifier G_id. A parameter T_num determines how many transformations will be selected; if T_num equals the number of transformations, all of them are selected and the probabilities are ignored. The rule is invoked recursively for newly created elements. As in the previous rules, the parameter count_max determines how many elements are inserted. In Figure 8 left, three different transformations T_1 = K → L, T_2 = K → M, and T_3 = K → N were created. The user can generate these transformations interactively. In this example, an additional 8 instances labeled O were generated recursively (for T_num = 3). In the example in Figure 8 right, T_num is set to one to showcase the random selection of transformations.
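The selection logic of the siblings rule can be sketched as a small function. The representation of transformations as callables and the parameter names are illustrative assumptions; only the select-all vs. probabilistic-selection semantics follow the rule as described.

```python
import random

def apply_siblings_rule(element, transforms, t_num):
    """Select T_num transformations and emit the transformed siblings.
    `transforms` is a list of (transform_fn, prob) pairs; when T_num
    equals the number of transformations, all are applied and the
    probabilities are ignored, matching the rule's semantics."""
    if t_num >= len(transforms):
        chosen = [fn for fn, _ in transforms]
    else:
        chosen = [random.choices(transforms,
                                 weights=[p for _, p in transforms])[0][0]
                  for _ in range(t_num)]
    return [fn(element) for fn in chosen]
```

Recursion would then re-apply the rule to each returned sibling until count_max elements exist.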

Siblings-parent rule
The siblings-parent rule is an extension of the siblings rule. The user specifies a transformation of the sibling element with name Name_in as before. After applying the transformation, the new element Name_out is snapped to a given distance d from the parent shape of Name_in. This distance preservation acts as a correction factor. The main benefit of this rule is that a user who wants to distribute elements in a circle around a point, or in a spiral around a line segment, does not have to precisely measure the angle and translation.
In Figure 9 left, the parent shape is a point labeled S, the input element is labeled K, and the newly generated element is labeled L. The transformation T = K → L consists of a translation. In subsequent applications of the rule, the distance d to the skeleton S is preserved (see Figure 9 middle). The user can also specify a group of rules to be applied in iterations; in Figure 9 right, two rules with transformations K → L and L → M were created and applied.
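For a point-shaped parent skeleton, the distance-preserving correction amounts to projecting the transformed element back onto the sphere of radius d around the parent. A minimal sketch, with illustrative names:

```python
import math

def snap_to_distance(point, parent_point, d):
    """Pull a transformed sibling back to distance d from a point-shaped
    parent skeleton -- the siblings-parent rule's correction step."""
    v = [p - q for p, q in zip(point, parent_point)]
    norm = math.sqrt(sum(c * c for c in v))
    return tuple(q + d * c / norm for q, c in zip(parent_point, v))
```

Applying a rough translation and then snapping each new element traces a circle around the parent without the user having to measure exact angles, as in Figure 9.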

Connection rule
This rule is used for creating string-like structures that connect a given set of 3D points, typically generated by other rules. The output of this rule is a polyline element. The rule proceeds in three steps. First, an initial polyline is created by connecting the 3D points: the generator starts at a random point and repeatedly connects it to a random point in its close proximity until all points are connected. The resulting polyline may have strong kinks, so in the second step cubic interpolation and subdivision are applied, yielding a smoother polyline. The fiber structures we would like to model are characterized by a persistence-length property that expresses the bending stiffness of a fiber, and midpoint displacement can incorporate the target stiffness by increasing or decreasing the amount of displacement [27]. Therefore, in the third step, the midpoint displacement algorithm is used to enhance the curve with detailed windings.
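The first and third steps can be sketched as follows. The greedy nearest-neighbour chaining is one plausible reading of "connects it to a random one in its close proximity", and the cubic smoothing step is omitted; the `scale`/`decay` parameters standing in for the persistence-length control are illustrative.

```python
import math
import random

def connect_points(points):
    """Step 1: start at a random point and keep appending the closest
    unused one, producing the initial (possibly kinked) polyline."""
    rest = list(points)
    path = [rest.pop(random.randrange(len(rest)))]
    while rest:
        nxt = min(rest, key=lambda p: math.dist(p, path[-1]))
        rest.remove(nxt)
        path.append(nxt)
    return path

def midpoint_displace(path, scale, levels, decay=0.5):
    """Step 3: refine the polyline by displacing midpoints; a smaller
    `scale` models a stiffer fiber (longer persistence length)."""
    for _ in range(levels):
        out = [path[0]]
        for a, b in zip(path, path[1:]):
            mid = tuple((x + y) / 2 + random.uniform(-scale, scale)
                        for x, y in zip(a, b))
            out += [mid, b]
        path, scale = out, scale * decay
    return path
```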

Use case: SARS-CoV-2
The novel coronavirus SARS-CoV-2 is currently posing an international threat to human health. As with the previous SARS and MERS outbreaks, it emerged through zoonotic transfer from animal populations. Such emerging viruses pose a continuing threat, and the biomedical community is currently launching a widespread research effort to understand and fight them. Knowledge of the mesoscale structure will play an essential role in understanding the modes of interaction of these viruses with their cellular receptors and in designing effective vaccines. We introduce the topic in biological terms first and then describe our modeling strategy.
Our mesoscale models integrate a growing body of cryoelectron micrographic data on entire virions with atomic structures of the biomolecular components. SARS-CoV-2 contains four structural proteins, a single strand of genomic RNA, and a lipid-bilayer envelope [39]. Other non-structural and/or host proteins may also be incorporated into the virion; this is a topic of current study in the field and is not addressed in these models. Three of the structural proteins are embedded in the membrane. The spike (S) protein extends from the surface and forms the characteristic spikes that give the viruses their crown-like shape as seen by electron microscopy. The spikes recognize cellular receptors and mediate entry of the virus into cells. The membrane (M) protein has an intravirion domain that interacts with the nucleocapsid protein and is involved in packaging the viral genome as the virus buds from the infected cell's surface. The envelope (E) protein is a small pentameric complex that forms an ion pore through the membrane; it is thought to be involved in the process of budding, with only a small number of copies being incorporated into the virus. The viral genome is a single strand of RNA about 30,000 nucleotides in length, one of the largest genomes among RNA viruses. It is packaged by the nucleocapsid (N) protein, which coats and condenses the RNA strand.
At initiation of this project, structural information about SARS-CoV-2 proteins was limited, so we employed homology modeling to generate atomic structures of the viral proteins. The automated pipeline described here is created so that the most current structural models may be substituted as additional information is obtained. The project also made extensive use of the rapid modeling capabilities, to incorporate new sources of data as they became available.
Several structures of the S protein in its prefusion state have been determined with cryoEM (PDBids: 6VSB, 6VXX, 6VYB). These structures cover most of the protein, leaving out about 130 residues at the C-terminus, corresponding to the HR2, transmembrane, and cytoplasmic domains. The N protein has been partially determined with X-ray crystallography (PDBids: 6VYO, 6YI3, 6WJI, 6M3M). As for the SARS-CoV-2 M and E proteins, neither has been experimentally resolved yet. In this work, we took advantage of SARS-CoV-2 sequence homology with other betacoronaviruses to perform homology modeling and to obtain a structural model for each protein component present in the SARS-CoV-2 virion. The S protein model was obtained by a composite homology modeling approach. Residues 27-1146 were modeled with SWISS-MODEL [57] using 6VXX as template, in order to reconstruct all the missing loops that were not solved in the experimental structure. Residues 1147-1212, 1213-1242, and 1243-1273, corresponding to the HR2, transmembrane, and cytoplasmic domains respectively, were submitted separately to GalaxyHomomer [1], a homology modeling server for oligomeric structures. Template-based modeling generated the HR2 structural model, utilizing the structure of the MERS-CoV fusion core (PDBid 4MOD, 39% sequence identity) as template, while the transmembrane and cytoplasmic models originated from ab-initio modeling methods. Structures of the four components were aligned in PyMol [51] and merged into a unique model in Coot [11]. The pentameric E protein was modeled with the protein structure prediction server Robetta [25] in 'comparative modeling' mode. The NMR structure of the E protein from SARS-CoV (PDBid 5X29) was used as template, as the two proteins share 94% of their amino acid sequence.
Figure 11: Spikes scattered on the surface of the 3D mesh according to the assigned amounts per triangle. Membrane and envelope proteins are uniformly distributed on the membrane.
The structural model for the M protein (residues 11-203) was predicted by DeepMind using AlphaFold prediction methods and freely distributed in response to the COVID-19 crisis [23]. Residues 3-10 were built onto the original model with Coot [11], and an N-glycosylation was added to residue N5 through GLYCAM-Web [53].
The structure of the viral nucleoprotein complex is a topic of current speculation and study, so we created a model that is consistent with estimates of the number of copies of N, the length of the genome, and the locally-ordered arrangement of proteins as seen by cryoelectron microscopy [40]. N protein folds to form two stable globular domains, connected by a flexible linker. The C-terminal domain (CTD) forms dimers, as seen in PDBid 6WJI of the SARS-CoV-2 CTD. In the SARS CTD, these dimers may further associate into octamers, as seen in PDBid 2CJR. The structure of the N-terminal domain (NTD) has been determined in PDBid 6YI3, and is often termed the RNA-binding domain, although RNA binds to multiple sites in the N protein [24]. In cryoelectron micrographs, the nucleoprotein complex is seen to have short-range order as a close-packed collection of spherical features closely opposed to M protein on the inner face of the membrane [40], and gently-isolated nucleoprotein complexes have the look of a rope-like collection of beads roughly 15 nm wide [34]. These data have been integrated into a conceptual model where CTD octamers stack loosely into long ropes and NTD are loosely associated around the central rope, interacting with RNA [17]. We have implemented this conceptual model, first building CTD octamers, then placing NTD, and finally building a continuous RNA strand that winds through the NTD. Explicit models for the intrinsically-disordered linkers and the flexible portions of the N chain at the N- and C-termini will be a subject for future study. The rest of this section describes the modeling process, beginning with the construction of the virion; the RNA construction process is described in the following section.

Virion modeling
Due to the arbitrary rotation of molecular models in PDB files, the user must assign a normal vector to membrane-bound components to define their orientation and location within the membrane (see Figure 10). If this assignment is skipped, the default rotation, as stored in the data file, is used. The whole modeling phase takes into account the estimated amounts of individual elements published in [2]. We implemented a tool in which the user can segment 2D electron microscopy images (see Figure 3). First, the scale of the image has to be set: we implemented a widget for selecting a portion of the image and assigning it a real-world length, from which the tool computes the scale and uses it throughout the entire segmentation process. Afterwards, the user manually segments the image and creates the outer contour by drawing a polyline enclosing the virion. Once the outer polyline is done, the inner contour is created by scaling down the outer contour; the scaling is driven by the user and can be updated whenever necessary. Once the inner contour is defined, the band between the outer and inner contours is automatically subdivided into 10 equally sized regions. As the last step, the user visually identifies the spikes in the image and marks them using proxy objects available in our utility. The tool automatically identifies the closest virion for each marked particle and assigns the particle to the corresponding contour region; after this assignment, the histogram is updated. The outer contour and histogram are the input for the statistical learning (Section 4). We process the data and estimate a new contour as described previously. From the newly generated contour we create a 3D triangular mesh and assign each of its triangles the number of S-protein instances it contains, based on histogram sampling (see Figure 2). In the next step, S proteins are placed on the surface of the mesh.
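The per-triangle assignment via histogram sampling can be sketched as follows. This is a schematic under the assumption that each triangle belongs to one contour region with a learned histogram weight; parameter names are illustrative.

```python
import collections
import random

def assign_spike_counts(triangle_regions, region_weights, total_spikes):
    """For each spike, draw a triangle with probability proportional to
    the histogram weight of the contour region it belongs to; return the
    resulting per-triangle counts."""
    weights = [region_weights[r] for r in triangle_regions]
    hits = collections.Counter(
        random.choices(range(len(triangle_regions)),
                       weights=weights, k=total_spikes))
    return [hits[i] for i in range(len(triangle_regions))]
```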
The user specifies the distance of the center point of a spike to the surface of the 3D mesh and the number of spikes to be placed. The parent-child rule (Section 5.3) is used: the parent is the 3D mesh and the children are the spikes, as illustrated in Figure 11. A similar rule is used for both M and E proteins. The only difference is that these protein instances are uniformly distributed, i.e., no per-triangle amounts derived from observations of the contour are taken into account. These proteins are not discernible in the images and their distribution has not been characterized to date; for the time being, our model assumes a uniform distribution on the mesh (see Figure 11).
The nucleoprotein complex is built in several steps. First, a fiber-like assembly of N protein conformations is built. The N protein conformation is modeled using CTD dimer (2CTD) and NTD instances as follows. First, a siblings rule (Section 5.3) is created to bind two NTD to 2CTD (see Figure 12 top-left). Although only two relations are depicted in the image, we create a total of 6 relations between 2CTD and NTD; in the population phase, only two of the 6 created relations are randomly chosen. In the following step (see Figure 12 bottom-left), 2CTD is bound to a rotated 2CTD using the siblings-parent rule (Section 5.3) with a linear skeleton, forming a tetramer 4CTD. In the same manner, 4CTD is bound via a linear skeleton to another instance of 4CTD, forming an octamer 8CTD. The final N protein assembly is constructed using the siblings-parent rule with 8CTD and a polyline skeleton. To create the polyline skeleton, we uniformly fill the interior of the 3D mesh with instances of a proxy object: a sphere whose radius is approximately the same as the radius of the bounding sphere of 8CTD. Applying the connection rule (Section 5.3) to these proxy spheres yields the polyline skeleton. Finally, the siblings-parent rule is applied to 8CTD and the polyline, creating the N protein assembly. RNA is then added to form the entire nucleoprotein complex, as described in the next section.
The lipid bilayer membrane is constructed using the siblings-parent rule from Section 5.3. Both layers can be modeled by a similar rule, with the only exception that the lipids in one layer are rotated so that the two layers face each other with their hydrophobic parts. The construction of a single layer is as follows. Two copies of the same lipid model are added to the scene. The user creates a rule with several relations by translating (and rotating) one copy of the model to desired positions and stores these positions together with the distance to the 3D-mesh parental skeleton. In our model, 7 relations R_i are created (see Figure 13 left) for the transition from one lipid to another. We use two models of different lipids, Li_1 and Li_2; therefore, there are 7 relations for each of the combinations Li_1 → Li_2, Li_2 → Li_1, and Li_1 → Li_1. The generating process starts in a randomly chosen triangle of the 3D-mesh parental skeleton. In the beginning, one lipid is added; in the next steps, up to 7 new lipids can be generated, and the rule is re-applied to these new instances. This process continues until 100 consecutive collision hits are accumulated, which indicates a fully dense lipid membrane. Populating the surface with only a limited number of relations would lead to visible repeating patterns. This is tackled by fine-tuning the rotation of each newly placed element through user-specified standard deviations in yaw, pitch, and roll (see Figure 13 right).
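The growth loop with its consecutive-collision termination can be sketched as follows. The `collides` and `insert` callables stand in for whatever collision backend the model uses, and all names are illustrative; only the stop-after-N-consecutive-rejections behavior follows the description above.

```python
def populate_layer(seed, relations, collides, insert, max_consecutive=100):
    """Grow one lipid layer: starting from a seed placement, re-apply the
    relations to every new lipid and stop after `max_consecutive`
    rejected candidates in a row (the dense-membrane heuristic)."""
    insert(seed)
    frontier, misses = [seed], 0
    while frontier and misses < max_consecutive:
        lipid = frontier.pop(0)
        for rel in relations:          # up to 7 candidate neighbours (R_i)
            candidate = rel(lipid)
            if collides(candidate):
                misses += 1
                if misses >= max_consecutive:
                    break
            else:
                misses = 0             # reset on every successful placement
                insert(candidate)
                frontier.append(candidate)
```

In the full system, each relation would also jitter the candidate's yaw, pitch, and roll by the user-specified standard deviations before the collision test.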

RNA modeling
We have modeled RNA using five elementary models: the four RNA bases adenine (A), cytosine (C), guanine (G), and uracil (U), and a model consisting of phosphate and sugar (P) that forms the RNA backbone. A model of an individual nucleotide is created by the following approach. A P part and a point skeleton are bound to a line skeleton using the parent-child rule (Section 5.3) (see Figure 14 top). The point skeleton in this case plays the role of a proxy object: in the population phase, the corresponding model of a base (A, C, G, U) from a genome string is placed at its position to complete the intact nucleotide. The genome string is specified by the user during the definition of the point-skeleton rule.
In the next step, the siblings-parent rule (Section 5.3) binding these individual nucleotides to a line skeleton is created. For simplicity, only translation is shown in Figure 14 top-right. Once this relation is applied to a line skeleton, an RNA strand is created (see Figure 14 middle).
If a rotation between two nucleotides is specified during the modeling process, the resulting RNA structure will twist along the line skeleton (see Figure 14 bottom). The consecutive bases with their phosphate-sugar backbone form the RNA strand. The RNA model is now created as a template and can be used wherever RNA is needed: it only requires a skeletal structure along which it forms the RNA backbone fiber and then populates the bases according to the given RNA sequence. In our case, we replace the line skeleton with a polyline skeleton representing a 3D curve that connects the binding pockets of N proteins inside the core of the virion.
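The twisted-strand population along a straight line skeleton can be sketched as follows. The spacing and per-nucleotide twist values are illustrative placeholders, not measured parameters of the model.

```python
import math

def rna_along_line(genome, spacing=0.6, twist_deg=32.7):
    """Place one nucleotide per base of the genome string along the
    z-axis, rotating each by a fixed twist so the strand winds around
    its line skeleton. Values are illustrative."""
    strand = []
    for i, base in enumerate(genome):
        a = math.radians(i * twist_deg)
        # backbone (P) position; the base label tells the population
        # phase which model (A/C/G/U) to instance at this point
        strand.append((base, (math.cos(a), math.sin(a), i * spacing)))
    return strand
```

Swapping the straight axis for the polyline skeleton connecting N-protein binding pockets gives the template its in-virion form.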
The RNA of the virion is threaded through predicted binding sites on the surface of the N proteins. To specify the points through which the RNA should lead, a rule relating a proxy object BP_i to the N protein was created (see Figure 15 left). After populating the N proteins in the model, the algorithm computes the positions of all proxies in the scene. The RNA backbone is then obtained using the connection rule (see Section 5.3) and a 3D curve generator.

Discussion
The virion model has undergone many revisions as new information about its ultrastructure appeared in the literature. With standard modeling approaches this would often require completely reassembling the model; in our case, only a few rules had to be redefined and an updated model was instantly generated. This real-world experience of streaming in new information has confirmed that our modeling framework is versatile enough to accommodate new revisions with a given set of rules, and that modeling remains rapid even for the complex structural characteristics of a virus particle. A further benefit of rule-based modeling is its templating nature, which becomes advantageous for highly similar models of biological assemblies. For example, once the RNA rules are specified, they can be effortlessly transferred to another model. If more structural knowledge arrives or a more advanced RNA model is refined, that model can be used in all mesoscale models containing the RNA rule. Templating can be utilized, for example, for capsids, fibers, or membranes. This property inherently supports collaborative efforts, where one modeler revises the initial models of others and a community can gradually build a large base of mesoscale biological assemblies.
Today, the availability of mesoscale models provides new opportunities for understanding SARS-CoV-2 structure and function. The number and distribution of spike proteins is still a matter of some conjecture, and is relevant to understanding the interaction of the virus with its cellular receptors and its recognition and neutralization by antibodies. The details of nucleoprotein condensation and packaging through interaction with the viral membrane protein are also of interest, since they provide possible targets for therapeutic intervention.
Currently, the whole approach is implemented as a single-threaded application on top of the Marion molecular visualization framework [35]; as a proof-of-concept implementation, it is not significantly optimized for performance. Some processing stages are not computed instantaneously; however, we believe the overall user experience is performant enough for rapid prototyping of biological mesoscale models. The resulting model consists of 100 S, 2000 M, 25 E, and 1000 N (in N-CTD and N-NTD) proteins, 29,903 ss-RNA bases (GenBank: MN908947.3 [60]) and 29,903 P elements forming the RNA backbone, and 180,000 lipids. The entire model is created by 21 rules (with 57 relations) defined by the user; several of these are different possible configurations of the same elements (as in the case of the lipids). The populations of S, E, M, N, and all parts of the RNA are each processed within 2 seconds. The population of lipids is the most computationally demanding part of the algorithm, taking approximately two minutes for each of the inner and outer membranes. The main reason is the large number of candidate lipid placements that are rejected due to collisions. However, this time depends heavily on the rules defined; we have created a very dense distribution of lipids with 7 relations for the rule.

Conclusion and Future Work
In this paper, we have presented a new system for rapid modeling of mesoscale biological models. Challenged with frequent revisions of the SARS-CoV-2 model, the framework demonstrated its versatility and was able to incorporate every new structural insight. The result for science is twofold: a new technology and a new structural model of SARS-CoV-2 that might lead to ideas for effective vaccination or treatment strategies. There are several research questions that are difficult to answer without explicit imaging evidence. One such question: how many copies of the RNA can a single virion pack? Is it just one, or is there possibly enough space to accommodate another copy? The potential of this utility for generating hypotheses about biological questions of an integrative structural nature is tremendous.
The framework allows for varying levels of accuracy in model specification. A model can be specified exactly, so that one amino acid interacts with another, or elements can be placed more roughly. The system of rules preserves the accuracy given as input. Combined with the specification of flexibility and collision handling, even simple geometric docking can be achieved.
Our modeling of the microscopic scale of the membrane is currently limited to simple star-shaped structures, i.e., shapes containing a point from which all contour points are directly visible. This is sufficient for simple virion shapes; however, more complex shape modeling strategies would be needed for more complex shapes, such as the inner mitochondrial membrane or the Golgi apparatus. Another limitation is the case of sticky fibers such as single-stranded genome macromolecules. These often form complex secondary structures enabled through sequence-interval complementarity; it is unclear whether such a characterization could be expressed through our system or whether we would need to expand the rule set.
It could be interesting to study the interplay between rule-modeled structures and the reconstruction of details that are hardly discernible in microscopy images, by fitting a particular rule-expressed pattern into the image. Parallelizing our implementation would allow for interactive performance, where a large number of conformations could be tested within a short time. Iteratively fitting detail to an unclear microscopy image might be a way to solve an inverse problem in a brute-force manner.
Modeling with mouse interactions is not the only way; with advanced speech recognition, voice-controlled modeling becomes a possibility. Simultaneously, the ontology community has created a rich categorization of shapes that, however, have no associated geometry. By establishing such an association, through the use of terminology from shape ontologies, a verbal specification of models could lead to a desired model.