# Supercubes: A High-Level Primitive for Diamond Hierarchies

## Abstract

Volumetric datasets are often modeled using a multiresolution approach based on a nested decomposition of the domain into a polyhedral mesh. Nested tetrahedral meshes generated through the longest edge bisection rule are commonly used to decompose regular volumetric datasets since they produce highly adaptive crack-free representations. Efficient representations for such models have been achieved by clustering the set of tetrahedra sharing a common longest edge into a structure called a diamond. The alignment and orientation of the longest edge can be used to implicitly determine the geometry of a diamond and its relations to the other diamonds within the hierarchy. We introduce the supercube as a high-level primitive within such meshes that encompasses all unique types of diamonds. A supercube is a coherent set of edges corresponding to three consecutive levels of subdivision. Diamonds are uniquely characterized by the longest edge of the tetrahedra forming them and are clustered in supercubes through the association of the longest edge of a diamond with a unique edge in a supercube. Supercubes are thus a compact and highly efficient means of associating information with a subset of the vertices, edges and tetrahedra of the meshes generated through longest edge bisection. We demonstrate the effectiveness of the supercube representation when encoding multiresolution diamond hierarchies built on a subset of the points of a regular grid. We also show how supercubes can be used to efficiently extract meshes from diamond hierarchies and to reduce the storage requirements of such variable-resolution meshes.

## 1 Introduction

Discrete volumetric datasets are often modeled as polyhedral meshes where scalar values are associated with the vertices of the mesh. Due to the increasing sizes of volumetric datasets, a multiresolution model is often used, thus creating a hierarchical spatial decomposition on the vertices of the mesh. Decompositions based on nested cubes, known as octrees, are popular for modeling 3D scalar fields as they allow one to focus resources on regions of interest within the dataset, while aggregating the less relevant regions into larger blocks. However, an octree partitioning may introduce cracks into the scalar field representation. Additionally, variable-resolution representations extracted from octrees have limited adaptability in that each cubic block is replaced with eight new cubes. Longest edge bisection (LEB) hierarchies were introduced as a multiresolution modeling scheme over regular grids to increase adaptability while enforcing crack-free, or conforming, modifications to the model. However, this adaptability increases the number of modeling primitives at full resolution, where each cube is replaced by six tetrahedra. This problem is mitigated by aggregating the set of tetrahedra involved in each modification to the mesh into a modeling primitive called a diamond. The regularity of the decomposition and the structure of the mesh enable implicit encodings for both the hierarchical and geometric relationships among the diamonds in the hierarchy.

Here, we propose the supercube, a high-level primitive for the edges of an LEB hierarchy. A supercube is a structured set of edges within an LEB hierarchy that captures the intrinsic symmetry of the model. Whereas previous representations contain several congruence classes of tetrahedra or diamonds, each with different geometric alignments, supercubes are all identical (up to scale). Diamonds and supercubes are related by a one-to-one correspondence from the longest edge of a diamond to a supercube edge. Thus, the set of edges in a supercube corresponds to all distinct types of diamonds within the hierarchy.

Many operations on LEB hierarchies apply to only a sparse subset of the tetrahedra within the hierarchy. For example, in isosurfacing applications, we are only interested in the subset of tetrahedra that intersect the isosurface. Similarly, large regions of a domain are often oversampled due to the use of a regularly sampled representation. However, implementations of these LEB hierarchies have mostly focused on efficient representations for the entire hierarchy, where the tetrahedra are implicitly indexed. We propose the use of supercubes as containers for data associated with coherent subsets of the elements within an LEB hierarchy. Since this clustering incurs overhead related to the storage of explicit spatial indexes, we consider the number of encoded diamonds with respect to an encoding of the full hierarchy, as well as the number of elements associated with each cluster. We demonstrate the effectiveness of supercubes as a multiresolution model for complete or sparse volume datasets.

The remainder of this paper is organized as follows. We review related work and the longest edge bisection model in Sections 2 and 3, respectively. In Section 4, we introduce the supercube primitive, while, in Section 5, we discuss efficient encodings for supercubes. In Section 6, we discuss an application of supercubes to the encoding of multiresolution models of 3D scalar fields, while in Section 7 we discuss an efficient encoding of the mesh extracted through selective refinement. We present experimental results in Section 8. Finally, we draw some concluding remarks in Section 9.

## 2 Related Work

Multiresolution tetrahedral models for 3D scalar fields based on longest edge bisection (LEB) were introduced for domain decomposition in finite element analysis [10], [17], [22], and have been applied in several contexts, including scientific visualization [31], [23], [6], [8], [16], [25], surface reconstruction [18] and volume segmentation [12].

The containment hierarchy among the tetrahedra in an LEB mesh induces a natural tree representation, in which the nodes are tetrahedra and the two children of a tetrahedron τ are the tetrahedra generated by bisecting τ. If the tree is encoded using an array, the parents and children of a tetrahedron can be implicitly determined by their array indices. This representation is used in [31], [7], [6], [13], [8], [16].

LEB meshes can be non-conforming and, thus, can generate isosurfaces with cracks. This problem can be avoided by simultaneously subdividing all tetrahedra sharing the bisected edge. This can be performed through neighbor finding algorithms [10], [17], [13] or through a precomputed saturated error which implicitly forces all longest edge neighbors to split [7], [6].

An alternative approach is to directly encode the cluster of tetrahedra sharing a longest edge [2], [8], [9], [26], [28]. Gregorski et al. [8] denote such a primitive as a diamond, and propose an implicit representation for the spatial and hierarchical relationships of a diamond based on its level, orientation and position. In our implicit diamond representation, the level and orientation can also be derived from the position of the diamond. The diamond paradigm has been generalized to higher dimensions in connection with adaptive mesh generation [19]. A decomposition of an arbitrary dimensional diamond into a pair of axis-aligned hypercubes is introduced in [28] to yield an implicit representation for diamond-based LEB hierarchies over regular grids.

All the above methods exploit the regularity of the data distribution by encoding the multiresolution model as a regular grid where all vertices are present. They are thus only efficient for representing a complete LEB hierarchy of dimensions (2^N + 1)^3. However, many operations on LEB hierarchies pertain to only a subset of the tetrahedra. To the best of our knowledge, methods to encode an incomplete LEB hierarchy have only been proposed in 2D for terrain modeling [5], [27]. The former representation [5] uses variable-length pointers to encode the number of containment hierarchy nodes that are skipped when a node is not refined. The latter representation [27] uses a similar clustering strategy to the one presented in this paper for encoding an incomplete hierarchy of 2D diamonds at multiple resolutions.

There have been many proposed optimizations to enable interactive visualization of LEB hierarchies including frame-to-frame coherence [3], [8], parallel rendering [7], front-to-back sorting, view-dependent rendering [16], tetrahedral stripping [20], and chunked updates [9], [1]. Since our model is based on the same hierarchies, these optimizations can be easily incorporated into our pipeline.

Octrees are another popular family of spatial data structures used in visualization [29], [4], [21], [11], [24]. The subdivision rule for an octree node n with side length ℒ is to replace n by its 8 children, each with side length ℒ/2. Since LEB hierarchies extract conforming meshes, they are more closely related to restricted octrees [29], [24], where neighboring nodes can differ by at most one level of resolution.

## 3 Longest Edge Bisection and Diamond Hierarchies

In this Section, we review hierarchical spatial decompositions of the domain of a 3D scalar field based on the longest edge bisection operation applied to a tetrahedral mesh partitioning the domain.

A mesh in which all cells are defined by the uniform subdivision of a cell into scaled copies is called a nested mesh. A special class of nested meshes are those generated by bisecting tetrahedra along their longest edge, which we denote as Longest Edge Bisection (LEB) meshes. The bisection rule for a tetrahedron τ in such a mesh consists of replacing τ with the two tetrahedra obtained by splitting τ along the plane defined by the middle point of its longest edge e and the two vertices of τ not adjacent to e. When this rule is applied recursively to an initial decomposition of a cubic domain Ω into six tetrahedra sharing a diagonal of Ω, it generates three congruence classes of tetrahedra, each with a single longest edge. We denote the tetrahedra congruent to those sharing a diagonal of the base cube as 0-tetrahedra. Tetrahedra congruent to those obtained by splitting a 0-tetrahedron are denoted as 1-tetrahedra and have a longest edge aligned with a face diagonal of the base cube. Finally, 2-tetrahedra have a longest edge aligned with an edge of the base cube and are congruent to those obtained by splitting a 1-tetrahedron. Tetrahedra obtained by splitting a 2-tetrahedron are congruent to 0-tetrahedra.
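The bisection rule can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation), representing a tetrahedron as a tuple of four coordinate triples; the helper names are our own.

```python
from itertools import combinations

def longest_edge(tet):
    """Return the pair of vertices forming the longest edge of tet."""
    def sq_len(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return max(combinations(tet, 2), key=lambda e: sq_len(*e))

def bisect(tet):
    """Replace tet with the two tetrahedra obtained by splitting it
    at the midpoint of its longest edge."""
    a, b = longest_edge(tet)
    mid = tuple((ai + bi) / 2 for ai, bi in zip(a, b))
    others = [v for v in tet if v != a and v != b]
    # Each child keeps one endpoint of the old longest edge.
    return (a, mid, *others), (mid, b, *others)
```

Bisecting a 0-tetrahedron of the base cube in this way yields two tetrahedra whose longest edge is a face diagonal, i.e., 1-tetrahedra, matching the cycle of congruence classes described above.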

Applying the longest edge bisection rule to individual tetrahedra does not generate conforming meshes. Thus, we consider a clustering of the tetrahedra that need to be subdivided concurrently. Given an LEB mesh Σ, the set of tetrahedra sharing a common longest edge forms a diamond, which we denote as δ. The longest edge of δ is called its spine. Since all tetrahedra within a diamond share their spine, they all belong to the same class of tetrahedra. As a consequence, there are three congruence classes of diamonds: those with spines aligned with diagonals of an axis-aligned cube (0-diamonds), with face diagonals of such a cube (1-diamonds) or with edges of such a cube (2-diamonds). An i-diamond, for i ∈ {0, 1, 2}, is formed by {6, 4, 8} tetrahedra and contains {8, 6, 10} vertices, respectively. Figure 1 illustrates the three diamond classes and highlights their spines (black edges), parents (hollow gray vertices) and children (red vertices).

Fig. 1. The three classes of diamonds, distinguished by the alignment of their spines (black edges). Central vertices of children coincide with the midpoint of a subset of the diamond's edges (red vertices), while those of parents coincide with a subset of its vertices (hollow gray vertices).

A diamond δ is subdivided by bisecting all of its tetrahedra according to the longest edge bisection rule, thus doubling the number of tetrahedra within δ. We note that all changes to the LEB mesh Σ due to the subdivision of a diamond δ occur within the interior of the domain of δ and, thus, the vertices, edges and faces on the boundary of δ are unaffected by its subdivision. The local effect of such a subdivision is to remove its spine, to add a vertex at the midpoint of its spine, which we denote as the central vertex of the diamond, and to add edges from the central vertex to all other vertices of δ.

Let us consider the collection Δ of all the diamonds associated with the longest edges of the tetrahedra T arising from longest edge bisection applied to a cubic domain Ω. The containment relation between the tetrahedra in T induces a parent-child dependency relation over the diamonds in Δ. A diamond δp is a parent of another diamond δc if and only if some tetrahedra in δc are created by the splitting of at least one tetrahedron in δp. If δp is a parent of δc, then δc is called a child of δp.

The set Δ with the parent-child dependency relation defined over it can easily be shown to define a partial order, and thus it can be described as a Directed Acyclic Graph (DAG) (see [2] for a proof), whose nodes correspond to diamonds and whose arcs are defined by the parent-child dependency relation between diamonds. We call the resulting model a Hierarchy of Diamonds (HD). Due to the regularity of the vertex distribution and the subdivision rule, the DAG associated with Δ has a fixed structure. Thus, with the exception of diamonds whose spines lie on the domain boundary, {0, 1, 2}-diamonds always have {3, 2, 4} parents and {6, 4, 8} children, respectively.

## 4 Supercubes

Since we employ longest edge bisection as the subdivision operation, and diamonds are defined by their spine, it is useful to consider the spatial and hierarchical relationships among the edges of an LEB hierarchy. An analysis of this structure reveals a higher level of symmetry within the hierarchy than that which is apparent at the level of diamonds (see Figure 3). Specifically, each level of resolution is tiled by a repeating pattern of edges arranged in a cubic domain, which we call a supercube.

Supercubes can be defined constructively by their set of edges. If we consider an empty cubic domain, the edges of a supercube σ are formed by

• the edges joining the center of the cube to its eight corners (see Figure 2a),

• the edges joining the center of each of the six faces of the cube to each of the four face corners and to the center of the cube (adding thirty additional edges, see Figure 2b), and

• the edges joining the midpoint of each of the twelve edges of the cube to the two closest cube corners, the two closest face centers and to the cube's center (adding 60 additional edges, see Figure 2c).

Fig. 2. The edges of a supercube σ. (a) Eight edges originating from the center of σ. (b) Six groups of five edges originating from the face centers of σ. (c) Twelve groups of five edges originating from the edge midpoints of σ. The black edges in (a) and (b) are shown only for context.

To ensure that each edge in an LEB hierarchy is only associated with a single supercube, we adopt the common convention used for octrees [24] that a supercube uses half-open intervals, i.e. it contains all internal edges as well as the edges on its lower boundaries, but not the edges on its upper boundaries. Thus, we consider any edge of a supercube σ whose endpoints are both on an upper boundary of σ to belong to a neighboring supercube. Of the 98 edges introduced above, a supercube only contains the 56 edges that satisfy the half-open criterion, and consists of: 8 cube diagonals, 24 face diagonals and 24 cube edges. This is illustrated (in 2D) in Figure 3 at three consecutive levels of resolution, where the solid lines are supercube edges, while the dashed lines on the upper boundaries indicate edges belonging to neighboring supercubes due to the half-open interval rule.

Fig. 3. Supercubes (in 2D) are structured sets of edges tiling each level of resolution within an LEB hierarchy. Three consecutive levels of resolution covering the same domain are shown, containing 1, 4 and 16 supercubes, respectively. Dashed edges are excluded due to the half-open interval rule.

The one-to-one correspondence between edges of an LEB hierarchy and the spines of diamonds provides a unique association from each diamond to a single supercube. This correspondence enables the use of supercubes to associate information with a coherent subset of the diamonds while minimizing the geometric overhead required to access this information. This is facilitated by an efficient encoding for supercubes, as described in the following Section.

## 5 Encoding Supercubes

In this Section, we discuss an encoding for supercubes within a hierarchy of diamonds Δ built on a grid with (2^N + 1)^3 points. We assume that all vertices of the diamonds in Δ have integer coordinates in the range [0..2^N].

### 5.1 Encoding diamonds

Due to the one-to-one correspondence between diamonds and grid points, a diamond δ ∈ Δ can be uniquely indexed by its central vertex. We consider the binary representation of the coordinates vx, vy and vz of the central vertex vc as

$$
\mathbf{v}_c = \left[\begin{array}{l}
v_x = x_1 x_2 \ldots x_m \; d_{x_1} d_{x_2} \; 0 0 \ldots 0 \\
v_y = y_1 y_2 \ldots y_m \; d_{y_1} d_{y_2} \; 0 0 \ldots 0 \\
v_z = \underbrace{z_1 z_2 \ldots z_m}_{\sigma} \; \underbrace{d_{z_1} d_{z_2}}_{\tau} \; \underbrace{0 0 \ldots 0}_{\gamma}
\end{array}\right] \tag{1}
$$

The scale γ of a diamond δ is the minimum number of trailing zeros among its three coordinates. Thus, in any diamond δ at scale γ, the rightmost γ bits in each of vx, vy and vz are zero, but at least one of the bits at position γ + 1 (i.e., dx2, dy2 or dz2 in Equation (1)) is nonzero. The level of a diamond δ at scale γ is ℓ = N − γ, and equals the number of i-diamond ancestors of δ in the DAG, i.e., its depth in the DAG divided by three.

The two bits at positions γ + 1 and γ + 2 in each coordinate of vc encode the diamond type τ. Since the type τ is encoded using 2 bits in each dimension, there are 4^3 = 64 possible values for τ. However, the definition of γ precludes the eight cases where dx2, dy2 and dz2 are all zero. Thus, there are 56 valid diamond types, each corresponding to a distinct supercube edge. Finally, the number i of zeros at position γ + 1 of vc encodes the congruence class of δ.

The leading m = N − (γ + 2) bits in each of the coordinates encode the origin of the supercube σ containing δ, i.e., its lower-left corner. Since there are no restrictions on the values of these bits, the origins of supercubes at a fixed level of resolution ℓ = m + 2 are points on a regular 3D grid that has been scaled by a factor of 2^(γ+2). Note that scale, type and supercube origin can be efficiently extracted from the binary representation of the central vertex of a diamond through bit shifting operations. Supercubes contain all unique types of diamonds within an LEB hierarchy, since the type τ of a diamond δ is determined by the supercube edge with which its spine coincides. Thus, a diamond's type τ encodes the scaled offset of its central vertex from the supercube origin, i.e., vc = σ + (τ ≪ γ), where '≪' indicates a component-wise left shift (see Figure 4).
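As a concrete illustration, the bit-level decomposition can be sketched as follows; `decode` and `diamond_class` are hypothetical helpers of our own, not the paper's code.

```python
def decode(vc, N):
    """Split a central vertex vc = (vx, vy, vz), with coordinates in
    [0..2**N], into (scale, type, supercube origin); cf. Equation (1)."""
    def tz(v):                     # trailing zeros, capped at N for v == 0
        return N if v == 0 else (v & -v).bit_length() - 1
    gamma = min(tz(v) for v in vc)                                # scale
    tau = tuple((v >> gamma) & 3 for v in vc)                     # 2-bit type per axis
    sigma = tuple((v >> (gamma + 2)) << (gamma + 2) for v in vc)  # origin
    return gamma, tau, sigma

def diamond_class(tau):
    """Congruence class i = number of zero bits at position gamma + 1."""
    return sum(1 for c in tau if (c & 1) == 0)
```

For example, with N = 5 the central vertex (12, 8, 20) decodes to scale γ = 2, type τ = (3, 2, 1) (a 1-diamond, since one low bit is zero) and origin σ = (0, 0, 16), and σ + (τ ≪ γ) recovers the vertex.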

Fig. 4. (a) A diamond's spine coincides with an edge (solid lines) of a single supercube. (b) Alternatively, a diamond's central vertex lies at the midpoint of a supercube edge.

### 5.2 Internal map of a supercube

Since supercubes are intended to efficiently associate information with a subset of the diamonds of an LEB hierarchy, they require some form of bookkeeping to index the encoded diamonds.

A simple approach would be to encode this data as an array with one entry for each of the 56 possible diamonds in the supercube. The data associated with each diamond is then indexed by its type τ, and unencoded diamonds are marked in place as missing. However, this can waste a considerable amount of storage for supercubes with many unencoded diamonds.

A more efficient approach for static representations is to index the encoded diamonds using a bitflag of 56 bits (i.e. 7 bytes) along with a corresponding array with storage only for the encoded diamonds. The data associated with a diamond of type τ is indexed in the array by the prefix sum of τ in the bitflag. Since prefix sum computations can be performed efficiently in hardware, the processing overhead of this representation compared to that of the simpler encoding above is negligible. Each supercube in this representation incurs a fixed storage overhead, regardless of the number of diamonds encoded. Thus, the overhead of this representation is reduced as the average concentration of encoded diamonds per supercube increases.
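A minimal sketch of this bitflag scheme follows, assuming diamond types have already been mapped to indices 0..55; the class and method names are our own.

```python
class SupercubeMap:
    """Internal map of a supercube: a 56-bit presence flag plus a packed
    array holding data only for the encoded diamonds."""

    def __init__(self):
        self.flags = 0    # 56-bit presence mask
        self.data = []    # packed per-diamond payloads

    def _rank(self, tau):
        # Prefix sum of the flag bits below tau = slot in the packed array.
        return bin(self.flags & ((1 << tau) - 1)).count("1")

    def insert(self, tau, payload):
        assert 0 <= tau < 56 and not (self.flags >> tau) & 1
        self.data.insert(self._rank(tau), payload)
        self.flags |= 1 << tau

    def get(self, tau):
        if not (self.flags >> tau) & 1:
            return None   # diamond of type tau not encoded
        return self.data[self._rank(tau)]
```

Here the prefix sum is computed with a masked popcount, mirroring the hardware prefix-sum computation mentioned above.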

Since this bitflag representation requires a reorganization of the array data every time a diamond is added or removed, we use the array representation during the initial generation of the data, and convert to the bitflag representation immediately thereafter.

### 5.3 Encoding collections of supercubes

We observe that, within a given level of resolution ℓ, supercubes can be uniquely indexed by their origin. However, supercubes from different levels can map to the same origin (see Figure 3).

We propose a two-step access structure, where supercubes are first indexed by their level of resolution, and then by their origin. This also enables the level ℓ of a supercube to be implicitly encoded within the access structure.

The encoded supercubes at a given level of resolution belong to a uniform grid that has been scaled by a factor of 2^(γ+2). Thus, depending on the data distribution, we have several options for access structures to the supercubes. When the majority of the data within a given level is present, the supercubes can be indexed using a full array. However, most of the time, this will not be the case. An MX-Octree [24] is a hierarchical spatial access structure defined on grids where data is only associated with the leaf nodes of the complete tree, and adjacent unencoded elements can be aggregated to reduce the wasted space. The advantage of such a structure for supercubes is that the higher levels of resolution contain fewer elements, and enable quicker access. A third option is to organize the supercubes in a hash table to provide O(1) access.

Thus, the data associated with a diamond δ with supercube origin σ, type τ and scale γ can be accessed in three steps. First, the set of supercubes at level ℓ = N − (γ + 2) is located. Next, the supercube σ within this set is found via its origin. Finally, the data at location τ within σ is found using the internal map of σ.
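The three-step lookup can be sketched with plain dictionaries standing in for the per-level access structures (full array, MX-octree or hash table); this is an illustration under that assumption, not the paper's implementation.

```python
# `hierarchy` maps a level index to a dict of supercubes keyed by origin;
# each supercube is here just a dict keyed by the per-axis 2-bit type.
def lookup(hierarchy, vc, N):
    def tz(v):                     # trailing zeros, capped at N for v == 0
        return N if v == 0 else (v & -v).bit_length() - 1
    gamma = min(tz(v) for v in vc)
    level = N - (gamma + 2)
    tau = tuple((v >> gamma) & 3 for v in vc)
    sigma = tuple((v >> (gamma + 2)) << (gamma + 2) for v in vc)
    per_level = hierarchy.get(level)         # step 1: locate the level
    if per_level is None:
        return None
    supercube = per_level.get(sigma)         # step 2: locate the supercube
    if supercube is None:
        return None
    return supercube.get(tau)                # step 3: internal map lookup
```

For example, with N = 5 and `hierarchy = {1: {(0, 0, 16): {(3, 2, 1): "data"}}}`, the central vertex (12, 8, 20) is resolved through level 1 and origin (0, 0, 16) to the payload "data".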

### 5.4 Extension to higher dimensions

A closer inspection of Equation (1) reveals that supercubes can be defined in a dimension-independent manner. Thus, the binary representation of the central vertex provides all information for retrieving geometric and hierarchical information as in the 3D case. Consider a regular grid in a d-dimensional cubic domain with (2^N + 1)^d vertices. With the exception of the 2^d vertices at the domain corners, each vertex corresponds to the central vertex of a d-dimensional diamond.

The type τ of a diamond δ at scale γ is thus encoded by the two bits at positions γ + 1 and γ + 2 in the binary representation of the central vertex of δ. There are four possible values for each of the d components of τ and, thus, 4^d possibilities for τ. The values at position γ + 2 are unrestricted, but the definition of diamond scale invalidates any case where all the bits at position γ + 1 are zero. Since there are 2^d such cases, there are (4^d − 2^d) valid d-dimensional diamond types.

Similarly, since the class of a diamond is encoded within the diamond type τ by the number of zeros in the lower bit of each component, we can determine the number of distinct diamond types in each class in d dimensions. Since there are C(d, i) combinations of i zeros in the d lower bits, and 2^d arrangements in the upper bits of τ, there are 2^d · C(d, i) distinct types of i-diamonds in dimension d, for 0 ≤ i < d.
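These counting formulas are easy to verify programmatically; a small sketch using Python's `math.comb`:

```python
from math import comb

def num_types(d):
    """Total valid diamond types in dimension d: 4^d - 2^d."""
    return 4 ** d - 2 ** d

def num_class_types(d, i):
    """Number of i-diamond types in dimension d: 2^d * C(d, i)."""
    return 2 ** d * comb(d, i)
```

In 3D this recovers the supercube census 8 + 24 + 24 = 56, and in every dimension the per-class counts sum to 4^d − 2^d.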

## 6 Diamond-Based Multiresolution Volumes

One of the primary visualization applications of diamond hierarchies has been as a multiresolution model for a volume dataset defined at the vertices of a regularly sampled scalar field F. We call this model a Diamond-based Multiresolution Volume (DMV).

A multiresolution model [2], [15] is a static structure that consists of a coarse base mesh, a set of atomic modifications and a dependency relation on the modifications defining a partial order. In the case of a DMV, the base mesh is a coarse LEB mesh and the modifications correspond to the diamonds. Since each diamond corresponds to its central vertex, the vertices are ordered according to the dependency relation of a hierarchy of diamonds Δ. Thus, the spatial decomposition and dependency relation of a DMV are obtained from the implicit relationships among the diamonds in Δ, and only the modifications need to be explicitly encoded.

The minimal information encoded in a diamond δ is given by the scalar value F(vc) associated with the central vertex vc of δ. In addition to encoding F(vc), each diamond usually encodes aggregate information about the field values within the domain of δ, which can be used to accelerate mesh extraction. The error ε(δ) associated with diamond δ encodes the maximum approximation error for any point within the domain of δ, i.e.,

$$
\varepsilon(\delta) = \max_{p \in \delta} \varepsilon(p), \tag{2}
$$

where ε(p) is the absolute difference between the field value at point p and the approximated value obtained through barycentric interpolation of the field values at the vertices of δ. The range of values within the domain of δ is also usually maintained. This is used to cull irrelevant regions during field-based queries [30].
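Restricted to a single tetrahedron, the error computation can be sketched as follows; this is our own illustration (not the paper's code), with the field F given as a dictionary from grid points to values and the sample points supplied explicitly.

```python
def barycentric(tet, p):
    """Barycentric coordinates of p w.r.t. tetrahedron tet (4 vertices),
    via signed volumes (Cramer's rule on the 3x3 edge matrix)."""
    a, b, c, d = tet
    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    def det3(u, v, w):
        return (u[0] * (v[1] * w[2] - v[2] * w[1])
                - u[1] * (v[0] * w[2] - v[2] * w[0])
                + u[2] * (v[0] * w[1] - v[1] * w[0]))
    vol = det3(sub(b, a), sub(c, a), sub(d, a))
    wb = det3(sub(p, a), sub(c, a), sub(d, a)) / vol
    wc = det3(sub(b, a), sub(p, a), sub(d, a)) / vol
    wd = det3(sub(b, a), sub(c, a), sub(p, a)) / vol
    return (1 - wb - wc - wd, wb, wc, wd)

def tet_error(tet, F, samples):
    """max |F(p) - linear interpolation| over the given sample points."""
    vals = [F[v] for v in tet]
    def interp(p):
        return sum(w * f for w, f in zip(barycentric(tet, p), vals))
    return max(abs(F[p] - interp(p)) for p in samples)
```

The diamond error of Equation (2) is then the maximum of `tet_error` over the tetrahedra of δ and all field samples within their domains.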

A full DMV, which we denote as Δf, contains diamonds corresponding to all vertices of a scalar field of dimensions (2^N + 1)^3. The base mesh of Δf is a single 0-diamond δ covering the entire cubic domain Ω. The eight corner points of Ω (i.e. the vertices of δ) are the only points within Δf that do not correspond to diamonds.

A full DMV Δf can be encoded as a three-dimensional array whose entries represent the information associated with each diamond and can be indexed using a C-style row major ordering, or a more complicated indexing scheme such as a hierarchical space-filling curve [8].

However, when some of the vertices of a full DMV Δf are unavailable or irrelevant for an intended application, a partial DMV, which we denote as Δp, can be much more efficient to encode than Δf. The base mesh of a partial DMV Δp is a coarse LEB mesh consisting of diamonds from a corresponding hierarchy of diamonds Δ, whose vertices are tagged with values from the scalar field F. The diamonds in Δp are a subset of the diamonds of Δ, subject to the transitive closure constraint that if a diamond δ belongs to Δp then all ancestors of δ belong to Δp as well. Finally, the dependency relation of Δp is the dependency relation of Δ restricted to the diamonds in Δp.

A straightforward representation for Δp is a diamond-based one, where each encoded diamond must explicitly maintain the coordinates of its central vertex in addition to its encoded data. However, due to the transitive closure constraint of the partial DMV model, the encoded diamonds exhibit both a spatial and a hierarchical coherence, which can be exploited by clustering the diamonds into supercubes. Since Δp is static, and typically sparse with respect to a corresponding full DMV, we represent the internal map within supercubes using the bitflag encoding of Section 5.2. The supercubes at each level are indexed by the coordinates of their origin.

## 7 Encoding an Active Front

Selective refinement is the general operation of extracting a variable-resolution mesh from a multiresolution model [15]. It is defined by an application-dependent predicate called the Level of Detail (LOD) criterion, which determines the minimum set of modifications necessary to generate a mesh of the required resolution. The LOD criterion can be based on many factors, including approximation error, field-values (e.g. isosurface extraction), location (e.g. view-dependent and region of interest queries) and perception (e.g. screen-space error). Selective refinement is performed by traversing the DAG describing the multiresolution model either in a top-down manner starting from the base mesh or incrementally from an already extracted mesh. The status of the refinement process is described by a cut of the DAG, called the active front, which separates the set of modifications that have been applied from those that have not.

When selective refinement is performed on a DMV, the active front describes a conforming tetrahedral mesh Σ, which we call the current mesh, and which can be used to visualize an approximation of the dataset, e.g. to extract an isosurface or for direct volume rendering. In general, a diamond in the current mesh Σ will not contain all of its tetrahedra, and thus diamonds need to track which of their tetrahedra are present in Σ (see Figure 5a for an example in 2D, where, e.g., the blue diamonds contain only one of their two triangles). A diamond in the active front cannot be subdivided until all of its parents have been subdivided and, thus, all of its tetrahedra are present in Σ.

Fig. 5. (a) A portion of a current mesh Σ (in 2D). (b) Highlighted triangles from Σ map to triangles in the supercube σ. Gray triangles map to other supercubes. Diamonds in Σ rarely contain all their triangles. (c) The set of all triangles (tetrahedra in 3D) in a supercube overlap.

We have observed [26] that in a hierarchy of diamonds, a diamond with k parents has 2k tetrahedra, and that, upon subdivision, each parent of a diamond δ contributes a pair of tetrahedra to δ. Thus, since 0-diamonds have three parents, 1-diamonds have two parents and 2-diamonds have four parents, bitflags with three, two and four bits, respectively, are sufficient to track the subdivided parents of diamonds as well as their tetrahedra in Σ. Since a diamond cannot be subdivided until all of its parents have been subdivided, the former property can be used to accelerate the extraction process, while the latter property can be used to visualize the extracted mesh Σ.

A straightforward representation for encoding an active front utilizes a hash table of diamonds. Each diamond δ is indexed by its central vertex vc, and contains a set of bitflags tracking its subdivided parents as well as any additional information that must be encoded for the diamond. This representation requires 7 bytes of overhead for each encoded diamond: 6 bytes for the coordinates of its central vertex and one additional byte of bookkeeping.

However, there is a considerable amount of coherence among the tetrahedra in Σ that a diamond-based representation of an active front cannot exploit. Due to the LEB subdivision rule, neighboring tetrahedra in Σ can differ by at most one level of refinement, so the presence in Σ of a tetrahedron from diamond δ often indicates the presence of tetrahedra from neighboring diamonds in the hierarchy. We therefore propose a supercube-based representation for active fronts extracted from a DMV. Recall that a supercube indexes 56 diamonds, of which

• 8 are 0-diamonds, each with 6 tetrahedra,

• 24 are 1-diamonds, each with 4 tetrahedra and

• 24 are 2-diamonds, each with 8 tetrahedra.

Thus, each supercube indexes up to 336 tetrahedra. Note, however, that these tetrahedra overlap (see Figure 5c), but tetrahedra in a conforming mesh cannot overlap. Due to the containment relation among the tetrahedra in the hierarchy, the presence of a tetrahedron t in Σ precludes the presence of its parent tetrahedron and both of its children tetrahedra from Σ. Thus, in practice, a supercube in Σ contains significantly fewer than the 336 possible tetrahedra (see Figure 5b).

Since tetrahedra contribute to diamonds of Σ in pairs, we can track the tetrahedra due to a single supercube in the active front Σ using 336/2 = 168 bits = 21 bytes. Our proposed supercube-based representation for an active front therefore requires 27 bytes of overhead per supercube: 6 bytes to index its origin and 21 bytes of bookkeeping.
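The bookkeeping arithmetic above can be checked with a short sketch. The bit layout (type-0 pairs first, then type-1, then type-2, each diamond's pairs contiguous) is an assumption for illustration; only the counts (8/24/24 diamonds with 6/4/8 tetrahedra) are from the text.

```python
# Each supercube tracks its (up to) 336 tetrahedra as 168 tetrahedron
# PAIRS (tetrahedra enter diamonds in pairs), so one supercube needs a
# 168-bit mask = 21 bytes of bookkeeping.

# (diamond count, tetrahedra per diamond) for diamond types 0, 1, 2
DIAMONDS = [(8, 6), (24, 4), (24, 8)]

PAIRS = sum(n * t // 2 for n, t in DIAMONDS)   # 168 pairs
MASK_BYTES = PAIRS // 8                        # 21 bytes

def pair_bit(diamond_type, diamond_index, pair_index):
    """Bit position of one tetrahedron pair in a supercube's mask
    (hypothetical layout: diamond types stored in order 0, 1, 2)."""
    offset = 0
    for t, (n, tets) in enumerate(DIAMONDS):
        if t == diamond_type:
            return offset + diamond_index * (tets // 2) + pair_index
        offset += n * (tets // 2)

def set_pair(mask, bit):
    return mask | (1 << bit)
```

Note that the last valid bit is 167, so the whole mask fits exactly in the 21 bytes claimed above.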

SECTION 8

## Results

In this section, we evaluate the effectiveness of supercubes as a partial DMV representation and as an active front representation across a testbed of volume datasets of resolution up to 512^3. All experiments were run on a 2 GHz Intel Core 2 Duo laptop with 4 GB of RAM.

We first consider when it is appropriate to represent a dataset using a partial diamond hierarchy. This is measured in terms of the density of the dataset, i.e. the percentage of samples from a full hierarchy that are retained in the partial representation. Next, we analyze when supercube-based representations of a partial hierarchy are appropriate. For this, we consider the concentration of the clustering, that is, the average number of diamonds encoded per supercube. A supercube-based representation for a partial hierarchy of diamonds provides the maximum benefit when the desired dataset is sparse with respect to the full dataset and concentrated with respect to the supercube clustering.

Clustered representations typically incur some computational overhead related to the extra level of indirection required to index the data. We thus compare the runtime performance of the DMV representations during selective refinement queries. Since the efficiency of a selective refinement query depends on the representation of its associated active front, we conclude by comparing the supercube-based and diamond-based active front representations of Section 7 in terms of performance and storage costs.

We begin by introducing some notation. Let Δf denote the full DMV, containing nf = (2^N + 1)^3 diamonds, and let Δp denote the desired partial DMV, containing np diamonds. Δp can be encoded using a diamond-based partial DMV Δd or a supercube-based representation Δs, whose np diamonds are clustered into ns supercubes. Finally, let bδ denote the number of bytes required to encode the data associated with each diamond, bv the number of bytes required to encode the coordinates of the central vertex of each diamond, and bσ the number of bytes required to encode the indexing and bookkeeping information associated with each supercube.

We compare the costs of these representations in Table 1 with respect to an ideal representation Δp, which only represents the np diamonds. This representation is not practical since it has no way of indexing the encoded diamonds, but we use it as a baseline for the remaining representations. Δf must encode all nf samples, but the indexing of its elements is implicit; however, it encodes nf − np extraneous diamonds. In contrast, Δd encodes only the np diamonds but must also explicitly encode the spatial coordinates of each diamond. Finally, the overhead in Δs can be attributed entirely to the ns supercubes.

TABLE 1 Storage requirements and overhead, in bytes, for the various DMV representations. Overhead is relative to Δp.

Using this notation, we define the density ρ = np/nf of the dataset as the ratio of retained diamonds in Δp compared to Δf. Also, we define the concentration κ = np/ns of the dataset as the average number of diamonds per supercube. We note that κ ∈ [1, 56] since we only encode supercubes that contain at least one diamond. By rearranging the equations in Table 1 and substituting terms for ρ and κ, we can compare the representations. Δs is more compact than Δf when $$\rho < \frac{b_\delta}{b_\delta + b_\sigma/\kappa},$$ Δd is more compact than Δf when $$\rho < \frac{b_\delta}{b_\delta + b_v},$$ and Δs is more compact than Δd when κ > bσ/bv. However, since all representations must encode the np diamonds, a more relevant measure of the effectiveness of each representation is its overhead with respect to Δp (third column of Table 1). While Δd has a constant overhead of bv bytes per diamond, the overhead in Δs is related to κ as (bσ/κ) bytes per diamond.
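The three storage costs from Table 1 are simple enough to state directly in code. A minimal sketch (function names and the sample values in the test are illustrative, not from the paper):

```python
# Storage-cost model from Table 1, using the symbols of the text:
# n_f, n_p, n_s = diamonds in the full/partial DMV and supercubes;
# b_delta, b_v, b_sigma = bytes per diamond payload, per central
# vertex, and per supercube (index + bookkeeping).

def size_full(n_f, b_delta):
    # Δf: every sample encoded, indexing implicit
    return n_f * b_delta

def size_diamond(n_p, b_delta, b_v):
    # Δd: only retained diamonds, but an explicit vertex per diamond
    return n_p * (b_delta + b_v)

def size_supercube(n_p, n_s, b_delta, b_sigma):
    # Δs: retained diamonds plus per-supercube overhead only
    return n_p * b_delta + n_s * b_sigma
```

Dividing the Δd and Δs costs shows the crossover directly: Δs wins exactly when the concentration κ = n_p/n_s exceeds b_sigma/b_v.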

As a concrete example, let the size of each refinement be bδ = 4 bytes as in [8]. Further, assume vertices are encoded in 6 bytes as three unsigned shorts, so bv = 6 bytes. Finally, let bσ = 17 bytes, consisting of: the origin of the supercube (6 bytes), bitflags to indicate the encoded diamonds (7 bytes) and a pointer to an array containing the data (4 bytes). Then, in terms of density and concentration, Δs is more compact than Δf when ρ < 4/(4 + 17/κ), and more compact than Δd when κ > 17/6 ≈ 2.83. The gray curves in Figures 6 and 7 separate the half-spaces in which Δs is more compact than Δf by the constant to its right. For example, when κ = 17 and ρ = 0.2, Δf requires four times as much space to encode as Δs.

Fig. 6. Density and concentration of partial DMV datasets of Table 2 extracted from a complete DMV with uniform error greater than 0% (red) and 1% (orange). Gray curves indicate the factor by which a supercube-based partial DMV Δs is more compact than a full DMV Δf.

As a first application, consider a partial DMV Δp generated from a full DMV Δf by retaining all diamonds whose error is greater than a given threshold ε. Since Δp corresponds to a partial hierarchy of diamonds, it must also retain all ancestors of the retained diamonds. This ensures the transitive closure of the diamond dependency relation (as described in Section 6). When ε = 0, this generates a lossless encoding of Δf, i.e. Δf can be reconstructed from Δp without any error. Table 2 lists the number of elements and storage costs in a zero-error partial DMV Δp for various datasets, as well as their density ρ and concentration κ. These datasets are plotted in Figure 6 for ε = 0% and ε = 1%. We observe that some datasets, such as Fuel and Aneurism, are extremely sparse, and achieve a 12.2 times and 17.3 times reduction in storage requirements, respectively, compared to Δf.
In contrast, other datasets such as Plasma and Buckyball are quite dense, and thus a partial representation does not yield significant savings compared to Δf. However, even for these datasets, the size of Δs is close to that of Δf (requiring 4% more and 19% less space, respectively), whereas Δd is much larger (requiring 2.4 times and 1.9 times more space, respectively). Most of the remaining datasets achieve around three times savings for Δs compared to Δf (see last column in Table 2).

Since bv (6 bytes) is 1.5 times as large as bδ (4 bytes), the overhead associated with Δd compared to the ideal representation Δp is 150%. In contrast, the overhead of Δs is related to the concentration of the supercube clustering, and averages around 12% across all datasets. Thus, the 2.25 times savings achieved by Δs compared to Δd is entirely due to the difference in storage overhead.

Partial hierarchies can also be used to reduce the storage requirements and mesh extraction times required for isosurface extraction when the (set of) isovalue(s) can be determined in advance. In iso-surfacing applications, the active cells, i.e. those that intersect the iso-surface, typically occupy a sparse but spatially widespread subset of the domain. Since isosurfaces are continuous, there is a great deal of spatial and hierarchical coherence among the active cells.

TABLE 2 DMVs generated based on uniform field error with ε = 0. File sizes for Δf, Δd and Δs are listed in MB (1 MB = 1024^2 B). All datasets are plotted on Figure 6 (red circles).

We can thus generate an isovalue-based partial DMV Δp from Δf, where all diamonds whose range intersects the predetermined isovalue(s) are retained, while those not intersecting the isovalue(s) are only retained if they are ancestors of the required diamonds. Δp can then be queried using selective refinement to extract adaptive tetrahedral meshes and isosurfaces. This model thus trades fidelity in regions away from the desired isosurface for storage and extraction efficiency of the desired isosurface(s).

Table 3 lists the number of elements and the storage requirements for the three DMV representations for each isovalue-based dataset (the values of N and nf can be found in Table 2). The density and concentration of these datasets are plotted in Figure 7. We observe that these extracted partial DMVs are indeed sparse with respect to Δf, averaging around 5% of the samples and often much less. They are also quite concentrated with respect to the supercube clustering, with an average concentration of 26 out of a possible 56 diamonds per supercube. Thus, supercube-based partial DMVs of these datasets require an average of 25 times less storage than their corresponding full DMVs. In fact, the largest dataset Xmas{868} (orange square in Figure 7) requires only 1.3% of the samples of Δf and is over 65 times more compact.

As in the error-based partial DMVs, the supercube-based encodings are approximately 2.3 times smaller than a corresponding diamond-based DMV, and have very low overhead (around 13%) compared to the ideal representation Δp.

We note that, when more than one isovalue is desired (as in the set of Engine datasets on the bottom of Table 3, and the corresponding colored rhombuses in Figure 7), there is also a significant amount of coherence among the active cells of distant isovalues (e.g. 58 and 170). Thus, the supercube-based representation for Engine{58, 100} requires only 15% more storage space than either of the individual datasets Engine{58} or Engine{100}, and has a higher concentration than either of them. This advantage is increased in Engine{58, 100, 170} as the samples from a third isovalue are added, where the density only increases by 3% and the supercube concentration increases by 0.8.

Fig. 7. Density and concentration of partial DMV datasets of Table 3 containing all diamonds intersected by the specified isovalue(s).

We now compare the runtime performance of the three DMV representations, Δf, Δd and Δs, by comparing the rates at which they can process diamonds during selective refinement queries. Since an active front facilitates selective refinement, we evaluate the performance of the two active front representations introduced in Section 7. Recall that the active front of a selective refinement query on a diamond hierarchy corresponds to a tetrahedral mesh called the current mesh Σ. This mesh can be represented using a diamond-based representation, which we denote as Σd, or through a supercube-based representation which we denote as Σs.

TABLE 3 DMVs generated based on specific isovalue(s). File sizes are listed in MB (1 MB = 1024^2 B). All datasets are plotted on Figure 7.
TABLE 4 Selective refinement performance of the three DMV representations using a diamond-based active front representation Σd and a supercube-based active front representation Σs, in terms of the minimum, maximum and average number of diamonds visited per second.

Since selective refinement queries depend on the specific LOD criterion used, we evaluate the performance of each structure in terms of the number of diamonds visited by the selective refinement query per second. In Table 4, we present the aggregate results over our testbed of datasets for an error-based isosurface extraction query, and note that we observed the same trends when using different queries, such as region-of-interest and approximation-error queries. The LOD criterion for this query selects all diamonds with approximation error greater than some threshold ε and containing a particular isovalue k. As a partial DMV for this query, we use the datasets generated in Table 3.

For these experiments, we implemented Δd using a hash table from the central vertex of each diamond in Δd to the data associated with it. This incurs a storage overhead inversely proportional to the load factor of the hash table, i.e., the ratio of diamonds in Δd to buckets in the hash table. Across all datasets tested, we found the load factor to average 73.5% (with a standard deviation of 16%). Thus, the hash-indexed Δd requires an average memory overhead of 36% compared to the values listed in Table 3. Similarly, for Δs, we used a separate hash table for each level of supercubes, and indexed the data associated with each supercube by its origin (as described in Section 5.3). We found the load factor to average 75% (with a standard deviation of 11%) across the datasets, thus requiring an average memory overhead of 33% compared to the values listed in Table 3.
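The overhead figures above follow directly from the load factors: with load factor f (entries per bucket), buckets outnumber entries by a factor of 1/f, giving a relative overhead of 1/f − 1. A one-line sketch (the function name is ours):

```python
def hash_overhead(load_factor):
    """Relative memory overhead of a hash table at a given load factor:
    unused buckets relative to stored entries, i.e. 1/f - 1."""
    return 1.0 / load_factor - 1.0

# hash_overhead(0.735) is about 0.36 and hash_overhead(0.75) is about
# 0.33, matching the 36% and 33% figures quoted for Δd and Δs.
```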

We evaluate the performance of the DMV representations (i.e. the rows of Table 4) by comparing the average number of diamonds processed per second. Recall that, due to the query type and the transitive closure of Δd and Δs, all three representations process the same set of diamonds and yield the same result. We first observe that all three representations yield similar performance results of about 300,000 diamonds per second. Δf is the fastest DMV representation, since it can directly access its diamonds using the array location of their central vertices. Δd is approximately 5-6% slower than Δf due to its use of indirect hashing, while Δs is around 7.5% slower than Δf. Thus, despite the extra processing it requires, such as extracting the supercube origin and diamond type and the prefix sum calculation, the performance of the supercube-based Δs is within 2% of that of the diamond-based Δd.

Next, we evaluate the relative performance of the two active front representations Σs and Σd by comparing columns 4 and 7 (AVERAGE) of Table 4. We find that the supercube-based active front representation Σs is, on average, 1-2% more efficient than the diamond-based active front representation Σd. Although this is not a significant difference, we note that the addition (removal) of any diamond to (from) Σd incurs a memory allocation (deallocation), whereas, due to the supercube clustering, such allocations (deallocations) are rarer for Σs.

Finally, we evaluate the sizes of the two active front representations. Recall that the supercube-based active front representation Σs requires 27 bytes of overhead per supercube. Over the entire test, Σs averaged 26.5 tetrahedra per supercube (with a standard deviation of 3.1). Thus, the supercube-based active front Σs incurs an overhead of around 1 byte per tetrahedron in the active front. In contrast, the diamond-based active front representation Σd requires 7 bytes of overhead per diamond. Over the entire test, Σd averaged 3.3 tetrahedra per diamond (with a standard deviation of 0.55). Thus, the diamond-based active front incurs an overhead of around 2.16 bytes per tetrahedron in the active front.
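The per-tetrahedron figures above are simply the fixed per-entry cost divided by the average number of tetrahedra per entry. A sketch (the function name is ours; the paper's 2.16 figure presumably comes from averaging the per-dataset ratios rather than dividing the global averages):

```python
def per_tet_overhead(bytes_per_entry, avg_tets_per_entry):
    """Active-front overhead in bytes per tetrahedron: the fixed cost
    of one front entry amortized over its average tetrahedra."""
    return bytes_per_entry / avg_tets_per_entry

supercube_front = per_tet_overhead(27, 26.5)  # about 1 byte per tet
diamond_front = per_tet_overhead(7, 3.3)      # roughly 2.1 bytes per tet
```

The supercube front's advantage comes entirely from amortization: its 27-byte entry covers roughly eight times as many tetrahedra as a 7-byte diamond entry.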

We implemented both Σs and Σd using hash tables (analogously to our indexing of the partial DMVs above). Across all datasets, we achieved an average load factor of 74% for Σd (with a standard deviation of 15%), and thus the hash-indexed Σd incurred a memory overhead of around 36%. The average load factor for Σs was 72% (with a standard deviation of 11%), requiring an average overhead of 39%. Thus, a supercube-based active front representation Σs can be used to extract an equivalent mesh from a DMV as a diamond-based representation Σd in slightly less time and using less than half the storage.

Figure 8 illustrates the clustering of tetrahedra in a supercube-based active front by the color of their isosurface triangles.

Fig. 8. Isosurfaces extracted from DMV models. Triangles are colored by their embedding supercube. (a) Tooth dataset with uniform error ∊ > 10%. A supercube-based active front representation Σs has an average of 26.9 tetrahedra per supercube yielding an average overhead of 1 byte per tetrahedron. The diamond-based active front Σd has an average of 2.9 tetrahedra per diamond yielding an overhead of 2.4 bytes per tetrahedron. (b) Fuel dataset with uniform error ∊ > .03%. Σs has an average of 25.6 tetrahedra per supercube yielding an average overhead of 1.05 bytes per tetrahedron. Σd has an average of 3.3 tetrahedra per diamond yielding an overhead of 2.12 bytes per tetrahedron.
SECTION 9

## Concluding Remarks

We have proposed the supercube as a high-level primitive for compactly encoding nested tetrahedral meshes generated through longest edge bisection. Supercubes represent the basic unit of symmetry within the hierarchy of diamonds model, and encode all distinct types of diamonds within the hierarchy (up to scaling factors). Supercube clustering provides an efficient means of associating information with a subset of the diamonds within an LEB hierarchy by exploiting the spatial and hierarchical coherence within the dataset.

We have demonstrated that several visualization datasets are over-sampled by a factor of three or more, while at the same time the retained elements have a high degree of coherence with respect to supercube clustering. Thus, a supercube-based partial DMV is an effective multiresolution representation that efficiently supports selective refinement queries with very little geometric or computational overhead. We have also demonstrated that a supercube-based active front representation can accelerate selective refinement while requiring less than half the storage of a diamond-based active front data structure.

Here, we have focused on the implementation of the internal map between supercubes and their associated data. However, since our indexing structure for the supercubes at each level utilizes a hash table (see the end of Section 8), our partial DMV representations can be inflated by as much as 40%. This can reduce the benefits of a supercube-based DMV Δs relative to a full DMV Δf when their relative differences are less pronounced. However, when the relative sizes are more significant, a hash-indexed Δs still provides a significant advantage over Δf. Additionally, we have demonstrated that hash-indexed DMV and active front representations based on diamonds suffer from similar or worse overhead than their supercube-based counterparts.

In future work, we intend to focus on the indexing structure for diamonds and supercubes in a partial hierarchy. As mentioned in Section 5.3, since the set of supercubes at a given level of resolution tile the domain, an MX-octree is a promising representation for the spatial access structure. Specifically, a pointerless MX-octree [24] should achieve the storage goals of Tables 2 and 3. Alternatively, since Δd and Δs contain static spatial data, a perfect spatial hash [14] can be generated to yield significantly lower overhead than a standard hash table.

We are also currently looking into modifications of the supercube representation for higher dimensional datasets such as time-varying volume data where the internal map of each supercube must reflect the increased number of diamonds.

### Acknowledgments

We would like to thank the anonymous reviewers for their many helpful suggestions. This work has been partially supported by the National Science Foundation under grant CCF-0541032 and by the MIUR-FIRB project SHALOM under contract number RBIN-04HWR8. The Buckyball model is courtesy of AVS Inc. The Plasma model is courtesy of Visual Computing Group, National Research Council, Pisa, Italy. All other datasets are courtesy of volvis.org.

## Footnotes

K. Weiss is with the Department of Computer Science, University of Maryland, College Park, 4406 A.V. Williams Building, College Park, MD 20742. E-mail: kweiss@cs.umd.edu.

L. De Floriani is with the Dipartimento di Informatica e Scienze dell'Informazione, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy. E-mail: deflo@disi.unige.it.

Manuscript received 31 March 2009; accepted 27 July 2009; posted online 11 October 2009; mailed on 5 October 2009.

## References

1. Adaptive tetrapuzzles: efficient out-of-core construction and visualization of gigantic multiresolution polygonal models.

P. Cignoni, F. Ganovelli, E. Gobbetti, F. Marton, F. Ponchio and R. Scopigno

ACM Transactions on Graphics, 23 (3): 796–803, 2004.

2. Multi-resolution mesh representation: models and data structures.

L. De Floriani and P. Magillo

In M. Floater, A. Iske and E. Quak, editors, Principles of Multi-resolution Geometric Modeling, Lecture Notes in Mathematics, pages 364–418, Berlin, 2002. Springer Verlag.

3. ROAMing terrain: real-time optimally adapting meshes.

M. Duchaineau, M. Wolinsky, D. E. Sigeti, M. C. Miller, C. Aldrich and M. B. Mineev-Weinstein

In R. Yagel and H. Hagen, editors, Proc. IEEE Visualization, pages 81–88, Phoenix, AZ, 1997-10. IEEE Computer Society.

4. Adaptively sampled distance fields: a general representation of shape for computer graphics.

S. F. Frisken, R. N. Perry, A. P. Rockwood and T. R. Jones

In Proceedings SIGGRAPH'00 Conference, pages 249–254, New Orleans, LA, 2000-07. ACM Press.

5. Multi-resolution visualization and compression of global topographic data.

T. Gerstner

GeoInformatica, 7 (1): 7–32, 2003.

6. Topology-preserving and controlled topology simplifying multi-resolution isosurface extraction.

T. Gerstner and R. Pajarola

In Proceedings IEEE Visualization 2000, pages 259–266, 2000.

7. Multiresolutional parallel isosurface extraction based on tetrahedral bisection.

T. Gerstner and M. Rumpf

In Proceedings Symposium on Volume Visualization, pages 267–278. ACM Press, 1999.

8. Interactive view-dependent rendering of large isosurfaces.

B. Gregorski, M. Duchaineau, P. Lindstrom, V. Pascucci and K. Joy

In Proceedings IEEE Visualization, pages 475–484. IEEE Computer Society Washington, DC, USA, 2002.

9. Adaptive extraction of time-varying isosurfaces.

B. Gregorski, J. Senecal, M. Duchaineau and K. Joy

IEEE Transactions on Visualization and Computer Graphics, 10 (6): 683–694, 2004.

10. Symbolic local refinement of tetrahedral grids.

D. J. Hebert

Journal of Symbolic Computation, 17 (5): 457–472, 1994-05.

11. Dual contouring of hermite data.

T. Ju, F. Losasso, S. Schaefer and J. Warren

ACM Trans. Graph., 21 (3): 339–346, 2002.

12. Parallel volume segmentation with tetrahedral adaptive grid.

A. Kimura, Y. Takama, Y. Yamazoe, S. Tanaka and H. Tanaka

International Conference on Pattern Recognition, 2: 281–286, 2004.

13. Constant-time neighbor finding in hierarchical tetrahedral meshes.

M. Lee, L. De Floriani and H. Samet

In Proceedings International Conference on Shape Modeling, pages 286–295, Genova, Italy, 2001-05. IEEE Computer Society.

14. Perfect spatial hashing.

S. Lefebvre and H. Hoppe

ACM Transactions on Graphics (TOG), 25 (3): 579–588, 2006.

15. Level of Detail for 3D Graphics

D. Luebke, M. Reddy, J. Cohen, A. Varshney, B. Watson and R. Huebner

Morgan-Kaufmann, San Francisco, 2002.

16. 3D ROAM for scalable volume visualization.

S. Marchesin, J. Dischler and C. Mongenet

IEEE Symposium on Volume Visualization and Graphics, pages 79–86, 2004.

17. Local bisection refinement for n-simplicial grids generated by reflection.

J. M. Maubach

SIAM Journal on Scientific Computing, 16 (1): 210–227, 1995-01.

18. Estimating the in/out function of a surface represented by points.

V. Mello, L. Velho and G. Taubin

Symposium on Solid Modeling and Applications 2003, pages 108–114, 2003.

20. Isosurface computation made simple: Hardware acceleration, adaptive refinement and tetrahedral stripping.

V. Pascucci

Eurographics/IEEE TVCG Symposium on Visualization (VisSym), pages 293–300, 2004.

21. Kizamu: A system for sculpting digital characters.

R. N. Perry and S. F. Frisken

In ACM Computer Graphics, (Proc. SIGGRAPH 2001), pages 47–56, Los Angeles, CA, 2001-08. ACM Press.

22. A 3D refinement algorithm for adaptive and multigrid techniques.

M. Rivara and C. Levin

Communications in Applied Numerical Methods, 8: 281–290, 1992.

23. Tetrahedron-based, least-squares, progressive volume models with application to freehand ultrasound data.

T. Roxborough and G. Nielson

In Proceedings IEEE Visualization 2000, pages 93–100. IEEE Computer Society, 2000-10.

24. Foundations of Multidimensional and Metric Data Structures

H. Samet

Elsevier, 2006.

25. Topological volume skeletonization using adaptive tetrahedralization.

S. Takahashi, Y. Takeshima, G. Nielson and I. Fujishiro

Geometric Modeling and Processing, 2004. Proceedings, pages 227–236, 2004.

26. Multiresolution Interval Volume Meshes.

K. Weiss and L. De Floriani

In H.-C. Hege, D. Laidlaw, R. Pajarola and O. Staadt, editors, IEEE/EG Symposium on Volume and Point-Based Graphics, pages 65–72, Los Angeles, California, USA, 2008. Eurographics Association.

27. Sparse Terrain Pyramids.

K. Weiss and L. De Floriani

In Proc. of the 16th ACM SIGSPATIAL Int. Conference on Advances in Geographic Information Systems, pages 115–124, New York, NY, USA, 2008. ACM.

28. Diamond Hierarchies of Arbitrary Dimension.

K. Weiss and L. De Floriani

Computer Graphics Forum, 28 (5): 1289–1300, 2009.

29. Real-time exploration of regular volume data by adaptive reconstruction of isosurfaces.

R. Westermann, L. Kobbelt and T. Ertl

The Visual Computer, 15 (2): 100–111, 1999.

30. Topological considerations in isosurface generation (extended abstract).

J. Wilhelms and A. Van Gelder

Proceedings of the 1990 workshop on Volume visualization, pages 79–86, 1990.

31. Multi-resolution tetrahedral framework for visualizing regular volume data.

Y. Zhou, B. Chen and A. Kaufman

In R. Yagel and H. Hagen, editors, Proceedings IEEE Visualization '97, pages 135–142, Phoenix, AZ, 1997-10. IEEE Computer Society.

This paper appears in: IEEE Transactions on Visualization and Computer Graphics, November/December 2009, pages 1603–1610. ISSN: 1077-2626. INSPEC Accession Number: 10930766. Digital Object Identifier: 10.1109/TVCG.2009.186. Date of Current Version: 1 November 2009; Date of Original Publication: 23 September 2009.