IEEE Transactions on Visualization and Computer Graphics

Issue 9 • September 2013

  • An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces

    Page(s): 1425 - 1437

    Poisson disk sampling has excellent spatial and spectral properties and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies address the problem of generating Poisson disks on surfaces, due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space; this intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating a spatially varying density function, we can easily obtain adaptive sampling.

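The priority-based conflict resolution described in this abstract can be illustrated outside its surface setting. The sketch below is a minimal 2D Euclidean simplification, not the authors' intrinsic surface algorithm: candidates carry random, unbiased priorities, and processing them greedily in priority order should match the set a parallel, priority-driven conflict resolution converges to.

```python
import numpy as np

def poisson_disk_by_priority(candidates, r, seed=0):
    """Greedy dart throwing in random-priority order (2D Euclidean sketch).

    Each candidate gets a random, unbiased priority; keeping a candidate only
    if no higher-priority accepted sample lies within distance r mimics the
    parallel, priority-based conflict resolution described in the abstract.
    """
    rng = np.random.default_rng(seed)
    priority = rng.random(len(candidates))          # random, unbiased priorities
    order = np.argsort(-priority)                   # highest priority first
    accepted = []
    for idx in order:
        p = candidates[idx]
        if all(np.linalg.norm(p - q) >= r for q in accepted):
            accepted.append(p)                      # no conflict: keep the sample
    return np.array(accepted)

# Dense random candidates in the unit square; the output keeps a minimum
# spacing of r between any two accepted samples.
cands = np.random.default_rng(1).random((4000, 2))
samples = poisson_disk_by_priority(cands, r=0.05)
print(len(samples), "samples kept")
```
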
  • Bristle Maps: A Multivariate Abstraction Technique for Geovisualization

    Page(s): 1438 - 1454

    We present Bristle Maps, a novel method for the aggregation, abstraction, and stylization of spatiotemporal data that enables multiattribute visualization, exploration, and analysis. This visualization technique supports the display of multidimensional data by providing users with a multiparameter encoding scheme within a single visual encoding paradigm. Given a set of geographically located spatiotemporal events, we approximate the data as a continuous function using kernel density estimation. The density estimation encodes the probability that an event will occur within the space over a given temporal aggregation. These probability values, for one or more sets of events, are then encoded into a bristle map. A bristle map consists of a series of straight lines that extend from, and are connected to, linear map elements such as roads, train lines, and subway lines. These lines vary in length, density, color, orientation, and transparency, creating a multivariate attribute encoding scheme in which event magnitude, change, and uncertainty can be mapped to various bristle parameters. This approach increases the amount of information displayed in a single plot and allows for unique designs for various information schemes. We show the application of our bristle map encoding scheme using categorical spatiotemporal police reports. Our examples demonstrate the use of our technique for visualizing data magnitude, variable comparisons, and a variety of multivariate attribute combinations. To evaluate the effectiveness of our bristle map, we have conducted quantitative and qualitative evaluations in which we compare our bristle map to conventional geovisualization techniques. Our results show that bristle maps are competitive in completion time and accuracy for tasks with various levels of complexity.

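As a rough illustration of the kernel-density step, the following sketch (synthetic data and hand-picked parameters, not the authors' implementation) evaluates a Gaussian KDE of point events at samples along a single straight "road" and maps the resulting density to a bristle length drawn perpendicular to it.

```python
import numpy as np

def gaussian_kde(events, query, bandwidth=0.05):
    """Evaluate an isotropic 2D Gaussian kernel density estimate at query points."""
    d2 = ((query[:, None, :] - events[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2)).sum(1) / (len(events) * 2 * np.pi * bandwidth**2)

# Synthetic point events and a straight "road" from (0, 0.5) to (1, 0.5).
rng = np.random.default_rng(0)
events = rng.random((500, 2))
t = np.linspace(0, 1, 50)
road = np.stack([t, np.full_like(t, 0.5)], axis=1)

density = gaussian_kde(events, road)
# Map density to one bristle parameter, e.g. a length in [0, max_len],
# drawn perpendicular to the road at each sample point.
max_len = 0.1
bristle_len = max_len * density / density.max()
normal = np.array([0.0, 1.0])                 # perpendicular to this road
bristle_tips = road + bristle_len[:, None] * normal
print(bristle_tips[:3])
```
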
  • Cosine-Weighted B-Spline Interpolation: A Fast and High-Quality Reconstruction Scheme for the Body-Centered Cubic Lattice

    Page(s): 1455 - 1466
    Multimedia

    In this paper, Cosine-Weighted B-spline (CWB) filters are proposed for interpolation on the optimal Body-Centered Cubic (BCC) lattice. We demonstrate that our CWB filters can well exploit the fast trilinear texture-fetching capability of modern GPUs and outperform the state-of-the-art box-spline filters not just in terms of efficiency, but in terms of visual quality and numerical accuracy as well. Furthermore, we rigorously show that the CWB filters are better tailored to the BCC lattice than the previously proposed quasi-interpolating BCC B-spline filters, because they form a Riesz basis and exactly reproduce the original signal at the lattice points, while still providing the same approximation order.

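The CWB filter itself is beyond a short sketch, but the BCC lattice it is defined on is easy to picture: a cubic lattice plus a second copy shifted by half a cell along every axis. The snippet below is illustrative only and shows how to enumerate BCC sites and find the site nearest to a query point by rounding on both sublattices.

```python
import numpy as np

def bcc_sites(n):
    """All BCC lattice sites in the cube [0, n)^3: a cubic grid plus a copy
    shifted by half the cell size along every axis (the body centers)."""
    g = np.arange(n)
    corner = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3).astype(float)
    center = corner + 0.5
    return np.vstack([corner, center])

def nearest_bcc_site(p):
    """Closest BCC site to point p, found by rounding on both cubic sublattices."""
    p = np.asarray(p, dtype=float)
    cand = np.array([np.round(p), np.round(p - 0.5) + 0.5])
    return cand[np.argmin(((cand - p) ** 2).sum(1))]

print(bcc_sites(2).shape)                      # (16, 3): 8 corners + 8 centers
print(nearest_bcc_site([1.3, 0.2, 2.6]))       # -> [1.5, 0.5, 2.5]
```
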
  • Exploiting Connectivity to Improve the Tangential Part of Geometry Prediction

    Page(s): 1467 - 1475

    Many algorithms have been proposed for the task of efficient compression of triangular meshes. Geometric properties of the input data are usually exploited to obtain an accurate prediction of the data at the decoder. Considerations on how to improve the prediction usually focus on its normal part, assuming that the tangential part behaves similarly. In this paper, we show that knowledge of vertex valences allows the decoder to form a prediction that is more accurate in the tangential direction, using a weighted parallelogram prediction. This idea can easily be incorporated into existing compression algorithms, such as Edgebreaker, and it can be applied at different levels of sophistication, from very simple ones that are computationally cheap to more complex ones that provide even better compression efficiency.

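The conventional parallelogram rule the paper improves on predicts a vertex by completing the parallelogram over the shared edge. The sketch below shows that standard rule plus an illustrative valence-dependent correction; the weighting formula here is a made-up placeholder, not the weighted parallelogram prediction derived in the paper.

```python
import numpy as np

def parallelogram_predict(a, b, c):
    """Conventional parallelogram rule: predict the vertex opposite `a`
    across the edge (b, c) by completing the parallelogram."""
    return b + c - a

def weighted_parallelogram_predict(a, b, c, val_b, val_c, lam=0.1):
    """Illustrative valence-weighted variant (NOT the paper's formula):
    nudge the prediction along the shared edge depending on which endpoint
    has the higher valence."""
    w = lam * (val_b - val_c) / (val_b + val_c)
    return b + c - a + w * (b - c)

a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
print(parallelogram_predict(a, b, c))                      # [1. 1. 0.]
print(weighted_parallelogram_predict(a, b, c, val_b=7, val_c=5))
```
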
  • Image-Space Texture-Based Output-Coherent Surface Flow Visualization

    Page(s): 1476 - 1487
    Multimedia

    Image-space line integral convolution (LIC) is a popular scheme for visualizing surface vector fields due to its simplicity and high efficiency. To avoid inconsistencies or color blur during user interactions, existing approaches employ surface parameterization or 3D volume texture schemes. However, they often incur expensive computation or memory costs, and cannot achieve results that are consistent in both granularity and color distribution across different scales. This paper introduces a novel image-space surface flow visualization approach that preserves coherence during user interactions. To make the noise texture coherent under different viewpoints, we precompute a sequence of mipmap noise textures in a coarse-to-fine manner for consistent transitions, and map the textures onto each triangle with randomly assigned, constant texture coordinates. A standard image-space LIC is then performed to generate the flow texture. The proposed approach is simple and GPU-friendly, and can be easily combined with various texture-based flow visualization techniques. By leveraging viewpoint-dependent backward tracing and mipmap noise phase, our method can be incorporated into the image-based flow visualization (IBFV) technique for coherent visualization of unsteady flows. We demonstrate consistent and highly efficient flow visualization on a variety of data sets.

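For readers unfamiliar with LIC, here is a minimal (and deliberately slow) 2D line integral convolution: each output pixel averages a noise texture along a short streamline traced forward and backward through the vector field. This is only the textbook building block, not the authors' coherent image-space pipeline.

```python
import numpy as np

def lic(vx, vy, noise, length=15):
    """Minimal line integral convolution on a regular 2D grid."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    mag = np.sqrt(vx**2 + vy**2) + 1e-9
    dx, dy = vx / mag, vy / mag                    # unit direction per pixel
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for sign in (+1.0, -1.0):              # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)) % h, int(round(px)) % w
                    acc += noise[i, j]
                    n += 1
                    px += sign * dx[i, j]
                    py += sign * dy[i, j]
            out[y, x] = acc / n
    return out

h = w = 64
yy, xx = np.mgrid[0:h, 0:w].astype(float)
vx, vy = -(yy - h / 2), (xx - w / 2)               # a simple vortex field
noise = np.random.default_rng(0).random((h, w))
img = lic(vx, vy, noise)
print(img.shape, float(img.min()), float(img.max()))
```
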
  • Multiresolution Attributes for Hardware Tessellated Objects

    Page(s): 1488 - 1498
    Multimedia

    Hardware tessellation is one of the latest GPU features. Triangle or quad meshes are tessellated on the fly, where the tessellation level is chosen adaptively in a separate shader. The hardware tessellator only generates topology; attributes such as positions or texture coordinates of the newly generated vertices are determined in a domain shader. Typical applications of hardware tessellation are view-dependent tessellation of parametric surfaces and displacement mapping. Often, the attributes for the newly generated vertices are stored in textures, which requires uv unwrapping, chartification, and atlas generation of the input mesh, a process that is time consuming and often requires manual intervention. In this paper, we present an alternative representation that directly stores optimized attribute values for typical hardware tessellation patterns and simply assigns these attributes to the generated vertices at render time. Using a multilevel fitting approach, the attribute values are optimized for several resolutions. Thereby, we require no parameterization, save memory by adapting the density of the samples to the content, and avoid discontinuities by construction. Our representation is optimally suited for displacement mapping: it automatically generates seamless, view-dependent displacement mapped models. The multilevel fitting approach generates better low-resolution displacement maps than simple downfiltering. By properly blending levels, we avoid artifacts such as popping or swimming surfaces. We also show other possible applications such as signal-optimized texturing or light baking. Our representation can be evaluated in a pixel shader, resulting in signal-adaptive, parameterization-free texturing, comparable to PTex or Mesh Colors. Performance evaluation shows that our representation is on par with standard texture mapping and can be updated in real time, allowing for applications such as interactive sculpting.

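One idea the abstract mentions, blending between attribute levels to avoid popping, can be sketched independently of the GPU pipeline. The toy example below assumes per-level attribute arrays that share a common layout (a simplification; the paper stores optimized values per hardware tessellation pattern) and linearly blends the two levels that bracket a continuous tessellation factor.

```python
import numpy as np

def blended_attribute(levels, tess_factor):
    """Blend attribute values stored per tessellation level.

    `levels[k]` holds the attribute values for level k (here: one scalar
    displacement per vertex, resampled to a common layout for simplicity).
    Linearly blending the two levels that bracket the continuous
    tessellation factor avoids popping when the factor changes.
    """
    k = int(np.clip(np.floor(tess_factor), 0, len(levels) - 2))
    t = tess_factor - k                        # fractional part in [0, 1)
    return (1.0 - t) * levels[k] + t * levels[k + 1]

# Hypothetical displacements for 4 levels, 8 vertices each.
rng = np.random.default_rng(0)
levels = [rng.random(8) * (k + 1) / 4 for k in range(4)]
print(blended_attribute(levels, tess_factor=1.3))
```
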
  • ParaGlide: Interactive Parameter Space Partitioning for Computer Simulations

    Page(s): 1499 - 1512
    Multimedia

    In this paper, we introduce ParaGlide, a visualization system designed for interactive exploration of the parameter spaces of multidimensional simulation models. To find the right parameter configuration, model developers frequently have to go back and forth between setting input parameters and qualitatively judging the outcomes of their model. Current state-of-the-art tools and practices, however, fail to provide a systematic way of exploring these parameter spaces, making informed decisions about parameter configurations a tedious and workload-intensive task. ParaGlide endeavors to overcome this shortcoming by guiding data generation using a region-based user interface for parameter sampling and then dividing the model's input parameter space into partitions that represent distinct output behavior. In particular, we found that parameter space partitioning can help model developers better understand qualitative differences among possibly high-dimensional model outputs. Further, it provides information on parameter sensitivity and facilitates comparison of models. We developed ParaGlide in close collaboration with experts from three different domains, all of whom were involved in developing new models for their domain. We first analyzed the current practices of six domain experts and derived a set of tasks and design requirements, then engaged in a user-centered design process, and finally conducted three longitudinal in-depth case studies underlining the usefulness of our approach.

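A rough sketch of the partitioning idea, using a toy stand-in for the simulation and off-the-shelf k-means rather than the system's actual clustering: model outputs are clustered, and each input parameter sample inherits the cluster of its output, which partitions the sampled parameter region into areas of similar model behavior.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def toy_simulation(params):
    """Stand-in for an expensive simulation: maps a parameter vector to an
    output feature vector with qualitatively different regimes."""
    a, b = params
    return np.array([np.tanh(3 * a - b), np.sin(a * b), a * b])

# Sample a rectangular region of the 2D parameter space.
rng = np.random.default_rng(0)
params = rng.uniform([-1, -1], [1, 1], size=(300, 2))
outputs = np.array([toy_simulation(p) for p in params])

# Cluster the outputs; each parameter sample inherits its output's cluster,
# partitioning the input space into regions of distinct output behavior.
_, labels = kmeans2(outputs, 3, seed=0, minit="++")
for c in range(3):
    print(f"cluster {c}: {int(np.sum(labels == c))} parameter samples")
```
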
  • Scheduling in Heterogeneous Computing Environments for Proximity Queries

    Page(s): 1513 - 1525
    Multimedia

    We present a novel, linear programming (LP)-based scheduling algorithm that exploits heterogeneous multicore architectures such as CPUs and GPUs to accelerate a wide variety of proximity queries. To represent the complicated performance relationships between heterogeneous architectures and the different computations of proximity queries, we propose a simple yet accurate model that measures the expected running time of these computations. Based on this model, we formulate an optimization problem that minimizes the largest time spent on computing resources, and propose a novel, iterative LP-based scheduling algorithm. Since our method is general, we are able to apply it to various proximity queries used in five different applications with different characteristics. Our method achieves an order-of-magnitude performance improvement when using four different GPUs and two hexa-core CPUs over using a hexa-core CPU alone. Unlike prior scheduling methods, our method continually improves performance as more computing resources are added. Also, our method achieves much higher performance improvement compared with prior methods as the heterogeneity of computing resources is increased. Moreover, for one of the tested applications, our method achieves even higher performance than a prior parallel method optimized manually for the application. We also show that our method provides results that are close (e.g., 75 percent) to the performance provided by a conservative upper bound of the ideal throughput. These results demonstrate the efficiency and robustness of our algorithm, which have not been achieved by prior methods. In addition, we integrate one of our contributions with a work stealing method. Our version of the work stealing method achieves an 18 percent performance improvement on average over the original work stealing method. This result shows the wide applicability of our approach.

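The makespan-minimizing LP at the core of such a scheduler can be sketched with scipy.optimize.linprog. The running-time matrix below is hypothetical, and the formulation (fractional assignment of each job to each resource, minimizing the largest per-resource finish time) is a generic version of the idea, not the paper's exact iterative algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# t[i, j]: expected running time of job i if executed entirely on resource j
# (hypothetical numbers standing in for a measured performance model).
t = np.array([[4.0, 1.0],      # job 0: slow on the CPU, fast on the GPU
              [2.0, 3.0],      # job 1: faster on the CPU
              [5.0, 2.0]])
n_jobs, n_res = t.shape

# Variables: x[i, j] = fraction of job i assigned to resource j, plus the
# makespan T.  Minimize T subject to every job being fully assigned and
# every resource finishing within T.
n_x = n_jobs * n_res
c = np.zeros(n_x + 1)
c[-1] = 1.0                                         # minimize T

A_eq = np.zeros((n_jobs, n_x + 1))                  # sum_j x[i, j] = 1
for i in range(n_jobs):
    A_eq[i, i * n_res:(i + 1) * n_res] = 1.0
b_eq = np.ones(n_jobs)

A_ub = np.zeros((n_res, n_x + 1))                   # sum_i t[i, j] x[i, j] <= T
for j in range(n_res):
    for i in range(n_jobs):
        A_ub[j, i * n_res + j] = t[i, j]
    A_ub[j, -1] = -1.0
b_ub = np.zeros(n_res)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n_x + 1))
print("makespan:", res.x[-1])
print("assignment fractions:\n", res.x[:-1].reshape(n_jobs, n_res).round(3))
```
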
  • Splatterplots: Overcoming Overdraw in Scatter Plots

    Page(s): 1526 - 1538
    Multimedia

    We introduce Splatterplots, a novel presentation of scattered data that enables visualizations that scale beyond standard scatter plots. Traditional scatter plots suffer from overdraw (overlapping glyphs) as the number of points per unit area increases. Overdraw obscures outliers, hides data distributions, and makes the relationships among subgroups of the data difficult to discern. To address these issues, Splatterplots abstract away information such that the density of data shown in any unit of screen space is bounded, while allowing continuous zoom to reveal abstracted details. Abstraction automatically groups dense data points into contours and samples the remaining points. We combine techniques for abstraction with perceptually based color blending to reveal the relationships between data subgroups. The resulting visualizations represent the dense regions of each subgroup of the data set as smooth closed shapes and show representative outliers explicitly. We present techniques that leverage the GPU for Splatterplot computation and rendering, enabling interaction with massive data sets. We show how Splatterplots can be an effective alternative to traditional methods of displaying scatter data, communicating data trends, outliers, and data set relationships much like traditional scatter plots, but scaling to data sets of higher density and up to millions of points on the screen.

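A minimal sketch of the density/outlier split that Splatterplots perform, assuming synthetic Gaussian data and hand-picked thresholds: rasterize the points, smooth them into a screen-space density field, mark pixels above a threshold as the dense region, and subsample the points that fall outside it as explicit outliers.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def splatter_layers(points, grid=256, sigma=4.0, thresh=0.5, max_outliers=200):
    """Split a 2D point set into a dense region (to be drawn as a smooth
    filled shape) and a bounded sample of explicit outlier points."""
    # Rasterize the points and smooth to get a screen-space density field.
    hist, xe, ye = np.histogram2d(points[:, 0], points[:, 1], bins=grid)
    density = gaussian_filter(hist, sigma)
    density /= density.max()

    # Dense mask: pixels above the threshold would be rendered as a contour.
    dense_mask = density >= thresh

    # Points outside the dense region are candidate outliers; subsample them
    # so the amount of overdraw stays bounded.
    ix = np.clip(np.searchsorted(xe, points[:, 0]) - 1, 0, grid - 1)
    iy = np.clip(np.searchsorted(ye, points[:, 1]) - 1, 0, grid - 1)
    outliers = points[~dense_mask[ix, iy]]
    if len(outliers) > max_outliers:
        keep = np.random.default_rng(0).choice(len(outliers), max_outliers, replace=False)
        outliers = outliers[keep]
    return dense_mask, outliers

pts = np.random.default_rng(1).normal(size=(50000, 2))
mask, outliers = splatter_layers(pts)
print(int(mask.sum()), "dense pixels,", len(outliers), "explicit outliers")
```
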
  • Surface Mesh to Volumetric Spline Conversion with Generalized Polycubes

    Page(s): 1539 - 1551

    This paper develops a novel volumetric parameterization and spline construction framework, which is an effective modeling tool for converting surface meshes to volumetric splines. Our new splines are defined upon a novel parametric domain called generalized polycubes (GPCs). A GPC comprises a set of regular cube domains topologically glued together. Compared with conventional polycubes (CPCs), the GPC is much more powerful and flexible, and offers improved numerical accuracy and computational efficiency when serving as a parametric domain. We design an automatic algorithm to construct the GPC domain while also permitting the user to improve shape abstraction via interactive intervention. We then parameterize the input model on the GPC domain. Finally, we devise a new volumetric spline scheme based on this seamless volumetric parameterization. With a hierarchical fitting scheme, the proposed splines can fit data accurately using a reduced number of control points. Our volumetric modeling scheme has great potential in shape modeling, engineering analysis, and reverse engineering applications.

  • Virtual Try-On through Image-Based Rendering

    Page(s): 1552 - 1565
    Multimedia

    Virtual try-on applications have become popular because they allow users to watch themselves wearing different clothes without the effort of changing them physically. This helps users make quick buying decisions and thus improves the sales efficiency of retailers. Previous solutions usually involve motion capture, 3D reconstruction, or modeling, which are time consuming and not robust for all body poses. Our method avoids these steps by combining image-based renderings of the user and previously recorded garments. It transfers the appearance of a garment recorded from one user to another by matching input and recorded frames, image-based visual hull rendering, and online registration methods. Using images of real garments allows for realistic rendering quality with high performance. The method is suitable for a wide range of clothes and complex appearances, allows arbitrary viewing angles, and requires little manual input. Our system is particularly useful for virtual try-on applications as well as interactive games.

  • VisibilityCluster: Average Directional Visibility for Many-Light Rendering

    Page(s): 1566 - 1578
    Multimedia

    This paper proposes the VisibilityCluster algorithm for efficient visibility approximation and representation in many-light rendering. By carefully clustering lights and shading points, we can construct a visibility matrix that exhibits good local structure due to the visibility coherence of nearby lights and shading points. Average visibility can be efficiently estimated by exploiting the sparse structure of the matrix and shooting only a few shadow rays between clusters. Moreover, we can use the estimated average visibility as a quality measure for visibility estimation, enabling us to locally refine VisibilityClusters with large visibility variance to improve accuracy. We demonstrate that, with the proposed method, visibility can be incorporated into importance sampling at a reasonable cost for the many-light problem, significantly reducing variance in Monte Carlo rendering. In addition, the proposed method can be used to increase the realism of local shading by adding directional occlusion effects. Experiments show that the proposed technique outperforms state-of-the-art importance sampling algorithms and successfully enhances preview quality for lighting design.

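The clustering-based visibility estimate can be mimicked with a toy scene. In the sketch below, the occluders are a handful of spheres and the clustering uses off-the-shelf k-means, both assumptions made for illustration; the point is that average visibility per (shading cluster, light cluster) pair is estimated from only a few representative shadow rays instead of testing every light against every shading point.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def occluded(p, q, spheres):
    """True if the segment p -> q intersects any occluder sphere (toy ray test)."""
    d = q - p
    L = np.linalg.norm(d)
    d = d / L
    for c, r in spheres:
        t = np.clip(np.dot(c - p, d), 0.0, L)      # closest point on the segment
        if np.linalg.norm(p + t * d - c) < r:
            return True
    return False

rng = np.random.default_rng(0)
lights = rng.uniform(-5, 5, size=(400, 3))
shading_points = rng.uniform(-1, 1, size=(200, 3))
spheres = [(rng.uniform(-3, 3, 3), 1.0) for _ in range(10)]   # toy occluders

# Cluster lights and shading points, then estimate average visibility per
# cluster pair with a handful of shadow rays instead of 400 x 200 of them.
_, l_lab = kmeans2(lights, 8, seed=0, minit="++")
_, s_lab = kmeans2(shading_points, 4, seed=0, minit="++")
avg_vis = np.zeros((4, 8))
rays_per_pair = 4
for si in range(4):
    for li in range(8):
        ps = shading_points[s_lab == si]
        ls = lights[l_lab == li]
        hits = 0
        for _ in range(rays_per_pair):
            p = ps[rng.integers(len(ps))]
            q = ls[rng.integers(len(ls))]
            hits += not occluded(p, q, spheres)
        avg_vis[si, li] = hits / rays_per_pair     # cheap average-visibility estimate
print(avg_vis.round(2))
```
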
  • Visualization and Analysis of Vortex-Turbine Intersections in Wind Farms

    Page(s): 1579 - 1591

    Characterizing the interplay between the vortices and the forces acting on a wind turbine's blades in a qualitative and quantitative way holds the potential to significantly improve large wind turbine design. This paper introduces an integrated pipeline for highly effective wind and force field analysis and visualization. We extract vortices induced by a turbine's rotation in a wind field, and characterize these vortices in conjunction with numerically simulated forces on the blade surfaces as they strike another turbine's blades downstream. The scientifically relevant issue to be studied is the relationship between the extracted, approximate locations where vortices strike the blades and the forces present at those locations. This integrated approach is used to detect and analyze turbulent flow that causes local impact on the wind turbine blade structure. The results we present are based on analyzing wind and force field data sets generated by numerical simulations, and allow domain scientists to relate vortex-blade interactions to power output loss in turbines and turbine life expectancy. Our methods have the potential to improve turbine design and to save costs related to turbine operation and maintenance.

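A common first step for locating vortices in a velocity field is thresholding the vorticity magnitude. The snippet below computes the z-component of vorticity on a synthetic 2D slice containing a single Gaussian vortex; it is a generic illustration, not the extraction method used in the paper's pipeline.

```python
import numpy as np

def vorticity_z(u, v, dx=1.0, dy=1.0):
    """z-component of vorticity, dv/dx - du/dy, on a regular 2D slice of the
    wind field (a common first step for locating vortex cores)."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return dv_dx - du_dy

# Synthetic slice: a single Gaussian vortex centered in the domain.
n = 128
y, x = np.mgrid[0:n, 0:n].astype(float)
cx = cy = n / 2
s = n / 8
f = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / s**2)
u, v = -(y - cy) * f, (x - cx) * f

w = vorticity_z(u, v)
core = np.abs(w) > 0.8 * np.abs(w).max()       # threshold marks candidate core cells
print("candidate vortex-core cells:", int(core.sum()))
```
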
  • Wetting of Porous Solids

    Page(s): 1592 - 1604
    Multimedia

    This paper presents a simple, three-stage method to simulate the mechanics of wetting of porous solid objects, like sponges and cloth, when they interact with a fluid. In the first stage, we model the absorption of fluid by the object when it comes in contact with the fluid. In the second stage, we model the transport of absorbed fluid inside the object, due to diffusion, as a flow in a deforming, unstructured mesh. The fluid diffuses within the object depending on the saturation of its various parts and other body forces. Finally, in the third stage, oversaturated parts of the object shed extra fluid by dripping. The simulation model is motivated by the physics of imbibition of fluids into porous solids in the presence of gravity. It is phenomenologically capable of simulating wicking and imbibition, dripping, surface flows over wet media, material weakening, and volume expansion due to wetting. The model is inherently mass conserving and works both for thin 2D objects like cloth and for 3D volumetric objects like sponges. It is also designed to be computationally efficient and can easily be added to existing cloth, soft body, and fluid simulation pipelines.

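The three stages in the abstract (absorb, diffuse, drip) map naturally onto a grid-based toy model. The sketch below runs them on a 2D saturation grid with zero-flux boundaries, which is a drastic simplification of the paper's deforming unstructured mesh but shows how the stages interact.

```python
import numpy as np

def wetting_step(sat, contact, porosity=1.0, absorb=0.2, diff=0.2):
    """One explicit update of the three stages on a 2D saturation grid:
    absorb at contact cells, diffuse inside the material, drip any excess."""
    sat = sat.copy()
    # 1. absorption where the object touches free fluid
    sat[contact] += absorb
    # 2. diffusive transport with zero-flux boundaries (explicit Laplacian)
    p = np.pad(sat, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * sat
    sat += diff * lap
    # 3. dripping: oversaturated cells shed everything above the porosity
    excess = np.maximum(sat - porosity, 0.0)
    sat -= excess
    return sat, excess.sum()

sat = np.zeros((32, 32))
contact = np.zeros_like(sat, dtype=bool)
contact[-1, :] = True                      # the bottom row touches the fluid
total_dripped = 0.0
for _ in range(200):
    sat, dripped = wetting_step(sat, contact)
    total_dripped += dripped
print("mean saturation:", round(float(sat.mean()), 3),
      "dripped:", round(float(total_dripped), 3))
```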

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.

Meet Our Editors

Editor-in-Chief
Ming Lin
Department of Computer Science
University of North Carolina