By Topic

IEEE Transactions on Visualization and Computer Graphics

Issue 7 • Date July 2014

Displaying Results 1 - 12 of 12
  • Guest Editors' Introduction: Special Issue on the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2013

    Page(s): 955 - 956
    PDF (80 KB)
    Freely Available from IEEE
  • WYSIWYG Stereo Painting with Usability Enhancements

    Page(s): 957 - 969
    Multimedia
    PDF (1562 KB) | HTML

    Despite the increasing popularity of stereo capture and display systems, creating stereo artwork remains a challenge. This paper presents a stereo painting system, which enables effective from-scratch creation of high-quality stereo artwork. A key concept of our system is a stereo layer, which is composed of two RGBAd (RGBA + depth) buffers. Stereo layers alleviate the need for fully formed representational 3D geometry required by most existing 3D painting systems, and allow for simple, essential depth specification. RGBAd buffers also provide scalability for complex scenes by minimizing the dependency of stereo painting updates on the scene complexity. For interaction with stereo layers, we present stereo paint and stereo depth brushes, which manipulate the photometric (RGBA) and depth buffers of a stereo layer, respectively. In our system, painting and depth manipulation operations can be performed in arbitrary order with real-time visual feedback, providing a flexible WYSIWYG workflow for stereo painting. Our data structures allow for easy interoperability with existing image and geometry data, enabling a number of applications beyond from-scratch art creation, such as stereo conversion of monoscopic artwork and mixed-media art. Comments from artists and experimental results demonstrate that our system effectively aids in the creation of compelling stereo paintings.

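    The stereo-layer data structure described in the abstract can be sketched in a few lines. The class and function names, the per-eye dictionaries, and the back-to-front compositing order are illustrative assumptions for this sketch, not the authors' implementation:

    ```python
    import numpy as np

    class StereoLayer:
        """Toy stereo layer: two RGBAd buffers (RGBA color + per-pixel depth),
        one per eye, as described in the abstract. All names here are
        illustrative, not the authors' API."""

        def __init__(self, height, width):
            # RGBA initialised fully transparent; depth at the far plane (1.0).
            self.rgba = {eye: np.zeros((height, width, 4)) for eye in ("left", "right")}
            self.depth = {eye: np.ones((height, width)) for eye in ("left", "right")}

    def composite(layers, eye):
        """Depth-aware 'over' compositing of stereo layers for one eye."""
        h, w, _ = layers[0].rgba[eye].shape
        out_rgba = np.zeros((h, w, 4))
        out_depth = np.full((h, w), np.inf)
        # Paint back-to-front: order layers by mean depth (a simplification).
        for layer in sorted(layers, key=lambda l: -l.depth[eye].mean()):
            a = layer.rgba[eye][..., 3:4]
            out_rgba = layer.rgba[eye] * a + out_rgba * (1.0 - a)
            nearer = layer.depth[eye] < out_depth
            out_depth = np.where(nearer, layer.depth[eye], out_depth)
        return out_rgba, out_depth
    ```

    Because each eye's buffers are independent images, a paint or depth edit only touches one layer's texels, which mirrors the abstract's claim that updates do not depend on scene complexity.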
  • Interactive Mesostructures with Volumetric Collisions

    Page(s): 970 - 982
    PDF (1478 KB) | HTML

    This paper presents a technique for interactively colliding with and deforming mesostructures at a per-texel level. It is compatible with a broad range of existing mesostructure rendering techniques, including both safe and unsafe ray-height field intersection algorithms. This technique is able to replace traditional 3D geometrical deformations (vertex-based) with 2D image space operations (pixel-based) that are parallelized on a GPU without CPU-GPU data shuffling, and it integrates well with existing physics engines. Additionally, surface and material properties may be specified at a per-texel level, enabling a mesostructure to possess varying attributes intrinsic to its surface and collision behavior. Furthermore, this approach may replace traditional decals with image-based operations that naturally accumulate deformations without inserting any new geometry. This technique provides a simple and efficient way to make almost every surface in a virtual world responsive to user actions and events. It requires no preprocessing and has a storage overhead of at most one additional texture. The algorithm uses existing inverse displacement map algorithms as well as existing physics engines and can be easily incorporated into new or existing game pipelines.

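    The central idea, collisions accumulating per-texel in a texture with no new geometry, can be illustrated as follows. The spherical-cap indenter footprint and all names are assumptions made for this sketch, not the paper's algorithm:

    ```python
    import numpy as np

    def stamp_collision(deform, center, radius, depth):
        """Accumulate one collision imprint into a per-texel deformation
        texture (starts at zero, grows with each dent). The spherical-cap
        footprint is an assumed indenter model, not the paper's."""
        h, w = deform.shape
        ys, xs = np.mgrid[0:h, 0:w]
        r2 = ((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / float(radius ** 2)
        imprint = np.where(r2 < 1.0,
                           depth * np.sqrt(np.clip(1.0 - r2, 0.0, 1.0)),
                           0.0)
        # Keep the deepest indentation seen at each texel; no geometry is added.
        return np.maximum(deform, imprint)

    def displaced_height(base_height, deform):
        """Rendered height field = authored mesostructure minus accumulated dents."""
        return np.clip(base_height - deform, 0.0, None)
    ```

    On a GPU the same per-texel maximum would run as a fragment operation over the deformation texture, which is where the "no CPU-GPU data shuffling" claim comes from.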
  • Property and Lighting Manipulations for Static Volume Stylization Using a Painting Metaphor

    Page(s): 983 - 995
    Multimedia
    PDF (2295 KB) | HTML

    Although volumetric phenomena are important for realistic rendering and can even be a crucial component in the image, the artistic control of the volume's appearance is challenging. Appropriate tools to edit volume properties are missing, forcing artists either to use simulation results directly or to resort to high-level, rarely intuitive modifications such as tweaking noise function parameters. Our work introduces a solution to stylize single-scattering volumetric effects in static volumes. It makes artistic and intuitive control of emission, scattering, and extinction possible, while ensuring a smooth and coherent appearance when the viewpoint changes. Our method is based on tomographic reconstruction, which we link to the volumetric rendering equation. It analyzes a number of target views provided by the artist and adapts the volume properties to match the appearance for the given perspectives. Additionally, we describe how we can optimize for the environmental lighting to match a desired scene appearance while keeping volume properties constant. Finally, both techniques can be combined. We demonstrate several use cases of our approach and illustrate its effectiveness.

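    The link between target views and volume properties can be caricatured as a linear inverse problem: each painted view pixel is a ray integral over unknown per-voxel values. This sketch keeps only emission (the paper also handles scattering, extinction, and lighting), and the ray matrices in the usage below are invented for illustration:

    ```python
    import numpy as np

    def fit_emission(ray_matrices, target_views):
        """Toy tomographic fit: stack every target view's ray-integration
        rows into one linear system and solve for per-voxel emission that
        best matches all views at once. A caricature of the paper's
        reconstruction, not its actual formulation."""
        A = np.vstack(ray_matrices)          # all ray-integration rows
        b = np.concatenate(target_views)     # all target pixel intensities
        emission, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.clip(emission, 0.0, None)  # emission cannot be negative
    ```

    Solving all views jointly is what yields the "smooth and coherent appearance when the viewpoint changes": no single view can pull the volume into a state that contradicts the others.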
  • Filtering Non-Linear Transfer Functions on Surfaces

    Page(s): 996 - 1008
    Multimedia
    PDF (1814 KB) | HTML

    Applying non-linear transfer functions and look-up tables to procedural functions (such as noise), surface attributes, or even surface geometry is a common strategy used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient transfer function filtering remains an open problem for several reasons: transfer functions are complex and non-linear, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on the fly, yielding very fast performance. We investigate the case where the transfer function to filter is a color map applied to (macroscale) surface textures (like noise), as well as color maps applied according to (microscale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our approach can be generalized to filter other physically based rendering quantities. We propose an application to shading with irradiance environment maps over large terrains. Our framework is also compatible with the case of transfer functions used to warp surface geometry, as long as the transformations can be represented with Gaussian statistics, leading to proper view- and light-dependent filtering results. Our results match ground truth and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TVCG.2013.102), is high performance, and has a negligible memory footprint.

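    The core pitfall, and the Gaussian-statistics remedy, can be sketched like this: averaging the signal first and applying a non-linear color map once gives the wrong filtered color; integrating the map against a Gaussian model of the signal's distribution over the footprint gives the right one. The discretisation and names below are ours:

    ```python
    import numpy as np

    def filter_color_map(color_map, mean, std, n=512):
        """Expected color of a non-linear transfer function over a pixel
        footprint, modelling the underlying signal as Gaussian(mean, std).
        A one-dimensional sketch of the paper's idea, not its shader."""
        s = np.linspace(0.0, 1.0, n)
        pdf = np.exp(-0.5 * ((s - mean) / std) ** 2)
        pdf /= pdf.sum()
        # color_map: callable from a scalar in [0, 1] to an RGB triple.
        colors = np.array([color_map(v) for v in s])
        return pdf @ colors  # expected color over the footprint
    ```

    With a hard step map (black below 0.5, white above) and a footprint mean of 0.5, the naive "average then look up" approach snaps to one extreme, while the filtered result is the correct mid grey.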
  • Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering

    Page(s): 1009 - 1021
    Multimedia
    PDF (1630 KB) | HTML

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.

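    The abstract's "subsurface scattering matrix operating in conjunction with the form factor matrix" suggests an iteration of the following shape. How S actually enters the system is our assumption here; the paper defines its own coupling:

    ```python
    import numpy as np

    def translucent_radiosity(E, rho, F, S, iters=200):
        """Jacobi-style radiosity iteration extended with a subsurface
        scattering matrix S: light gathered through form factors F is both
        reflected (diag(rho)) and redistributed through the surface by S.
        A sketch of the idea, not the paper's exact system."""
        B = E.copy()
        T = np.diag(rho) @ F + S @ F  # reflected + subsurface-transported light
        for _ in range(iters):
            B = E + T @ B             # gather until the fixed point B = E + T B
        return B
    ```

    Because relighting only changes E and translucency only changes S, re-solving with the already-built F is cheap, which matches the claim of near-interactive relighting and dynamically varying translucencies.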
  • Hybrid Long-Range Collision Avoidance for Crowd Simulation

    Page(s): 1022 - 1034
    PDF (1392 KB) | HTML

    Local collision avoidance algorithms in crowd simulation often ignore agents beyond a neighborhood of a certain size. This cutoff can result in sharp changes in trajectory when large groups of agents enter or exit these neighborhoods. In this work, we exploit the insight that exact collision avoidance is not necessary between agents at such large distances, and propose a novel algorithm for extending existing collision avoidance algorithms to perform approximate, long-range collision avoidance. Our formulation performs long-range collision avoidance for distant agent groups to efficiently compute trajectories that are smoother than those obtained with state-of-the-art techniques, and at faster rates. Comparison to real-world data demonstrates that crowds simulated with our algorithm exhibit a speed-density relationship similar to that of human crowds. Another issue often sidestepped in existing work is that discrete and continuum collision avoidance algorithms have different regions of applicability. For example, low-density crowds cannot be modeled as a continuum, while high-density crowds can be expensive to model using discrete methods. We formulate a hybrid technique for crowd simulation which can accurately and efficiently simulate crowds at any density, with seamless transitions between continuum and discrete representations. Our approach blends results from continuum and discrete algorithms, based on local density and velocity variance. In addition to being robust across a variety of group scenarios, it is highly efficient, running at interactive rates for thousands of agents on portable systems.

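    The blending between discrete and continuum results can be sketched as a density-weighted interpolation. The thresholds below are placeholders, and the paper's blend also folds in velocity variance, which this sketch omits:

    ```python
    def blend_velocity(v_discrete, v_continuum, density, lo=1.0, hi=4.0):
        """Blend per-agent (discrete) and continuum velocities by local
        density: pure discrete below `lo` agents/m^2, pure continuum above
        `hi`, linear in between. Thresholds are illustrative assumptions."""
        w = min(max((density - lo) / (hi - lo), 0.0), 1.0)
        return tuple((1.0 - w) * d + w * c for d, c in zip(v_discrete, v_continuum))
    ```

    A smooth weight is what makes the transition "seamless": an agent drifting into a dense region gradually hands control to the continuum solver instead of switching models abruptly.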
  • ADAPT: The Agent Development and Prototyping Testbed

    Page(s): 1035 - 1047
    Multimedia
    PDF (1335 KB) | HTML

    We present ADAPT, a flexible platform for designing and authoring functional, purposeful human characters in a rich virtual environment. Our framework incorporates character animation, navigation, and behavior with modular interchangeable components to produce narrative scenes. The animation system provides locomotion, reaching, gaze tracking, gesturing, sitting, and reactions to external physical forces, and can easily be extended with more functionality due to a decoupled, modular structure. The navigation component allows characters to maneuver through a complex environment with predictive steering for dynamic obstacle avoidance. Finally, our behavior framework allows a user to fully leverage a character's animation and navigation capabilities when authoring both individual decision-making and complex interactions between actors using a centralized, event-driven model.

  • Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe

    Page(s): 1048 - 1061
    PDF (1231 KB) | HTML

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to observational astronomy, and its potential benefits to large-scale geospatial visualization in general.

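    The "prefix-matching of spatial objects" can be illustrated with quadtree keys, where a tile's key is a prefix of every descendant's key; containment tests become string-prefix tests. This is a generic stand-in for such an index, not the paper's implementation:

    ```python
    def quad_key(x, y, depth):
        """Quadtree tile key for a point in the unit square: each level
        appends one digit 0-3 identifying the quadrant. Ancestor keys are
        prefixes of descendant keys, which is what makes prefix matching
        of spatial objects work."""
        key = ""
        for _ in range(depth):
            x, y = x * 2.0, y * 2.0
            qx, qy = int(x), int(y)   # 0 or 1 within the current cell
            key += str(qx + 2 * qy)
            x, y = x - qx, y - qy
        return key

    def covers(region_key, object_key):
        """A tile contains an object iff its key prefixes the object's key."""
        return object_key.startswith(region_key)
    ```

    Indexing sky objects by such keys means a pan/zoom request for a tile reduces to a sorted-prefix range scan, which is cheap at gigapixel scales.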
  • Shape Deformation via Interior RBF

    Page(s): 1062 - 1075
    Multimedia
    PDF (1523 KB) | HTML

    We present a new framework for real-time shape deformation with local shape preservation and volume control. Given a 3D object, in any form, one would like to manipulate the object using convenient handles, so that the resulting shape is a natural variation of the given object. It is also important that the deformation is controlled, thereby enabling localized changes that do not influence nearby branches. For example, given a horse model, a movement of one of its hooves should not affect the other hooves. Another goal is the minimization of local shape distortion throughout the object. The first ingredient of our method is the use of interior radial basis functions (IRBF), where the functions are radial with respect to interior distances within the object. The second important ingredient is the reduction of local distortions by minimizing the distortion of a set of spheres placed within the object. Our method achieves the goals of convenient shape manipulation and local influence, and improves on state-of-the-art cage-based methods by replacing the cage with the more flexible IRBF centers. The latter enable extra flexibility and fully automated construction, as well as a simpler formulation. We also propose an IRBF interpolation method that can extend any surface mapping to the whole subspace in a shape-aware manner.

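    The RBF machinery underneath can be sketched as follows: solve for weights so the displacement field interpolates the handle displacements, then evaluate it anywhere. The paper's key ingredient, interior (in-object) distances, is replaced here by Euclidean distance for brevity, so this sketch loses the locality across thin branches (the horse-hoof example) that IRBF provides:

    ```python
    import numpy as np

    def rbf_deform(centers, handle_disp, points, eps=1.0):
        """Generic RBF-driven deformation: fit weights so the field matches
        the handle displacements at the centers, then evaluate at `points`.
        Uses Euclidean distance, not the paper's interior distances."""
        def phi(r):
            return np.exp(-(eps * r) ** 2)  # Gaussian kernel
        C = np.asarray(centers, float)
        D = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=-1)
        W = np.linalg.solve(phi(D), np.asarray(handle_disp, float))
        P = np.asarray(points, float)
        K = phi(np.linalg.norm(P[:, None, :] - C[None, :, :], axis=-1))
        return K @ W
    ```

    Swapping the distance in `phi` for a geodesic-like interior distance is exactly what would confine a hoof's displacement to its own leg, since interior distances between different legs are large even when the legs are close in space.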
  • Structured Mechanical Collage

    Page(s): 1076 - 1082
    PDF (975 KB) | HTML

    We present a method to build 3D structured mechanical collages consisting of numerous elements from a database, given artist-designed proxy models. The construction is guided by graphic design principles, namely unity, variety and contrast. Our results are visually more pleasing than those of previous work, as confirmed by a user study.

  • Rock Stars of Cybersecurity Conference

    Page(s): 1083
    PDF (1863 KB)
    Freely Available from IEEE

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Ming Lin
Department of Computer Science
University of North Carolina