IEEE Transactions on Visualization and Computer Graphics

Issue 3 • May-June 2007

Displaying Results 1 - 16 of 16
  • [Front cover]

    Page(s): c1
    PDF (391 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Page(s): c2
    PDF (86 KB)
    Freely Available from IEEE
  • Editor's Note

    Page(s): 417 - 419
    PDF (166 KB)
    Freely Available from IEEE
  • Guest Editors' Introduction: Special Section on Virtual Reality

    Page(s): 420 - 421
    PDF (81 KB)
    Freely Available from IEEE
  • Demand Characteristics in Assessing Motion Sickness in a Virtual Environment: Or Does Taking a Motion Sickness Questionnaire Make You Sick?

    Page(s): 422 - 428
    PDF (627 KB) | HTML

    The experience of motion sickness in a virtual environment may be measured through pre- and postexperiment self-report questionnaires such as the Simulator Sickness Questionnaire (SSQ). Although research provides converging evidence that users of virtual environments can experience motion sickness, there have been no controlled studies to determine to what extent the user's subjective response is a demand characteristic resulting from pre- and posttest measures. In this study, subjects were given the SSQ either both pre- and postimmersion in the virtual environment, or only postimmersion. This design tested for contrast effects due to demand characteristics, in which administration of the questionnaire itself suggests to the participant that the virtual environment may produce motion sickness. Results indicate that reports of motion sickness after immersion in a virtual environment are much greater when both pre- and postquestionnaires are given than when only a posttest questionnaire is used. The implications for assessments of motion sickness in virtual environments are discussed.

  • Comparing Interpersonal Interactions with a Virtual Human to Those with a Real Human

    Page(s): 443 - 457
    PDF (2004 KB) | HTML

    This paper provides key insights into the construction and evaluation of interpersonal simulators: systems that enable interpersonal interaction with virtual humans. Using an interpersonal simulator, two studies were conducted that compare interactions with a virtual human to interactions with a similar real human. The specific interpersonal scenario employed was that of a medical interview. Medical students interacted with either a virtual human simulating appendicitis or a real human pretending to have the same symptoms. In Study I (n = 24), medical students elicited the same information from the virtual and real human, indicating that the content of the virtual and real interactions was similar. However, participants appeared less engaged and insincere with the virtual human. These behavioral differences likely stemmed from the virtual human's limited expressive behavior. Study II (n = 58) explored participant behavior using new measures. Nonverbal behavior appeared to communicate lower interest and a poorer attitude toward the virtual human. Some subjective measures of participant behavior yielded contradictory results, highlighting the need for objective, physically-based measures in future studies.

  • A Six Degree-of-Freedom God-Object Method for Haptic Display of Rigid Bodies with Surface Properties

    Page(s): 458 - 469
    PDF (2107 KB)

    This paper describes a generalization of the god-object method for haptic interaction between rigid bodies. Our approach separates the computation of the motion of the six degree-of-freedom god-object from the computation of the force applied to the user. The motion of the god-object is computed using continuous collision detection and constraint-based quasi-statics, which enables high-quality haptic interaction between contacting rigid bodies. The force applied to the user is computed using a novel constraint-based quasi-static approach, which allows us to suppress force artifacts typically found in previous methods. The constraint-based force applied to the user, which handles any number of simultaneous contact points, is computed within a few microseconds, while the update of the configuration of the rigid god-object is performed within a few milliseconds for rigid bodies containing up to tens of thousands of triangles. Our approach has been successfully tested on complex benchmarks. Our results show that the separation into asynchronous processes allows us to satisfy the different update rates required by the haptic and visual displays. Force shading and textures can be added, enlarging the range of haptic perception of a virtual environment. This paper is an extension of M. Ortega et al. [2006].

  • Comparison of Four Freely Available Frameworks for Image Processing and Visualization That Use ITK

    Page(s): 483 - 493
    PDF (4938 KB)

    Most image processing and visualization applications allow users to configure computation parameters and manipulate the resulting visualizations. SCIRun, VolView, MeVisLab, and the Medical Interaction Toolkit (MITK) are four image processing and visualization frameworks that were built for these purposes. All frameworks are freely available and all allow the use of the ITK C++ library. In this paper, the benefits and limitations of each visualization framework are presented to aid both application developers and users in the decision of which framework may be best to use for their application. The analysis is based on more than 50 evaluation criteria, functionalities, and example applications. We report implementation times for various steps in the creation of a reference application in each of the compared frameworks. The data-flow programming frameworks, SCIRun and MeVisLab, were determined to be best for developing application prototypes, while VolView was advantageous for nonautomatic end-user applications based on existing ITK functionalities, and MITK was preferable for automated end-user applications that might include new ITK classes specifically designed for the application.

  • Texture-Based Visualization of Unsteady 3D Flow by Real-Time Advection and Volumetric Illumination

    Page(s): 569 - 582
    PDF (2367 KB)

    This paper presents an interactive technique for the dense texture-based visualization of unsteady 3D flow, taking into account issues of computational efficiency and visual perception. High efficiency is achieved by a 3D graphics processing unit (GPU)-based texture advection mechanism that implements logical 3D grid structures by physical memory in the form of 2D textures. This approach results in fast read and write access to physical memory, independent of GPU architecture. Slice-based direct volume rendering is used for the final display. We investigate two alternative methods for the volumetric illumination of the result of texture advection: first, gradient-based illumination that employs a real-time computation of gradients, and, second, line-based lighting based on illumination in codimension 2. In addition to the Phong model, perception-guided rendering methods are considered, such as cool/warm shading, halo rendering, or color-based depth cueing. The problems of clutter and occlusion are addressed by supporting a volumetric importance function that enhances features of the flow and reduces visual complexity in less interesting regions. GPU implementation aspects, performance measurements, and a discussion of results are included to demonstrate our visualization approach.

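The abstract's "logical 3D grid structures by physical memory in the form of 2D textures" amounts to an address translation: each z-slice of the volume is stored as a tile in a larger 2D texture. A minimal sketch of that mapping (the function name and the row-major tiling layout are assumptions for exposition, not the paper's actual implementation):

```python
# Map a logical 3D grid coordinate to a 2D texture-atlas coordinate.
# Slices of the logical volume are tiled side by side in one large 2D
# texture, so a flow-advection shader can sample the volume with plain
# 2D texture fetches.

def logical_to_physical(x, y, z, nx, ny, tiles_per_row):
    """Return the (u, v) texel of voxel (x, y, z) for an nx-by-ny slice
    stored as tile z, with tiles laid out row-major in the atlas."""
    tile_col = z % tiles_per_row
    tile_row = z // tiles_per_row
    u = tile_col * nx + x
    v = tile_row * ny + y
    return u, v

# Example: a 4x4x8 volume tiled 4 slices per atlas row.
print(logical_to_physical(1, 2, 5, 4, 4, 4))  # -> (5, 6)
```

The inverse mapping is equally simple, which is what makes read and write access cheap regardless of whether the GPU supports true 3D render targets.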
  • A Radial Adaptation of the Sugiyama Framework for Visualizing Hierarchical Information

    Page(s): 583 - 594
    PDF (2786 KB)

    In radial drawings of hierarchical graphs, the vertices are placed on concentric circles rather than on horizontal lines, and the edges are drawn as outward-monotone segments of spirals rather than straight lines as in the standard Sugiyama framework. This drawing style is well suited for the visualization of centrality in social networks and similar concepts. Radial drawings also allow a more flexible edge routing than horizontal drawings, as edges can be routed around the center in two directions. In experimental results, this reduces the number of crossings by approximately 30 percent on average. Minimizing crossings is one of the major criteria for human readability. This paper is a detailed description of a complete framework for visualizing hierarchical information in a new radial fashion. In particular, we briefly cover extensions of the level assignment step to benefit from the increasing perimeters of the circles, present three heuristics for crossing reduction in radial level drawings, and also show how to visualize the results.

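The radial placement the abstract describes can be reduced to a polar layout of each level's vertices: level i lies on a circle of radius r_i, and a vertex's position within its level determines its angle. A toy sketch of that placement step only (names and parameters are illustrative assumptions; crossing reduction and spiral edge routing are omitted):

```python
import math

def radial_position(level, index, level_size, base_radius=1.0, spacing=1.0):
    """Cartesian coordinates of vertex `index` among `level_size`
    vertices placed evenly on concentric circle `level`.

    Levels further from the root get larger radii, so their circles
    have more perimeter available for the (usually larger) levels.
    """
    radius = base_radius + level * spacing
    angle = 2.0 * math.pi * index / level_size
    return radius * math.cos(angle), radius * math.sin(angle)
```

In the full framework, the angular order of vertices on each circle is what the crossing-reduction heuristics optimize, and edges between consecutive circles become monotone spiral segments rather than straight lines.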
  • Manifold Dual Contouring

    Page(s): 610 - 619
    PDF (2532 KB)

    Dual contouring (DC) is a feature-preserving isosurfacing method that extracts crack-free surfaces from both uniform and adaptive octree grids. We present an extension of DC that further guarantees that the mesh generated is a manifold even under adaptive simplification. Our main contribution is an octree-based topology-preserving vertex-clustering algorithm for adaptive contouring. The contoured surface generated by our method contains only manifold vertices and edges, preserves sharp features, and possesses much better adaptivity than those generated by other isosurfacing methods under topologically safe simplification.

  • Physics-Based Subsurface Visualization of Human Tissue

    Page(s): 620 - 629
    PDF (1159 KB) | HTML

    In this paper, we present a framework for simulating light transport in three-dimensional tissue with inhomogeneous scattering properties. Our approach employs a computational model to simulate light scattering in tissue through the finite element solution of the diffusion equation. Although our model handles both visible and nonvisible wavelengths, we especially focus on the interaction of near-infrared (NIR) light with tissue. Since most human tissue is permeable to NIR light, tools are being constructed to noninvasively image tumors and blood vasculature and to monitor blood oxygenation levels. We apply this model to a numerical phantom to visually reproduce the images generated by these real-world tools. Therefore, in addition to enabling inverse design of detector instruments, our computational tools produce physically-accurate visualizations of subsurface structures.

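The diffusion equation the abstract refers to is commonly written, in its steady-state form, as follows (standard notation for the diffusion approximation to light transport; the paper's exact source model and boundary conditions are not reproduced here):

```latex
\nabla \cdot \left( D(\mathbf{r}) \, \nabla \Phi(\mathbf{r}) \right)
  - \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = -S(\mathbf{r}),
\qquad
D(\mathbf{r}) = \frac{1}{3\left( \mu_a(\mathbf{r}) + \mu_s'(\mathbf{r}) \right)},
```

where Phi is the photon fluence rate, mu_a and mu_s' are the absorption and reduced scattering coefficients, and S is the isotropic source term. Spatially varying D and mu_a are what make the finite element discretization a natural fit for inhomogeneous tissue.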
  • TVCG Information for authors

    Page(s): c3
    PDF (86 KB)
    Freely Available from IEEE
  • [Back cover]

    Page(s): c4
    PDF (391 KB)
    Freely Available from IEEE

Aims & Scope

Visualization techniques and methodologies; visualization systems and software; volume visualization; flow visualization; multivariate visualization; modeling and surfaces; rendering; animation; user interfaces; visual programming; applications.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Ming Lin
Department of Computer Science
University of North Carolina