
Workshop on Ultrascale Visualization, 2008 (UltraVis 2008)

Date: 16 Nov. 2008

  • [Title page]

    Publication Year: 2008, Page(s): i
  • [Copyright notice]

    Publication Year: 2008, Page(s): ii
  • Content

    Publication Year: 2008, Page(s): iii
  • Information and Knowledge assisted analysis and Visualization of large-scale data

    Publication Year: 2008, Page(s): 1 - 8
    Cited by: Papers (1)

    The ever-increasing sizes of data produced from a variety of scientific studies pose a formidable challenge for the subsequent data analysis and visualization tasks. While steady advances in graphics hardware enable faster rendering, achieving interactive visualization of large data must also rely on effective data filtering and organization. In many cases, the best interactivity can only be obtained by taking into account the intrinsic properties of the data and domain knowledge to better reduce and organize the data for visualization. As a result, in recent years, we have seen increasing research and development effort in the area of information and knowledge assisted visualization (IKV). In this paper, we survey research in IKV of scientific data and also identify a few directions for further work in this emerging area.

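As a rough illustration of the information-assisted data reduction surveyed above, the sketch below scores blocks of a volume by the Shannon entropy of their scalar values and keeps only the information-rich blocks for rendering. The block layout, histogram resolution, and threshold are illustrative assumptions, not details taken from the paper.

```c
/* Hypothetical sketch: information-assisted data reduction.
 * Each block of a large volume is scored by the Shannon entropy of its
 * scalar values; only blocks above a threshold are kept for rendering. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define BINS 64

/* Shannon entropy (in bits) of one block of scalar samples in [0, 1]. */
static double block_entropy(const float *samples, size_t n)
{
    size_t hist[BINS] = {0};
    for (size_t i = 0; i < n; ++i) {
        int b = (int)(samples[i] * (BINS - 1));
        if (b < 0) b = 0;
        if (b >= BINS) b = BINS - 1;
        hist[b]++;
    }
    double h = 0.0;
    for (int b = 0; b < BINS; ++b) {
        if (hist[b] == 0) continue;
        double p = (double)hist[b] / (double)n;
        h -= p * log2(p);
    }
    return h;
}

int main(void)
{
    const size_t nblocks = 4, block_size = 8;
    /* Toy data: two nearly constant blocks and two varied blocks. */
    float data[4][8] = {
        {0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f},
        {0.1f, 0.9f, 0.3f, 0.7f, 0.2f, 0.8f, 0.4f, 0.6f},
        {0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f},
        {0.9f, 0.1f, 0.5f, 0.3f, 0.7f, 0.2f, 0.8f, 0.6f},
    };
    const double threshold = 1.0; /* keep blocks with > 1 bit of entropy */

    for (size_t i = 0; i < nblocks; ++i) {
        double h = block_entropy(data[i], block_size);
        printf("block %zu: entropy %.2f bits -> %s\n",
               i, h, h > threshold ? "keep" : "skip");
    }
    return 0;
}
```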
  • The data analysis computing hierarchy

    Publication Year: 2008, Page(s): 9 - 12

    With the dramatic increases in simulation complexity and resolution comes an equally dramatic challenge for resources, both computational and storage, needed to facilitate analysis and understanding of the results. Traditionally these needs have been met by powerful workstations equipped with sophisticated analysis tools and special-purpose visualization hardware. More and more, these personal computing resources are unable to manage the so-called data deluge without significant support from additional high-end resources. The response to this crisis is taking shape in the form of an additional layer in the analysis pipeline: the visual and data analysis cluster. This high-performance computing resource stands somewhere between the source of the data deluge (a supercomputer simulation or a large data collection experiment) and the user's personal computer. In this paper we discuss (1) the scale and character of a few current large-data enterprises, (2) a descriptive hierarchical model of the computation and analysis pipeline, and (3) some of the capabilities that will need to be developed in order for such an architecture to meet the challenges of efficient analysis in the face of huge datasets, I/O bottlenecks, and remote users.

  • Assessing improvements to the parallel volume rendering pipeline at large scale

    Publication Year: 2008, Page(s): 13 - 23
    Cited by: Papers (2)

    Computational science's march toward the petascale demands innovations in analysis and visualization of the resulting datasets. As scientists generate terabyte and petabyte data, it is insufficient to measure the performance of visual analysis algorithms by rendering speed alone, because performance is dominated by data movement. We take a systemwide view in analyzing the performance of software volume rendering on the IBM Blue Gene/P at over 10,000 cores by examining the relative costs of the I/O, rendering, and compositing portions of the volume rendering algorithm. This examination uncovers room for improvement in data input, load balancing, memory usage, image compositing, and image output. We present four improvements to the basic algorithm to address these bottlenecks. We show the benefit of an alternative rendering distribution scheme that improves load balance, and how to scale memory usage so that large data and image sizes do not overload system memory. To improve compositing, we experiment with a hybrid MPI-multithread programming model, and to mitigate the high cost of I/O, we implement multiple parallel pipelines to partially hide the I/O cost when rendering many time steps. Measuring the benefits of these techniques at scale reinforces the conclusion that BG/P is an effective platform for volume rendering of large datasets and that our volume rendering algorithm, enhanced by the techniques presented here, scales to large problem and system sizes.

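The hybrid MPI-multithread idea mentioned in the abstract can be sketched as follows: each MPI rank renders its portion of the volume using OpenMP threads, and the partial images are then composited across ranks. The sketch below is an assumption-laden simplification; it uses a maximum-intensity composite because that operation is commutative and maps directly onto MPI_Reduce, whereas the paper's actual compositing of semi-transparent partial images is order-dependent and more involved.

```c
/* Hypothetical sketch of a hybrid MPI + multithread render/composite step:
 * OpenMP threads render a rank's data partition, MPI composites the
 * partial images (per-pixel maximum, i.e. MIP). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define W 512
#define H 512

int main(int argc, char **argv)
{
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI does not support funneled threading\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float *local = malloc(W * H * sizeof *local);
    float *final = rank == 0 ? malloc(W * H * sizeof *final) : NULL;

    /* Multithreaded rendering of this rank's data partition (stand-in:
     * each pixel gets a rank-dependent intensity). */
    #pragma omp parallel for
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            local[y * W + x] = (float)rank / (float)size;

    /* Compositing across ranks: per-pixel maximum, gathered on rank 0. */
    MPI_Reduce(local, final, W * H, MPI_FLOAT, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("composited %dx%d image from %d ranks\n", W, H, size);

    free(local);
    free(final);
    MPI_Finalize();
    return 0;
}
```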
  • Petascale visualization: Approaches and initial results

    Publication Year: 2008, Page(s): 24 - 28

    With the advent of the first petascale supercomputer, Los Alamos's Roadrunner, there is a pressing need to address how to visualize petascale data. The crux of the petascale visualization performance problem is interactive rendering, since it is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. In this work, we evaluated the rendering performance of multi-core CPU and GPU-based processors. To achieve high performance on multi-core processors, we tested with multi-core optimized raytracing engines for rendering. For real-world performance testing, and to prepare for petascale visualization tasks, we interfaced these rendering engines with VTK and ParaView. Initial results show that rendering software optimized for multi-core CPU processors provides performance competitive with GPUs for the parallel rendering of massive data. The current multi-core architectural trend suggests that multi-core based supercomputers can provide interactive visualization and rendering support now and in the future.

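A minimal sketch of the kind of multi-core software rendering the paper benchmarks: the per-pixel ray-casting loop is embarrassingly parallel, so an OpenMP parallel-for keeps every core busy. The synthetic volume, orthographic camera, and maximum-intensity projection below are stand-ins chosen for brevity, not the raytracing engines or the VTK/ParaView integration used in the paper.

```c
/* Hypothetical sketch of multi-core software ray casting: every pixel is
 * independent, so the image loop parallelizes directly with OpenMP. */
#include <omp.h>
#include <stdio.h>

#define N  64          /* volume resolution N^3 */
#define W  256         /* image width           */
#define H  256         /* image height          */

static float volume_sample(int i, int j, int k)
{
    /* Synthetic scalar field: a soft spherical blob in the middle. */
    float x = i - N / 2.0f, y = j - N / 2.0f, z = k - N / 2.0f;
    float r2 = (x * x + y * y + z * z) / (N * N / 4.0f);
    return r2 < 1.0f ? 1.0f - r2 : 0.0f;
}

int main(void)
{
    static float image[H][W];

    /* Orthographic rays along the k axis, maximum-intensity projection. */
    #pragma omp parallel for collapse(2) schedule(static)
    for (int py = 0; py < H; ++py) {
        for (int px = 0; px < W; ++px) {
            int i = px * N / W, j = py * N / H;
            float vmax = 0.0f;
            for (int k = 0; k < N; ++k) {
                float v = volume_sample(i, j, k);
                if (v > vmax) vmax = v;
            }
            image[py][px] = vmax;
        }
    }

    printf("rendered %dx%d image with %d threads, center value %.2f\n",
           W, H, omp_get_max_threads(), image[H / 2][W / 2]);
    return 0;
}
```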
  • An outlook into ultra-scale visualization of large-scale biological data

    Publication Year: 2008, Page(s): 29 - 39

    As bioinformatics has evolved from a reductionistic approach to a complementary multi-scale integrative approach, new challenges in ultra-scale visualization have arisen. Even though visualization is a critical component of large-scale biological data analysis, the ultra-scale nature of systems biology has given rise to novel problems in visualization that are not addressed by existing methods. Visualization is a rich and actively researched domain, and there are many open research questions pertaining to the increasing demands of visualization in bioinformatics. In this paper, we present several broadly important ultra-scale visualization challenges and discuss specific examples of ultra-scale applications in systems biology.

  • Analysis of fragmentation in shock physics simulation

    Publication Year: 2008, Page(s): 40 - 46
    Cited by: Papers (1) | Patents (1)

    Analyzing shock physics, which can involve high energies, high-velocity materials, and highly variable results, is challenging. Very little can be measured during a shock physics experiment. Most experimental data is collected in the aftermath. High-fidelity simulations using codes like CTH are possible, but require a significant amount of post-processing to properly understand the results. Physical structures and their accompanying data must be derived from the volumetric properties computed by the simulation. And, of course, the simulations must be validated against experiments. To capture small fragmentation effects, the CTH simulations must be run at very large scales using adaptive meshes, which further complicates the post-processing. By using the scalable visualization tool ParaView coupled with customized feature identification, we are able to provide both the analysis and verification of these large-scale CTH simulations.

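To give a concrete flavor of the customized feature identification described above, the sketch below groups cells whose material volume fraction exceeds a threshold into connected components (fragments) with a simple flood fill on a uniform grid. The field, grid size, and threshold are hypothetical; the paper's analysis operates on adaptive CTH meshes inside ParaView.

```c
/* Hypothetical sketch of fragment identification: threshold a volume
 * fraction field and label 6-connected components with a flood fill. */
#include <stdio.h>

#define NX 4
#define NY 4
#define NZ 1
#define IDX(i, j, k) ((k) * NY * NX + (j) * NX + (i))

int main(void)
{
    /* Toy volume-fraction field: two separate clusters of "material". */
    float vf[NX * NY * NZ] = {
        0.9f, 0.8f, 0.0f, 0.0f,
        0.7f, 0.0f, 0.0f, 0.6f,
        0.0f, 0.0f, 0.4f, 0.9f,
        0.0f, 0.0f, 0.0f, 0.0f,
    };
    const float threshold = 0.5f;

    int label[NX * NY * NZ] = {0};   /* 0 = unvisited / background */
    int next_label = 0;
    int stack[NX * NY * NZ], top;

    for (int s = 0; s < NX * NY * NZ; ++s) {
        if (vf[s] <= threshold || label[s]) continue;
        label[s] = ++next_label;      /* new fragment found */
        stack[0] = s; top = 1;
        while (top > 0) {             /* flood fill this fragment */
            int c = stack[--top];
            int i = c % NX, j = (c / NX) % NY, k = c / (NX * NY);
            const int di[6] = {1, -1, 0, 0, 0, 0};
            const int dj[6] = {0, 0, 1, -1, 0, 0};
            const int dk[6] = {0, 0, 0, 0, 1, -1};
            for (int n = 0; n < 6; ++n) {
                int ni = i + di[n], nj = j + dj[n], nk = k + dk[n];
                if (ni < 0 || ni >= NX || nj < 0 || nj >= NY ||
                    nk < 0 || nk >= NZ) continue;
                int nc = IDX(ni, nj, nk);
                if (vf[nc] > threshold && !label[nc]) {
                    label[nc] = next_label;
                    stack[top++] = nc;
                }
            }
        }
    }
    printf("found %d fragments\n", next_label);  /* prints 2 here */
    return 0;
}
```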
  • Scalable Adaptive Graphics middleware for visualization streaming and collaboration in ultra resolution display environments

    Publication Year: 2008, Page(s): 47 - 54
    Cited by: Papers (4)

    This paper describes the motivation and capabilities of SAGE, the Scalable Adaptive Graphics Environment, a middleware and software client for supporting ultra-resolution collaboration and visualization.

  • Design of cooperative visualization environment with intensive data management in project lifecycle

    Publication Year: 2008, Page(s): 55 - 61

    Scientific data processing and visualization of computational simulation data have played an important role in knowledge creation and its transfer for the benefit of society, through scientific discovery and understanding of physical and chemical phenomena. Massively parallel processing architectures have greatly increased computational power and enlarged the scale of computation, facilitating the generation of larger and larger amounts of data. To tackle this oncoming tsunami of data, we propose a post-processing system for large-scale data, focusing on the idea that a post-processing system needs to assist the researcher's thinking process. We present a candidate design solution targeting continuous development, along with technologies for productivity improvement such as collaborative work and visualization. This paper describes the current development status of a post-processing system designed for the next-generation 10-petaflop supercomputer under development in Japan.

  • Author index

    Publication Year: 2008, Page(s): 62