Eighth International Workshop on High-Level Parallel Programming Models and Supportive Environments, 2003. Proceedings.

22 April 2003

  • Proceedings Eighth International Workshop on High-Level Parallel Programming Models and Supportive Environments. Held in conjunction with 17th International Parallel and Distributed Processing Symposium (IPDPS)

    Publication Year: 2003
  • Supporting peer-2-peer interactions in the consumer grid

    Publication Year: 2003, Page(s):3 - 12
    Cited by:  Papers (1)

    A "Consumer Grid" provides the individual-based counterpart to the organisation-based computational grid. We describe a peer-to-peer system for utilising computational resources on the Grid - extending existing work undertaken in systems such as Entropia and SETI@home. The potential of such a distributed computing resource has been in some ways demonstrated recently by the SETI@home project, havin... View full abstract»

  • DPS - dynamic parallel schedules

    Publication Year: 2003, Page(s):15 - 24
    Cited by:  Papers (1)

    Dynamic Parallel Schedules (DPS) is a high-level framework for developing parallel applications on distributed memory computers (e.g. clusters of PCs). Its model relies on compositional customizable split-compute-merge graphs of operations (directed acyclic flow graphs). The graphs and the mapping of operations to processing nodes are specified dynamically at runtime. DPS applications are pipeline...

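    As a rough illustration of the split-compute-merge idea sketched in this abstract, here is a minimal C++ example using std::async; the split/compute/merge functions are hypothetical stand-ins, not the DPS API, and DPS would additionally map such operations onto cluster nodes dynamically at runtime.

    // Minimal split-compute-merge sketch (illustrative only, not DPS code).
    #include <cstddef>
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Split: partition the input into independent chunks.
    std::vector<std::vector<int>> split(const std::vector<int>& data, std::size_t parts) {
        std::vector<std::vector<int>> chunks(parts);
        for (std::size_t i = 0; i < data.size(); ++i)
            chunks[i % parts].push_back(data[i]);
        return chunks;
    }

    // Compute: the work applied to one chunk (here, a partial sum).
    long long compute(const std::vector<int>& chunk) {
        return std::accumulate(chunk.begin(), chunk.end(), 0LL);
    }

    // Merge: combine the partial results into the final result.
    long long merge(const std::vector<long long>& partials) {
        return std::accumulate(partials.begin(), partials.end(), 0LL);
    }

    int main() {
        std::vector<int> data(1000);
        std::iota(data.begin(), data.end(), 1);

        // split -> parallel compute -> merge; in DPS the mapping of these
        // operations to processing nodes is specified dynamically at runtime.
        std::vector<std::future<long long>> results;
        for (auto& chunk : split(data, 4))
            results.push_back(std::async(std::launch::async, compute, chunk));

        std::vector<long long> partials;
        for (auto& r : results) partials.push_back(r.get());

        std::cout << "sum = " << merge(partials) << "\n";   // prints 500500
    }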
  • ParoC++: a requirement-driven parallel object-oriented programming language

    Publication Year: 2003, Page(s):25 - 33

    Adaptive utilization of resources in a highly heterogeneous computational environment such as the Grid is a difficult problem. In this paper we present an object-oriented approach to this problem using requirement-driven parallel objects. Each parallel object is a self-described, shareable and passive object that resides in a separate memory address space. The allocation of the parallel object is...

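    As a loose illustration of the requirement-driven idea described in this abstract, the sketch below (hypothetical Requirements/Node types and allocate() helper, not ParoC++ syntax) matches an object's stated resource requirements against available nodes before the object is placed.

    // Illustrative sketch only: requirement-driven placement, not ParoC++ code.
    #include <iostream>
    #include <optional>
    #include <string>
    #include <vector>

    struct Requirements { double min_mflops; double min_memory_mb; };
    struct Node { std::string name; double mflops; double memory_mb; };

    // Pick the first node that satisfies the object's requirements; per the
    // abstract, the allocated parallel object then lives in its own address space.
    std::optional<Node> allocate(const Requirements& req, const std::vector<Node>& nodes) {
        for (const Node& n : nodes)
            if (n.mflops >= req.min_mflops && n.memory_mb >= req.min_memory_mb)
                return n;
        return std::nullopt;
    }

    int main() {
        std::vector<Node> grid = {{"slow", 200.0, 256.0}, {"fast", 1500.0, 2048.0}};
        Requirements req{1000.0, 512.0};   // what the parallel object asks for
        if (auto n = allocate(req, grid))
            std::cout << "object placed on " << n->name << "\n";
        else
            std::cout << "no node satisfies the requirements\n";
    }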
  • On the implementation of JavaSymphony

    Publication Year: 2003, Page(s):34 - 43

    In previous work we have introduced JavaSymphony, a system whose purpose is to simplify the development of distributed and parallel Java applications. JavaSymphony is a Java library that allows the programmer to control parallelism, load balancing, and locality at a high level. Objects can be explicitly distributed and migrated within virtual architectures, which impose a virtual hierarchy on a distributed syste...

  • Compiler and runtime support for running OpenMP programs on Pentium- and Itanium-architectures

    Publication Year: 2003, Page(s):47 - 55
    Cited by:  Papers (4)

    Exploiting Thread-Level Parallelism (TLP) is a promising way to improve the performance of applications with the advent of general-purpose, cost-effective uni-processor and shared-memory multiprocessor systems. In this paper, we describe the OpenMP* implementation in the Intel® C++ and Fortran compilers for Intel platforms. We present our major design considerations and decisions in the Inte...

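    For context, a minimal OpenMP fragment of the kind such a compiler and runtime system must handle (standard OpenMP, not code from the paper); it would be built with an OpenMP-enabled compiler, e.g. g++ -fopenmp.

    // Standard OpenMP example (not from the paper): parallel loop with reduction.
    #include <omp.h>
    #include <cstdio>

    int main() {
        const int n = 1000000;
        double sum = 0.0;

        // The compiler outlines this loop into a multithreaded region; the
        // runtime distributes iterations across threads and the reduction
        // clause combines the per-thread partial sums.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 1; i <= n; ++i)
            sum += 1.0 / i;

        std::printf("harmonic(%d) ~= %f (max threads: %d)\n",
                    n, sum, omp_get_max_threads());
        return 0;
    }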
  • SMP-aware message passing programming

    Publication Year: 2003, Page(s):56 - 65
    Cited by:  Papers (1)  |  Patents (11)

    The Message Passing Interface (MPI) is designed as an architecture-independent interface for parallel programming in the shared-nothing, message passing paradigm. We briefly summarize basic requirements for a high-quality implementation of MPI for efficient programming of SMP clusters and related architectures, and discuss possible, mild extensions of the topology functionality of MPI, which, while...

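    For context, the existing MPI topology functionality that the paper discusses extending looks like the sketch below (plain standard MPI, not the proposed extensions); the reorder flag already gives an implementation some freedom for SMP-aware rank placement.

    // Standard MPI Cartesian topology (illustration, not the paper's extensions).
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int size;
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Describe a 2-D process grid; MPI_Dims_create picks a balanced shape.
        int dims[2] = {0, 0};
        MPI_Dims_create(size, 2, dims);
        int periods[2] = {0, 0};

        // reorder = 1 lets the library renumber ranks, e.g. so that grid
        // neighbours can be placed on the same shared-memory node.
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

        int cart_rank, coords[2];
        MPI_Comm_rank(cart, &cart_rank);
        MPI_Cart_coords(cart, cart_rank, 2, coords);
        std::printf("rank %d -> grid position (%d, %d)\n",
                    cart_rank, coords[0], coords[1]);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }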
  • A comparison between MPI and OpenMP Branch-and-Bound skeletons

    Publication Year: 2003, Page(s):66 - 73
    Cited by:  Papers (2)

    This article describes and compares two parallel implementations of Branch-and-Bound skeletons. Using the C++ programming language, the user has to specify the type of the problem, the type of the solution and the specific characteristics of the branch-and-bound technique. This information is combined with the provided resolution skeletons to obtain a distributed and a shared-memory parallel program. MP...

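    As a rough sketch of the skeleton idea in this abstract (hypothetical BBSkeleton class, not the library evaluated in the paper): the user supplies the node type and the branch/bound/value hooks, and the skeleton owns the generic search loop that an MPI or OpenMP variant would parallelise.

    // Hypothetical branch-and-bound skeleton; sequential, for illustration only.
    #include <algorithm>
    #include <functional>
    #include <iostream>
    #include <limits>
    #include <stack>
    #include <vector>

    template <typename Node>
    struct BBSkeleton {
        std::function<std::vector<Node>(const Node&)> branch;   // expand a node
        std::function<double(const Node&)> bound;               // optimistic bound
        std::function<bool(const Node&)> complete;              // full solution?
        std::function<double(const Node&)> value;               // value if complete

        double solve(const Node& root) {
            double best = -std::numeric_limits<double>::infinity();
            std::stack<Node> open;
            open.push(root);
            while (!open.empty()) {
                Node n = open.top(); open.pop();
                if (bound(n) <= best) continue;                              // prune
                if (complete(n)) { best = std::max(best, value(n)); continue; }
                for (const Node& c : branch(n)) open.push(c);
            }
            return best;
        }
    };

    // Toy 0/1 knapsack instance to exercise the skeleton.
    struct KNode { int next; int weight; int profit; };

    int main() {
        const std::vector<int> w = {2, 3, 4, 5}, p = {3, 4, 5, 6};
        const int capacity = 5;

        BBSkeleton<KNode> bb;
        bb.branch = [&](const KNode& n) {
            std::vector<KNode> kids;
            if (n.next >= (int)w.size()) return kids;
            kids.push_back({n.next + 1, n.weight, n.profit});                // skip item
            if (n.weight + w[n.next] <= capacity)                            // take item
                kids.push_back({n.next + 1, n.weight + w[n.next], n.profit + p[n.next]});
            return kids;
        };
        bb.bound = [&](const KNode& n) {        // crude bound: all remaining profit
            int rest = 0;
            for (int i = n.next; i < (int)p.size(); ++i) rest += p[i];
            return double(n.profit + rest);
        };
        bb.complete = [&](const KNode& n) { return n.next == (int)w.size(); };
        bb.value    = [](const KNode& n)  { return double(n.profit); };

        std::cout << "best profit = " << bb.solve({0, 0, 0}) << "\n";   // prints 7
        return 0;
    }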
  • Initial design of a test suite for automatic performance analysis tools

    Publication Year: 2003, Page(s):77 - 86
    Cited by:  Papers (1)

    Automatic performance tools must, of course, be tested to verify that they perform their task correctly. Because performance tools are meta-programs, tool testing is more complex than ordinary program testing and comprises at least three aspects. First, it must be ensured that the tools neither alter the semantics nor distort the run-time behavior of the application under investigation. Next, it m...

  • Algorithmic concept recognition support for skeleton based parallel programming

    Publication Year: 2003, Page(s):87 - 96

    Parallel Skeletons have been proposed as a possible programming model for parallel architectures. One of the problems with this approach is the choice of the skeleton which is best suited to the characteristics of the algorithm/program to be developed/parallelized, and of the target architecture, in terms of performance of the parallel implementation. Another problem arising with parallelization o...

  • Author index

    Publication Year: 2003, Page(s): 97