
Programming Models for Massively Parallel Computers, 1993. Proceedings

Date: 20 Sept. 1993


Displaying results 1-24 of 24
  • An evaluation of coarse grain dataflow code generation strategies

    Publication Year: 1993, Page(s):63 - 71
    Cited by:  Papers (2)

    Presents top-down and bottom-up methods for generating coarse grain dataflow or multithreaded code, and evaluates their effectiveness. The top-down technique generates clusters directly from the intermediate data dependence graph used for compiler optimizations. Bottom-up techniques coalesce fine-grain dataflow code into clusters. We measure the resulting number of clusters executed, cluster size,...

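    The abstract only names the strategies; as a rough illustration (not the paper's algorithm), the sketch below coalesces linear chains of a tiny hand-coded dependence graph into coarse-grain clusters, the basic move of a bottom-up clustering pass. The graph, its encoding and the merge rule are all invented for the example.

        #include <stdio.h>

        /* Hypothetical illustration (not the paper's algorithm): coalesce linear
         * chains of a tiny data dependence graph into coarse-grain clusters.
         * Nodes are assumed to be numbered in topological order; a node is merged
         * into its predecessor's cluster when it is that predecessor's only
         * successor and has exactly one predecessor. */
        #define N 6

        int succ[N]  = { 1, 2, -1, 4, 5, -1 };  /* single successor, -1 = none */
        int nsucc[N] = { 1, 1, 0, 1, 1, 0 };    /* number of successors        */
        int npred[N] = { 0, 1, 1, 0, 1, 1 };    /* number of predecessors      */

        int main(void) {
            int cluster[N];
            for (int v = 0; v < N; v++) cluster[v] = v;   /* one cluster per node */

            /* Bottom-up coalescing pass over the chain structure. */
            for (int v = 0; v < N; v++) {
                int s = succ[v];
                if (s >= 0 && nsucc[v] == 1 && npred[s] == 1)
                    cluster[s] = cluster[v];          /* merge s into v's cluster */
            }
            for (int v = 0; v < N; v++)
                printf("node %d -> cluster %d\n", v, cluster[v]);
            return 0;
        }
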
  • Proceedings of Workshop on Programming Models for Massively Parallel Computers

    Publication Year: 1993
  • MANIFOLD: a programming model for massive parallelism

    Publication Year: 1993, Page(s):151 - 159

    MANIFOLD is a coordination language for orchestrating the communication among independent, cooperating processes in a massively parallel or distributed application. The fundamental principle underlying MANIFOLD is the complete separation of computation from communication. This means that in MANIFOLD: computation processes know nothing about their own communication with other processes; and coo...

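    The following C sketch (plain C, not MANIFOLD syntax) illustrates the stated principle of separating computation from communication: the worker functions only see the ports they are handed, while a coordinator owns the streams and the wiring. Running the stages sequentially is a simplification; in MANIFOLD the processes are independent and concurrent.

        #include <stdio.h>

        /* Sketch of the coordination principle described in the abstract, written
         * in plain C: computation processes only see the ports handed to them;
         * the coordinator alone decides how the ports are wired together. */
        #define CAP 16

        typedef struct { int buf[CAP]; int n; } Port;

        /* Workers: pure computation, no knowledge of their peers. */
        void produce(Port *out)           { for (int i = 1; i <= 5; i++) out->buf[out->n++] = i; }
        void scale  (Port *in, Port *out) { for (int i = 0; i < in->n; i++) out->buf[out->n++] = 10 * in->buf[i]; }
        void consume(Port *in)            { for (int i = 0; i < in->n; i++) printf("%d\n", in->buf[i]); }

        int main(void) {
            /* Coordinator: owns the streams and the wiring produce -> scale -> consume. */
            Port a = { {0}, 0 }, b = { {0}, 0 };
            produce(&a);
            scale(&a, &b);
            consume(&b);
            return 0;
        }
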
  • Modeling parallel computers as memory hierarchies

    Publication Year: 1993, Page(s):116 - 123
    Cited by:  Papers (9)

    A parameterized generic model that captures the features of diverse computer architectures would facilitate the development of portable programs. Specific models appropriate to particular computers are obtained by specifying parameters of the generic model. A generic model should be simple, and for each machine that it is intended to represent, it should have a reasonably accurate specific model. ...

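    The abstract does not give the model's parameters; the sketch below shows only the general shape of such a parameterized cost model, with a specific machine obtained by fixing per-level parameters. The level names and numbers are invented for illustration.

        #include <stdio.h>

        /* Illustrative only: a generic memory-hierarchy cost model in the spirit
         * described by the abstract.  A specific machine is obtained by fixing
         * the per-level parameters; the values below are invented. */
        typedef struct { const char *name; double latency; } Level;

        double predicted_time(const Level *lvl, const long *accesses, int nlevels) {
            double t = 0.0;
            for (int i = 0; i < nlevels; i++)
                t += (double)accesses[i] * lvl[i].latency;   /* cost = sum over levels */
            return t;
        }

        int main(void) {
            /* One hypothetical machine instance: register, cache, local, remote. */
            Level machine[]  = { {"register", 1e-9}, {"cache", 5e-9},
                                 {"local",    1e-7}, {"remote", 1e-5} };
            long  accesses[] = { 1000000, 200000, 50000, 1000 };
            printf("predicted time: %g s\n", predicted_time(machine, accesses, 4));
            return 0;
        }
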
  • The DSPL programming environment

    Publication Year: 1993, Page(s):35 - 42

    Gives an overview of the principal concepts employed in the DSPL (Data Stream Processing Language) programming environment, an integrated approach to automating the system design and implementation of parallel applications. The programming environment consists of a programming language and the following set of integrated tools: (1) The modeling tool automatically derives a software model from the given ...

  • Performance analysis of distributed applications by suitability functions

    Publication Year: 1993, Page(s):191 - 197
    Cited by:  Papers (3)

    A simple programming model of distributed-memory message-passing computer systems is first applied to describe the architecture/application pair by two sets of parameters. The node timing formula is then derived on the basis of scalar, vector and communication components. A set of suitability functions, extracted from the performance formulae, is defined. These functions are applied as an examp...

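    The paper's exact formulae are not reproduced in the abstract. The sketch below shows one plausible node timing formula built from scalar, vector and communication components, and a suitability value derived from it; the parameter names and values are invented for illustration.

        #include <stdio.h>

        /* Hypothetical instance of the kind of node timing formula the abstract
         * mentions: T_node = n_s*t_s + n_v*t_v + n_c*t_c, with "suitability"
         * taken here as the fraction of time spent computing.  All parameter
         * values are invented. */
        int main(void) {
            double n_s = 1e6, t_s = 50e-9;   /* scalar operations and their cost   */
            double n_v = 4e6, t_v = 5e-9;    /* vector element operations and cost */
            double n_c = 2e3, t_c = 20e-6;   /* messages and per-message cost      */

            double t_comp = n_s * t_s + n_v * t_v;
            double t_comm = n_c * t_c;
            double t_node = t_comp + t_comm;
            double suitability = t_comp / t_node;  /* closer to 1 = better match */

            printf("node time   = %g s\n", t_node);
            printf("suitability = %.3f\n", suitability);
            return 0;
        }
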
  • PROMOTER: an application-oriented programming model for massive parallelism

    Publication Year: 1993, Page(s):198 - 205
    Cited by:  Papers (1)

    The article deals with the rationale and concepts of a programming model for massive parallelism. We mention the basic properties of massively parallel applications and develop a programming model for data parallelism on distributed-memory computers. Its key features are a suitable combination of homogeneity and heterogeneity aspects, a unified representation of data point configuration and interconne...

  • Parallel symbolic processing-can it be done?

    Publication Year: 1993, Page(s):24 - 25

    My principal answer is: yes, but it depends. Parallelization of symbolic applications is possible, but only for certain classes of applications. Distributed memory may prevent parallelization in some cases where the ratio of communication to computation overhead becomes too high, but it may also be an advantage when applications require much garbage collection, which can then be done in a distrib...

  • Compiling data parallel programs to message passing programs for massively parallel MIMD systems

    Publication Year: 1993, Page(s):100 - 107
    Cited by:  Papers (2)

    The currently dominant message-passing programming paradigm for MIMD systems is difficult to use and error-prone. One approach that avoids explicit communication is the data-parallel programming model. This model provides a single thread of control, a global name space, and loosely synchronous parallel computation. It is easy to use, and data-parallel programs usually scale very well. Based on the ...

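    As a hedged illustration of the standard translation step such compilers perform (not this paper's actual output), the sketch below reduces a global loop over a block-distributed array to local loop bounds per process, following the owner-computes rule.

        #include <stdio.h>

        /* Sketch of the usual data-parallel -> message-passing translation step:
         * a global loop over N elements is reduced to local bounds on each of P
         * processes under a block distribution, following the owner-computes rule. */
        #define N 100
        #define P 4

        int main(void) {
            for (int p = 0; p < P; p++) {
                int chunk = (N + P - 1) / P;           /* block size               */
                int lo = p * chunk;                    /* first global index owned */
                int hi = lo + chunk < N ? lo + chunk : N;
                printf("process %d computes i = %d .. %d\n", p, lo, hi - 1);
                /* The generated node program would also insert send/receive
                 * calls here for any non-local values the loop body reads. */
            }
            return 0;
        }
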
  • Structured parallel programming

    Publication Year: 1993, Page(s):160 - 169
    Cited by:  Papers (3)  |  Patents (2)

    Parallel programming is a difficult task involving many complex issues such as resource allocation and process coordination. We propose a solution to this problem based on the use of a repertoire of parallel algorithmic forms, known as skeletons. The use of skeletons enables the meaning of a parallel program to be separated from its behaviour. Central to this methodology is the use of transformat...

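    A minimal sketch of the skeleton idea, assuming a simple "farm"/"map" form: the skeleton fixes the coordination pattern and the user supplies only the worker function. The sequential loop stands in for a parallel implementation that could be substituted without changing user code.

        #include <stdio.h>

        /* Minimal sketch of an algorithmic skeleton in C: the "farm"/"map"
         * skeleton fixes the coordination pattern, the user only supplies the
         * worker function.  Here the skeleton runs sequentially; a parallel
         * implementation could replace the loop without touching user code. */
        typedef int (*worker_fn)(int);

        void farm(worker_fn f, const int *in, int *out, int n) {
            for (int i = 0; i < n; i++)
                out[i] = f(in[i]);     /* independent tasks: freely distributable */
        }

        static int square(int x) { return x * x; }

        int main(void) {
            int in[5] = { 1, 2, 3, 4, 5 }, out[5];
            farm(square, in, out, 5);
            for (int i = 0; i < 5; i++) printf("%d ", out[i]);
            printf("\n");
            return 0;
        }
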
  • Formal methods for concurrent systems design: a survey

    Publication Year: 1993, Page(s):12 - 21

    Concurrency is frequently employed as a means to increase the performance of computing systems: a conventional sequential program is designed first and parallelised later on. This contribution is intended to show that concurrent systems can also differ essentially from conventional, sequential systems, with respect to the kind of problems to be solved, and even to the principal limits of capability a...

  • Massively parallel programming using object parallelism

    Publication Year: 1993, Page(s):144 - 150
    Cited by:  Papers (1)

    We introduce the concept of object parallelism. Object parallelism offers a unified model in comparison with traditional parallelisation techniques such as data parallelism and algorithmic parallelism. In addition, two fundamental advantages of the object-oriented approach are exploited. First, the abstraction level of object parallelism is application-oriented, i.e., it hides the details of the un...

  • On the implementation of virtual shared memory

    Publication Year: 1993, Page(s):172 - 178
    Cited by:  Papers (1)

    The field of parallel algorithms has demonstrated that a machine model with virtual shared memory is easy to program. Most efforts in this field have focused on the PRAM model. Theoretical results show that a PRAM can be simulated optimally on an interconnection network. We discuss implementations of some of these PRAM simulations and their performance.

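    One common ingredient of such PRAM simulations is to hash every shared address to a memory module so that accesses spread over the network. The sketch below illustrates only that mapping; the hash function is a toy stand-in, not the universal hashing analysed in the literature.

        #include <stdio.h>

        /* Sketch of one ingredient of PRAM simulation on a distributed-memory
         * machine: every shared address is mapped by a (pseudo-)random hash to
         * a memory module, so concurrent accesses spread over the network. */
        #define MODULES 8

        unsigned module_of(unsigned long addr) {
            addr ^= addr >> 7;
            addr *= 2654435761UL;          /* multiplicative mixing, toy choice */
            return (unsigned)(addr % MODULES);
        }

        int main(void) {
            for (unsigned long a = 0; a < 16; a++)
                printf("shared address %2lu -> module %u\n", a, module_of(a));
            return 0;
        }
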
  • The Modula-2* environment for parallel programming

    Publication Year: 1993, Page(s):43 - 52
    Cited by:  Papers (1)

    Presents a portable parallel programming environment for Modula-2*, an explicitly parallel machine-independent extension of Modula-2. Modula-2* offers synchronous and asynchronous parallelism, a global single address space, and automatic data and process distribution. The Modula-2* system consists of a compiler, a debugger, a cross-architecture make, a graphical X Windows control panel, run-time sys...

  • Structuring data parallelism using categorical data types

    Publication Year: 1993, Page(s):110 - 115
    Cited by:  Patents (1)
    Request permission for commercial reuse | Click to expandAbstract | PDF file iconPDF (384 KB)

    Data parallelism is a powerful approach to parallel computation, particularly when it is used with complex data types. Categorical data types are extensions of abstract data types that structure computations in a way that is useful for parallel implementation. In particular, they decompose the search for good algorithms on a data type into subproblems; all homomorphisms can be implemented by a sin...

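    The homomorphism property alluded to here is what allows a single parallel implementation scheme: if h(x ++ y) = combine(h(x), h(y)) for an associative combine, then h can be evaluated segment-wise and the partial results combined. The sketch below checks this for summation over two segments; the example is illustrative, not taken from the paper.

        #include <stdio.h>

        /* Sketch of the list-homomorphism idea behind categorical data types:
         * a function defined by an associative "combine" can be evaluated on
         * segments (one per processor) and the partial results combined. */
        static int combine(int a, int b) { return a + b; }

        static int h(const int *xs, int n) {
            int acc = 0;
            for (int i = 0; i < n; i++) acc = combine(acc, xs[i]);
            return acc;
        }

        int main(void) {
            int xs[8] = { 3, 1, 4, 1, 5, 9, 2, 6 };
            int whole = h(xs, 8);
            int split = combine(h(xs, 4), h(xs + 4, 4));  /* segment-wise, then combine */
            printf("whole = %d, split = %d\n", whole, split);
            return 0;
        }
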
  • An experimental parallelizing systolic compiler for regular programs

    Publication Year: 1993, Page(s):92 - 99

    Systolic transformation techniques are used for parallelization of regular loop programs. After a short introduction to systolic transformation, an experimental compiler system is presented that generates parallel C code by applying different transformation methods. This system is designed as a basis for development towards a systolic compiler generating efficient fine-grained parallel code for re...

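    Not the compiler's output, but a sketch of the kind of regular loop nest such systolic transformations target: after skewing, every anti-diagonal (wavefront) of the iteration space contains only independent iterations.

        #include <stdio.h>

        /* Sketch of a wavefront (skewing) transformation of the kind a systolic
         * compiler applies to regular loop nests.  Original dependence:
         * a[i][j] needs a[i-1][j] and a[i][j-1]; after skewing, all iterations
         * on one anti-diagonal w = i + j are independent and could run in parallel. */
        #define N 5

        int main(void) {
            int a[N][N] = { {0} };
            for (int i = 0; i < N; i++) { a[i][0] = 1; a[0][i] = 1; }

            for (int w = 2; w <= 2 * (N - 1); w++) {      /* wavefronts             */
                for (int i = 1; i < N; i++) {             /* parallel within a wave */
                    int j = w - i;
                    if (j >= 1 && j < N)
                        a[i][j] = a[i - 1][j] + a[i][j - 1];
                }
            }
            printf("a[%d][%d] = %d\n", N - 1, N - 1, a[N - 1][N - 1]);
            return 0;
        }
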
  • Interprocedural heap analysis for parallelizing imperative programs

    Publication Year: 1993, Page(s):74 - 82

    The parallelization of imperative programs working on pointer data structures is possible by using extensive heap analysis. To this end, we consider a new interprocedural version of the heap analysis algorithm with summary nodes from Chase, Wegman and Zadeck (1990). Our analysis handles arbitrary call graphs, including recursion, works on a realistic low-level intermediate language, and uses a modifie...

  • Beyond the data parallel paradigm: issues and options

    Publication Year: 1993, Page(s):179 - 190
    Cited by:  Papers (1)

    Currently, the predominant approach in compiling a program for parallel execution on a distributed memory multiprocessor is driven by the data parallel paradigm, in which user-specified data mappings are used to derive computation mappings via ad hoc rules such as owner-computes. We explore a more general approach which is driven by the selection of computation mappings from the program dependence...

  • A programming model for reconfigurable mesh based parallel computers

    Publication Year: 1993, Page(s):124 - 133
    Cited by:  Papers (3)

    The paper describes a high-level programming model for reconfigurable mesh architectures. We analyze the engineering and technological issues of the implementation of reconfigurable mesh architectures and define an abstract architecture, called polymorphic processor array. We define both a computation model and a programming model for polymorphic processor arrays and design a parallel programming ...

  • Parallel programming models and their interdependence with parallel architectures

    Publication Year: 1993, Page(s):2 - 11
    Cited by:  Patents (26)

    Because of its superior performance and cost-effectiveness, parallel computing will become the future standard, provided we have the appropriate programming models, tools and compilers needed to make parallel computers widely usable. The dominating programming style is procedural, given in the form of either the memory-sharing or the message-passing paradigm. The advantages and disadvantages of th...

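    As a small, hedged contrast between the two procedural paradigms named here, the sketch below performs a trivial exchange once through memory sharing (threads updating one variable under a lock) and once through message passing (an explicit send and receive over a pipe). POSIX threads and pipes are stand-ins for the machine-level mechanisms.

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        /* (a) memory sharing: threads update one variable under a lock.
         * (b) message passing: a worker sends its result through a pipe and
         *     shares no data with the receiver. */

        static long shared_sum = 0;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *adder(void *arg) {                 /* (a) memory sharing */
            pthread_mutex_lock(&lock);
            shared_sum += (long)arg;
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        static int pipe_fd[2];

        static void *sender(void *arg) {                /* (b) message passing */
            long v = (long)arg;
            write(pipe_fd[1], &v, sizeof v);            /* explicit send */
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2, t3;

            pthread_create(&t1, NULL, adder, (void *)40L);
            pthread_create(&t2, NULL, adder, (void *)2L);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("shared-memory sum: %ld\n", shared_sum);

            pipe(pipe_fd);
            pthread_create(&t3, NULL, sender, (void *)42L);
            long msg = 0;
            read(pipe_fd[0], &msg, sizeof msg);         /* explicit receive */
            pthread_join(t3, NULL);
            printf("message received:  %ld\n", msg);
            return 0;
        }
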
  • Virtual shared memory-based support for novel (parallel) programming paradigms

    Publication Year: 1993, Page(s):83 - 90

    Discusses the implementation of novel programming paradigms on virtual shared memory (VSM) parallel architectures. A wide spectrum of paradigms (data-parallel, functional and logic languages) have been investigated in order to achieve, within the context of VSM parallel architectures, a better understanding of the underlying support mechanisms for the paradigms and to identify commonality amongst ...

  • Reduced interprocessor-communication architecture for supporting programming models

    Publication Year: 1993, Page(s):134 - 143
    Cited by:  Papers (4)  |  Patents (3)

    The paper presents an execution model and a processor architecture for general purpose massively parallel computers. To construct an efficient massively parallel computer: the execution model should be natural enough to map an actual problem structure into a processor architecture; each processor should have an efficient and simple communication structure; and computation and communication should be ...

  • A test bed for experimenting with visualization of parallel programs

    Publication Year: 1993, Page(s):53 - 62

    Because of the lack of software tools to assist with concurrent programming, programming for parallel computers has been a significant technical problem for a diverse range of users. We are concentrating on techniques that allow computing and non-computing experts to define what they need and then automatically generate the specified visual language. Consequently, our visual language research ...

  • Overall design of Pandore II: an environment for high performance C programming on DMPCs

    Publication Year: 1993, Page(s):28 - 34
    Cited by:  Papers (1)

    Pandore II is an environment designed for the parallel execution of imperative sequential programs on distributed memory parallel computers (DMPCs). It comprises a compiler, libraries for different target distributed computers, and execution analysis tools. No specific knowledge of the target machine is required of the user: only the specification of the data decomposition is left to the user. The purpose ...
