
IEEE Transactions on Software Engineering

Issue 4 • April 2000


Displaying Results 1 - 5 of 5
  • Guest editors' introduction: special issues on architecture-independent languages and software tools for parallel processing

    Page(s): 289 - 292
    PDF (110 KB)
    Freely Available from IEEE
  • Clustering algorithm for parallelizing software systems in multiprocessors environment

    Page(s): 340 - 361
    PDF (832 KB)

    A variety of techniques and tools exist to parallelize software systems on different parallel architectures (SIMD, MIMD). With the advances in high-speed networks, there has been a dramatic increase in the number of client/server applications. A variety of client/server applications are deployed today, ranging from simple telnet sessions to complex electronic commerce transactions. Industry-standard protocols, such as Secure Socket Layer (SSL) and Secure Electronic Transaction (SET), are in use for ensuring the privacy and integrity of data, as well as for authenticating the sender and the receiver during message passing. Consequently, a majority of applications using parallel processing techniques are becoming synchronization-centric, i.e., for every message transfer, the sender and receiver must synchronize. However, more effective techniques and tools are needed to automate the clustering of such synchronization-centric applications to extract parallelism. The authors present a new clustering algorithm to facilitate the parallelization of software systems in a multiprocessor environment. The new clustering algorithm achieves traditional clustering objectives (reduction in parallel execution time, communication cost, etc.). Additionally, our approach 1) reduces the performance degradation caused by synchronizations, and 2) avoids deadlocks during clustering. The effectiveness of our approach is demonstrated with simulation results. (A toy sketch of the synchronization-aware clustering idea appears after this listing.)

  • Automated tuning of parallel I/O systems: an approach to portable I/O performance for scientific applications

    Page(s): 362 - 383
    PDF (1104 KB)

    Parallel I/O systems typically consist of individual processors, communication networks, and a large number of disks. Managing and utilizing these resources to meet the performance, portability, and usability goals of high-performance scientific applications has become a significant challenge. For scientists, the problem is exacerbated by the need to retune the I/O portion of their code for each supercomputer platform where they obtain access. We believe that a parallel I/O system that automatically selects efficient I/O plans for user applications is a solution to this problem. The authors present such an approach for scientific applications performing collective I/O requests on multidimensional arrays. Under our approach, an optimization engine in a parallel I/O system selects high-quality I/O plans without human intervention, based on a description of the application's I/O requests and the system configuration. To validate our hypothesis, we have built an optimizer that uses rule-based and randomized-search-based algorithms to tune parameter settings in Panda, a parallel I/O library for multidimensional arrays. Our performance results obtained from an IBM SP using an out-of-core matrix multiplication application show that the Panda optimizer is able to select high-quality I/O plans and deliver high performance under a variety of system configurations with a small total optimization overhead. (A toy sketch of randomized parameter search appears after this listing.)

  • A transformation approach to derive efficient parallel implementations

    Page(s): 315 - 339
    PDF (1612 KB)

    The construction of efficient parallel programs usually requires expert knowledge in the application area and a deep insight into the architecture of a specific parallel machine. Often, the resulting performance is not portable, i.e., a program that is efficient on one machine is not necessarily efficient on another machine with a different architecture. Transformation systems provide a more flexible solution. They start with a specification of the application problem and allow the generation of efficient programs for different parallel machines. The programmer has to give an exact specification of the algorithm expressing the inherent degree of parallelism and is released from the low-level details of the architecture. We propose such a transformation system with an emphasis on the exploitation of data parallelism combined with a hierarchically organized structure of task parallelism. Starting with a specification of the maximum degree of task and data parallelism, the transformations generate a specification of a parallel program for a specific parallel machine. The transformations are based on a cost model and are applied in a predefined order, fixing the most important design decisions such as the scheduling of independent multitask activations, data distributions, pipelining of tasks, and assignment of processors to task activations. We demonstrate the usefulness of the approach with examples from scientific computing. (A toy cost-model-based sketch appears after this listing.)

  • A design methodology for data-parallel applications

    Page(s): 293 - 314
    PDF (1600 KB)

    A methodology for the design and development of data-parallel applications and components is presented. Data parallelism is a well-understood form of parallel computation, yet developing simple applications can involve substantial effort to express the problem in low-level notations. We describe a process of software development for data-parallel applications starting from high-level specifications, generating repeated refinements of designs to match different architectural models and performance constraints, and enabling a development activity with cost-benefit analysis. Primary issues are algorithm choice, correctness, and efficiency, followed by data decomposition, load balancing, and message-passing coordination. Development of a data-parallel multitarget tracking application is used as a case study, showing the progression from high- to low-level refinements. We conclude by describing tool support for the process. (A toy data-decomposition sketch appears after this listing.)

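The clustering article above (pp. 340-361) describes its algorithm only at the abstract level. As a rough, hypothetical illustration of the general idea of synchronization-centric clustering, the Python sketch below greedily merges tasks joined by expensive synchronization edges and rejects any merge that would introduce a cycle in the cluster-level graph, a crude stand-in for deadlock avoidance. The task names, costs, and threshold are made up; this is not the authors' algorithm.

    # Toy synchronization-aware clustering of a task DAG (illustrative only).
    from collections import defaultdict

    def has_cycle(nodes, edges):
        """Detect a cycle in a directed graph given as (node set, edge set)."""
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {n: WHITE for n in nodes}

        def dfs(n):
            color[n] = GRAY
            for m in adj[n]:
                if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                    return True
            color[n] = BLACK
            return False

        return any(color[n] == WHITE and dfs(n) for n in nodes)

    def cluster(tasks, edges, min_sync_cost=4):
        """edges: dict {(u, v): sync_cost}.  Returns a task -> cluster-id map."""
        cluster_of = {t: t for t in tasks}            # every task starts in its own cluster
        for (u, v), cost in sorted(edges.items(), key=lambda e: -e[1]):
            if cost < min_sync_cost:                  # only internalize expensive synchronizations
                break
            cu, cv = cluster_of[u], cluster_of[v]
            if cu == cv:
                continue
            # Tentatively merge and test the cluster-level graph for cycles
            # (a crude stand-in for deadlock avoidance during clustering).
            trial = {t: (cu if c == cv else c) for t, c in cluster_of.items()}
            cgraph = {(trial[a], trial[b]) for (a, b) in edges if trial[a] != trial[b]}
            if not has_cycle(set(trial.values()), cgraph):
                cluster_of = trial
        return cluster_of

    if __name__ == "__main__":
        tasks = ["t1", "t2", "t3", "t4"]
        edges = {("t1", "t2"): 8, ("t1", "t3"): 2, ("t2", "t4"): 5, ("t3", "t4"): 1}
        # (t1, t2) is merged; merging (t2, t4) is rejected because it would
        # create a cycle between the resulting clusters.
        print(cluster(tasks, edges))

Real clustering algorithms also track the estimated parallel execution time of each tentative merge; the fixed cost threshold here is only a placeholder for that decision.
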
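The automated I/O tuning article (pp. 362-383) reports an optimizer that combines rule-based and randomized-search-based tuning of Panda parameter settings. The sketch below shows only the generic "sample a candidate plan, keep the cheapest under a cost model" pattern; the parameter space, the cost function, and all numbers are invented and do not correspond to Panda's real interface.

    # Toy randomized search over I/O-plan parameters (illustrative only).
    import random

    # Hypothetical tunables for a collective write of a 2-D array.
    SEARCH_SPACE = {
        "io_nodes":  [2, 4, 8, 16],
        "stripe_kb": [64, 256, 1024],
        "layout":    ["row_block", "col_block", "block_block"],
    }

    def estimated_cost(plan):
        """Stand-in cost model (seconds); a real system would use measured
        or analytically derived per-configuration costs."""
        base = 100.0 / plan["io_nodes"]                  # more I/O nodes -> more bandwidth
        seek = 0.5 if plan["stripe_kb"] >= 256 else 2.0  # tiny stripes -> more seeks
        shuffle = {"row_block": 1.0, "col_block": 3.0, "block_block": 1.5}[plan["layout"]]
        return base + seek + shuffle

    def random_plan(rng):
        return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

    def tune(trials=50, seed=0):
        rng = random.Random(seed)
        best = random_plan(rng)
        for _ in range(trials):
            cand = random_plan(rng)
            if estimated_cost(cand) < estimated_cost(best):
                best = cand
        return best

    if __name__ == "__main__":
        plan = tune()
        print(plan, "->", round(estimated_cost(plan), 2), "s (estimated)")
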
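The transformation article (pp. 315-339) applies cost-model-driven transformations in a predefined order to fix design decisions such as data distribution. As a minimal sketch of one such decision, the code below compares an assumed block layout against an assumed cyclic layout for a neighbour-based computation and picks the cheaper one; the cost formulas are illustrative guesses, not the paper's cost model.

    # Toy cost-model-driven choice of a data distribution (illustrative only).

    def block_cost(n, p, per_elem=1.0, halo_msg=10.0):
        # Each processor owns n/p contiguous elements and exchanges one
        # halo message with each neighbouring processor.
        return (n / p) * per_elem + 2 * halo_msg

    def cyclic_cost(n, p, per_elem=1.0, halo_msg=10.0):
        # Element-wise cyclic layout: every element's neighbours live on
        # other processors, so communication grows with n/p.
        return (n / p) * per_elem + 2 * (n / p) * halo_msg

    def choose_distribution(n, p):
        costs = {"block": block_cost(n, p), "cyclic": cyclic_cost(n, p)}
        return min(costs, key=costs.get), costs

    if __name__ == "__main__":
        choice, costs = choose_distribution(n=10_000, p=16)
        print(choice, costs)   # block wins for neighbour-based communication
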
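The design-methodology article (pp. 293-314) refines high-level specifications down to decisions such as data decomposition and load balancing. The sketch below illustrates one low-level outcome of that kind of refinement: a greedy split of a 1-D workload into contiguous, roughly equal-cost chunks. The workload numbers and the greedy rule are assumptions for illustration only, not the authors' method.

    # Toy data decomposition with load balancing (illustrative only).

    def balanced_blocks(work, p):
        """Split `work` (per-element costs) into at most p contiguous chunks
        whose totals are approximately equal (greedy prefix-sum split)."""
        total = sum(work)
        target = total / p
        bounds, acc, start = [], 0.0, 0
        for i, w in enumerate(work):
            acc += w
            # Close a chunk once it reaches its fair share, leaving room
            # for the remaining chunks.
            if acc >= target and len(bounds) < p - 1:
                bounds.append((start, i + 1))
                start, acc = i + 1, 0.0
        bounds.append((start, len(work)))
        return bounds

    if __name__ == "__main__":
        work = [1, 1, 1, 8, 8, 1, 1, 1, 4, 4, 1, 1]
        for rank, (lo, hi) in enumerate(balanced_blocks(work, p=3)):
            print(f"rank {rank}: elements [{lo}, {hi}) load={sum(work[lo:hi])}")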

Aims & Scope

The IEEE Transactions on Software Engineering is interested in well-defined theoretical results and empirical studies that have potential impact on the construction, analysis, or management of software. The scope of this Transactions ranges from the mechanisms through the development of principles to the application of those principles to specific environments. Specific topic areas include: a) development and maintenance methods and models, e.g., techniques and principles for the specification, design, and implementation of software systems, including notations and process models; b) assessment methods, e.g., software tests and validation, reliability models, test and diagnosis procedures, software redundancy and design for error control, and the measurements and evaluation of various aspects of the process and product; c) software project management, e.g., productivity factors, cost models, schedule and organizational issues, standards; d) tools and environments, e.g., specific tools, integrated tool environments including the associated architectures, databases, and parallel and distributed processing issues; e) system issues, e.g., hardware-software trade-off; and f) state-of-the-art surveys that provide a synthesis and comprehensive review of the historical development of one particular area of interest.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Matthew B. Dwyer
Dept. Computer Science and Engineering
256 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0115 USA
tseeicdwyer@computer.org