Computing in Science & Engineering

Issue 2 • March-April 2005

  • [Front cover]

    Publication Year: 2005, Page(s): c1
    PDF (487 KB) | Freely Available from IEEE
  • Table of contents

    Publication Year: 2005, Page(s): 1 - 2
    PDF (376 KB) | Freely Available from IEEE
  • Staking New Ground

    Publication Year: 2005, Page(s): 3 - 4
    PDF (120 KB) | HTML

    In the last issue, I outlined my development priorities for moving CiSE to the next level--community, content, and approach. In this message, I want to describe our first steps on the way forward along each of these paths.

  • Digital detectives reveal art forgeries

    Publication Year: 2005, Page(s): 5 - 8
    PDF (152 KB) | HTML

    A computer scientist practices a new kind of forensics: a statistical technique that gauges whether a photograph is computer generated (CG) or whether a work of art is a forgery. So far, his computer algorithms have correctly identified five forgeries among 13 artists' drawings and matched some human experts' theories on the origins of a Renaissance oil painting. The detective saw visual elements in a work of art that would lend themselves to a mathematical technique called wavelet analysis. Wavelets can break down a picture into vertical, horizontal, and diagonal elements on large and small scales. Statistical algorithms can then detect a pattern in those elements--an image's unique signature.

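    The article doesn't reproduce the researcher's actual algorithms. As a rough illustration of the decomposition it describes--splitting an image into horizontal, vertical, and diagonal detail at several scales and summarizing each subband statistically--here is a minimal Python sketch. The PyWavelets call is standard, but the random test image and the simple mean/std statistics are illustrative assumptions, not the researcher's method.

        # A minimal sketch of multiscale wavelet decomposition with simple
        # per-subband statistics. Illustrative only; requires numpy and
        # pywt (PyWavelets).
        import numpy as np
        import pywt

        def subband_statistics(image, wavelet="haar", levels=3):
            """Decompose an image and summarize each detail subband."""
            coeffs = pywt.wavedec2(image, wavelet, level=levels)
            stats = []
            # coeffs[0] is the coarse approximation; the rest are
            # (horizontal, vertical, diagonal) detail tuples, one per scale.
            for scale, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
                for name, band in (("horizontal", cH), ("vertical", cV),
                                   ("diagonal", cD)):
                    stats.append((scale, name, float(band.mean()),
                                  float(band.std())))
            return stats

        rng = np.random.default_rng(0)
        image = rng.standard_normal((256, 256))  # stand-in for a scanned drawing
        for scale, name, mean, std in subband_statistics(image):
            print(f"scale {scale} {name}: mean={mean:.4f} std={std:.4f}")
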
  • Reviews of Maple, Mathematica, and Matlab: Coming Soon to a Publication Near You

    Publication Year: 2005, Page(s): 9 - 10
    PDF (160 KB) | HTML

    In our introductory article to our upcoming review series on Maple, Mathematica, and Matlab, we asked for feedback. The letters to the editor we subsequently received deliver two messages: first, that this review series will serve a real need, and second, that we must be thorough in our treatment of each product. We didn't intend the introductory article to be an in-depth account in any sense. Instead, we tried to set a context for the future reviews by relating certain background information--histories and design principles--and a brief overview of current uses for each of the three productivity packages. At the time, we had more such information about Mathematica and Maple than about Matlab. Subsequently, Matlab's inventor, Cleve Moler, authored an article that addressed many of the points we were missing. In light of this new information, and in response to the feedback we've received, we felt we should include some selections from Moler's article in this installment.

  • Guest Editors' Introduction: Cluster Computing

    Publication Year: 2005, Page(s): 11 - 13
    Cited by: Papers (2)
    PDF (448 KB) | HTML

    What is cluster computing? In a nutshell, it involves the use of a network of computing resources to provide a comparatively economical package with capabilities once reserved for supercomputers. In this issue, we look at certain applications of cluster computing to problem solving. As the Beowulf project and clustering revolution celebrate more than 10 years in existence, it's interesting to see what remains the same and what has changed. Let's look at a few aspects of the clustering revolution in more detail.

  • Configuration and performance of a Beowulf cluster for large-scale scientific simulations

    Publication Year: 2005, Page(s): 14 - 26
    Cited by: Papers (5)
    PDF (1008 KB) | HTML

    To achieve optimal performance on a Beowulf cluster for large-scale scientific simulations, it's necessary to combine the right numerical method with its efficient implementation to exploit the cluster's critical high-performance components. This process is demonstrated using a simple but prototypical problem of solving a time-dependent partial differential equation. Beowulf clusters in virtually every price range are readily available today for purchase in fully integrated form from a large variety of vendors. At the University of Maryland, Baltimore County (UMBC), a medium-sized 64-processor cluster with high-performance interconnect and extended disk storage was bought from IBM. The cluster has several critical components, and this article demonstrates their roles using a prototype problem from the numerical solution of time-dependent partial differential equations (PDEs). The problem was selected to show how judiciously combining a numerical algorithm and its efficient implementation with the right hardware (in this case, the Beowulf cluster) can achieve parallel computing's two fundamental goals: to solve problems faster and to solve larger problems than we can on a serial computer.

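    The article's UMBC codes aren't reproduced in this listing. As a generic sketch of the kind of computation it studies--an explicit time step for the 1D heat equation with the grid split across processes and halo cells exchanged between neighbors--here is a minimal mpi4py example. The grid size, time step, and random initial data are assumptions.

        # A generic sketch (not the article's code): explicit finite-difference
        # time stepping for u_t = u_xx, with the grid split across MPI ranks
        # and one-cell halo exchange. Requires numpy and mpi4py; run e.g.
        #   mpiexec -n 4 python heat1d.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N = 1_000_000                 # global grid points (assumed; divisible by size)
        local_n = N // size
        dx = 1.0 / N
        dt = 0.25 * dx**2             # satisfies the explicit stability limit
        u = np.random.rand(local_n + 2)   # +2 halo cells

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        for step in range(100):
            # exchange boundary values with neighboring ranks
            comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
            comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
            u[1:-1] += dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
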
  • Plug-and-play cluster computing: high-performance computing for the mainstream

    Publication Year: 2005, Page(s): 27 - 33
    Cited by: Patents (3)
    PDF (1008 KB) | HTML

    To achieve accessible computational power for their research goals, the authors developed the tools to build easy-to-use, numerically intensive parallel computing clusters using the Macintosh platform. Their approach enables the user, without expertise in the operating system, to develop and run parallel code efficiently, maximizing the advancement of scientific research. Accessible computing power has become the main motivation for cluster computing - some wish to tap the proliferation of desktop computers, while others seek clustering because they find access to large supercomputing centers to be difficult or unattainable. Both want to combine smaller machines to provide sufficient access to computational power. In this article, we describe our approach to cluster computing to best achieve these goals for scientific users and, ultimately, for the mainstream end user.

  • Cluster computing with Java

    Publication Year: 2005, Page(s): 34 - 39
    Cited by: Papers (1)
    PDF (144 KB) | HTML

    Java could be a new lingua franca for uniting disparate computing worlds. In this article, the authors explore two approaches for Java's support of cluster computing - as single and multiple virtual machines - and evaluate the performance of the two approaches via a set of benchmark applications. Java has emerged as a possible solution to unite Web, cluster, multiprocessor, and uniprocessor computing. Its support for multithreaded computation and remote method invocation, improvements in its compilation technology (which have made it competitive with C++ for many applications), and Java-based solutions for building Web services, peer-to-peer applications, and so on, have driven its emergence. In this article, we explore Java's support for using a cluster of computers interconnected via a high-performance network to execute single high-performance applications.

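    The authors' Java benchmarks aren't reproduced here. As a loose analogy to the dichotomy they evaluate--many threads sharing one virtual machine's memory versus separate virtual machines exchanging messages--here is a Python sketch contrasting shared-memory threads with message-passing processes. The parallel-sum workload is an assumption, not one of the article's benchmarks.

        # Analogy to the article's two models: shared-memory threads
        # (single-VM style) vs. separate processes exchanging messages
        # (multi-VM style). Standard library only.
        import threading
        import multiprocessing as mp

        def partial_sum(xs, out, idx):
            out[idx] = sum(xs)  # write result into a shared list slot

        def shared_memory_sum(data, workers=4):
            """Single-VM analogue: threads share one address space."""
            chunk = len(data) // workers  # assumes workers divides len(data)
            out = [0] * workers
            threads = [threading.Thread(target=partial_sum,
                                        args=(data[i*chunk:(i+1)*chunk], out, i))
                       for i in range(workers)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            return sum(out)

        def message_passing_sum(data, workers=4):
            """Multi-VM analogue: separate processes, explicit data transfer."""
            chunk = len(data) // workers
            with mp.Pool(workers) as pool:
                return sum(pool.map(sum, [data[i*chunk:(i+1)*chunk]
                                          for i in range(workers)]))

        if __name__ == "__main__":
            data = list(range(1_000_000))
            assert shared_memory_sum(data) == message_passing_sum(data) == sum(data)
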
  • Resource-aware scientific computation on a heterogeneous cluster

    Publication Year: 2005, Page(s): 40 - 50
    Cited by: Papers (6)
    PDF (440 KB) | HTML

    Although researchers can develop software on small, local clusters and move it later to larger clusters and supercomputers, the software must run efficiently in both environments. Two efforts aim to improve the efficiency of scientific computation on clusters through resource-aware dynamic load balancing. The popularity of cost-effective clusters built from commodity hardware has opened up a new platform for the execution of software originally designed for tightly coupled supercomputers. Because these clusters can be built to include any number of processors ranging from fewer than 10 to thousands, researchers in high-performance scientific computation at smaller institutions or in smaller departments can maintain local parallel computing resources to support software development and testing, then move the software to larger clusters and supercomputers. As promising as this ability is, it has also led to the need for local expertise and resources to set up and maintain these clusters. The software must execute efficiently both on smaller local clusters and on larger ones. These computing environments vary in the number of processors, speed of processing and communication resources, and size and speed of memory throughout the memory hierarchy as well as in the availability of support tools and preferred programming paradigms. Software developed and optimized using a particular computing environment might not be as efficient when it's moved to another one. In this article, we describe a small cluster along with two efforts to improve the efficiency of parallel scientific computation on that cluster. Both approaches modify the dynamic load-balancing step of an adaptive solution procedure to tailor the distribution of data across the cooperating processes. This modification helps account for the heterogeneity and hierarchy in various computing environments.

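    The article's load balancers operate inside an adaptive PDE solver and aren't reproduced here. The core idea of resource-aware balancing--sizing each process's share of the work in proportion to its measured capability rather than equally--can be sketched in a few lines of Python; the speeds below are made-up measurements.

        # A toy sketch of resource-aware partitioning: each process gets a
        # share of the work proportional to its measured speed, rather than
        # an equal share. Speeds and work counts are illustrative.
        def weighted_partition(num_items, speeds):
            """Split num_items among processes in proportion to their speeds."""
            total = sum(speeds)
            shares = [int(num_items * s / total) for s in speeds]
            shares[-1] += num_items - sum(shares)  # absorb rounding remainder
            return shares

        # e.g. a heterogeneous cluster: two fast nodes and two slow ones
        print(weighted_partition(1000, [2.0, 2.0, 1.0, 1.0]))  # -> [333, 333, 166, 168]
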
  • High-performance computing: clusters, constellations, MPPs, and future directions

    Publication Year: 2005, Page(s): 51 - 59
    Cited by: Papers (15)
    PDF (160 KB) | HTML

    In a recent paper, Gordon Bell and Jim Gray (2002) put forth a view of the past, present, and future of high-performance computing (HPC) that is both insightful and thought provoking. Identifying key trends with a grace and candor rarely encountered in a single work, the authors describe an evolutionary past drawn from their vast experience and project an enticing and compelling vision of HPC's future. Yet, the underlying assumptions implicit in their treatment, particularly those related to terminology and dominant trends, conflict with our own experience, common practices, and shared view of HPC's future directions. Taken from our vantage points of the Top500 list, the Lawrence Berkeley National Laboratory NERSC computer center, Beowulf-class computing, and research in petaflops-scale computing architectures, we offer an alternate perspective on several key issues in the form of a constructive counterpoint. One objective of this article is to restore the strength and value of the term "cluster" by degeneralizing its applicability to a restricted subset of parallel computers. We'll further consider this class in conjunction with its complementary terms constellation, Beowulf class, and massively parallel processing systems (MPPs), based on the classification used by the Top500 list, which has tracked the HPC field for more than a decade.

  • Blind deconvolution: a matter of norm

    Publication Year: 2005, Page(s): 60 - 62
    PDF (216 KB) | HTML

    We continue the spectroscopy problem from the last issue, trying to reconstruct a true spectrum from an observed one. Again, we'll use blind deconvolution, but this time we'll impose some constraints on the error matrix E, leading to a more difficult problem to solve but often a more useful reconstruction.

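    The column's constrained formulation for E isn't reproduced in this listing. As a generic baseline for the reconstruction problem it describes--recovering a true spectrum x from an observation b = Ax + noise--here is a Tikhonov-regularized least-squares sketch in Python. The blur matrix, noise level, and regularization weight are assumptions, and the column's own method imposes different (and harder) constraints.

        # A generic regularized-deconvolution baseline, not the column's
        # constrained-E method: solve min ||A x - b||^2 + lam * ||x||^2
        # via the normal equations. Requires numpy.
        import numpy as np

        def gaussian_blur_matrix(n, sigma=2.0):
            """Convolution matrix whose rows are shifted Gaussian kernels."""
            i = np.arange(n)
            A = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * sigma**2))
            return A / A.sum(axis=1, keepdims=True)

        n = 200
        A = gaussian_blur_matrix(n)
        x_true = np.zeros(n)
        x_true[[60, 100, 140]] = [1.0, 0.6, 0.8]   # sharp spectral peaks
        b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

        lam = 1e-3  # regularization weight (assumed; tuned in practice)
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        print("reconstruction error:", np.linalg.norm(x_hat - x_true))
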
  • Global e-science collaboration

    Publication Year: 2005, Page(s): 67 - 74
    Cited by: Papers (9)
    PDF (1544 KB) | HTML

    Today's e-science, with its extreme-scale scientific applications, marks a turning point for high-end requirements on the compute infrastructure and, in particular, on optical networking resources. Although ongoing research efforts are aimed at exploiting the vast bandwidth of fiber-optic networks to both interconnect resources and enable high-performance applications, challenges continue to arise in the area of the optical control plane. The ultimate goal in this area is to extend the concept of application-driven networking into the optical space, providing unique features that couldn't be achieved otherwise. Many researchers in the e-science community are adopting grid computing to meet their ever-increasing computational and bandwidth needs as well as help them with their globally distributed collaborative efforts. This recent awareness of the network as a prime resource has led to a sharper focus on interactions with the optical control plane, grid middleware, and other applications. This article explains why high-end e-science applications consider optical network resources to be as essential and dynamic as CPU and storage resources in a grid infrastructure, and why rethinking the role of the optical control plane is essential for next-generation optical networks.

  • A virtual laboratory for temporal bone microanatomy

    Publication Year: 2005, Page(s): 75 - 79
    Cited by: Papers (3)
    PDF (1856 KB) | HTML

    Located in the lateral cranial base, the temporal bone is one of the human body's most complicated parts. It contains many tiny, delicate, and detailed anatomical structures, including many irregular orifices, antra (cavities), canals, and fissures. Crucial nerves, blood vessels, and auditory and vestibular organs coexist in this dense bone structure in a complex 3D configuration that once led medical science to regard the temporal bone as a surgically forbidden area. Today, otolaryngology (ear, nose, and throat) surgeons still find it difficult to envision and master these complex anatomic interrelationships. Here, we present a new method for generating and reconstructing 3D temporal bone models and their applications in stereoscopic virtual environments. Our virtual laboratory and its associated software can run on ordinary PCs.

  • The fast Fourier transform for experimentalists. Part I. Concepts

    Publication Year: 2005, Page(s): 80 - 88
    Cited by: Papers (11)
    PDF (224 KB) | HTML

    The discrete Fourier transform (DFT) provides a means for transforming data sampled in the time domain to an expression of this data in the frequency domain. The inverse transform reverses the process, converting frequency data into time-domain data. Such transformations can be applied in a wide variety of fields, from geophysics to astronomy, from the analysis of sound signals to CO2 concentrations in the atmosphere. Over the course of three articles, our goal is to provide a convenient summary that the experimental practitioner will find useful. In the first two parts of this article, we'll discuss concepts associated with the fast Fourier transform (FFT), an implementation of the DFT. In the third part, we'll analyze two applications: a bat chirp and atmospheric sea-level pressure differences in the Pacific Ocean.

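    As a minimal illustration of the round trip the authors describe--time-domain samples to a frequency-domain spectrum and back--here is a short NumPy example; the two-tone test signal and sampling rate are assumptions chosen to make the spectrum easy to read.

        # Forward and inverse DFT via NumPy's FFT routines (an FFT is an
        # efficient implementation of the DFT). Requires numpy.
        import numpy as np

        fs = 1000.0                          # sampling rate in Hz
        t = np.arange(0, 1.0, 1.0 / fs)      # one second of samples
        signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

        spectrum = np.fft.rfft(signal)               # time -> frequency domain
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)
        peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
        print(sorted(peaks))                         # -> [50.0, 120.0]

        roundtrip = np.fft.irfft(spectrum, n=signal.size)  # frequency -> time
        assert np.allclose(roundtrip, signal)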

Aims & Scope

Computing in Science & Engineering presents scientific and computational contributions in a clear and accessible format.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
George K. Thiruvathukal
Loyola University