
Computational Science & Engineering, IEEE

Popular Articles (April 2015)

Includes the top 50 most frequently downloaded documents for this publication according to the most recent monthly usage statistics.
  • 1. An introduction to wavelets

    Publication Year: 1995 , Page(s): 50 - 61
    Cited by:  Papers (313)  |  Patents (29)

    Wavelets were developed independently by mathematicians, quantum physicists, electrical engineers and geologists, but collaborations among these fields during the last decade have led to new and varied applications. What are wavelets, and why might they be useful to you? The fundamental idea behind wavelets is to analyze according to scale. Indeed, some researchers feel that using wavelets means adopting a whole new mind-set or perspective in processing data. Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions. Most of the basic wavelet theory has now been done. The mathematics has been worked out in excruciating detail, and wavelet theory is now in the refinement stage. This involves generalizing and extending wavelets, such as in extending wavelet packet techniques. The future of wavelets lies in the as-yet uncharted territory of applications. Wavelet techniques have not been thoroughly worked out in such applications as practical data analysis, where, for example, discretely sampled time-series data might need to be analyzed. Such applications offer exciting avenues for exploration.

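    As a concrete illustration of "analyzing according to scale", the sketch below computes one level of the Haar wavelet transform, the simplest wavelet; the sample values and the averaging/differencing convention are illustrative choices, not taken from the article.

      #include <cstdio>
      #include <vector>

      int main() {
        // One level of the Haar transform: coarse-scale averages describe the
        // signal at the next coarser scale, details record what that view loses.
        std::vector<double> x = {4, 6, 10, 12, 8, 6, 5, 5};
        std::vector<double> average, detail;
        for (size_t i = 0; i + 1 < x.size(); i += 2) {
          average.push_back(0.5 * (x[i] + x[i + 1]));
          detail.push_back(0.5 * (x[i] - x[i + 1]));
        }
        // Exact reconstruction: x[2i] = average[i] + detail[i],
        // x[2i+1] = average[i] - detail[i]. Repeating the step on the averages
        // yields a full multiresolution (multi-scale) representation.
        for (size_t i = 0; i < average.size(); ++i)
          std::printf("average %5.1f   detail %+5.1f\n", average[i], detail[i]);
      }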
  • 2. OpenMP: an industry standard API for shared-memory programming

    Publication Year: 1998 , Page(s): 46 - 55
    Cited by:  Papers (288)  |  Patents (2)

    At its most elemental level, OpenMP is a set of compiler directives and callable runtime library routines that extend Fortran (and separately, C and C++) to express shared memory parallelism. It leaves the base language unspecified, and vendors can implement OpenMP in any Fortran compiler. Naturally, to support pointers and allocatables, Fortran 90 and Fortran 95 require the OpenMP implementation to include additional semantics over Fortran 77. OpenMP leverages many of the X3H5 concepts while extending them to support coarse-grain parallelism. The standard also includes a callable runtime library with accompanying environment variables.

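    A minimal C++ sketch of the directive-plus-runtime-library style described above; the dot-product loop, the reduction clause, and the -fopenmp compile flag are generic OpenMP usage assumed for illustration, not code from the article (which concentrates on the Fortran binding).

      #include <cstdio>
      #include <vector>
      #ifdef _OPENMP
      #include <omp.h>   // the callable runtime library mentioned in the abstract
      #endif

      int main() {
        const int n = 1000000;
        std::vector<double> a(n), b(n);
        for (int i = 0; i < n; ++i) { a[i] = 0.5 * i; b[i] = 2.0 / (i + 1); }

        double sum = 0.0;
        // The directive expresses the parallelism; the base language is untouched,
        // so a compiler that ignores the pragma still builds a correct serial program.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i) sum += a[i] * b[i];

      #ifdef _OPENMP
        std::printf("threads available: %d\n", omp_get_max_threads());
      #endif
        std::printf("dot product = %f\n", sum);
      }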
  • 3. Data mining: an industrial research perspective

    Publication Year: 1997 , Page(s): 6 - 9
    Cited by:  Papers (13)  |  Patents (15)

    Just what exactly is data mining? At a broad level, it is the process by which accurate and previously unknown information is extracted from large volumes of data. This information should be in a form that can be understood, acted upon, and used for improving decision processes. Obviously, with this definition, data mining encompasses a broad set of technologies, including data warehousing, database management, data analysis algorithms, and visualization. The crux of the appeal for this new technology lies in the data analysis algorithms, since they provide automated mechanisms for sifting through data and extracting useful information. The analysis capability of these algorithms, coupled with today's data warehousing and database management technology, makes corporate and industrial data mining possible. The data representation model for such algorithms is quite straightforward. Data is considered to be a collection of records, where each record is a collection of fields. Using this tabular data model, data mining algorithms are designed to operate on the contents under differing assumptions and to deliver results in differing formats. The data analysis algorithms (or data mining algorithms, as they are more popularly known nowadays) can be divided into three major categories based on the nature of their information extraction: predictive modeling (also called classification or supervised learning), clustering (also called segmentation or unsupervised learning), and frequent pattern extraction.

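    To make the tabular data model concrete, here is a hypothetical table of records and the simplest possible instance of predictive modeling, a nearest-neighbour classifier; the field values and labels are invented for illustration and are not from the article.

      #include <cstdio>
      #include <string>
      #include <vector>

      // A record is a collection of fields: here, numeric features plus a class label.
      struct Record { std::vector<double> fields; std::string label; };

      // Predictive modeling in miniature: label a new record by its nearest neighbour.
      std::string classify(const std::vector<Record>& table, const std::vector<double>& x) {
        double best = 1e300; std::string label;
        for (const auto& r : table) {
          double d = 0;
          for (size_t i = 0; i < x.size(); ++i) { double t = r.fields[i] - x[i]; d += t * t; }
          if (d < best) { best = d; label = r.label; }
        }
        return label;
      }

      int main() {
        std::vector<Record> table = {{{1.0, 0.5}, "ok"}, {{8.0, 7.5}, "fault"}, {{1.2, 0.7}, "ok"}};
        std::printf("%s\n", classify(table, {7.5, 7.0}).c_str());   // prints "fault"
      }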
  • 4. Adifor 2.0: automatic differentiation of Fortran 77 programs

    Publication Year: 1996 , Page(s): 18 - 32
    Cited by:  Papers (91)

    Numerical codes that calculate not only a result, but also the derivatives of the variables with respect to each other, facilitate sensitivity analysis, inverse problem solving, and optimization. The paper considers how Adifor 2.0, which won the 1995 Wilkinson Prize for Numerical Software, can automatically differentiate complicated Fortran code much faster than a programmer can do it by hand. The Adifor system has three main components: the Adifor preprocessor, the ADIntrinsics exception-handling system, and the SparsLinC library.

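    Adifor itself works by transforming Fortran source, but the core idea, propagating derivative values through a computation by the chain rule, can be sketched with forward-mode dual numbers; this C++ toy is an analogy chosen for brevity, not Adifor's actual mechanism.

      #include <cmath>
      #include <cstdio>

      // A dual number carries a value v and a derivative d; every operation
      // updates both, so derivatives emerge automatically rather than by hand coding.
      struct Dual { double v, d; };
      Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
      Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
      Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }

      // Any code written against Dual now yields f(x) and f'(x) together.
      Dual f(Dual x) { return x * x + sin(x) * x; }

      int main() {
        Dual x{2.0, 1.0};                       // seed dx/dx = 1
        Dual y = f(x);
        std::printf("f(2) = %.6f   f'(2) = %.6f\n", y.v, y.d);
        // check: f'(x) = 2x + sin(x) + x cos(x), so f'(2) is about 4.077
      }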
  • 5. Accelerating fast multipole methods for the Helmholtz equation at low frequencies

    Publication Year: 1998 , Page(s): 32 - 38
    Cited by:  Papers (76)

    The authors describe a diagonal form for translating far-field expansions to use in low frequency fast multipole methods. Their approach combines evanescent and propagating plane waves to reduce the computational cost of FMM implementation. More specifically, we present the analytic foundations for a new version of the fast multipole method for the scalar Helmholtz equation in the low frequency regime. The computational cost of existing FMM implementations is dominated by the expense of translating far-field partial wave expansions to local ones, requiring 189p^4 or 189p^3 operations per box, where harmonics up to order p^2 have been retained. By developing a new expansion in plane waves, we can diagonalize these translation operators. The new low frequency FMM (LF-FMM) requires 40p^2 + 6p^2 operations per box.

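    Taking the per-box operation counts quoted above at face value (the exponents are reproduced as printed in the abstract), a short calculation shows why diagonalizing the translation operators matters; the sample values of p are arbitrary.

      #include <cstdio>

      int main() {
        for (int p : {5, 10, 20}) {
          double classic = 189.0 * p * p * p * p;        // direct partial-wave translation, 189p^4
          double lffmm   = 40.0 * p * p + 6.0 * p * p;   // diagonalized plane-wave form, per the abstract
          std::printf("p = %2d   classic %.3g   LF-FMM %.3g   ratio %.0f\n",
                      p, classic, lffmm, classic / lffmm);
        }
      }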
  • 6. Geometric hashing: an overview

    Publication Year: 1997 , Page(s): 10 - 21
    Cited by:  Papers (133)  |  Patents (6)

    Geometric hashing, a technique originally developed in computer vision for matching geometric features against a database of such features, finds use in a number of other areas. Matching is possible even when the recognizable database objects have undergone transformations or when only partial information is present. The technique is highly efficient and of low polynomial complexity.

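    A compact sketch of the idea, assuming a similarity-invariant encoding of each point relative to an ordered basis pair and a vote-counting table; the model points, bin size, and scene transformation are invented for illustration, and the article's formulation may differ in detail.

      #include <cmath>
      #include <cstdio>
      #include <map>
      #include <utility>
      #include <vector>

      struct Pt { double x, y; };

      // Quantized coordinates of p in the frame defined by basis points a, b
      // (origin at their midpoint, scale set by |b-a|): invariant under
      // translation, rotation, and uniform scaling.
      std::pair<int, int> key(Pt a, Pt b, Pt p, double bin = 0.25) {
        double ox = 0.5 * (a.x + b.x), oy = 0.5 * (a.y + b.y);
        double ux = 0.5 * (b.x - a.x), uy = 0.5 * (b.y - a.y);
        double vx = -uy, vy = ux;
        double s = ux * ux + uy * uy;
        double px = p.x - ox, py = p.y - oy;
        return {(int)std::lround((px * ux + py * uy) / s / bin),
                (int)std::lround((px * vx + py * vy) / s / bin)};
      }

      int main() {
        std::vector<Pt> model = {{0, 0}, {4, 0}, {4, 3}, {0, 3}, {2, 5}};
        // Preprocessing: for every ordered basis pair, file that pair under the
        // quantized coordinates of every remaining model point.
        std::map<std::pair<int, int>, std::vector<std::pair<int, int>>> table;
        for (size_t i = 0; i < model.size(); ++i)
          for (size_t j = 0; j < model.size(); ++j)
            for (size_t k = 0; k < model.size(); ++k)
              if (i != j && k != i && k != j)
                table[key(model[i], model[j], model[k])].push_back({(int)i, (int)j});

        // Recognition: a rotated, scaled, partly occluded copy of the model.
        double c = std::cos(0.6), s = std::sin(0.6), sc = 1.7;
        std::vector<Pt> scene;
        for (size_t i = 0; i + 1 < model.size(); ++i)     // last model point "missing"
          scene.push_back({sc * (c * model[i].x - s * model[i].y) + 10,
                           sc * (s * model[i].x + c * model[i].y) - 3});
        // Pick one scene basis pair and let table hits vote for model bases.
        std::map<std::pair<int, int>, int> votes;
        for (size_t k = 2; k < scene.size(); ++k) {
          auto it = table.find(key(scene[0], scene[1], scene[k]));
          if (it != table.end())
            for (auto& basis : it->second) ++votes[basis];
        }
        for (auto& v : votes)
          std::printf("model basis (%d,%d): %d votes\n", v.first.first, v.first.second, v.second);
      }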
  • 7. Computer as thinker/doer: problem-solving environments for computational science

    Publication Year: 1994 , Page(s): 11 - 23
    Cited by:  Papers (46)

    During the early 1960s, scientists began to envision problem-solving computing environments not only powerful enough to solve complex problems but also able to interact with users on human terms. While many tried to create PSEs over the next few years, by the early 1970s they had abandoned almost all of these attempts. Technology could not yet support PSEs in computational science. But the dream of the 1960s can be the reality of the 1990s: high-performance computers combined with better understanding of computing and computational science have put PSEs well within our reach. The term 'problem-solving environment' means different things to different people. A PSE is a computer system that provides all the computational facilities necessary to solve a target class of problems. These features include advanced solution methods, automatic or semi-automatic selection of solution methods, and ways to easily incorporate novel solution methods. Simple PSEs appeared early in computing without being recognized as such. Some of the capabilities of future problem-solving environments seem like science fiction, but whatever form they eventually take, their scientific and economic impact will be enormous.

  • 8. WISE design of indoor wireless systems: practical computation and optimization

    Publication Year: 1995 , Page(s): 58 - 68
    Cited by:  Papers (120)  |  Patents (47)

    Designing a low-power system for wireless communication within a building might seem simple. Not so: walls can affect signal strength in ways that are hard to calculate. The paper considers how AT&T's WISE software uses CAD, computational geometry, and optimization to quickly plan where to place base-station transceivers.

  • 9. CSE education: An introduction to scientific programming

    Publication Year: 1998 , Page(s): 6 - 10
    Cited by:  Papers (91)  |  Patents (1)

    The University of Utah's Department of Computer Science has offered an introductory course on scientific programming, called Engineering Computing, since 1994. Each year, approximately 300 first- and second-year science and engineering majors take the course. They learn how to use a variety of programming techniques to solve the kinds of computational science problems they will encounter in their academic and professional careers.

  • 10. Introduction To Fortran 90 For Engineers And Scientists

    Publication Year: 1998 , Page(s): 87

    First Page of the Article

  • 11. Electromagnetics: computational methods and considerations

    Publication Year: 1995 , Page(s): 42 - 57
    Cited by:  Papers (2)

    Frequency-domain computational techniques in electromagnetics make specialized use of generic integral-equation and finite-element methods. Familiar advantages and drawbacks apply. Though EM theory has changed little since Maxwell, numerical solution methods have improved sharply; yet, while analysis tools now flourish, practical design software is scarce. Also, EM area editor E. Miller offers a few comments.

  • 12. Rapid design of neural networks for time series prediction

    Publication Year: 1996 , Page(s): 78 - 89
    Cited by:  Papers (22)

    The article explores the possibility of rapidly designing an appropriate neural net (NN) for time series prediction based on information obtained from stochastic modeling. Such an analysis could provide some initial knowledge regarding the choice of an NN architecture and parameters, as well as regarding an appropriate data sampling rate. Stochastic analysis provides a complementary approach to previously proposed dynamical system analysis for NN design. Based on F. Takens's theorem (1981), an estimate of the dimension m of the manifold from which the time series originated can be used to construct an NN model using 2m+1 external inputs. This design is further extended by M.A.S. Potts and D.S. Broomhead (1991), who first embed the state space of a discrete time dynamical system in a manifold of dimension n>>2m+1, which is further projected to its 2m+1 principal components used as external inputs in a radial basis function NN model for time series prediction. Our approach is to perform an initial stochastic analysis of the data and to choose an appropriate NN architecture, and possibly initial values for the NN parameters, according to the most adequate linear model.

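    A sketch of the embedding step the abstract refers to: building 2m+1 lagged inputs and one-step-ahead targets for a prediction network; the sine series, m = 1, and unit lag are assumptions made purely for illustration.

      #include <cmath>
      #include <cstdio>
      #include <vector>

      int main() {
        // A scalar time series (a noiseless sine, purely for illustration).
        std::vector<double> x;
        for (int t = 0; t < 200; ++t) x.push_back(std::sin(0.2 * t));

        // Suppose the preliminary analysis estimated manifold dimension m = 1,
        // so each network input vector holds 2m + 1 = 3 delayed samples.
        int m = 1, d = 2 * m + 1, tau = 1;
        std::vector<std::vector<double>> inputs;   // one row per training pattern
        std::vector<double> targets;               // the value to be predicted
        for (size_t t = (size_t)((d - 1) * tau); t + 1 < x.size(); ++t) {
          std::vector<double> row;
          for (int k = 0; k < d; ++k) row.push_back(x[t - (size_t)(k * tau)]);
          inputs.push_back(row);
          targets.push_back(x[t + 1]);             // one-step-ahead target
        }
        std::printf("%zu training patterns of %d inputs each\n", inputs.size(), d);
      }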
  • 13. Simulating the behavior of MEMS devices: computational methods and needs

    Publication Year: 1997 , Page(s): 30 - 43
    Cited by:  Papers (73)  |  Patents (1)

    Technologies for fabricating a variety of MEMS devices have developed rapidly, but computational tools that allow engineers to quickly design and optimize these micromachines have not kept pace. Inadequate simulation tools force MEMS designers to resort to physical prototyping. To realistically simulate the behavior of complete micromachines, algorithmic innovation is necessary in several areas.

  • 14. Fast algorithms for removing atmospheric effects from satellite images

    Publication Year: 1996 , Page(s): 66 - 77
    Cited by:  Papers (4)

    The varied features of the earth's surface each reflect sunlight and other wavelengths of solar radiation in a highly specific way. This principle provides the foundation for the science of satellite-based remote sensing. A vexing problem confronting remote sensing researchers, however, is that the reflected radiation observed from remote locations is significantly contaminated by atmospheric particles. These aerosols and molecules scatter and absorb the solar photons reflected by the surface in such a way that only part of the surface radiation can be detected by a sensor. The article discusses the removal of atmospheric effects due to scattering and absorption, i.e., atmospheric correction. Atmospheric correction algorithms basically consist of two major steps. First, the optical characteristics of the atmosphere are estimated. Various quantities related to the atmospheric correction can then be computed by radiative transfer algorithms, given the atmospheric optical properties. Second, the remotely sensed imagery is corrected by inversion procedures that derive the surface reflectance. We focus on the second step, describing our work on improving the computational efficiency of the existing atmospheric correction algorithms. We discuss a known atmospheric correction algorithm and then introduce a substantially more efficient version which we have devised. We have also developed a parallel implementation of our algorithm.

  • 15. Estimating parameters in scientific computation - A survey of experience from oil and groundwater modeling

    Publication Year: 1994

    First Page of the Article

  • 16. A point of contact between computer science and molecular biology

    Publication Year: 1994 , Page(s): 69 - 78
    Cited by:  Papers (2)

    Molecular biology is rapidly becoming a data-rich science with extensive computational needs. The sheer volume of data poses a serious challenge in storing and retrieving biological information, and the rate of growth is exponential. Linking the heterogeneous data libraries of molecular biology, organizing its diverse and interrelated data sets, and developing effective query options for its databases are all areas for cross-fertilization between molecular biology and computer science. However, even the apparently simple task of analyzing a single sequence of DNA requires complex collaboration. For several years, we have been developing a computer toolkit for analyzing DNA sequences. The biology of gene regulation in mammals has driven the design of the sequence comparison toolkit to emphasize space-efficient algorithms with a high degree of sensitivity and has profoundly affected choice of tools and the development of algorithms. We sketch the biology of this class of problem and show how it specifically drives the software development. The main components of this toolkit are outlined.

  • 17. Medical image registration using geometric hashing

    Publication Year: 1997 , Page(s): 29 - 41
    Cited by:  Papers (17)  |  Patents (1)

    To carefully compare pictures of the same thing taken from different views, the images must first be registered, or aligned so as to best superimpose them. Results show that two geometric hashing methods, based respectively on curves and characteristic features, can be used to compute 3D transformations that automatically register medical images of the same patient in a practical, fast, accurate, and reliable manner.

  • 18. Computational methods for design and control of MEMS micromanipulator arrays

    Publication Year: 1997 , Page(s): 17 - 29
    Cited by:  Papers (19)  |  Patents (13)

    As improvements in fabrication technology for microelectromechanical systems, or MEMS, increase the availability and diversity of these micromachines, engineers are defining a growing number of tasks to which they can be put. The idea of carrying out tasks using large coordinated groups of MEMS units motivates the development of automated, algorithmic methods for designing and controlling these groups of devices. We report on progress towards computational MEMS, taking on the challenge of design and control of massively parallel arrays of microactuators. Arrays of MEMS devices can move and position tiny parts, such as integrated circuit chips, in flexible and predictable ways by oscillatory or ciliary action. The theory of programmable force fields can model this action, leading to algorithms for various types of micromanipulation that require no sensing of where the part is. Experiments support the theory.

  • 19. Monitoring complex systems with causal networks

    Publication Year: 1996 , Page(s): 9 - 10
    Cited by:  Patents (16)

    Complex industrial systems, such as utility turbine generators, are usually monitored by observing data recorded by sensors placed at various locations in the system. Typically, data are collected continuously and an expert, or a team of experts, monitors the readings. From the readings they assess the “health” of the system. Should readings at some sensors become unusual, the experts then use their diagnostic skills to determine the cause of the problem. It is better to detect problems early and correct them rather than waiting for more serious problems or a major failure. However, there are several problems associated with using human expertise to monitor complex systems, which are outlined. There have been considerable efforts to develop expert computer systems that can perform the monitoring and diagnosis. These efforts include the use of rule-based artificial intelligence. At General Electric corporate R&D, one of the authors has been leading an effort to design monitoring systems that use a causal network. They have been shown to deliver much of the diagnostic ability needed in various GE applications. Indeed, the GE work has a wide range of applications, and can be used in complex systems such as power generators, transportation equipment (planes, trains, and automobiles), medical equipment, and production plants. Causal networks use a directed graph and probability theory to produce continuous probabilistic information on why a system has abnormal readings at some sensors.

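    A single-sensor toy version of the probabilistic reasoning involved, using Bayes' rule once; the prior and likelihoods are invented numbers, and a real causal network chains many such quantities over a directed graph of components and sensors.

      #include <cstdio>

      int main() {
        // Prior probability that the monitored component has failed, and the
        // likelihood of an abnormal reading with and without a failure.
        double pFault = 0.01, pAbnGivenFault = 0.90, pAbnGivenOk = 0.05;
        // Bayes' rule: P(fault | abnormal reading).
        double pAbn = pAbnGivenFault * pFault + pAbnGivenOk * (1.0 - pFault);
        double posterior = pAbnGivenFault * pFault / pAbn;
        std::printf("P(fault | abnormal reading) = %.3f\n", posterior);   // about 0.154
      }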
  • 20. What should computer scientists teach to physical scientists and engineers? 2. Response to Wilson: teach computing in context

    Publication Year: 1996 , Page(s): 54 - 62
    Cited by:  Papers (1)

    For pt.1 see ibid., p.46 (1996). Greg Wilson started a discussion on what topics computer scientists should teach, given just a week, that would most benefit the physical scientist or engineer. The present paper provides three more opinions, and Wilson's response.

  • 21. Monte Carlo methods in chemistry

    Publication Year: 1994 , Page(s): 22 - 32
    Cited by:  Papers (1)

    Monte Carlo methods fulfil an important dual role. At a specific level, they provide a general-purpose numerical approach to problems in a wide range of topics. Using such methods, we can explore the characteristics of specific systems without introducing untestable approximations. To show the generality and breadth of Monte Carlo approaches and to point out characteristics of the methods that offer significant potential for development, we look at several prototypical problems in chemistry. In particular, we apply a variety of Monte Carlo methods to problems in the chemistry of clusters. In chemical parlance, a cluster is a group of atoms, whether bonded into molecules or not, that are close enough to experience interatomic or intermolecular forces. Interesting in their own right, clusters also serve as useful prototypes in the study of interfacial and bulk systems generally. (Interfacial systems contain boundary regions between distinct thermodynamic phases, e.g., solid/liquid or gas/solid; bulk systems are homogeneous.)

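    A minimal Metropolis Monte Carlo sketch for a small Lennard-Jones cluster in reduced units; the temperature, step size, atom count, and starting geometry are arbitrary, and the full energy is recomputed after every move only for brevity.

      #include <cmath>
      #include <cstdio>
      #include <random>
      #include <vector>

      struct Atom { double x, y, z; };

      // Total Lennard-Jones energy of the cluster (reduced units: epsilon = sigma = 1).
      double energy(const std::vector<Atom>& a) {
        double e = 0.0;
        for (size_t i = 0; i < a.size(); ++i)
          for (size_t j = i + 1; j < a.size(); ++j) {
            double dx = a[i].x - a[j].x, dy = a[i].y - a[j].y, dz = a[i].z - a[j].z;
            double r2 = dx * dx + dy * dy + dz * dz, r6 = r2 * r2 * r2;
            e += 4.0 * (1.0 / (r6 * r6) - 1.0 / r6);
          }
        return e;
      }

      int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double T = 0.3, step = 0.1;   // reduced temperature, maximum displacement
        std::vector<Atom> a = {{0,0,0}, {1.1,0,0}, {0,1.1,0}, {0,0,1.1}, {1.1,1.1,0}};
        double e = energy(a);
        for (int trial = 0; trial < 20000; ++trial) {
          int i = (int)(rng() % a.size());
          Atom old = a[i];
          a[i].x += step * (2*u(rng) - 1); a[i].y += step * (2*u(rng) - 1); a[i].z += step * (2*u(rng) - 1);
          double enew = energy(a);
          // Metropolis rule: always accept downhill moves, accept uphill moves
          // with probability exp(-dE/T), otherwise restore the old position.
          if (enew <= e || u(rng) < std::exp(-(enew - e) / T)) e = enew;
          else a[i] = old;
        }
        std::printf("final energy per atom = %.3f\n", e / a.size());
      }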
  • 22. Simulating asynchronous, decentralized military command and control

    Publication Year: 1996 , Page(s): 69 - 79
    Cited by:  Papers (2)

    The pace and scope of modern warfare have made some aspects of traditional, centralized decision making obsolete. A decentralized scheme grants greater autonomy to fighting units, which process data locally for efficient decision making. In experimental simulations using a decentralized algorithm, units react to information faster in both offensive and defensive scenarios. We used asynchronous, distributed, discrete event simulation techniques to model the command and control problem. These algorithms are appropriate for problems involving discrete data transfer, geographically dispersed decision making entities, asynchronously generated stimuli, and data feedback. To compare our algorithm with traditional algorithms and identify scenarios for which it is effective, we used a loosely coupled parallel processor as a simulation testbed. To the best of our knowledge, this research is the first attempt to scientifically model decentralized C3 and assess its performance through extensive parallel simulation. State-of-the-art battlefield simulators such as Simnet and CATT provide training environments in which human operators make the decisions. In contrast, our testbed provides an automated environment for fast, efficient, accurate warfare modeling. By “accurate” we mean spatial and timing resolution; we do not claim to represent all battlefield details, such as terrain.

  • 23. The Puppy paradigm [computational structural engineering course]

    Publication Year: 1997 , Page(s): 4 - 6, 8

    "Puppy" is a 12.5-m high flower sculpture in the shape of a White West Highland Terrier. The original Puppy support structure, which was made from wood, wilted with the passage of time, weather and irrigation, so a dismountable stainless steel support was commissioned. Because of its complicated nature and modeling history, the Puppy case study allowed the Australian National University's computational science and engineering (CSE) teaching laboratory to introduce the principles of beam structural analysis and plate analysis in its final-year, one-semester elective course for the Interdisciplinary Systems Engineering degree, and then to let students build a model composed of a combination of plates and beams. Its manifestly 3D geometry also led into the visualization segment of our course in a neat way. View full abstract»

  • 24. The T experiments: errors in scientific software

    Publication Year: 1997 , Page(s): 27 - 38
    Cited by:  Papers (27)

    Extensive tests showed that many software codes widely used in science and engineering are not as accurate as we would like to think. It is argued that better software engineering practices would help solve this problem, but realizing that the problem exists is an important first step.

  • 25. High-performance computing: Models and high-performance algorithms for global BRDF retrieval

    Publication Year: 1998 , Page(s): 16 - 29

    The authors describe three models for retrieving information related to the scattering of light on the earth's surface. Using these models, they've developed algorithms for the IBM SP2 that efficiently retrieve this information.

  • 26. Programming languages for CSE: the state of the art

    Publication Year: 1998 , Page(s): 18 - 26
    Cited by:  Papers (3)

    To meet the diverse demands of building CSE applications, developers can choose from a multitude of programming languages. This survey offers an overview of available programming languages and the contexts for their use.

  • 27. Portrait of a crack: rapid fracture mechanics using parallel molecular dynamics

    Publication Year: 1997 , Page(s): 66 - 77
    Cited by:  Papers (10)

    How do materials fracture? The molecular dynamics methods used to model this important problem parallelize well, allowing bigger and more realistic computational experiments. Simulations of how materials crack at the atomic level are yielding surprising results that sometimes contradict existing theory, but that may explain recent physical experiments.

  • 28. The DEVS environment for high-performance modeling and simulation

    Publication Year: 1997 , Page(s): 61 - 71
    Cited by:  Papers (13)

    DEVS-C++, a high-performance environment for modeling large-scale systems at high resolution, uses the DEVS (Discrete-EVent system Specification) formalism to represent both continuous and discrete processes. A prototype suggests that the DEVS formalism can be combined with genetic algorithms running in parallel to serve as the basis of a very general, very fast class of simulation environments.

  • 29. Fast Multipole Methods

    Publication Year: 1998 , Page(s): 16 - 18
    Cited by:  Papers (2)

    First Page of the Article

  • 30. Multigrid methods in science and engineering

    Publication Year: 1996 , Page(s): 55 - 68
    Cited by:  Papers (10)  |  Patents (3)

    By combining computation from several scales of mesh fineness, multigrid and multilevel methods can improve speed and accuracy in a wide variety of science and engineering applications. The article sketches the history of the techniques, explains the basics, and gives pointers to the literature and current research.

  • 31. The accuracy of fast multipole methods for Maxwell's equations

    Publication Year: 1998 , Page(s): 48 - 56
    Cited by:  Papers (35)

    The multilevel fast multipole method can provide fast, accurate solutions to electromagnetic scattering problems, provided its users select the FMM degree and FMM cube size appropriately. The article discusses errors associated with truncating multipole expansions and methods for selecting an appropriate set of parameters.

  • 32. Developing component architectures for distributed scientific problem solving

    Publication Year: 1998 , Page(s): 50 - 63
    Cited by:  Papers (13)  |  Patents (1)

    Component programming models offer rapid construction for complex distributed applications, without recompiling and relinking code. This survey of the theory and design of component-based software illustrates their use and utility with a prototype system for manipulating and solving large, sparse systems of equations.

  • 33. Neural networks in computational science and engineering

    Publication Year: 1996 , Page(s): 36 - 42
    Cited by:  Papers (5)  |  Patents (1)

    An artificial neural network (ANN) is a computational system inspired by the structure, processing method and learning ability of a biological brain. In a commonly accepted model of the brain, a given neuron receives electrochemical input signals from many neurons through synapses-some inhibitory, some excitatory-at its receiving branches, or dendrites. If and when the net sum of the signals reaches a threshold, the neuron fires, transmitting a new signal through its axon, across the synapses to the dendrites of the many neurons it is in turn connected with. In the artificial system, “neurons”, essentially tiny virtual processors, are usually implemented in software. Given an input, an artificial neuron uses some function to compute an output. As the output signal is propagated to other neurons, it is modified by “synaptic weights” or inter-neuron connection strengths. The weights determine the final output of the network, and can thus be adjusted to encode a desired functionality.

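    The neuron model described above, reduced to code; the sigmoid activation and the sample weights are illustrative choices rather than anything from the article.

      #include <cmath>
      #include <cstdio>
      #include <vector>

      // A single artificial neuron: a weighted sum of inputs passed through a
      // sigmoid activation. The weights (connection strengths) encode the learned
      // functionality and are what training would adjust.
      double neuron(const std::vector<double>& inputs,
                    const std::vector<double>& weights, double bias) {
        double net = bias;
        for (size_t i = 0; i < inputs.size(); ++i) net += weights[i] * inputs[i];
        return 1.0 / (1.0 + std::exp(-net));   // output rises steeply once net crosses the threshold region
      }

      int main() {
        std::printf("output = %.3f\n", neuron({0.5, 1.0, -0.2}, {0.8, -0.4, 1.5}, 0.1));
      }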
  • 34. Data visualization: Visualizing reflection off a curved surface

    Publication Year: 1998 , Page(s): 30 - 39

    To understand and apply wave physics, it can help to use wave-motion visualization. This article presents geometric algorithms for calculating the shape of a wavefront reflected from a curved surface. The authors model the curved surfaces with equations and simulate the wave reflection with 2D images.

  • 35. Computational aspects of the Pentium affair

    Publication Year: 1995 , Page(s): 18 - 30
    Cited by:  Papers (2)

    The Pentium affair has been widely publicized. It started with an obscure defect in the floating-point unit of Intel Corporation's flagship Pentium microprocessor. This is the story of how the Pentium floating-point division problem was discovered, and what you need to know about the maths and computer engineering involved before deciding whether to replace the chip, install the workaround provided here, or do nothing. The paper also discusses broader issues of computational correctness.

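    For context (the figures below come from widely circulated accounts of the bug, not from the article): the flaw was famously exposed by a single division. A correct floating-point unit gives the value printed below, roughly 1.333820449, while affected Pentiums reportedly returned about 1.333739068, an error in the fifth significant digit.

      #include <cstdio>

      int main() {
        // The division widely reported to expose the flawed FDIV lookup table.
        double numerator = 4195835.0, denominator = 3145727.0;
        std::printf("4195835 / 3145727 = %.9f\n", numerator / denominator);
      }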
  • 36. Challenges in commercializing MEMS

    Publication Year: 1997 , Page(s): 44 - 48
    Cited by:  Papers (9)

    Several microelectromechanical systems have achieved commercial success. The barriers can still be formidable, though, and the path to success is often much different for MEMS than it was for mainstream semiconductors. Maturing software for comprehensive modeling and design will help in the future. The entire field of MEMS has been enabled by the batch fabrication methods established in the semiconductor industry. We cannot predict the market success of MEMS-based products, however, by blindly applying the economy of scale and other economic models governing semiconductor markets. Although the parallels are both undeniable and enabling, the successful MEMS venture today is likely to be one that focuses on differences from, rather than parallels with, mainstream semiconductor markets. It is essential to recognize that the main challenges in commercializing MEMS, on both the business and technical levels, are different from the classic semiconductor problems.

  • 37. The microprocessor for scientific computing in the year 2000

    Publication Year: 1996 , Page(s): 42 - 43

    The future of scientific computing, like the future of all computing, demands higher and higher performance from the computing system. In the author's view, that means exploiting concurrency at all levels of granularity, including the microprocessor. For scientific computing there is much good news. For example, the regularity of scientific computations (although Amdahl's law makes it not as good as it might be) allows for multiple instruction streams operating on behalf of a single process. That works well for the multimicro paradigm, and in fact might further suggest putting the multiprocessor on a single chip. However, the author does not believe the single chip multiprocessor is the answer for high performance scientific computing in the year 2000 for two reasons: system partitioning and pin bandwidth. At the uniprocessor level, scientific code makes the job of the compiler and the job of the microarchitecture easier, and that will translate into greater performance sooner than will be possible with integer code. Instruction and data supply will both be handled jointly by the compiler and the microarchitecture.

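    Since the abstract leans on Amdahl's law, here it is worked out for an assumed 95% parallel fraction; the fraction and the processor counts are illustrative only.

      #include <cstdio>

      int main() {
        // Amdahl's law: if a fraction f of the work parallelizes perfectly over
        // N processors, the overall speedup is 1 / ((1 - f) + f / N).
        double f = 0.95;
        for (int n : {1, 4, 16, 64, 256}) {
          double speedup = 1.0 / ((1.0 - f) + f / n);
          std::printf("N = %3d   speedup = %5.1f\n", n, speedup);
        }
        // Even with 95% parallel work the speedup saturates near 20x, which is
        // why the serial remainder, not the processor count, sets the limit.
      }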
  • 38. Students get hands-on research experience at SDSC

    Publication Year: 1996 , Page(s): 13 - 16

    Since its inception in 1985, the San Diego Supercomputer Center and its researchers have promoted programs of educational outreach to students and educators at K-12, undergraduate, and graduate levels. The goals are to make computational science more accessible by demonstrating how it functions as a research tool in various disciplines, and to encourage achievement of academic and professional goals. This article highlights the experiences of participants from three SDSC educational outreach programs.

  • 39. A prescription for the multilevel Helmholtz FMM

    Publication Year: 1998 , Page(s): 39 - 47
    Cited by:  Papers (35)

    The authors describe a multilevel Helmholtz FMM (fast multipole method) as a way to compute the field caused by a collection of source points at an arbitrary set of field points. Their description focuses on the algorithm's mathematical basics, so that it can be applied to a variety of applications.

  • 40. Teamwork: computational science and applied mathematics

    Publication Year: 1997 , Page(s): 13 - 18
    Cited by:  Papers (3)

    Computational science and engineering, or CSE, is a relatively new and rapidly evolving field of study. It draws heavily on other fields, often using interdisciplinary teams of mathematicians and computer scientists working with scientists and engineers to solve problems requiring multiple areas of expertise. In this paper, four examples of computational science in action illustrate the relation between CSE and the distinct but complementary disciplines it draws upon, focusing in particular on mathematics. Each example illustrates the need for interdisciplinary work. The training necessary for success in computational science, not surprisingly, is somewhat different from that traditionally prescribed by single academic disciplines. The use of the term applied mathematics is a little arbitrary. Some applied mathematicians working in areas such as fluid dynamics are unconnected with applications, while the work of some core mathematicians in areas such as string theory and number theory makes fundamental contributions in applied areas such as mathematical physics and cryptography. We take applied mathematics to be that part of mathematics used in science and engineering.

  • 41. Computational methods in finance: option pricing

    Publication Year: 1996 , Page(s): 66 - 80
    Cited by:  Papers (6)

    Many computational methods familiar to scientists and engineers are now heavily used in today's financial markets. This survey looks at the history and the state of the art for one branch of computational finance, and explains why neural networks show special promise in setting correct prices for options.

  • 42. A heap of data

    Publication Year: 1996 , Page(s): 11 - 14
    Cited by:  Papers (1)

    Previously, we described a fast method for selecting from a list at random, biased by predetermined rates or probabilities (see ibid., vol.2, p.13, 1996). However, sometimes "probabilistically next" is not good enough. What if we have some criterion or priority for selecting from the list? For this type of problem we can introduce the heap, a data structure that allows us to keep track of the maximum or the minimum dynamically. Heaps are an effective way of maintaining a priority queue. They are also good for sorting.

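    A minimal priority-queue example built on a standard-library binary heap; the job names and priorities are invented. Draining the heap in order is essentially heapsort, the sorting use mentioned above.

      #include <cstdio>
      #include <functional>
      #include <queue>
      #include <string>
      #include <vector>

      int main() {
        // A min-heap keeps the smallest key available in constant time, with
        // logarithmic insertion and removal: exactly what a priority queue needs.
        using Job = std::pair<double, std::string>;            // (priority, name)
        std::priority_queue<Job, std::vector<Job>, std::greater<Job>> heap;
        heap.push({3.5, "refine mesh"});
        heap.push({0.2, "write checkpoint"});
        heap.push({1.7, "update statistics"});
        while (!heap.empty()) {                                // drains in priority order
          std::printf("%.1f  %s\n", heap.top().first, heap.top().second.c_str());
          heap.pop();
        }
      }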
  • 43. What should computer scientists teach to physical scientists and engineers? 1.

    Publication Year: 1996 , Page(s): 46 - 65
    Cited by:  Papers (5)

    To help clarify the issues involved in deciding what computing skills to teach to physical scientists and engineers, the article presents a thought experiment. Imagine that every new graduate student in science and engineering at your institution, or every new employee in your company's R&D division, has to take an intensive one-week computing course. What would you want that course to cover? Should it concentrate on algorithms and data structures, such as multigrid methods and adaptively refined meshes? Should it introduce students to one or two commonly used packages, such as Matlab and SAS? Or should it try to teach students the craft of programming, giving examples to show why modularity is important and how design cycles work? The author chose one week as the length of the idealized course because it is long enough to permit discussion of several topics, but short enough to force stringent prioritization.

  • 44. PVR: high-performance volume rendering

    Publication Year: 1996 , Page(s): 18 - 28
    Cited by:  Papers (4)  |  Patents (3)

    Traditional volume rendering methods are too slow to provide interactive visualization, especially for large 3D data sets. The PVR (parallel volume rendering) system implements parallel volume rendering techniques that speed up the visualization process. Moreover, it helps computational scientists, engineers, and physicians to more effectively apply volume rendering to visualization tasks. The authors describe the PVR system that they have developed in a collaboration between the State University of New York at Stony Brook and Sandia National Laboratories. PVR is an attempt to provide an easy-to-use portable system for high performance visualization with the speed required for interactivity and steering. The current version of PVR consists of about 25000 lines of C and Tcl/Tk code. It has been used at Stony Brook, Sandia, and Brookhaven National Labs to visualize large data sets for over a year.

  • 45. Modernizing high-performance computing for the military

    Publication Year: 1996 , Page(s): 71 - 74

    The High Performance Computing Modernization Program is the major force improving the Department of Defense's ability to exploit computation to sustain technological superiority. In a technology area critical to maintaining military and national leadership, it continues a 50-year legacy of investment on into the next century. As advanced weapons have become readily available to any country with the resources to buy them on the open market, US national defense demands that we stay a step ahead in weapons design and performance. A solid and continuing investment in high-performance computing will help ensure that we can do so.

  • 46. First 10 years of the EPCC Summer Scholarship Programme

    Publication Year: 1998 , Page(s): 6 - 9

    For the past 10 years, the EPCC (Edinburgh Parallel Computing Centre) Summer Scholarship Programme (SSP) has given undergraduate students from all over the world the chance to work at EPCC for 10 weeks during their summer break. While at EPCC, students attend a week-long training course covering many aspects of computer simulation and high-performance computing. They spend the remaining nine weeks working on a technical project under the supervision of an EPCC staff member. The SSP began in 1987 with two local undergraduates working on parallel computing projects at the University of Edinburgh's Department of Physics. Since the arrangement was formalized within EPCC in 1989, the programme has continued to develop, taking students from an ever-increasing range of disciplines and countries. Today, the SSP is more popular than ever, with the available places many times oversubscribed.

  • 47. Issues in electrical impedance imaging

    Publication Year: 1995 , Page(s): 53 - 62
    Cited by:  Papers (16)

    Electrical impedance imaging systems apply currents to the body's surface, measure the corresponding voltages, and use inverse methods to reconstruct the conductivity and permittivity in the interior from these data. Quick, accurate maps of these electrical parameters inside the body could improve the effectiveness of critical medical technologies.

  • 48. Searching in parallel for similar strings [biological sequences]

    Publication Year: 1994 , Page(s): 60 - 75
    Cited by:  Papers (6)  |  Patents (6)

    Distributed computation, probabilistic indexing and hashing techniques combine to create a novel approach to processing very large biological-sequence databases. Other data-intensive tasks could also benefit. Our indexing-based approach enables fast similarity searching through a large database of strings. Thanks to a redundant table-lookup scheme, recovering database items that match a test sequence requires minimal data access. We have implemented a uniprocessor version of this approach called Flash (Fast Lookup Algorithm for String Homology) as well as a distributed version, dFlash, using a cluster of seven non-dedicated workstations connected through a local area network. In this article, we present an approach for retrieving homologies in databases of proteins.

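    A toy lookup-table index in the spirit of the approach described, though not Flash's actual redundant scheme; the sequences and the choice k = 4 are invented for illustration.

      #include <cstdio>
      #include <string>
      #include <unordered_map>
      #include <vector>

      int main() {
        // Index every k-mer of the database sequence by its position, so a query
        // only touches the table entries for its own k-mers instead of scanning.
        const int k = 4;
        std::string db = "ACGTACGGATTACAGCTTACGGATTAGC";
        std::unordered_map<std::string, std::vector<int>> index;
        for (int i = 0; i + k <= (int)db.size(); ++i)
          index[db.substr(i, k)].push_back(i);

        // Each k-mer hit votes for an alignment offset (database position minus
        // query position); strong vote counts flag likely regions of similarity.
        std::string query = "CGGATTA";
        std::unordered_map<int, int> votes;
        for (int j = 0; j + k <= (int)query.size(); ++j) {
          auto it = index.find(query.substr(j, k));
          if (it == index.end()) continue;
          for (int pos : it->second) ++votes[pos - j];
        }
        for (auto& v : votes)
          std::printf("offset %2d : %d matching k-mers\n", v.first, v.second);
      }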
  • 49. SciNapse: a problem-solving environment for partial differential equations

    Publication Year: 1997 , Page(s): 32 - 42
    Cited by:  Papers (7)

    The SciNapse code generation system transforms high-level descriptions of partial differential equation problems into customized, efficient and documented C or Fortran code. Modelers can specify mathematical problems, solution techniques and I/O formats with a concise blend of mathematical expressions and keywords. An algorithm template language supports convenient extension of the system's built-in knowledge base.

  • 50. Is parallelism for you?

    Publication Year: 1996 , Page(s): 18 - 37
    Cited by:  Papers (5)

    This article offers practical, basic rules of thumb that can help you predict if parallelism might be worthwhile, given your application and the effort you want to invest. The techniques presented for estimating likely performance gains are drawn from the experiences of hundreds of computational scientists and engineers at national labs, universities, and research facilities. The information is more anecdotal than experimental, but it reflects the very real problems that must be overcome if parallel programming is to yield useful benefits.


Aims & Scope

This Periodical ceased publication in 1998. The current retitled publication is IEEE Computing in Science and Engineering.
