By Topic

Computational Science & Engineering, IEEE

Issue 2 • Date Summer 1996


Displaying Results 1 - 17 of 17
  • What should computer scientists teach to physical scientists and engineers?

    Publication Year: 1996 , Page(s): 46 - 65
    Cited by:  Papers (5)

    To help clarify the issues involved in deciding what computing skills to teach to physical scientists and engineers, the article presents a thought experiment. Imagine that every new graduate student in science and engineering at your institution, or every new employee in your company's R&D division, has to take an intensive one-week computing course. What would you want that course to cover? Should it concentrate on algorithms and data structures, such as multigrid methods and adaptively refined meshes? Should it introduce students to one or two commonly used packages, such as Matlab and SAS? Or should it try to teach students the craft of programming, giving examples to show why modularity is important and how design cycles work? The author chose one week as the length of the idealized course because it is long enough to permit discussion of several topics, but short enough to force stringent prioritization.

  • Response to Wilson: Computer Scientists Should Not Teach Computational Science

    Publication Year: 1996
    Cited by:  Papers (1)


  • Response to Wilson: Teach Programming Principles Not "Tools and Tips"

    Publication Year: 1996


  • Mathematica as a Tool [Book News & Reviews]

    Publication Year: 1996
    Freely Available from IEEE
  • Introduction to the Numerical Solution of Markov Chains [Book News & Reviews]

    Publication Year: 1996
    Freely Available from IEEE
  • Mathematica for Scientists and Engineers [Book News & Reviews]

    Publication Year: 1996
    Freely Available from IEEE
  • A heap of data

    Publication Year: 1996 , Page(s): 11 - 14
    Cited by:  Papers (1)

    Previously, we described a fast method for selecting from a list at random, biased by predetermined rates or probabilities (see ibid., vol.2, p.13, 1996). However, sometimes "probabilistically next" is not good enough. What if we have some criterion or priority for selecting from the list? For this type of problem we can introduce the heap, a data structure that allows us to keep track of the maximum or the minimum dynamically. Heaps are an effective way of maintaining a priority queue. They are also good for sorting.
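    The priority-queue behavior the abstract describes can be sketched with Python's standard heapq module (an illustration only, not the column's own code; the task names are made up):

```python
import heapq

# heapq maintains a min-heap: the entry with the smallest priority is
# always at index 0, no matter what order items were pushed in.
tasks = [(3, "refine mesh"), (1, "checkpoint"), (2, "exchange halos")]

heap = []
for priority, name in tasks:
    heapq.heappush(heap, (priority, name))  # O(log n) per insertion

# Items come off in priority order, regardless of insertion order.
ordered = [heapq.heappop(heap)[1] for _ in range(len(heap))]
print(ordered)  # ['checkpoint', 'exchange halos', 'refine mesh']
```

    Repeated heappush/heappop of n items is O(n log n) overall, which is why the same structure also yields the heapsort mentioned above.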

  • The scientist's infosphere

    Publication Year: 1996 , Page(s): 43 - 44
    Cited by:  Patents (1)

    The importance of national defense in the US required the development of tailor-made computing and communication systems. The military coined the term infosphere to refer to the collection of remote instruments, appliances, computing tools, and people made accessible by these systems from a person's working environment, such as the cockpit of a plane or the bridge of a ship. Because military personnel are often mobile, they must use remote instruments and computing programs and they must collaborate with people at distant sites. It is concluded that within a decade, most scientists' infospheres will reside on portable computing devices. The infosphere will allow the scientist to access and control home appliances, office devices, laboratory instruments, and computing tools and to communicate with colleagues everywhere. Communication bandwidths may vary as the scientist moves from place to place, but the infosphere will increasingly free the scientist from the constraints of physical location. This freedom will change the ways in which scientific research is carried out, science is taught, and scientific results are disseminated.

  • How to cooperate across the ocean

    Publication Year: 1996

    The issue discussed is how researchers in Japan can contribute to and cooperate in promoting the important field of computational science and engineering. Unfortunately, the term computational science and engineering is not yet popular in Japan; it is often equated with numerical analysis. Many people who could be active in this field simply aren't aware of it. The author makes two proposals: an international workshop on computational science and engineering, an obvious yet important step at which he would like to see discussed, among many issues, the basic discipline of computational science and engineering; and a global, seamless infrastructure for CSE.

  • Network programming and CSE

    Publication Year: 1996 , Page(s): 40 - 41

    A remarkable feature of scientific computing has been its relatively strong reuse of software, and therefore its concern with portability. Some things could still be improved, such as language support for IEEE floating-point arithmetic. This common environment has made possible the extensive catalogs of reusable components in Netlib, Numerical Recipes, NAG, IMSL, and so on. Achieving a similar portability in the rest of scientific computing remains a challenge. Graphics, interprocess communication, network naming rules, and database interfaces are all in flux. The computing world is changing rapidly and unpredictably. At the moment the most promising development is the birth of systems like Java and Inferno that extend the scope of portable programming to include graphical user interfaces, simple visualization, and network services. The article expands on the various uses of Java and similar networking languages.

  • Fast algorithms for removing atmospheric effects from satellite images

    Publication Year: 1996 , Page(s): 66 - 77
    Cited by:  Papers (4)

    The varied features of the earth's surface each reflect sunlight and other wavelengths of solar radiation in a highly specific way. This principle provides the foundation for the science of satellite-based remote sensing. A vexing problem confronting remote sensing researchers, however, is that the reflected radiation observed from remote locations is significantly contaminated by atmospheric particles. These aerosols and molecules scatter and absorb the solar photons reflected by the surface in such a way that only part of the surface radiation can be detected by a sensor. The article discusses the removal of atmospheric effects due to scattering and absorption, i.e., atmospheric correction. Atmospheric correction algorithms basically consist of two major steps. First, the optical characteristics of the atmosphere are estimated. Various quantities related to the atmospheric correction can then be computed by radiative transfer algorithms, given the atmospheric optical properties. Second, the remotely sensed imagery is corrected by inversion procedures that derive the surface reflectance. We focus on the second step, describing our work on improving the computational efficiency of the existing atmospheric correction algorithms. We discuss a known atmospheric correction algorithm and then introduce a substantially more efficient version which we have devised. We have also developed a parallel implementation of our algorithm.
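    The inversion step can be illustrated, under strongly simplifying assumptions (Lambertian surface, no adjacency effects, a single band), with the textbook relation ρ = π(L_obs − L_path)/(τ E_d). This is a generic sketch with made-up numbers, not the article's algorithm:

```python
import math

def surface_reflectance(l_obs, l_path, transmittance, e_down):
    """Invert a simplified radiative-transfer model for one pixel.

    l_obs:         radiance observed at the sensor
    l_path:        atmospheric path radiance (photons that never hit the surface)
    transmittance: total atmospheric transmittance, in (0, 1]
    e_down:        downwelling irradiance at the surface
    """
    # Remove the atmospheric contribution, undo the attenuation, and
    # convert the corrected radiance to a dimensionless reflectance.
    return math.pi * (l_obs - l_path) / (transmittance * e_down)

# Illustrative values only:
rho = surface_reflectance(l_obs=80.0, l_path=20.0, transmittance=0.8, e_down=1500.0)
```

    Step one of the abstract's pipeline, the radiative-transfer estimation, is what supplies l_path, transmittance, and e_down; the efficiency work described in the article concerns doing this inversion quickly over whole images.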

  • The microprocessor for scientific computing in the year 2000

    Publication Year: 1996 , Page(s): 42 - 43

    The future of scientific computing, like the future of all computing, demands higher and higher performance from the computing system. In the author's view, that means exploiting concurrency at all levels of granularity, including the microprocessor. For scientific computing there is much good news. For example, the regularity of scientific computations (although Amdahl's law makes it not as good as it might be) allows for multiple instruction streams operating on behalf of a single process. That works well for the multimicro paradigm, and in fact might further suggest putting the multiprocessor on a single chip. However, the author does not believe the single-chip multiprocessor is the answer for high-performance scientific computing in the year 2000, for two reasons: system partitioning and pin bandwidth. At the uniprocessor level, scientific code makes the job of the compiler and the job of the microarchitecture easier, and that will translate into greater performance sooner than will be possible with integer code. Instruction and data supply will both be handled jointly by the compiler and the microarchitecture.
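    Amdahl's law, mentioned in passing above, is the standard way to quantify the caveat: if a fraction p of the work parallelizes perfectly across n processors, the speedup is 1/((1 − p) + p/n). A minimal sketch:

```python
def amdahl_speedup(p, n):
    """Speedup when a fraction p of a program runs perfectly in
    parallel on n processors and the rest (1 - p) stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a highly regular code that is 95% parallel is capped by its
# serial 5%: the limit as n grows is 1/0.05 = 20x.
print(amdahl_speedup(0.95, 16))     # ~9.14 on 16 processors
print(amdahl_speedup(0.95, 10**6))  # approaches 20
```

    The serial fraction, not the processor count, dominates once n is large, which is why regularity of scientific code matters so much for the concurrency the author describes.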

  • Rapid design of neural networks for time series prediction

    Publication Year: 1996 , Page(s): 78 - 89
    Cited by:  Papers (22)

    The article explores the possibility of rapidly designing an appropriate neural net (NN) for time series prediction based on information obtained from stochastic modeling. Such an analysis could provide some initial knowledge regarding the choice of an NN architecture and parameters, as well as regarding an appropriate data sampling rate. Stochastic analysis provides a complementary approach to previously proposed dynamical system analysis for NN design. Based on F. Takens's theorem (1981), an estimate of the dimension m of the manifold from which the time series originated can be used to construct an NN model using 2m+1 external inputs. This design is further extended by M.A.S. Potts and D.S. Broomhead (1991), who first embed the state space of a discrete time dynamical system in a manifold of dimension n>>2m+1, which is further projected to its 2m+1 principal components used as external inputs in a radial basis function NN model for time series prediction. Our approach is to perform an initial stochastic analysis of the data and to choose an appropriate NN architecture, and possibly initial values for the NN parameters, according to the most adequate linear model.
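    The 2m+1 input construction rests on delay embedding: each training pattern is a vector of 2m+1 lagged samples of the scalar series. A sketch of that construction (the toy series and parameter names are illustrative; the article's radial-basis-function model is not reproduced here):

```python
def delay_embed(series, m, tau=1):
    """Build delay vectors of dimension d = 2*m + 1 from a scalar series.

    Each vector (x[t], x[t-tau], ..., x[t-(d-1)*tau]) can serve as the
    external inputs of a network predicting x[t+1], following the 2m+1
    rule from Takens's theorem; m is the estimated manifold dimension.
    """
    d = 2 * m + 1
    vectors = []
    for t in range((d - 1) * tau, len(series)):
        vectors.append([series[t - k * tau] for k in range(d)])
    return vectors

series = [0.1, 0.4, 0.9, 0.2, 0.7, 0.5, 0.3]
inputs = delay_embed(series, m=1)  # m = 1 gives 3 inputs per pattern
```

    The Potts and Broomhead extension described above would instead embed in a much higher dimension n and project onto the leading 2m+1 principal components before feeding the network.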

  • Hardware for high-performance computing: abstract progress, painful consolidation

    Publication Year: 1996
    Cited by:  Papers (1)

    The history of high performance computing can be displayed in several distinct shapes, each defined by the choice of light used for illumination. Two steady, smooth curves appear if high performance computing is viewed in the abstracted light of technological progress. Both curves trace changes over roughly the two decades since Seymour Cray introduced the technology (as well as the name and concept) of vector-intensive supercomputing. One curve, representing hypothetical gross performance measured in floating point operations per second (flops), rises majestically toward the brink of teraflops performance. The other, measuring net purchase price per unit of performance, dives gracefully in the opposite direction. The two, considered together, delineate the present happy state in which unprecedentedly high performance is available at prices that were only a dream a few years ago.

  • Future linear-algebra libraries

    Publication Year: 1996 , Page(s): 38 - 40

    The ultimate development of fully mature, parallel scalable libraries will necessarily depend on breakthroughs in many supporting technologies. Scalable library development cannot wait, however, until all the enabling technologies are in place, for two reasons: the need for such libraries for existing and near-term parallel architectures is immediate; and progress in all the supporting technologies depends on feedback from concurrent efforts in library development. The paper considers the future of linear algebra libraries.

  • Is parallelism for you?

    Publication Year: 1996 , Page(s): 18 - 37
    Cited by:  Papers (5)

    This article offers practical, basic rules of thumb that can help you predict if parallelism might be worthwhile, given your application and the effort you want to invest. The techniques presented for estimating likely performance gains are drawn from the experiences of hundreds of computational scientists and engineers at national labs, universities, and research facilities. The information is more anecdotal than experimental, but it reflects the very real problems that must be overcome if parallel programming is to yield useful benefits.

  • As Eniac turns 50: perspectives on computer science support for science and engineering

    Publication Year: 1996 , Page(s): 16 - 17

    1996 marks the 50th anniversary of Eniac. Eniac's creation gave birth to an era of remarkable technical progress. Eniac's line of descendants includes most computers in use today: laptops, PCs, and workstations, as well as supercomputers. As the next century is about to begin, the paper reflects on where the age of computing has brought us so far and where we might find ourselves in the future. It focuses on science and engineering applications.


Aims & Scope

This Periodical ceased publication in 1998. The current retitled publication is IEEE Computing in Science and Engineering.
