IBM Journal of Research and Development

Issue 4 • July 1993

  • Head actuator dynamics of an IBM 5¼-inch disk drive

    Page(s): 479 - 490

    The IBM 5¼-inch disk drive contained in the IBM 9345 DASD Module provides high track density and storage capacity, dynamic test failure rates below three parts per million, and low sensitivity to assembly variations. The design techniques used to achieve the required vibrational characteristics of the head actuator assembly are described. Dynamic stability specifications are derived from drive performance requirements and the actuator servomechanical system design. Mode shapes of the actuator are determined by encoding magnetic patterns onto a disk and using the read/write heads as position transducers in an operational drive. Structural changes in the carriage assembly that might lead to design improvements are explored with models derived using finite element analysis. Taguchi orthogonal matrix experiments are used to reduce the sensitivity of the actuator to dimensional tolerances and assembly processes. The achievement of actuator assembly design objectives is verified from production yields and statistical data obtained during dynamic tests.

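The abstract above mentions Taguchi orthogonal matrix experiments for reducing sensitivity to dimensional tolerances and assembly variation. The following is only a generic sketch of that technique, not the paper's actual experiment; the factor names, the choice of an L4 array, and the response values are hypothetical.

```python
# Hypothetical illustration of a Taguchi-style orthogonal-array experiment:
# estimate the main effects of two-level factors on a response using an L4(2^3) array.
# Factor names and response values are invented for illustration only.

import numpy as np

# L4 orthogonal array: 4 runs, 3 two-level factors (levels coded 0 and 1).
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
factors = ["pivot_tolerance", "arm_thickness", "clamp_torque"]  # hypothetical factors

# Hypothetical measured response for each run (e.g., resonance-peak gain in dB).
response = np.array([3.2, 2.1, 4.0, 2.6])

# Main effect of a factor = mean response at level 1 minus mean response at level 0.
for j, name in enumerate(factors):
    effect = response[L4[:, j] == 1].mean() - response[L4[:, j] == 0].mean()
    print(f"{name}: main effect = {effect:+.2f}")
```

The factor with the largest main effect is the one whose variation most strongly influences the response, which is the kind of information such experiments are used to obtain.
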
  • Statistical modeling in manufacturing: Adapting a diagnostic tool to real-time applications

    Page(s): 491 - 506

    This paper describes a process for constructing a statistical model to automate the analysis of data from complex diagnostic tools. The method is demonstrated on data taken from an optical emission spectrometer (OES), one of the most powerful tools used in semiconductor manufacturing for detecting the chemical composition and impurity levels in plasma processes. The analysis of OES data currently requires hours of manual effort by an expert spectroscopist, rendering it ineffective for real-time monitoring and control. However, through the use of statistical modeling, the analysis can be performed automatically on a personal computer in a matter of seconds. The process of model construction is examined in general, and methods are developed that show how information from an expert can be combined with information from the data to provide a statistical basis for analysis. The effectiveness of the model is demonstrated on data from typical plasma processes.

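This article combines information from an expert with information from the data. The sketch below shows one generic way such a combination can be expressed, a precision-weighted average of an expert's estimate and a sample estimate; it is an illustration only, and the quantities and distributions are hypothetical rather than taken from the paper.

```python
# A minimal sketch of combining an expert's prior estimate with measured data
# via precision weighting (a simple Bayesian update for a normal mean).
# The quantities below (an impurity-line intensity) are hypothetical.

import numpy as np

# Expert's prior belief about the line intensity: mean and standard deviation.
prior_mean, prior_sd = 120.0, 15.0

# Measured intensities from the spectrometer (hypothetical samples).
data = np.array([131.0, 128.5, 135.2, 129.8, 133.1])
data_mean = data.mean()
data_se = data.std(ddof=1) / np.sqrt(len(data))   # standard error of the mean

# Precision-weighted combination: the weights are the inverse variances.
w_prior, w_data = 1.0 / prior_sd**2, 1.0 / data_se**2
posterior_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
posterior_sd = (w_prior + w_data) ** -0.5

print(f"expert: {prior_mean:.1f}, data: {data_mean:.1f}, "
      f"combined: {posterior_mean:.1f} ± {posterior_sd:.1f}")
```
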
  • Flexible simulation of a complex semiconductor manufacturing line using a rule-based system

    Page(s): 507 - 522

    Rule-based systems have been used to produce fast, flexible simulation models for semiconductor manufacturing lines. This paper describes such a rule-based simulator for a semiconductor manufacturing line, and the language in which it is written. The simulator is written in a rule-based declarative style that uses a single-rule “template” to move thousands of product lots through various process steps; the rule is customized as needed with data for each step, route, lot, tool, manpower skill, etc. Because line or product changes require only reading new data from a database, with no reprogramming, the result is a modeling environment that is simple, flexible, and maintainable. The model is implemented in ECLPS (Enhanced Common Lisp Production System), a language of the kind known as a knowledge-based or expert-systems language. It handles very large models (thousands of data elements, or more) well and is very fast. Subsequent changes improved the speed by several orders of magnitude over that of an older version of the model, primarily through use of a preprocessor to eliminate duplicate and redundant data, and by enforcing data typing to take advantage of special techniques for very fast processing of extremely large matches (hashed indices). ECLPS also provides a built-in simulated time clock and other constructs to simplify simulation applications. The model runs daily at the IBM semiconductor manufacturing plant in Yasu, Japan, where it has been in use for many years, currently on three different semiconductor manufacturing lines.

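As an illustration of the single-rule, data-driven style described above, the toy sketch below (in Python rather than ECLPS) advances lots through whatever process steps the route data define, so that changing the line means changing data rather than code. The routes, lots, and processing times are invented for the example and have nothing to do with the Yasu lines.

```python
# A toy analogue of a data-driven simulation: one generic "rule" advances every
# lot through the process steps its route data define. Routes and times are made up.

import heapq

# Route data: ordered (step name, processing time in hours) pairs (hypothetical).
routes = {
    "logic_a": [("clean", 1.0), ("litho", 2.5), ("etch", 1.5), ("test", 0.5)],
    "memory_b": [("clean", 1.0), ("implant", 3.0), ("test", 0.5)],
}

# Lots to simulate: (lot id, route name).
lots = [("lot001", "logic_a"), ("lot002", "memory_b"), ("lot003", "logic_a")]

# Event queue of (finish_time, lot_id, route, index of next step).
events = [(0.0, lot, route, 0) for lot, route in lots]
heapq.heapify(events)

# The single generic rule: when a lot's current step finishes, start its next step.
while events:
    now, lot, route, i = heapq.heappop(events)
    steps = routes[route]
    if i == len(steps):
        print(f"{now:6.1f} h  {lot} completed route {route}")
        continue
    step, duration = steps[i]
    heapq.heappush(events, (now + duration, lot, route, i + 1))
```
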
  • Component procurement and allocation for products assembled to forecast: Risk-pooling effects

    Page(s): 523 - 536

    This paper considers procurement and allocation policies in a manufacturing environment where common components are assembled into various products that have stochastic demands. The components are allocated to the assembly of a product at a time when product demand is still uncertain (assemble to forecast, ATF). The special case of one component shared by N different products is analyzed, and insights into the general problem are obtained for the situation in which the common component can be reallocated to different products as product demands change. An allocation policy is developed for general distributions and prices in an ATF environment. The policy first addresses anomalies in the state of the system and then, for a feasible state, minimizes the expected excess finished-goods inventory. A near-optimal procurement level is obtained from a Monte Carlo simulation that evaluates, under this allocation policy, the probability of satisfying all of the random product demands simultaneously. Numerical studies indicate that the total component and finished-goods inventory is significantly reduced by an allocation policy that incorporates risk pooling while still fulfilling service-level requirements.

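The risk-pooling effect discussed above can be illustrated with a minimal Monte Carlo sketch: sizing one shared component stock against the pooled demand of several products requires less inventory, for a comparable service target, than sizing each product's stock separately. The demand distributions, service level, and product count below are hypothetical, and the calculation is not the paper's allocation policy.

```python
# A minimal Monte Carlo sketch of risk pooling for one component shared by N
# products with independent, uncertain demands. All parameters are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_products, n_trials, service_level = 4, 100_000, 0.95

# Hypothetical product demands: independent normals, truncated at zero.
demand = np.clip(rng.normal(loc=100, scale=30, size=(n_trials, n_products)), 0, None)

# Without pooling: size each product's dedicated stock at its own demand quantile.
separate = np.quantile(demand, service_level, axis=0).sum()

# With pooling: one shared component stock sized against total demand.
pooled = np.quantile(demand.sum(axis=1), service_level)

print(f"separate stocking: {separate:.0f} units")
print(f"pooled stocking:   {pooled:.0f} units "
      f"({100 * (1 - pooled / separate):.1f}% less)")
```
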
  • Modeling the cost of data communication for multi-node computer networks operating in the United States

    Page(s): 537 - 546

    The study reported here examines the cost of data communication for multi-node computer networks operating in the United States. We begin by defining a market basket of private-line transmission services and identifying its constituent prices. Two analytic models are then proposed. The first, which derives a theoretical relationship from microeconomic considerations, gives price movement as a function of the demand for service. The second embodies a learning curve fit to historical data, wherein the slope of this curve (0.71) equals the slope of the historical curve for the advance of integrated-circuit technology. Extrapolations from the two models agree well; moreover, both extrapolations conform to long-established historical trends. These agreements lend plausibility to the idea that the price of data communication unfolds in an orderly way over the long run, and that, despite the perturbation introduced by the Bell System divestiture of 1984, future price movements may return to their traditional 11% annual decline.

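Two of the quantitative statements in the abstract above, the 0.71 learning-curve slope and the traditional 11% annual price decline, can be illustrated numerically under their usual conventions: price falls to 71% of its previous value with each doubling of cumulative volume, or by 11% each year. The paper's exact formulation may differ; the numbers below are purely illustrative.

```python
# A small numeric sketch of the two pricing patterns named in the abstract,
# using the standard conventions; inputs are illustrative, not the paper's data.

import math

def learning_curve_price(p0, cumulative_volume, base_volume, slope=0.71):
    """Price after cumulative volume grows from base_volume: each doubling
    of cumulative volume multiplies the price by `slope`."""
    doublings = math.log2(cumulative_volume / base_volume)
    return p0 * slope ** doublings

def annual_decline_price(p0, years, decline=0.11):
    """Price after a constant `decline` fraction is removed each year."""
    return p0 * (1 - decline) ** years

print(learning_curve_price(100.0, cumulative_volume=8, base_volume=1))  # 3 doublings -> ~35.8
print(annual_decline_price(100.0, years=10))                            # ~31.2
```
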
  • A load-instruction unit for pipelined processors

    Page(s): 547 - 564

    A special-purpose load unit is proposed as part of a processor design. The unit prefetches data from the cache by predicting the address of the data fetch in advance. This prefetch allows the cache access to take place early, in an otherwise unused cache cycle, eliminating one cycle from the load instruction. The prediction also allows the cache to prefetch data if they are not already in the cache. The cache-miss handling can be overlapped with other instruction execution. It is shown, using trace-driven simulations, that the proposed mechanism, when incorporated in a design, may contribute to a significant increase in processor performance. The paper also compares different prediction methods and describes a hardware implementation for the load unit.

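The abstract above describes predicting the address of a data fetch in advance so that the cache access can occur early. It does not say which prediction methods the paper compares, so the sketch below shows one common, generic approach, a per-instruction stride predictor; it should not be read as the paper's design.

```python
# A toy per-instruction stride predictor: for each load instruction (identified
# by its PC), remember the last data address and the last stride, and predict
# the next address as last address + stride. Generic illustration only.

class StridePredictor:
    def __init__(self):
        self.table = {}  # load instruction PC -> (last data address, stride)

    def predict(self, pc):
        """Predicted data address for the next execution of the load at `pc`."""
        if pc not in self.table:
            return None
        last_addr, stride = self.table[pc]
        return last_addr + stride

    def update(self, pc, actual_addr):
        """Record the observed data address and refresh the stride."""
        if pc in self.table:
            last_addr, _ = self.table[pc]
            self.table[pc] = (actual_addr, actual_addr - last_addr)
        else:
            self.table[pc] = (actual_addr, 0)

# Example: a load at PC 0x400 walking an array of 8-byte elements.
p = StridePredictor()
for addr in range(0x1000, 0x1040, 8):
    pred = p.predict(0x400)
    hit = "hit" if pred == addr else "miss"
    print(f"actual {addr:#x}  predicted {pred if pred is None else hex(pred)}  {hit}")
    p.update(0x400, addr)
```

After two observations the predictor locks onto the constant stride, so the remaining accesses are predicted correctly and could be fetched a cycle early.
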
  • Recent publications by IBM authors

    Page(s): 565 - 578

    The information listed here is supplied by the Institute for Scientific Information and other outside sources. Reprints of the papers may be obtained by writing directly to the first author cited. Information on books may be obtained by writing to the publisher. Journals and books are listed alphabetically by title; papers are listed sequentially for each journal.

  • Recent IBM patents

    Page(s): 579 - 580

Aims & Scope

The IBM Journal of Research and Development is a peer-reviewed technical journal, published bimonthly, which features the work of authors in the science, technology and engineering of information systems.


Meet Our Editors

Editor-in-Chief
Clifford A. Pickover
IBM T. J. Watson Research Center