
Computer

Issue 4 • April 1997

  • Andrew Grove's Vision For The Internet

    Publication Year: 1997, Page(s): 14-15
    PDF (101 KB)

  • Students stumble onto Internet Explorer flaw

    Publication Year: 1997, Page(s): 18-20
    PDF (116 KB)

  • Tomorrow's Internet Is Here Today

    Publication Year: 1997, Page(s): 22-23
    Cited by: Papers (1) | Patents (3)
    PDF (192 KB)

  • Binary Version Could Bring VRML Into Mainstream

    Publication Year: 1997, Page(s): 25-27
    PDF (46 KB)

  • Understanding Fault Tolerance And Reliability

    Publication Year: 1997, Page(s): 45-50
    Cited by: Papers (16)
    PDF (519 KB)

  • TC Seeks To Manage Complexity In Computing

    Publication Year: 1997, Page(s): 91-92
    PDF (39 KB)

  • The Top-level Domain Name Controversy

    Publication Year: 1997, Page(s): 105-106
    PDF (40 KB)

  • Embedded Systems Software

    Publication Year: 1997, Page(s): 107
    PDF (26 KB)

  • Weighty Computer Science Reference

    Publication Year: 1997, Page(s): 108
    PDF (27 KB)

  • Sources of failure in the public switched telephone network

    Publication Year: 1997, Page(s): 31-36
    Cited by: Papers (43) | Patents (1)
    PDF (140 KB)

    What makes a distributed system reliable? A study of failures in the US public switched telephone network (PSTN) shows that human intervention is one key to this large system's reliability. Software is not the weak link in the PSTN's dependability. Extensive use of built-in self-test and recovery mechanisms in major system components (switches) contributes to software dependability and is a significant design feature of the PSTN. The network's high dependability indicates that the trade-off between dependability gains and the complexity introduced by built-in self-test and recovery mechanisms can be positive. Likewise, the trade-off between complex interactions and the loose coupling of system components has been positive, permitting quick human intervention in most system failures and resulting in an extremely reliable system.

  • Multicode: a truly multilingual approach to text encoding

    Publication Year: 1997, Page(s): 37-43
    Cited by: Papers (3) | Patents (4)
    PDF (108 KB)

    Unicode was designed to extend ASCII for encoding text in different languages, but it still has several important drawbacks. Multicode addresses many of Unicode's drawbacks and should appeal to programmers who work with text in a variety of languages. Its future, however, depends on the computer industry's acceptance. Multicode can represent Unicode files because it reserves a character set for Unicode. Converting Multicode to Unicode is also straightforward (although the reverse is not). Thus, both approaches can coexist: Multicode for programming ease and Unicode to support unified fonts.
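    The straightforward conversion direction mentioned in the abstract can be sketched as follows. This is a hypothetical illustration of the general idea (text stored as runs tagged with a character set, each run decoded into Unicode), not the actual Multicode format; the tags and byte values are invented for the demo.

```python
# Each run pairs a character-set tag with raw bytes in that encoding.
# (Illustrative stand-in for Multicode's per-character-set text runs.)
multicode_text = [
    ("latin-1", b"Caf\xe9 "),            # Western European run
    ("shift_jis", b"\x93\x8c\x8b\x9e"),  # Japanese run ("Tokyo")
]

def multicode_to_unicode(runs):
    """The easy direction: decode every tagged run into one Unicode string."""
    return "".join(raw.decode(charset) for charset, raw in runs)

print(multicode_to_unicode(multicode_text))  # Café 東京
```

    The reverse direction is harder for the reason the abstract hints at: a Unicode string carries no record of which character set each run originally came from.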

  • Model-integrated computing

    Publication Year: 1997, Page(s): 110-111
    Cited by: Papers (78) | Patents (2)
    PDF (52 KB)

    Computers now control many critical systems in our lives, from the brakes on our cars to the avionics control systems on planes. Such computers wed physical systems to software, tightly integrating the two and generating complex component interactions unknown in earlier systems. Thus, it is imperative that we construct software and its associated physical system so they can evolve together. The paper discusses one approach that accomplishes this, called model-integrated computing, which works by extending the scope and use of models. It starts by defining the computational processes that a system must perform and develops models that become the backbone for the development of computer-based systems. In this approach, integrated, multiple-view models capture information relevant to the system under design. The paper considers the Multigraph Architecture, a framework for model-integrated computing developed at Vanderbilt's Measurement and Computing Systems Laboratory.

  • Software-based replication for fault tolerance

    Publication Year: 1997, Page(s): 68-74
    Cited by: Papers (82) | Patents (1)
    PDF (1656 KB)

    Replication handled by software on off-the-shelf hardware costs less than replication using specialized hardware. Although replication is an intuitive concept, it requires sophisticated techniques for successful implementation. Group communication provides an adequate framework. We present a survey of the techniques developed since the mid-1980s to implement replicated services, emphasizing the relationship between replication techniques and group communication.
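    The connection between replication and group communication can be sketched minimally. This is a toy stand-in for the idea, not any specific system from the survey: if a group-communication layer delivers every request to all replicas in the same total order, deterministic replicas stay in identical states (active replication).

```python
class Group:
    """Toy totally-ordered broadcast: deliver to all members in one order."""
    def __init__(self):
        self.members = []

    def join(self, replica):
        self.members.append(replica)

    def broadcast(self, request):
        # Every member applies every request in the same sequence.
        return [m.apply(request) for m in self.members]

class CounterReplica:
    """A deterministic state machine: applies 'add N' requests."""
    def __init__(self):
        self.state = 0

    def apply(self, request):
        self.state += request
        return self.state

group = Group()
for _ in range(3):
    group.join(CounterReplica())

group.broadcast(5)
group.broadcast(2)
print([r.state for r in group.members])  # all replicas agree: [7, 7, 7]
```

    Real group-communication systems must also handle membership changes and failures, which is where the sophisticated techniques the abstract refers to come in.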

  • Toward systematic design of fault-tolerant systems

    Publication Year: 1997, Page(s): 51-58
    Cited by: Papers (49) | Patents (6)
    PDF (64 KB)

    After 30 years of study and practice in fault tolerance, high-confidence computing remains a costly privilege of a few critical applications. It is time to explore ways to deliver high-confidence computing to all users. The speed of computing will ultimately be limited by the laws of physics, but the demand for affordable high-confidence computing will continue as long as people use computers to enhance the quality of their lives. Eventually, one enterprising chip builder will deliver the first fault-tolerant microprocessor at a competitive price, and soon thereafter fault tolerance will be considered as indispensable to computers as immunity is to humans. The remaining manufacturers will follow suit or go the way of the dinosaurs. Once again, Darwin will be proven right.

  • Fault-tolerant, real-time communication in FDDI-based networks

    Publication Year: 1997, Page(s): 83-90
    Cited by: Papers (4)
    PDF (1260 KB)

    The first high-speed network to meet the Safenet standard's bandwidth requirements, the Fiber Distributed Data Interface (FDDI) needs help to meet Safenet's fault tolerance requirement. Researchers have proposed a number of FDDI-based network architecture designs for improving fault tolerance. An architecture called FBRN (FDDI-Based Reconfigurable Network) provides enhanced fault tolerance by using (a) multiple FDDI networks to connect hosts, and (b) efficient fault detection and network configuration algorithms. To provide fault-tolerant real-time communication with the FBRN architecture, users must manage network resources properly. We sought to accomplish this by using a fault-tolerant, real-time management mechanism with online and offline components. We focused on achieving high performance by designing efficient and effective online and offline management algorithms to work around multiple faults.

  • Piranha: a CORBA tool for high availability

    Publication Year: 1997, Page(s): 59-66
    Cited by: Papers (18) | Patents (6)
    PDF (1700 KB)

    Despite the most careful planning, system applications can be dogged by unexpected failures. Our firm is keenly aware of the need for availability. To help meet that need, I developed an experimental CORBA-based restart service and monitor called Piranha, which both monitors and manages distributed applications to help systems attain high availability. First, Piranha acts as a network monitor that reports failures through a graphical user interface. Second, Piranha acts as a manager: it automatically restarts failed CORBA objects, replicates stateful objects (objects that maintain an internal set of values) on the fly, migrates objects from one host to another, and enforces predefined replication degrees (numbers of copies) on groups of objects. As a backdrop to the discussion of Piranha's design and implementation, this article first examines the ways in which a CORBA ORB should support availability. I then explain how Piranha affords availability.
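    The restart-manager role described above can be sketched as a polling loop. This is an illustrative toy, not Piranha's actual CORBA implementation: the monitor discards dead replicas and restarts new ones until each registered service is back at its predefined replication degree.

```python
class Replica:
    """Stand-in for a monitored object; real systems would probe liveness."""
    def __init__(self):
        self.alive = True

    def is_alive(self):
        return self.alive

class RestartMonitor:
    def __init__(self):
        self.services = {}  # name -> (factory, degree, replicas)

    def register(self, name, factory, degree):
        self.services[name] = (factory, degree, [])

    def poll(self):
        """One monitoring pass: drop failed replicas, restart to the degree."""
        for name, (factory, degree, replicas) in self.services.items():
            alive = [r for r in replicas if r.is_alive()]
            while len(alive) < degree:   # enforce the replication degree
                alive.append(factory())  # restart a failed or missing replica
            self.services[name] = (factory, degree, alive)
        return {n: len(reps) for n, (_, _, reps) in self.services.items()}

monitor = RestartMonitor()
monitor.register("orders", Replica, 3)
print(monitor.poll())  # {'orders': 3}
monitor.services["orders"][2][0].alive = False  # simulate one failure
print(monitor.poll())  # restarted back to the degree: {'orders': 3}
```

    Replicating stateful objects, as Piranha does, additionally requires transferring the internal state to the new copy, which this sketch omits.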

  • Pitfalls and strategies in automated testing

    Publication Year: 1997, Page(s): 114-116
    Cited by: Papers (1) | Patents (2)
    PDF (64 KB)

    According to popular mythology, people with little programming experience can use GUI-level regression test tools to quickly and competently create extensive black box test suites that are easy to maintain. Though some efforts to use these tools have been successful, several have failed miserably. This was the focus of a two-day meeting at which 13 experienced testers discussed patterns of success and failure in GUI-based automation. The author integrates highlights of the Los Altos Workshop on Software Testing with his other testing experiences.

  • Software reuse: ostriches beware

    Publication Year: 1997, Page(s): 119-120
    PDF (48 KB)

    When it comes to software reuse, many software executives, like ostriches, are pushing their heads deeper into the sand. The author discusses some active reuse programs in several companies. The number of companies employing active reuse is high enough to establish that reuse on this scale is possible at the current state of the software art.

  • Fault injection techniques and tools

    Publication Year: 1997, Page(s): 75-82
    Cited by: Papers (174) | Patents (7)
    PDF (1280 KB)

    Fault injection is important for evaluating the dependability of computer systems. Researchers and engineers have created many novel methods to inject faults, which can be implemented in both hardware and software. The contrast between the hardware and software methods lies mainly in the fault injection points they can access, the cost, and the level of perturbation. Hardware methods can inject faults into chip pins and internal components, such as combinational circuits and registers that are not software-addressable. Software methods, on the other hand, are convenient for directly producing changes at the software-state level. Thus, we use hardware methods to evaluate low-level error detection and masking mechanisms, and software methods to test higher-level mechanisms. Software methods are less expensive, but they also incur a higher perturbation overhead because they execute software on the target system.
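    The software-implemented approach described above (directly producing changes at the software-state level) can be illustrated with a minimal sketch. This is an invented example, not a tool from the article: flip one bit of a program variable, then see whether a toy detection mechanism notices the corruption.

```python
import random

def inject_bit_flip(value, bit=None):
    """Return value with one chosen (or random) bit of its low 32 flipped."""
    if bit is None:
        bit = random.randrange(32)
    return value ^ (1 << bit)

def range_detector(value, low=0, high=1000):
    """Toy error-detection mechanism: a plausibility range check on state."""
    return low <= value <= high

state = 42
faulty = inject_bit_flip(state, bit=20)  # deterministic flip for the demo
print(faulty, range_detector(faulty))    # 1048618 False  (fault detected)
```

    A real campaign would repeat this over many injection points and tally how often each detection or masking mechanism catches the fault, which is how the dependability evaluations the abstract describes are built up.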

  • Comparing Internet search engines

    Publication Year: 1997, Page(s): 117-118
    Cited by: Papers (4)
    PDF (44 KB)

    Search engines are sophisticated utilities designed expressly to find information on the global Internet. An expensive combination of high-speed computer networks and specialized software, they are usually created by large corporations and occasionally by universities. They are freely available to anyone with Internet access, and there are no search restrictions. With more than 150 search engines available, choosing the right one (or ones) is important. As with most products, no single engine is best for all searches and all users all the time. After comparing 50 of the most popular and powerful engines, I narrowed the field down to the four I found most useful: Alta Vista, Deja News, Excite and Yahoo.


Aims & Scope

Computer, the flagship publication of the IEEE Computer Society, publishes highly acclaimed peer-reviewed articles written for and by professionals representing the full spectrum of computing technology from hardware to software and from current research to new applications.


Meet Our Editors

Editor-in-Chief
Sumi Helal
University of Florida
sumi.helal@gmail.com