Computer

Issue 8 • August 1997

  • Fighting Complexity in Computer Systems

    Page(s): 47 - 48

  • Computing Professionals Among IEEE Medal Winners

    Page(s): 73 - 75
    Freely Available from IEEE
  • Could LDAP be the next killer DAP?

    Page(s): 88 - 89

    About a year ago, Netscape Communications, with the support of more than 40 other companies, adopted the Lightweight Directory Access Protocol (LDAP). Notably absent from the list of early LDAP supporters, Microsoft came on board eight months later. LDAP lets a program such as a browser or an e-mail package perform directory lookups across a wide variety of directories, even if they run on different operating systems and directory environments. For example, LDAP could search an internal directory stored on a corporate server or a publicly available directory. In some ways, LDAP itself is not the critical element; a protocol alone is not enough. What is also needed is a useful set of directory servers that support LDAP and client software that conveniently queries those servers using LDAP. As it turns out, both now exist. In Netscape Navigator 4.0, simply go to Edit | Search Directory, select the proper directory service, type in a person's name, press Search, and voila: you get a short list of the people who might match the name in question. From that interface, you are two clicks away from sending an e-mail message to that person. Now that is a killer DAP; it is immediately obvious how useful this feature will be. (An illustrative query sketch appears after this article list.)

  • Reuse factors in embedded systems design

    Page(s): 93 - 95

    At first glance, embedded systems appear to be poor candidates for incorporating reuse. Many embedded systems actively or passively prevent loss of life, so they usually must meet stringent specifications for safety, reliability, and real-time operation. Given these requirements, the obvious solution is to redesign hardware and software components for each application. At Siemens, however, designers have found that embedded system requirements are not incompatible with reusing hardware and software components, provided engineers consider certain factors from the beginning of the design.

  • Applying software product-line architecture

    Page(s): 49 - 55

    Many organizations today are investing in software product-line architecture, and for good reason: a well-executed architecture enables organizations to respond quickly to a redefined mission or to new and changing markets. It allows them to accelerate the introduction of new products and improve their quality, to reengineer legacy systems, and to manage and enhance the many product variations needed for international markets. However, technically excellent product-line architectures do fail, often because they are not effectively used. Some are developed but never used; others lose value as product teams stop sharing the common architecture; still others achieve initial success but fail to keep up with a rapidly growing product mix. Sometimes the architecture's deterioration is not noticed at first, masked by what appears to be a productivity increase. To learn what factors determine the effective use of software architecture, the authors looked at Nortel (Northern Telecom), a company with nearly 20 years of experience developing complex software architectures for telecommunications product families. They identified six principles that help reduce the complexity of an evolving family of products and that support and maintain the effective use and integrity of the architecture.

  • Making the PC easier to use


    To turn PCs into home appliances that will be accepted by most consumers, vendors believe they have to make the computers as easy to use as other home appliances. Currently, say PC industry vendors, the machines take too long to turn on and off, and it is too much work to add peripherals and connect the computers to other types of electronic devices.

  • Object structures for real-time systems and simulators

    Page(s): 62 - 70

    The market for real-time applications has grown considerably in recent years, and in response engineering methods have also improved. Today's techniques, while adequate for building moderately complex embedded applications, are inadequate for building the large, highly reliable, very complex real-time applications that are increasingly in demand. To build such large systems, engineering teams need a more uniform, integrated approach than is available today. Ideally, the development approach would provide uniform representations of both application environments and control systems as they proceed through the various systems engineering phases. The ideal representation (or modeling) scheme should be effective not only for abstracting system designs but also for representing the application environment. It should also be capable of manipulating logical values and temporal characteristics at varying degrees of accuracy. This ideal modeling scheme is not likely to be realized through conventional object models: although they are natural building blocks for modular systems, conventional object models lack concrete mechanisms to represent the temporal behavior of complex, dynamic systems. This article describes a real-time object structure that can flexibly yet accurately specify the temporal behavior of modeled subjects. The approach supports strong requirements-design traceability, the feasibility of thorough and cost-effective validation, and ease of maintenance. (A toy sketch of timing-annotated objects appears after this article list.)

  • Voice-based interfaces make PCs better listeners

    Page(s): 14 - 16

    Voice-based interfaces let users vocally input commands or data into PCs. Researchers have recently developed interfaces that let users input material with normal, rather than unnaturally slow and careful, speech.

  • Good enough quality: beyond the buzzword

    Page(s): 96 - 98

    The big new force propelling the good-enough idea is the explosion of market-driven software. With a passion roughly proportional to the price of Microsoft stock, companies are looking for the shortest path to better software, faster and cheaper. They are willing to take risks, and they have little patience for the traditional moralistic arguments in favor of so-called good practices. Much of the traditional lore of software project management seems irrelevant or stilted when applied to market-driven projects. It's time we developed approaches and methodologies that apply to the whole craft, not just to space missions, medical devices, or academic experiments. Good enough is a model that encompasses high-reliability products as well as high-entertainment products. Whether you call the idea good enough or choose another buzzword like economical, pragmatic, or utilitarian, the basic idea remains the same: our behavior should be guided by reason, not compulsion. Beyond the notion of best practices is a more fundamental idea: best thinking. As the good-enough idea continues to emerge, the quality of one's thinking, rather than conformance to formalities, will become the issue. Formalities, and the authority behind them, will be reexamined. No wonder so many authorities consider good enough to be a dangerous idea.

  • The allegory of the humidifier: ROI for systems engineering

    Page(s): 104, 102 - 103

    Convincing management of the importance of systems engineering (making time available in the project schedule for systems analysis, generating specifications that define customer needs, and so on) has always made sense to systems engineers from a risk-reduction and cost-avoidance perspective. But as schedule crunches, today's fire drills, and project cost reductions take hold, systems engineering often takes a back seat to getting the hardware out the door. The company in this allegory could not afford the time and money up front to engineer a valid approach. It thus made an initial decision not to invest in a solution that, in the long run, would have avoided an additional 10X to 50X in costs. Repetitive and ineffective bandage solutions, slipped program schedules, and unknown amounts of lost productivity cost real money, too.

  • 10 potholes in the road to information quality

    Page(s): 38 - 46

    Poor information quality can create chaos. Unless its root cause is diagnosed, efforts to address it are akin to patching potholes. The article describes ten key causes, their warning signs, and typical patches. With this knowledge, organizations can identify and address these problems before they have financial and legal consequences.

  • Using genetic algorithms to design mesh networks

    Page(s): 56 - 61

    Designs for mesh communication networks must meet conflicting, interdependent requirements. This sets the stage for a complex problem whose solution targets optimal topological connections, routing, and link capacity assignments. These assignments must minimize cost while satisfying traffic requirements and keeping network delays within permissible values. Since such a problem is NP-complete, developers must use heuristic techniques to handle the complexity and solve practical problems with a modest number of nodes. One heuristic technique, the genetic algorithm, appears ideally suited to mesh network design because it can handle discrete values, multiobjective functions, and multiconstraint problems. Existing applications of genetic algorithms to this problem, however, have optimized only the network topology. They ignore the difficult subproblems of routing and capacity assignment, crucial determiners of network quality and cost. This article presents a total solution to mesh network design using a genetic algorithm approach. The application is a 10-city network that links Hong Kong and nine other cities in China. The development demonstrates that this method can be used for networks of reasonable size with realistic topology and traffic requirements. (A toy topology-only sketch appears after this article list.)

  • Internetworked graphics and the Web

    Page(s): 99 - 101

    Although the networking and computer graphics fields are considered to be distinct disciplines, they must begin to converge in order to support collaborative exploration and information visualization on the Internet and the World Wide Web. Telecommunication breakthroughs remove bottlenecks and provide new opportunities for interactive 3D graphics across globally interconnected, dissimilar networks. Multicast backbone tools, developed in the networking arena, provide desktop videoconferencing tools for sharing information visualization and virtual reality explorations. The Virtual Reality Modeling Language (VRML), developed in the computer graphics arena, supports the 3D display and fly-through of networked computing resources on the Internet. The computer graphics community considers VRML to be an interactive tool for exploring content on the Web. The telecommunications community calls it an application on the networking infrastructure. The authors define the concept of internetworked graphics to describe the future merger and dependencies of computer graphics applications and the telecommunications networking infrastructure.

  • Reengineering with reflexion models: a case study

    Page(s): 29 - 36

    Reengineering large and complex software systems is often very costly. The article presents a reverse engineering technique and relates how a Microsoft engineer used it to aid an experimental reengineering of Excel, a product that comprises about 1.2 million lines of C code. The reflexion technique is designed to be lightweight and iterative. To use it, the engineer first defines a high-level structural model, then extracts a model from the source code and maps it to the high-level model, and finally uses a set of computation tools to compare the two models. The approach lets software engineers effectively validate their high-level reasoning with information from the source code. The engineer in this case study, a developer with 10-plus years at Microsoft, specified and computed an initial reflexion model of Excel in a day and then spent four weeks iteratively refining it. He estimated that gaining the same degree of familiarity with the Excel source code might have taken up to two years with other available approaches. On the basis of this experience, the authors believe that the reflexion technique has practical applications. (A minimal sketch of the comparison step appears after this article list.)

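Illustrative Sketches

The sketches below illustrate techniques described in the articles above. They are not taken from the articles themselves; all host names, identifiers, and data in them are hypothetical. First, the LDAP column describes a client looking up a person's name and e-mail address across heterogeneous directories. A minimal sketch of such a lookup, assuming the third-party Python ldap3 package and a made-up server and search base:

```python
# Illustrative LDAP lookup in the spirit of the "killer DAP" column.
# Assumes the third-party ldap3 package (pip install ldap3); the host,
# search base, and person searched for are hypothetical examples.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, auto_bind=True)  # anonymous bind, as many public directories allow

# Look up a person by common name and retrieve their e-mail address.
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(cn=Jane Doe)",
    attributes=["cn", "mail"],
)

for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()
```

This is the programmatic equivalent of Navigator's Edit | Search Directory dialog: the client speaks LDAP, so the same few lines work against any directory server that supports the protocol.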
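
The real-time object-structures article argues that conventional object models lack concrete mechanisms for expressing temporal behavior. One minimal way to sketch the general idea in Python (this is a generic illustration, not the authors' specific object structure): attach an explicit deadline specification to a method and check it at run time.

```python
# Sketch of an object model whose methods carry explicit timing
# specifications, in the spirit of (but not identical to) the real-time
# object structure described above. All names here are hypothetical.
import functools
import time

def deadline(seconds):
    """Decorator attaching a temporal specification to a method and
    reporting any overrun when the method runs."""
    def wrap(func):
        @functools.wraps(func)
        def timed(*args, **kwargs):
            start = time.monotonic()
            result = func(*args, **kwargs)
            elapsed = time.monotonic() - start
            if elapsed > seconds:
                print(f"{func.__name__}: {seconds}s deadline missed "
                      f"(took {elapsed:.4f}s)")
            return result
        timed.deadline = seconds  # the spec is queryable, not just prose
        return timed
    return wrap

class SensorController:
    @deadline(0.005)              # sample() must finish within 5 ms
    def sample(self):
        return 42                 # stand-in for reading real hardware

ctrl = SensorController()
print(ctrl.sample())              # prints 42; warns if the deadline is missed
print(ctrl.sample.deadline)       # the temporal spec travels with the method
```

Because the timing requirement is part of the object's interface rather than a comment, it can be traced from requirements through design and checked during validation, which is the traceability property the article emphasizes.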
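
The genetic-algorithm article encodes mesh design decisions as chromosomes and evolves them under cost and delay constraints. The toy sketch below evolves only the topology bit string, with a made-up link-cost table and a simple connectivity penalty standing in for the article's traffic and delay constraints.

```python
# Toy genetic algorithm for mesh topology design, sketching the approach
# described above. A real design would also encode routing and capacity;
# the cost matrix and GA parameters here are made up for illustration.
import random

N = 6                                              # number of nodes
LINKS = [(i, j) for i in range(N) for j in range(i + 1, N)]
random.seed(1)
COST = {l: random.randint(1, 10) for l in LINKS}   # hypothetical link costs

def connected(bits):
    """True if the chosen links form a connected graph over all N nodes."""
    adj = {i: set() for i in range(N)}
    for bit, (i, j) in zip(bits, LINKS):
        if bit:
            adj[i].add(j)
            adj[j].add(i)
    seen, stack = {0}, [0]
    while stack:
        for k in adj[stack.pop()]:
            if k not in seen:
                seen.add(k)
                stack.append(k)
    return len(seen) == N

def fitness(bits):
    """Lower is better: total link cost, heavily penalized if disconnected."""
    cost = sum(COST[l] for bit, l in zip(bits, LINKS) if bit)
    return cost + (0 if connected(bits) else 1000)

def evolve(pop_size=40, generations=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in LINKS] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection, one-point crossover, bit-flip mutation.
            a, b = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, len(LINKS))
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
print("cost:", fitness(best))
print("links:", [l for b, l in zip(best, LINKS) if b])
```

The article's point is that stopping here is not enough: a total solution must fold routing and capacity assignment into the chromosome and the fitness function as well.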
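
The reflexion-model case study compares an engineer's posited high-level dependencies against dependencies extracted from the source. The sketch below shows just the comparison step, with hand-made module maps and file-level dependencies standing in for real extraction tooling; the three reported sets correspond to the technique's convergences, divergences, and absences.

```python
# Minimal sketch of a reflexion-model computation in the spirit of the
# case study above. All module names, files, and dependencies are made up.

# High-level model: which modules the engineer believes depend on which.
posited = {("UI", "Engine"), ("Engine", "Storage")}

# Map from source files to high-level modules (normally derived from paths).
module_of = {
    "sheet.c": "UI", "menu.c": "UI",
    "eval.c": "Engine", "file.c": "Storage",
}

# File-level dependencies (normally produced by a static analysis tool).
extracted_calls = {("sheet.c", "eval.c"), ("eval.c", "file.c"),
                   ("menu.c", "file.c")}

# Lift file-level dependencies to the module level via the map.
lifted = {(module_of[a], module_of[b]) for a, b in extracted_calls
          if module_of[a] != module_of[b]}

print("convergences:", posited & lifted)   # posited and found in source
print("divergences: ", lifted - posited)   # found in source, not posited
print("absences:    ", posited - lifted)   # posited, never found in source
```

Here the divergence (UI, Storage) is the interesting output: the code bypasses the engine in a way the high-level model never anticipated, which is exactly the kind of finding the Excel engineer iterated on for four weeks.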

Aims & Scope

Computer, the flagship publication of the IEEE Computer Society, publishes highly acclaimed peer-reviewed articles written for and by professionals representing the full spectrum of computing technology from hardware to software and from current research to new applications.


Meet Our Editors

Editor-in-Chief
Ron Vetter
University of North Carolina Wilmington