
IEEE Internet Computing

Issue 5 • Sep/Oct 2001

Displaying Results 1 - 13 of 13
  • The evolving field of distributed storage

    Publication Year: 2001 , Page(s): 35 - 39
    Cited by:  Papers (10)

    The ongoing evolution of storage and network technologies has supported the rapid growth in the field of distributed storage over the past few years, but a widely felt demand for more and better storage is a significant driving force behind this growth. The demand arises from an apparent de facto economic and cultural mandate to store and archive as many bits as possible. This trend poses interesting new challenges as storage systems are asked to store not just bits, but also their semantics; for what use is an image that can't be seen because its format has been forgotten, or a program that can no longer be executed because the machine that ran it no longer exists?

  • Congestion pricing: paying your way in communication networks

    Publication Year: 2001 , Page(s): 85 - 89
    Cited by:  Papers (17)  |  Patents (2)

    Network congestion is a fundamental problem facing Internet users today. A network where users are selfish, and thus reluctant to defer to other users, may result in the famous "tragedy of the commons", where, in the absence of controls, a shared resource is overconsumed by individuals who consider only their personal costs and not the cost to society as a whole. In terms of the Internet, the "tragedy" could be viewed as congestive collapse, resulting from overconsumption of the shared network resource. It is important to distinguish congestion pricing from other forms of network pricing. Charging network users for the congestion they cause can lead to more efficient network utilization by forcing them to take social costs into account. In a congestion-pricing framework, the congestion charge would replace usage and QoS charges. Users would pay their ISPs a subscription charge to cover fixed costs and a congestion charge only when appropriate. This pricing scheme is feasible because, in the absence of congestion, the marginal cost of a network link is practically zero. Congestion pricing can also benefit network operators. By indicating the level of congestion and the user tolerance of it in their networks, congestion pricing can inform operators about when to re-provision and increase network capacity.
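
    As a rough illustration, a user's bill under such a scheme might be computed as below; this is a sketch with invented names and prices, not a formula from the article.

    # Hypothetical illustration of the pricing model described above: a flat
    # subscription covers fixed costs, and a congestion charge applies only
    # to traffic sent while the shared link is congested. All numbers invented.

    SUBSCRIPTION_FEE = 20.00   # flat monthly charge covering fixed costs
    CONGESTION_RATE = 0.05     # price per MB sent while the link is congested

    def monthly_bill(usage_log, capacity_mb):
        """usage_log: list of (mb_sent, total_link_load_mb) samples."""
        congestion_charge = 0.0
        for mb_sent, link_load in usage_log:
            if link_load > capacity_mb:   # link congested: marginal cost > 0
                congestion_charge += mb_sent * CONGESTION_RATE
            # uncongested traffic is free at the margin: no usage charge
        return SUBSCRIPTION_FEE + congestion_charge

    # Two users with equal volume; only traffic sent during congestion costs more.
    print(monthly_bill([(100, 900), (100, 700)], capacity_mb=800))  # 25.0
    print(monthly_bill([(100, 700), (100, 600)], capacity_mb=800))  # 20.0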

  • Maintenance-free global data storage

    Publication Year: 2001 , Page(s): 40 - 49
    Cited by:  Papers (76)  |  Patents (25)

    Explores mechanisms for storage-level management in OceanStore, a global-scale distributed storage utility infrastructure, designed to scale to billions of users and exabytes of data. OceanStore automatically recovers from server and network failures, incorporates new resources and adjusts to usage patterns. It provides its storage platform through adaptation, fault tolerance and repair. The only role of human administrators in the system is to physically attach or remove server hardware. Of course, an open question is how to scale a research prototype in such a way as to demonstrate the basic thesis of this article: that OceanStore is self-maintaining. The allure of connecting millions or billions of components together is the hope that aggregate systems can provide scalability and predictable behavior under a wide variety of failures. The OceanStore architecture is a step towards this goal.
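
    The abstract does not spell out OceanStore's repair algorithms; the toy loop below only sketches the self-maintaining idea it describes, with invented names: detect replicas lost to failed servers and re-create them on live ones.

    # Toy sketch (not OceanStore's actual mechanism): a repair loop that
    # notices lost replicas and restores the replication level automatically.
    import random

    REPLICATION_FACTOR = 3

    def repair(objects, live_servers):
        """objects: dict mapping object id -> set of servers holding a replica."""
        for oid, holders in objects.items():
            holders &= live_servers                   # drop replicas on dead servers
            while len(holders) < REPLICATION_FACTOR:  # restore the replication level
                candidates = live_servers - holders
                if not candidates:
                    break                             # too few servers left to repair
                holders.add(random.choice(sorted(candidates)))
        return objects

    servers = {"s1", "s2", "s3", "s4", "s5"}
    objects = {"doc": {"s1", "s2", "s3"}}
    servers.discard("s2")                             # a server fails...
    print(repair(objects, servers))                   # ...and the loop re-replicates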

  • Services and situations

    Publication Year: 2001 , Page(s): 4 - 5
    Cited by:  Patents (1)

    Ask five people what a service is and you will get five answers. If there is agreement, it is just at a dictionary level: that a service is some capability that is provided and exploited. The different answers offer a litmus test, however, for judging the roles played in constructing distributed systems. The simple service architecture plays host to an interesting dilemma. On one side, having the subscriber filter out undesirable results can noticeably affect the performance of the composed system over some infrastructures. On the other side, the best constraints are situational, that is, dependent on the user's situation as inferred by the subscriber. Current provider-centric approaches to services ignore the user's situation altogether; situational information is usually not available to the service. Services can use profiles to accommodate users' interests, but not their changing situations. The usual service architecture makes the subscriber responsible for managing the situation. The subscriber should be able to probe the service in various ways to obtain the best utility from it, but without compromising privacy.
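
    The dilemma can be made concrete with a small sketch (all names invented): the service returns unfiltered results because it never sees the user's situation, and the subscriber filters them after the fact, paying to transfer results it will only discard.

    # Hypothetical sketch: a provider-centric service knows the query, not the
    # situation; the subscriber applies situational constraints afterwards.

    def service_search(query):
        return [
            {"title": "Cafe A", "distance_km": 0.4, "open": True},
            {"title": "Cafe B", "distance_km": 5.0, "open": True},
            {"title": "Cafe C", "distance_km": 0.2, "open": False},
        ]

    def subscriber_filter(results, situation):
        # Filtering out undesirable results client-side costs bandwidth and time.
        return [r for r in results
                if r["open"] and r["distance_km"] <= situation["max_km"]]

    situation = {"max_km": 1.0}   # inferred by the subscriber, unseen by the service
    print(subscriber_filter(service_search("coffee"), situation))  # only Cafe A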

  • The Raincore API for clusters of networking elements

    Publication Year: 2001 , Page(s): 70 - 76
    Cited by:  Papers (1)  |  Patents (1)

    Clustering technology offers a way to increase overall reliability and performance of Internet information flow by strengthening one link in the chain without adding others. We have implemented this technology in a distributed computing architecture for network elements. The architecture, called Raincore, originated in the Reliable Array of Independent Nodes, or RAIN, research collaboration between the California Institute of Technology and the US National Aeronautics and Space Administration's Jet Propulsion Laboratory. The RAIN project focused on developing high-performance, fault-tolerant, portable clustering technology for spaceborne computing. The technology that emerged from this project became the basis for a spinoff company, Rainfinity, which has the exclusive intellectual property rights to the RAIN technology. We describe the Raincore conceptual architecture and distributed services, which are designed to make it easy for developers to port their applications to run on top of a cluster of networking elements. We include two applications: a Web server prototype that was part of the original RAIN research project and a commercial firewall cluster product from Rainfinity.
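
    The abstract names the Raincore API without reproducing it, so the sketch below is purely hypothetical: it imagines the kind of group-membership interface a clustered networking element, such as a firewall node, might program against.

    # Hypothetical sketch only; all names are invented, not Raincore's API.
    class ClusterNode:
        def __init__(self, node_id, members):
            self.node_id = node_id
            self.members = set(members)    # this node's group-membership view

        def on_failure(self, failed_id):
            # The membership service reports a dead peer; survivors update
            # their views and absorb the failed node's share of the work.
            self.members.discard(failed_id)
            print(f"{self.node_id}: view is now {sorted(self.members)}")

        def owns(self, flow_hash):
            # Deterministic partitioning: every surviving member agrees on
            # who handles a given connection, with no central dispatcher.
            ring = sorted(self.members)
            return ring[flow_hash % len(ring)] == self.node_id

    node = ClusterNode("fw1", ["fw1", "fw2", "fw3"])
    print(node.owns(0x2a17))   # is this node responsible for the flow?
    node.on_failure("fw2")     # survivors take over fw2's connections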

  • Consensus ontologies: reconciling the semantics of Web pages and agents

    Publication Year: 2001 , Page(s): 92 - 95
    Cited by:  Papers (14)  |  Patents (3)

    As you build a Web site, it is worthwhile asking, "Should I put my information where it belongs or where people are most likely to look for it?" Our recent research into improving searching through ontologies is providing some interesting results to answer this question. The techniques developed by our research bring organization to the information received and reconcile the semantics of each document. Our goal is to help users retrieve dynamically generated information that is tailored to their individual needs and preferences. We believe that it is easier for individuals or small groups to develop their own ontologies, regardless of whether global ones are available, and that these can be automatically related after the fact. We are working to determine the efficacy of local annotation for Web sources, as well as performing reconciliation that is qualified by measures of semantic distance. If successful, this research will enable software agents to resolve the semantic misconceptions that inhibit successful interoperation with other agents and that limit the effectiveness of searching distributed information sources.
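
    One simple ingredient of such reconciliation, sketched below with an invented hierarchy (this is not the authors' algorithm), is a semantic-distance measure: concepts that meet at a nearby common ancestor are better merge candidates than ones related only through the root.

    # Illustrative sketch: semantic distance as path length through the
    # lowest common ancestor of two concepts in a tiny is-a hierarchy.
    parents = {
        "sedan": "car", "hatchback": "car",
        "car": "vehicle", "truck": "vehicle",
        "vehicle": "thing", "boat": "thing",
    }

    def ancestors(concept):
        chain = [concept]
        while concept in parents:
            concept = parents[concept]
            chain.append(concept)
        return chain

    def semantic_distance(a, b):
        chain_a, chain_b = ancestors(a), ancestors(b)
        for depth, node in enumerate(chain_a):
            if node in chain_b:              # lowest common ancestor found
                return depth + chain_b.index(node)
        return float("inf")                  # no shared ancestor at all

    print(semantic_distance("sedan", "hatchback"))  # 2: both are cars
    print(semantic_distance("sedan", "boat"))       # 4: related only via "thing"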

  • Managing scientific metadata

    Publication Year: 2001 , Page(s): 59 - 68
    Cited by:  Papers (36)  |  Patents (3)

    Metacat is a network-enabled database framework that lets users store, query, and retrieve XML documents with arbitrary schemas in SQL-compliant relational database systems. The system (available from the Knowledge Network for Biocomplexity, http://knb.ecoinformatics.org/) incorporates RDF-like methods for packaging data sets to allow researchers to customize and revise their metadata. It is extensible and flexible enough to preserve utility and interpretability when working with future content standards. Metacat solves several key challenges that impede data confederation efforts in ecological research, or any field in which independent agencies collect heterogeneous data that they wish to control locally while enabling networked access. This distributed solution integrates with existing site infrastructures because it works with any SQL-compliant database system. The framework's open-source components are widely available, and individual sites can extend and customize the system to support their data and metadata needs.
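
    The abstract does not give Metacat's actual schema; one common way to hold arbitrary XML in a relational system, sketched below, is to shred each document into path/value rows that ordinary SQL can then query.

    # Sketch of generic XML "shredding" into an SQL table (not Metacat's schema).
    import sqlite3
    import xml.etree.ElementTree as ET

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE nodes (docid TEXT, path TEXT, value TEXT)")

    def shred(docid, element, prefix=""):
        path = f"{prefix}/{element.tag}"
        if element.text and element.text.strip():
            db.execute("INSERT INTO nodes VALUES (?, ?, ?)",
                       (docid, path, element.text.strip()))
        for child in element:
            shred(docid, child, path)

    doc = ET.fromstring(
        "<dataset><title>Kelp survey</title>"
        "<creator><surname>Smith</surname></creator></dataset>")
    shred("knb.42.1", doc)

    # Any path is now queryable without a document-specific schema:
    rows = db.execute("SELECT docid, value FROM nodes WHERE path LIKE '%/title'")
    print(rows.fetchall())   # [('knb.42.1', 'Kelp survey')]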

  • Managing data storage in the network

    Publication Year: 2001 , Page(s): 50 - 58
    Cited by:  Papers (13)

    The Internet backplane protocol, or IBP, supports logistical networking to allow applications to control the movement and storage of data between nodes. The protocol's name reflects its purpose: to enable applications to treat the Internet as if it were a processor backplane. IBP provides access to remote storage and standard Internet resources and directs communication between them with the IBP API. In short, the motivation behind IBP is to design, develop, implement, and deploy a layer of middleware that allows storage to be exploited as part of the Internet. IBP alpha versions have been in use since February 1999, and version 1.0 has been available since March 2001. The article describes the IBP API and presents some examples that show its strategic potential for builders of distributed applications. We also discuss a layered approach to functionality and deployment that uses IBP as a basic service and builds more useful services on top of it.
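
    The abstract names the IBP API without listing its calls, so the sketch below only mimics the allocate/store/load pattern it describes, with invented names and an in-process stand-in for a remote depot.

    # Hypothetical sketch, not the real IBP API: lease storage in the network,
    # write data into the allocation, and read it back by capability.
    class Depot:
        def __init__(self):
            self._store, self._next = {}, 0

        def allocate(self, size, lifetime_s):
            # Returns a capability naming the allocation, like a claim ticket.
            self._next += 1
            cap = f"cap-{self._next}"
            self._store[cap] = bytearray(size)
            return cap

        def store(self, cap, offset, data):
            self._store[cap][offset:offset + len(data)] = data

        def load(self, cap, offset, length):
            return bytes(self._store[cap][offset:offset + length])

    depot = Depot()
    cap = depot.allocate(size=1024, lifetime_s=3600)   # lease network storage
    depot.store(cap, 0, b"staged near the consumer")   # move data to the allocation
    print(depot.load(cap, 0, 6))                       # b'staged'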

  • Site-based approach to Web cache design

    Publication Year: 2001 , Page(s): 28 - 34
    Cited by:  Papers (3)

    A site-based approach to Web caching tracks documents by site rather than individual document names or URLs, bringing different benefits to several different types of applications. One problem, however, is that while maintaining only site information is sufficient in many cases, it is sometimes necessary to track individual documents. For example, although site-based least-recently-used (LRU) purging evicts a whole site when a cache replacement is required, the system might still need information on individual cached documents. Currently, we are attempting to modify a real proxy server to use the site-based approach.
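
    A minimal sketch of site-based LRU purging (invented code, but the policy follows the description above): recency is tracked per site, and a replacement evicts every cached document from the least recently used site at once.

    from collections import OrderedDict
    from urllib.parse import urlparse

    class SiteBasedCache:
        def __init__(self, max_docs):
            self.max_docs = max_docs
            self.sites = OrderedDict()    # site -> {url: document}
            self.count = 0

        def put(self, url, document):
            site = urlparse(url).netloc
            bucket = self.sites.setdefault(site, {})
            if url not in bucket:
                self.count += 1
            bucket[url] = document
            self.sites.move_to_end(site)  # touching any document touches its site
            while self.count > self.max_docs:
                _, docs = self.sites.popitem(last=False)  # purge whole LRU site
                self.count -= len(docs)

    cache = SiteBasedCache(max_docs=3)
    cache.put("http://a.example/1", "...")
    cache.put("http://a.example/2", "...")
    cache.put("http://b.example/1", "...")
    cache.put("http://b.example/2", "...")  # over capacity: all of a.example goes
    print(list(cache.sites))                # ['b.example']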

  • Scalable human-friendly resource names

    Publication Year: 2001 , Page(s): 20 - 27
    Cited by:  Papers (5)

    To fill the gap between what uniform resource names (URNs) provide and what humans need, we propose a new kind of uniform resource identifier (URI) called human-friendly names (HFNs). In this article, we present the design for a scalable HFN-to-URL (uniform resource locator) resolution mechanism that makes use of the Domain Name System (DNS) and the Globe location service to name and locate resources. This new URI scheme aims to improve both scalability and usability in naming replicated resources on the Web.
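
    The two-step lookup can be sketched with invented data (the real records and protocols differ): DNS maps the human-friendly name to a stable, location-independent object handle, and the Globe location service maps that handle to the URLs of current replicas.

    DNS = {  # stand-in for HFN records stored in the Domain Name System
        "annual-report.example.org": "handle:0x5f3a",
    }
    LOCATION_SERVICE = {  # stand-in for the Globe location service
        "handle:0x5f3a": ["http://mirror-eu.example.org/report.pdf",
                          "http://mirror-us.example.org/report.pdf"],
    }

    def resolve_hfn(hfn, region="eu"):
        handle = DNS[hfn]                    # step 1: name -> stable handle
        replicas = LOCATION_SERVICE[handle]  # step 2: handle -> replica URLs
        # Prefer a nearby replica; the handle stays valid when replicas move,
        # so the human-friendly name never has to change.
        nearby = [u for u in replicas if region in u]
        return (nearby or replicas)[0]

    print(resolve_hfn("annual-report.example.org"))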

  • SMIL 2.0: XML for Web multimedia

    Publication Year: 2001 , Page(s): 78 - 84
    Cited by:  Papers (9)  |  Patents (12)

    On 7 August 2001, the World Wide Web Consortium (W3C) released version 2.0 of Synchronized Multimedia Integration Language, or SMIL. Three years ago, SMIL 1.0 introduced a basic foundation for Web multimedia and it quickly gained widespread use. With a specification document about 15 times as large as version 1.0's, SMIL 2.0 builds on this foundation and marks an enormous step forward in multimedia functionality. Although Web multimedia has long been obtainable with proprietary formats or Java programs, it's been largely inaccessible to most Web authors and isolated from the Web's technical framework. SMIL's HTML-like syntax aims to do for multimedia what HTML did for hypertext: bring it into every living room, with an easy-to-author descriptive format that works with readily available cross-platform players. SMIL lets authors create simple multimedia simply and add more complex behavior incrementally. But SMIL isn't just HTML-like; it's XML, which makes it part of the W3C's family of XML-related standards including scalable vector graphics (SVG), cascading style sheets (CSS), XPointer, XSLT, namespaces, and XHTML. SMIL's features fall into five categories: media content, layout, timing, linking, and adaptivity. The last of these brings altogether new features to the Web, letting authors adapt content to different market groups, user abilities, system configurations, and runtime system delays. The article covers each feature category and its basic constructs using a simple SMIL presentation built with the SMIL 2.0 Language Profile, which is the flagship SMIL-defined language for multimedia browsers.
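
    A flavor of that declarative style, generated with Python's ElementTree so the example stays runnable (the element and attribute names are SMIL's; the media URLs are invented): a <par> plays narration alongside a <seq> of timed slides.

    import xml.etree.ElementTree as ET

    smil = ET.Element("smil")
    body = ET.SubElement(smil, "body")
    par = ET.SubElement(body, "par")          # children play in parallel
    ET.SubElement(par, "audio", src="narration.rm")
    seq = ET.SubElement(par, "seq")           # children play one after another
    ET.SubElement(seq, "img", src="slide1.png", dur="5s")
    ET.SubElement(seq, "img", src="slide2.png", dur="5s")

    print(ET.tostring(smil, encoding="unicode"))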

  • What can you do with Traceroute?

    Publication Year: 2001
    Cited by:  Papers (4)

    Traceroute has been a staple of network administration since the mid-1980s. This well-known utility traces outgoing paths toward network destinations by sending packets with progressively longer time-to-live (TTL) fields and recording their deaths. When a packet dies, most routers return a notice using one of their interface addresses. Traceroute records the addresses, which we can identify using the Domain Name System (DNS). Traceroute is an interactive tool that is not suitable for Unix-style programming with pipes and filters. We have embedded the program's functions in a filter, which gives us great flexibility in network mapping and other network explorations.
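
    The probing loop itself is short; a minimal sketch follows (it uses raw sockets, so it typically needs root privileges, and real traceroutes send several probes per hop).

    import socket

    def traceroute(dest, max_hops=30, port=33434):
        dest_ip = socket.gethostbyname(dest)
        for ttl in range(1, max_hops + 1):
            recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                 socket.getprotobyname("icmp"))
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            recv.settimeout(2.0)
            recv.bind(("", port))
            send.sendto(b"", (dest_ip, port))      # probe dies after ttl hops
            try:
                _, addr = recv.recvfrom(512)       # ICMP time-exceeded notice
                name = socket.getfqdn(addr[0])     # identify the router via DNS
                print(f"{ttl:2d}  {addr[0]}  ({name})")
                if addr[0] == dest_ip:
                    break
            except socket.timeout:
                print(f"{ttl:2d}  *")
            finally:
                send.close()
                recv.close()

    # traceroute("example.com")   # requires privileges for the raw ICMP socket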

  • Customer service [in e-business]

    Publication Year: 2001 , Page(s): 90 - 91
    Cited by:  Papers (1)

    When you're in business, it's good to have customers, but do you have customer service in mind when you're developing technology for an e-business Web site? If not, you should, because the place where your work and the customers' experience come together is where you can make it easy, or hard, for customers to do business at a site. If you can understand customer intentions at an e-business site, you can factor them into technology choices and mechanisms that support them. Is it easy for a single-minded customer to find and buy a product, or for a holistic-minded user to do a combination of browsing, learning, and shopping? While the marketing people decide what goes on a site and the content developers create the look and feel, the front-row seat for data mining belongs to the technical staff, who know what information is available in log files, what profiling can be processed dynamically in the background and fed into the dynamic generation of HTML, and what performance can be expected from the servers and network to support customer service and make e-business interaction productive.


Aims & Scope

IEEE Internet Computing provides journal-quality evaluation and review of emerging and maturing Internet technologies and applications.


Meet Our Editors

Editor-in-Chief
M. Brian Blake
University of Miami