
IEEE Internet Computing

Issue 5 • Sept.-Oct. 2002

  • News & Trends

    Page(s): 6 - 10
    PDF (409 KB)
    Freely Available from IEEE
  • Global deployment of data centers

    Page(s): 38 - 40
    PDF (542 KB)

  • Architecture and dependability of large-scale internet services

    Page(s): 41 - 49
    PDF (347 KB)

    The popularity of large-scale Internet infrastructure services such as AOL, Google, and Hotmail has grown enormously. The scalability and availability requirements of these services have led to system architectures that diverge significantly from those of traditional systems like desktops, enterprise servers, or databases. Given the need for thousands of nodes, cost necessitates the use of inexpensive personal computers wherever possible, and efficiency often requires customized service software. Likewise, addressing the goal of zero downtime requires human operator involvement and pervasive redundancy within clusters and between globally distributed data centers. Despite these services' success, their architectures (hardware, software, and operational) have developed in an ad hoc manner that few have surveyed or analyzed. Moreover, the public knows little about why these services fail or about the operational practices used in an attempt to keep them running 24/7. As a first step toward formalizing the principles for building highly available and maintainable large-scale Internet services, we are surveying existing services' architectures and dependability. This article describes our observations to date.

  • Device independence and the Web

    Page(s): 81 - 86
    PDF (454 KB)

    Device manufacturers, users, and authors have differing needs and expectations when it comes to Web content. Web software and hardware manufacturers naturally try to differentiate their products by supporting a special combination of capabilities, but few can expect Web authors to create content for their product alone. Users, however, do expect to access the same content from any device with similar capabilities. Even when device capabilities differ, users might still want access to an adapted version of the content. Due to device differences, the adaptation might not produce an identical presentation, but device-independence principles suggest it should be sufficiently functional to let users interact with it successfully. Web application authors cannot afford to create multiple content versions for each of the growing range of device types. Authors would rather create their content once and adapt it to different devices, but they also want to retain control of presentation quality. Device independence is about trying to satisfy these differing needs, spanning the delivery path between author and user by way of diverse manufacturers' devices. The field's continued evolution within the broader Web standards framework aims to find solutions that are beneficial for all.
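
    The abstract's idea of authoring content once and adapting it per device can be illustrated with a minimal sketch; the capability model, markup, and screen-width threshold below are hypothetical, not any specific device-independence standard.

```python
# Minimal sketch of single-source authoring with device-dependent adaptation.
# Illustrative only: the capability dictionary, markup, and threshold are made up.
CONTENT = {"title": "Quarterly report", "body": "Sales grew 8% over the quarter."}

def adapt(content, device):
    # Same information either way; only the presentation is adapted.
    if device.get("screen_width", 0) >= 800:
        return (f"<html><body><h1>{content['title']}</h1>"
                f"<p>{content['body']}</p></body></html>")
    # Small-screen fallback keeps the content functional, if plainer.
    return f"<html><body><b>{content['title']}</b><br/>{content['body']}</body></html>"

print(adapt(CONTENT, {"screen_width": 1024}))   # desktop browser
print(adapt(CONTENT, {"screen_width": 176}))    # small-screen device
```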

  • Wired-wireless integration

    PDF (251 KB)

    Within a few years, most wireless devices will interconnect across networks to allow information access from anywhere, anytime. This evolution motivates developers to migrate technologies originally developed for PCs to wireless devices. Supporting this effort, however, requires new software, particularly middleware, to power small devices that connect over slow, sometimes unreliable networks. How well can today's major middleware platforms support wireless access to business applications?

  • Accelerating dynamic Web content generation

    Page(s): 27 - 36
    PDF (398 KB)

    As a middle-tier, server-side caching engine, the dynamic content accelerator reduces dynamic page-generation processing delays by caching fragments of dynamically generated Web pages. This fragment-level solution, combined with intelligent cache management strategies, can significantly reduce the processing load on the Web application server, letting it handle higher user loads and thus significantly outperforming existing middle-tier caching solutions.
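
    As a rough illustration of the fragment-level caching the abstract describes, a minimal in-memory sketch follows; the fragment names, TTL, and back-end call are hypothetical and not the accelerator's actual design.

```python
# Minimal sketch of fragment-level caching for a dynamically generated page.
# Illustrative only: fragment names, TTL, and the back-end call are hypothetical.
import time

class FragmentCache:
    def __init__(self):
        self._store = {}  # fragment key -> (expiry timestamp, cached HTML)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.time():
            return entry[1]
        return None  # missing or expired

    def put(self, key, html, ttl_seconds):
        self._store[key] = (time.time() + ttl_seconds, html)

cache = FragmentCache()

def expensive_headline_query():
    # Stands in for a costly database or application-server call.
    return "<ul><li>Example headline</li></ul>"

def render_page(user_id):
    # Cache the expensive, widely shared fragment; rebuild only the per-user part.
    headlines = cache.get("headlines")
    if headlines is None:
        headlines = expensive_headline_query()
        cache.put("headlines", headlines, ttl_seconds=60)
    greeting = f"<p>Welcome back, user {user_id}</p>"   # personalized, not cached
    return "<html><body>" + greeting + headlines + "</body></html>"

print(render_page(user_id=42))
```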

  • Managing access in extended enterprise networks

    Page(s): 67 - 74
    PDF (564 KB)

    We describe our approach to secure authentication and authorization for extended enterprises, which combines distributed role-based access control (RBAC), a public key infrastructure (PKI), and a privilege management infrastructure (PMI). We have implemented a J2EE-based prototype system, DRBAC-EE, which shows the feasibility of our approach.
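
    A minimal sketch of the kind of role-based check involved follows; the roles, permissions, and certificate handling are hypothetical placeholders, not DRBAC-EE's actual API.

```python
# Minimal sketch of a role-based access-control decision for an extended enterprise.
# Illustrative only: roles, permissions, and the certificate check are stand-ins;
# in the authors' setting, identity and role assignments would come from PKI/PMI certificates.
ROLE_PERMISSIONS = {
    "purchasing_agent": {"create_order", "view_catalog"},
    "auditor": {"view_orders"},
}
USER_ROLES = {"alice@partner.example": {"purchasing_agent"}}

def is_authorized(user, permission, certificate_valid):
    if not certificate_valid:          # authentication step (stubbed)
        return False
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice@partner.example", "create_order", certificate_valid=True))  # True
print(is_authorized("alice@partner.example", "view_orders", certificate_valid=True))   # False
```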

  • Weaving a computing fabric

    Page(s): 88 - 91
    PDF (323 KB)

    As sources of information relevant to a particular domain proliferate, we need a methodology for locating, aggregating, relating, fusing, reconciling, and presenting information to users. Interoperability thus must occur not only among the information, but also among the different software applications that process it. Given the large number of potential sources and applications, interoperability becomes an extremely large problem for which manual solutions are impractical. A combination of software agents and ontologies can supply the necessary methodology for interoperability.

  • Deep Web structure

    Page(s): 4 - 5
    PDF (245 KB)

    Our current understanding of Web structure is based on large graphs created by centralized crawlers and indexers. They obtain data almost exclusively from the so-called surface Web, which consists, loosely speaking, of interlinked HTML pages. The deep Web, by contrast, is information that is reachable over the Web, but that resides in databases; it is dynamically available in response to queries, not placed on static pages ahead of time. Recent estimates indicate that the deep Web has hundreds of times more data than the surface Web. The deep Web gives us reason to rethink much of the current doctrine of broad-based link analysis. Instead of looking up pages and finding links on them, Web crawlers would have to produce queries to generate relevant pages. Creating appropriate queries ahead of time is nontrivial without understanding the content of the queried sites. The deep Web's scale would also make it much harder to cache results than to merely index static pages. Whereas a static page presents its links for all to see, a deep Web site can decide whose queries to process and how well. It can, for example, authenticate the querying party before giving it any truly valuable information and links. It can build an understanding of the querying party's context in order to give proper responses, and it can engage in dialogues and negotiate for the information it reveals. The Web site can thus prevent its information from being used by unknown parties. What's more, the querying party can ensure that the information is meant for it.
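
    The crawling difference the column describes can be sketched roughly as follows; the endpoint URLs, form field name, and query term are hypothetical.

```python
# Rough sketch contrasting surface-Web crawling with deep-Web querying.
# Illustrative only: the URLs, form field name, and query term are hypothetical.
import urllib.parse
import urllib.request

def fetch_static_page(url):
    # Surface Web: the page already exists; a crawler simply fetches it and follows links.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def query_deep_web_source(search_url, term):
    # Deep Web: the page is generated only in response to a query,
    # so the crawler must first decide what to ask for.
    query = urllib.parse.urlencode({"q": term})
    with urllib.request.urlopen(f"{search_url}?{query}") as resp:
        return resp.read()

# Usage against hypothetical endpoints:
# html = fetch_static_page("http://example.org/index.html")
# results = query_deep_web_source("http://example.org/search", "internet computing")
```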

  • Middleware "dark matter"

    Page(s): 92 - 95
    PDF (288 KB)

    Clay Shirky describes PCs as the "dark matter of the Internet" because a lot of them are connected, but they're barely detectable. We can apply a similar analogy to middleware, because the "mass" of the middleware universe is much greater than the systems we usually think of when we speak of middleware, such as message-oriented middleware (MOM), enterprise application integration (EAI), and application servers based on Corba or J2EE. We tend to forget or ignore the vast numbers of systems based on other approaches. We can't see them, and we don't talk about them, but they're out there solving real-world integration problems and profoundly influencing the middleware space. These systems are the dark matter of the middleware universe.

  • Building a multisite Web architecture

    Page(s): 59 - 66
    PDF (356 KB)

    Many large organizations that first came online in the late 1990s now face the decision of whether to upgrade their Web systems or to start anew. Given the speed with which new technologies are introduced in the Web environment, system deployment life cycles have shrunk significantly, but so have system life spans. After only a few years, an organization's Internet infrastructure is likely to need a major overhaul. In late 2001, the systems architecture team to which I belong took on these issues for an organization that wanted to rebuild its Web infrastructure. The existing infrastructure contained multiple single points of failure, could not scale to expected usage patterns, was built on proprietary systems, and had a high management overhead. The legacy infrastructure had grown organically over the previous five years as administrators added unplanned features and functionality, and usage had grown 100-fold since the specifications were initially developed. Because of the age and condition of the legacy systems, we decided to redesign the solution from scratch to overcome the inherent limitations. This case study describes the process our systems architecture team followed for designing and deploying the new architecture. I detail the component selection rationale, with implementation details where allowed. Ours is just one successful approach to deploying a multisite, fully redundant Web-based system for a large organization; other reasonable and viable ways to build such a system also exist.

  • Workspaces: a Web-based workflow management system

    Page(s): 18 - 26
    PDF (633 KB)

    Workspaces is a Web-based workflow management system (WFMS) developed at the Technical University of Berlin that attempts to address various shortcomings of Web-based WFMSs by employing XML. The proposed architecture combines concepts from workflow management and coordination technology and uses XML for representation and the Extensible Stylesheet Language (XSL) for processing. The author examines the system architecture and describes the prototype implementation developed with several diploma students for an experimental evaluation of the ideas.
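
    A toy example of representing a workflow in XML and reading it programmatically; the element and attribute names are made up and are not the Workspaces schema.

```python
# Toy XML workflow representation and a routine that reads it.
# Illustrative only: the element and attribute names are not the Workspaces schema.
import xml.etree.ElementTree as ET

WORKFLOW_XML = """
<workflow name="expense-approval">
  <activity id="submit" performer="employee"/>
  <activity id="approve" performer="manager"/>
  <activity id="reimburse" performer="accounting"/>
  <transition from="submit" to="approve"/>
  <transition from="approve" to="reimburse"/>
</workflow>
"""

def next_activities(xml_text, completed_activity):
    # Return the activities that become enabled once `completed_activity` finishes.
    root = ET.fromstring(xml_text)
    return [t.get("to") for t in root.findall("transition")
            if t.get("from") == completed_activity]

print(next_activities(WORKFLOW_XML, "submit"))   # ['approve']
```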

  • Globally distributed content delivery

    Page(s): 50 - 58
    PDF (413 KB)

    When we launched the Akamai system in early 1999, it initially delivered only Web objects (images and documents). It has since evolved to distribute dynamically generated pages and even applications to the network's edge, providing customers with on-demand bandwidth and computing capacity. This reduces content providers' infrastructure requirements, and lets them deploy or expand services more quickly and easily. Our current system has more than 12,000 servers in over 1,000 networks. Operating servers in many locations poses many technical challenges, including how to direct user requests to appropriate servers, how to handle failures, how to monitor and control the servers, and how to update software across the system. We describe our system and how we've managed these challenges.
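
    A toy illustration of one of the challenges mentioned, directing user requests to an appropriate server; this is not Akamai's actual mapping system, and the server names and latencies are invented.

```python
# Toy edge-server selection for a CDN: route to the lowest-latency healthy server.
# Illustrative only: not Akamai's mapping system; names and latencies are invented.
EDGE_SERVERS = {
    "edge-nyc": {"healthy": True,  "latency_ms": 12},
    "edge-lon": {"healthy": True,  "latency_ms": 85},
    "edge-sfo": {"healthy": False, "latency_ms": 70},   # failed servers are skipped
}

def pick_edge_server(servers):
    candidates = {name: s for name, s in servers.items() if s["healthy"]}
    if not candidates:
        raise RuntimeError("no healthy edge servers available")
    return min(candidates, key=lambda name: candidates[name]["latency_ms"])

print(pick_edge_server(EDGE_SERVERS))   # edge-nyc
```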

  • Trade-offs in designing Web clusters

    Page(s): 76 - 80
    PDF (317 KB)

    High-volume Web sites often use clusters of servers to support their architectures. A load balancer in front of such clusters directs requests to the various servers in a way that equalizes, as much as possible, the load placed on each. There are two basic approaches to scaling Web clusters: adding more servers of the same type (scaling out, or horizontally) or upgrading the capacity of the servers in the cluster (scaling up, or vertically). Although more detailed and complex models would be required to obtain more accurate results about such systems' behavior, simple queuing theory provides a reasonable level of abstraction for gaining insight into which scaling approach to employ in various scenarios. Typical issues in Web cluster design include whether to use a large number of low-capacity, inexpensive servers or a small number of high-capacity, costly servers to provide a given performance level; how many servers of a given type are required to provide a certain performance level at a given cost; and how many servers are needed to build a Web site with a given reliability. Using queuing theory, I examine the average response time, capacity, cost, and reliability trade-offs involved in designing Web server clusters.
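
    As a back-of-the-envelope illustration of the scale-out versus scale-up comparison, the sketch below uses an M/M/1 approximation with made-up rates; it is not the article's exact model.

```python
# Back-of-the-envelope M/M/1 comparison of scaling out vs. scaling up.
# Illustrative only: an approximation with made-up arrival and service rates,
# not the article's exact model.

def response_time_scale_out(arrival_rate, per_server_rate, n_servers):
    # Load balancer splits traffic evenly; model each server as an M/M/1 queue.
    per_server_load = arrival_rate / n_servers
    assert per_server_load < per_server_rate, "each server would be overloaded"
    return 1.0 / (per_server_rate - per_server_load)

def response_time_scale_up(arrival_rate, per_server_rate, speedup):
    # One server that is `speedup` times faster, modeled as a single M/M/1 queue.
    fast_rate = per_server_rate * speedup
    assert arrival_rate < fast_rate, "the single server would be overloaded"
    return 1.0 / (fast_rate - arrival_rate)

lam, mu = 300.0, 100.0   # requests/s arriving; requests/s one small server can handle
print(response_time_scale_out(lam, mu, n_servers=4))  # ~0.040 s with four small servers
print(response_time_scale_up(lam, mu, speedup=4))     # ~0.010 s with one fast server,
                                                      # which is also a single point of failure
```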


Aims & Scope

IEEE Internet Computing provides journal-quality evaluation and review of emerging and maturing Internet technologies and applications.


Meet Our Editors

Editor-in-Chief
Michael Rabinovich
Department of Electrical Engineering and Computer Science
Case Western Reserve University