IEEE Internet Computing

Issue 4 • July-Aug. 2002

Displaying Results 1 - 17 of 17
  • Database technology on the Web

    Publication Year: 2002 , Page(s): 31 - 32
    Cited by:  Papers (2)
    PDF (605 KB) | HTML

  • Managing Web-based data - database models and transformations

    Publication Year: 2002 , Page(s): 33 - 37
    Cited by:  Papers (2)
    PDF (342 KB)

    The paper considers the Araneus data model, which employs database techniques and wrappers to extract data from and generate Web sites. The project features a logical model that abstracts physical aspects of Web sites. Araneus provides high-level descriptions of pages that let us both extract data from the Web and generate Web sites from databases.

  • Managing scientific metadata using XML

    Publication Year: 2002 , Page(s): 52 - 59
    Cited by:  Papers (10)
    PDF (511 KB)

    We present our XML-based Distributed Metadata Server (Dimes) - which comprises a flexible metadata model, search software, and a Web-based interface - to support multilevel metadata access, and introduce two prototype systems. Our Scientific Data and Information Super Server (SDISS), which is based on Dimes and GDS, solves accurate data-search and outdated data-link problems by integrating metadata with the data systems. On the implementation front, we combine independent components and open-source technologies into a coherent system to dramatically extend system capabilities. Our approach can also be applied to other scientific communities, such as bioinformatics and space science.

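    As a rough illustration of the kind of multilevel metadata access a Dimes-style server supports, the sketch below searches a small XML metadata catalog. The record structure, field names, and URLs are invented for the example and are not the actual Dimes schema.

```python
# Hypothetical sketch: searching a small XML metadata catalog the way a
# Dimes-style metadata server might, using only the Python standard library.
# The <dataset> structure, field names, and URLs are illustrative only.
import xml.etree.ElementTree as ET

CATALOG = """
<catalog>
  <dataset id="sst-monthly">
    <title>Monthly Sea Surface Temperature</title>
    <variable>sea_surface_temperature</variable>
    <coverage start="1990-01" end="2001-12"/>
    <access url="http://example.org/data/sst-monthly"/>
  </dataset>
  <dataset id="precip-daily">
    <title>Daily Precipitation</title>
    <variable>precipitation</variable>
    <coverage start="1995-01" end="2002-06"/>
    <access url="http://example.org/data/precip-daily"/>
  </dataset>
</catalog>
"""

def search(catalog_xml, variable):
    """Return (title, url) pairs for datasets that provide the given variable."""
    root = ET.fromstring(catalog_xml)
    hits = []
    for ds in root.findall("dataset"):
        if ds.findtext("variable") == variable:
            hits.append((ds.findtext("title"), ds.find("access").get("url")))
    return hits

print(search(CATALOG, "precipitation"))
# [('Daily Precipitation', 'http://example.org/data/precip-daily')]
```
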
  • Load testing of Web sites

    Publication Year: 2002 , Page(s): 70 - 74
    Cited by:  Papers (25)
    PDF (359 KB)

    Developers typically measure a Web application's quality of service in terms of response time, throughput, and availability. Poor QoS translates into frustrated customers, which can lead to lost business opportunities. At the same time, company expenditures on a Web site's IT infrastructure are a function of the site's expected traffic. Ideally, you want to spend enough, and no more, allocating resources where they will generate the most benefit. For example, you should not upgrade your Web servers if customers experience most delays in the database server or load balancer. Thus, to maximize your ROI, you must determine when and how to upgrade IT infrastructure. One way to assess IT infrastructure performance is through load testing, which lets you assess how your Web site supports its expected workload by running a specified set of scripts that emulate customer behavior at different load levels. I describe the QoS factors load testing addresses, how to conduct load testing, and how it addresses business needs at several requirement levels.

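    The article's notion of script-driven load testing can be pictured with a minimal sketch: a pool of emulated clients repeatedly fetches a page while the script records throughput and response-time percentiles. The URL, client count, and request count below are placeholders, and real tools also script multi-step user sessions at several load levels.

```python
# Minimal load-test sketch: N emulated clients fetch a URL in parallel and
# the script reports throughput and response-time statistics. The target URL
# and load levels are placeholders for illustration.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.org/"   # placeholder target
CLIENTS = 10                  # emulated concurrent users
REQUESTS_PER_CLIENT = 20

def one_session(_):
    """One emulated user: issue a series of requests and time each one."""
    timings = []
    for _ in range(REQUESTS_PER_CLIENT):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    all_timings = [t for session in pool.map(one_session, range(CLIENTS)) for t in session]
elapsed = time.perf_counter() - start

print(f"requests:   {len(all_timings)}")
print(f"throughput: {len(all_timings) / elapsed:.1f} req/s")
print(f"median:     {statistics.median(all_timings) * 1000:.0f} ms")
print(f"95th pct:   {sorted(all_timings)[int(0.95 * len(all_timings)) - 1] * 1000:.0f} ms")
```
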
  • New viruses up the stakes on old tricks

    Publication Year: 2002 , Page(s): 9 - 10
    PDF (304 KB)

    Spring was a notable season in the virus world as Klez edged out SirCam as the all-time most virulent virus, and another virus landmark emerged with Simile.D - a polymorphic, entry-point-obfuscating virus that can infect both Windows and Linux platforms. Unlike the costly and destructive Nimda and Code Red viruses, Klez and Simile.D are more nuisance than financial nightmare. Nonetheless, they represent emerging challenges for antivirus developers. Both of these complex viruses take previous tricks to a new level, either by expanding their infection potential, at least in theory (Simile.D), or by combining multiple best tricks into one insidious package (Klez).

  • Treating health care [being interactive]

    Publication Year: 2002 , Page(s): 4 - 5
    PDF (238 KB)

    The numbers are staggering. In a chilling report, the U.S. Institute of Medicine (IOM) estimates that somewhere between 44,000 and 98,000 Americans die each year from "avoidable medical errors," costing the nation about US$17 billion to US$29 billion annually. Errors include failing to make a timely and accurate diagnosis, selecting improper treatment, and following a treatment plan incorrectly. For example, hospital staff might give the wrong drug or dosage, or a surgeon might operate on the wrong body part. Errors in surgery or emergency treatment can be especially serious. The root causes of these errors are inadequate training, poor processes, and information systems that don't expose patient information at relevant times - sometimes leading to confusion about the patient's identity or the intended procedures. Network technologies show great promise in solving some of the dangerous and tragic "avoidable medical errors" that currently plague the health care industry.

  • Managing semantic content for the Web

    Publication Year: 2002 , Page(s): 80 - 87
    Cited by:  Papers (32)  |  Patents (12)
    PDF (924 KB)

    By associating meaning with content, the Semantic Web will facilitate search, interoperability, and the composition of complex applications. The paper discusses the Semantic Content Organization and Retrieval Engine (SCORE, see www.voquette.com), which is based on research transferred from the University of Georgia's Large Scale Distributed Information Systems lab. SCORE belongs to a new generation of technologies for the emerging Semantic Web. It provides facilities to define ontological components that software agents can maintain. These agents use regular-expression-based rules in conjunction with various semantic techniques to extract ontology-driven metadata from structured and semistructured content. Automatic classification and information-extraction techniques augment these results and also let the system deal with unstructured text.

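    As a hypothetical illustration of the regular-expression-based extraction rules the abstract mentions (the rules, field names, and ontology categories below are invented, not SCORE's), a small rule set can populate metadata fields and suggest classification categories:

```python
# Illustrative sketch of regular-expression-based metadata extraction from
# semistructured text, in the spirit of the agent rules the abstract describes.
# The rules, field names, and ontology categories are hypothetical, not SCORE's.
import re

RULES = {
    "ticker": re.compile(r"\(NASDAQ:\s*([A-Z]{1,5})\)"),
    "price":  re.compile(r"\$(\d+(?:\.\d{2})?)"),
    "date":   re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
}

ONTOLOGY = {  # toy classification: which category a populated field suggests
    "ticker": "Company",
    "price": "FinancialQuote",
}

def extract_metadata(text):
    """Apply each rule and return extracted fields plus suggested categories."""
    fields = {}
    for name, pattern in RULES.items():
        match = pattern.search(text)
        if match:
            fields[name] = match.group(1)
    categories = sorted({ONTOLOGY[f] for f in fields if f in ONTOLOGY})
    return fields, categories

doc = "Acme Corp. (NASDAQ: ACME) closed at $12.50 on 2002-07-15."
print(extract_metadata(doc))
# ({'ticker': 'ACME', 'price': '12.50', 'date': '2002-07-15'}, ['Company', 'FinancialQuote'])
```
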
  • A generic content-management tool for Web databases

    Publication Year: 2002 , Page(s): 38 - 42
    Cited by:  Papers (1)  |  Patents (1)
    PDF (653 KB)

    WebCUS uses XML and XSL to generate Web-based update interfaces with integrated access control mechanisms for arbitrary database schemas. WebCUS's adaptability allowed us to reduce development time and costs in building update interfaces for managing the content databases of the 2002 Vienna International Festival (VIF) and the Austrian Academy of Sciences (AAS) Web sites. We examine the WebCUS architecture and describe our experiences deploying the system with the VIF and AAS sites.

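    WebCUS itself drives form generation with XSL stylesheets; as a rough stand-in for the idea, the sketch below turns a small XML description of a database table into an HTML update form. The schema format and field names are invented for illustration.

```python
# Rough stand-in for schema-driven form generation: turn a small XML
# description of a table into an HTML update form. WebCUS does this with XSL
# stylesheets; the schema format and field names here are invented.
import xml.etree.ElementTree as ET
from html import escape

SCHEMA = """
<table name="event">
  <column name="title"    type="text"/>
  <column name="venue"    type="text"/>
  <column name="date"     type="date"/>
  <column name="capacity" type="int"/>
</table>
"""

INPUT_TYPES = {"text": "text", "date": "date", "int": "number"}

def update_form(schema_xml):
    """Generate an HTML form with one input per column of the described table."""
    table = ET.fromstring(schema_xml)
    rows = []
    for col in table.findall("column"):
        name = escape(col.get("name"))
        input_type = INPUT_TYPES.get(col.get("type"), "text")
        rows.append(f'  <label>{name}: <input type="{input_type}" name="{name}"></label><br>')
    return (f'<form method="post" action="/update/{escape(table.get("name"))}">\n'
            + "\n".join(rows)
            + '\n  <button type="submit">Save</button>\n</form>')

print(update_form(SCHEMA))
```
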
  • Current approaches to XML management

    Publication Year: 2002 , Page(s): 43 - 51
    Cited by:  Papers (11)  |  Patents (17)
    PDF (384 KB)

    The Extensible Markup Language has become the standard for information interchange on the Web. We study the data- and document-centric uses of XML management systems (XMLMS). We want to provide XML data users with a guideline for choosing the data management system that best meets their needs. Because the systems we test are first-generation approaches, we suggest a hypothetical design for a useful XML database that could use all the expressive power of XML and XML query languages.

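    The data-centric versus document-centric distinction at the heart of such comparisons can be seen in a small, hypothetical pair of documents: the first maps cleanly onto relational rows, while the second has mixed content whose element order matters.

```python
# Hypothetical illustration of the data-centric vs. document-centric split:
# the first document maps cleanly onto relational rows; the second has mixed
# content and meaningful element order, so it is harder to shred into tables.
import xml.etree.ElementTree as ET

DATA_CENTRIC = """
<orders>
  <order id="1001"><customer>Ada</customer><total currency="EUR">42.00</total></order>
  <order id="1002"><customer>Lin</customer><total currency="EUR">17.50</total></order>
</orders>
"""

DOCUMENT_CENTRIC = """
<article>
  <title>Current approaches to XML management</title>
  <para>XML has become the <emph>standard</emph> for information interchange
  on the Web, and storage systems differ in how they handle it.</para>
</article>
"""

# Data-centric: regular structure, easy to query field by field.
for order in ET.fromstring(DATA_CENTRIC).findall("order"):
    print(order.get("id"), order.findtext("customer"), order.findtext("total"))

# Document-centric: mixed content; the text only makes sense with order preserved.
para = ET.fromstring(DOCUMENT_CENTRIC).find("para")
print(" ".join("".join(para.itertext()).split()))
```
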
  • Party's over: bills come due for Internet radio

    Publication Year: 2002 , Page(s): 12 - 13
    PDF (252 KB)

    The future of Internet radio became a little clearer when the U.S. Librarian of Congress cut in half the proposed royalty rate Internet broadcasters must pay to record labels and artists. Artists and record labels were unhappy at the reduction in rate, while small Webcasters predicted bankruptcy for all but the largest Internet broadcasters. Earlier, Librarian of Congress James H. Billington had rejected the findings of his own Copyright Arbitration Royalty Panel (CARP), which recommended in February specific royalty rates for Internet Webcasters to pay to copyright holders and performers. But other than the deductions, he accepted almost all the CARP recommendations. Not normally associated with policy-making, the Librarian oversees the U.S. Copyright Office. The Copyright Office became involved with setting royalty payments for Internet radio through the passage of the Digital Millennium Copyright Act and the Digital Performance Rights in Sound Recordings Act. The paper considers the revenge of the recording industry.

  • 802.11a: more bandwidth without the wires

    Publication Year: 2002 , Page(s): 75 - 79
    Cited by:  Papers (8)
    PDF (355 KB)

    802.11a represents the third generation of wireless networking standards and technology (behind 802.11 and .11b). It was actually approved as a standard earlier than 802.11b, but it presented a greater engineering challenge, and was delayed. Advances in technology (Moore's Law continues to prove true) helped Internet engineers overcome those challenges in a cost-effective manner and prepare the specification for market introduction. The result is the further extension of 802.11 networking capabilities. My previous article, "802.11: Leaving the Wire Behind" (Kapp, 2002), focused on 802.11b wireless networking and the various 802.11 task groups that will directly affect the future of 802.11 networking. In this article, I examine 802.11a networking in depth and compare it to 802.11b and the upcoming 802.11g networking.

  • Those pesky NATs [network address translators]

    Publication Year: 2002
    Cited by:  Papers (2)
    PDF (230 KB)

    Whether buried deep inside ISPs, or camouflaged as DSL routers, network address translators (NATs) have become a ubiquitous tool in the Internet landscape. NATs enable telco and cable operators to prevent commercial use of consumer accounts. They also let home users run open community access wireless networks off a single purchased account. It is what NATs disable, however, that makes them nefarious. The paper considers how NATs work and the problems they create.

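    A toy sketch of the address-and-port translation a NAT performs makes the core problem visible: replies to an existing outbound flow can be forwarded back, but an unsolicited inbound connection matches no mapping and is dropped. All addresses and ports below are made up.

```python
# Toy sketch of the address-and-port translation a NAT performs, to illustrate
# why unsolicited inbound connections fail: without an existing mapping there
# is no private host to forward to. Addresses and ports are made up.
PUBLIC_IP = "203.0.113.7"

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.out = {}   # (private_ip, private_port, dst) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def outbound(self, private_ip, private_port, dst):
        """Rewrite an outgoing connection's source to the public address."""
        key = (private_ip, private_port, dst)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = (private_ip, private_port)
            self.next_port += 1
        return (self.public_ip, self.out[key])

    def inbound(self, public_port):
        """Deliver a packet arriving at the public address, if a mapping exists."""
        return self.back.get(public_port)  # None: dropped, no one to forward to

nat = Nat(PUBLIC_IP)
print(nat.outbound("192.168.1.10", 5123, ("198.51.100.9", 80)))  # ('203.0.113.7', 40000)
print(nat.inbound(40000))   # ('192.168.1.10', 5123)  reply to an existing flow
print(nat.inbound(40001))   # None  unsolicited inbound connection is dropped
```
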
  • Grid Services Architecture plan gaining momentum

    Publication Year: 2002 , Page(s): 11 - 12
    Cited by:  Papers (1)
    PDF (201 KB)

    Researchers at Argonne National Laboratory, the University of Southern California, and IBM have taken a major step toward combining the commercial promise of Web services with the scientific principles of grid computing, in which far-flung resources can be dynamically allocated. The Open Grid Services Architecture (OGSA) defines the mechanisms for creating, managing, and exchanging information among entities called grid services. In two recently released documents (www.globus.org/ogsa), the researchers behind OGSA have discussed high-level guiding concepts as well as the lower-level interfaces at its foundation. The paper discusses open-source and Web services aspects. It considers how the OGSA project is addressing two major differences between the grid and Web services paradigms.

  • Agents as Web services

    Publication Year: 2002 , Page(s): 93 - 95
    Cited by:  Papers (58)  |  Patents (2)
    PDF (274 KB)

    Web services are extremely flexible. Most advantageously, a developer of Web services need not know who or what will use the services being provided. The paper discusses current standards for Web services, directory services and the Semantic Web. It considers how agents extend Web services in several important ways.

  • The Debye environment for Web data management

    Publication Year: 2002 , Page(s): 60 - 69
    Cited by:  Papers (2)
    PDF (1070 KB)

    The paper discusses the Debye (Data Extraction By Example) environment, which lets users extract and manage semistructured data available from Web sources, using an extended form of nested tables as its fundamental paradigm. Currently, the Debye tools are fully implemented as prototypes for applications that use data from heterogeneous Web sources. For instance, developers have recently used Debye to build a repository that integrates data from the digital libraries of several universities.

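    The nested-table paradigm can be pictured with a small, invented example: a page listing a library's holdings is mapped to one record whose inner table repeats per item. The page text, regular expressions, and field names below are illustrative and not part of Debye.

```python
# Illustration of the nested-table idea behind example-based extraction: a page
# listing a library's holdings is mapped to a nested record whose inner table
# repeats per item. The page text and field layout are invented.
import re

PAGE = """
Library: Riverside University
  Title: Databases on the Web | Year: 1998
  Title: Semistructured Data   | Year: 2000
"""

def extract(page_text):
    """Build one nested record: library name plus a nested table of holdings."""
    library = re.search(r"Library:\s*(.+)", page_text).group(1).strip()
    holdings = [
        {"title": title.strip(), "year": int(year)}
        for title, year in re.findall(r"Title:\s*(.+?)\s*\|\s*Year:\s*(\d{4})", page_text)
    ]
    return {"library": library, "holdings": holdings}

print(extract(PAGE))
# {'library': 'Riverside University',
#  'holdings': [{'title': 'Databases on the Web', 'year': 1998},
#               {'title': 'Semistructured Data', 'year': 2000}]}
```
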
  • Conceptual modeling of data-intensive Web applications

    Publication Year: 2002 , Page(s): 20 - 30
    Cited by:  Papers (19)  |  Patents (4)
    PDF (795 KB)

    Many of the Web applications around us are data-intensive; their main purpose is to present a large amount of data to their users. Most online trading and e-commerce sites fall into this category, as do digital libraries and institutional sites describing private and public organizations. Several commercial Web development systems aid rapid creation of data-intensive applications by supporting semiautomatic data resource publishing. Automatic publishing is typically subject to the constraints of database schemas, which limit an application designer's choices. Thus, Web application development often requires adaptation through programming, and programs end up intricately mixing data, navigation, and presentation semantics. Presentation is often a facade for elements of structure, composition, and navigation. Despite this frequently unstructured development process, data-intensive applications, based on large data sets organized within a repository or database, generally follow some typical patterns and rules. We describe these patterns and rules using WebML as a conceptual tool to make such notions explicit. WebML is a conceptual Web modeling language that uses the entity-relationship (ER) model for describing data structures and an original, high-level notation for representing Web content composition and navigation in hypertext form.

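    As a rough, hypothetical sketch of the separation WebML encourages (this is not WebML syntax), the structural model and the hypertext composition that publishes it can be kept apart even in plain code:

```python
# Rough, hypothetical sketch of the separation WebML encourages (not WebML
# syntax): the structural (entity-relationship) model is declared separately
# from the hypertext composition that publishes it as pages and links.
ENTITIES = {
    "Artist": {"attributes": ["name", "bio"]},
    "Album":  {"attributes": ["title", "year"], "relationship": ("of", "Artist")},
}

SITE = {  # hypertext model: pages composed of content units over the entities
    "ArtistIndex": {"units": [("index", "Artist")],
                    "links": {"select artist": "ArtistPage"}},
    "ArtistPage":  {"units": [("data", "Artist"), ("index", "Album")],
                    "links": {"select album": "AlbumPage"}},
    "AlbumPage":   {"units": [("data", "Album")], "links": {}},
}

def describe(page_name):
    """Render a page definition as a readable outline, independent of the data."""
    page = SITE[page_name]
    lines = [f"Page {page_name}:"]
    for kind, entity in page["units"]:
        lines.append(f"  {kind} unit over {entity} ({', '.join(ENTITIES[entity]['attributes'])})")
    for label, target in page["links"].items():
        lines.append(f"  link '{label}' -> {target}")
    return "\n".join(lines)

print(describe("ArtistPage"))
```
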
  • Putting the "Web" into Web services: Web services interaction models, Part 2

    Publication Year: 2002 , Page(s): 90 - 92
    Cited by:  Papers (5)
    PDF (272 KB)

    For Part 1, see ibid., vol. 6, no. 3, pp. 89-91 (2002). As I discussed in my previous column, each different style of middleware promotes one or more interaction models that determine how applications based on that middleware communicate and work with each other. It is difficult to say what the best interaction models would be for Web services, mainly because the World Wide Web Consortium (W3C) is still developing the architecture. The author considers the use of remote procedure calls, Web services and messaging, and interface complexity.

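    The contrast between the interaction models under discussion can be sketched with two invented snippets: an RPC-style call that blocks for a return value, and a message-style exchange in which requests and replies are decoupled through queues.

```python
# Hypothetical sketch contrasting the two interaction styles the column
# discusses: an RPC-style call that blocks for a result versus message-style
# interaction, where requests and replies are decoupled through queues.
import queue

# --- RPC style: the caller invokes an operation and waits for the return value.
def get_quote_rpc(symbol):
    prices = {"ACME": 12.50}
    return prices[symbol]

print("RPC result:", get_quote_rpc("ACME"))

# --- Message style: the caller sends a request message and, independently,
# something later consumes a reply; sender and receiver need not block together.
requests, replies = queue.Queue(), queue.Queue()

requests.put({"op": "get_quote", "symbol": "ACME", "reply_to": replies})

def service_one_message():
    msg = requests.get()
    if msg["op"] == "get_quote":
        msg["reply_to"].put({"symbol": msg["symbol"], "price": 12.50})

service_one_message()           # could run in another thread or process
print("Message result:", replies.get())
```
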

Aims & Scope

IEEE Internet Computing provides journal-quality evaluation and review of emerging and maturing Internet technologies and applications.


Meet Our Editors

Editor-in-Chief
M. Brian Blake
University of Miami