Internet Computing, IEEE

Issue 6 • Nov.-Dec. 2001

Displaying Results 1 - 14 of 14
  • Peering at peer-to-peer computing

    Publication Year: 2001 , Page(s): 4 - 5
    Cited by:  Papers (4)
    PDF (109 KB)

    How would you like to share files with another user without having to explicitly place them in a designated external location? The recent successes of (and controversies surrounding) Napster, Gnutella, and FreeNet have drawn attention to peer-to-peer computing, which allows precisely such interactions between information and service providers and their customers. The author takes a brief look at peer-to-peer computing, or P2P, and its main variants, both those that are popular and those that ought to be. P2P can be defined most easily in terms of what it is not: the client-server model, which is currently the most common model of distributed computing. In the client-server model, an application residing on a client computer invokes commands at a server. In P2P, an application is split into components that act as equals. The client-server model is simple and effective, but it has serious shortcomings, which are discussed. P2P is by no means a new idea. The distributed computing research community has studied it for decades. Networks themselves demonstrate P2P in action: Ethernet is nothing if not a P2P protocol, and network routing operates through routers acting as peers with other routers. The difference in the recent focus on P2P seems to be that it has finally caught the imagination of people building practical systems at the application layer, and for good reason.

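A minimal sketch of the architectural contrast the editorial describes: the same node plays both roles, serving requests from other peers and issuing its own. This is not code from the article; the ports, message format, and echo behavior are invented for illustration.

```python
# Minimal illustrative peer: every node both serves requests and issues them.
# Ports, peer addresses, and the line-based protocol are assumptions.
import socket
import threading

class Peer:
    def __init__(self, host="127.0.0.1", port=9000):
        self.addr = (host, port)
        self.server = socket.create_server(self.addr)

    def serve_forever(self):
        # Server role: answer any peer that connects.
        while True:
            conn, _ = self.server.accept()
            with conn:
                request = conn.recv(1024).decode()
                conn.sendall(f"echo from {self.addr}: {request}".encode())

    def ask(self, peer_addr, message):
        # Client role: the same node can also query another peer.
        with socket.create_connection(peer_addr) as conn:
            conn.sendall(message.encode())
            return conn.recv(1024).decode()

if __name__ == "__main__":
    a, b = Peer(port=9001), Peer(port=9002)
    threading.Thread(target=a.serve_forever, daemon=True).start()
    threading.Thread(target=b.serve_forever, daemon=True).start()
    print(a.ask(("127.0.0.1", 9002), "hello"))  # a acts as client of b
    print(b.ask(("127.0.0.1", 9001), "hello"))  # b acts as client of a
```
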
  • Author index

    Publication Year: 2001 , Page(s): 83 - 86
    PDF (65 KB)
    Freely Available from IEEE
  • Subject index

    Publication Year: 2001 , Page(s): 86 - 93
    PDF (86 KB)
    Freely Available from IEEE
  • Balancing security and liberty

    Publication Year: 2001 , Page(s): 96
    PDF (134 KB) | HTML

  • Controlling access to XML documents

    Publication Year: 2001 , Page(s): 18 - 28
    Cited by:  Papers (14)  |  Patents (3)
    PDF (268 KB)

    Access control techniques for XML provide a simple way to protect confidential information at the same granularity level provided by XML schemas. In this article, we describe our approach to these problems and the design guidelines that led to our current implementation of an access control system for XML information.

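To illustrate element-level access control in the spirit of the abstract above (this is a toy sketch, not the authors' system), a policy can map element tags to the roles allowed to read them, and a requester's view is produced by pruning forbidden subtrees. The policy, roles, and document are invented.

```python
# Toy element-level access control for XML (not the authors' system).
# The policy maps a tag to the roles that may read it; other subtrees are pruned.
import xml.etree.ElementTree as ET

POLICY = {  # illustrative policy: tag -> roles allowed to read it
    "patient": {"doctor", "nurse"},
    "diagnosis": {"doctor"},
    "name": {"doctor", "nurse", "clerk"},
}

def prune(element, role):
    """Return a copy of `element` with subtrees the role may not read removed."""
    if role not in POLICY.get(element.tag, {role}):  # unlisted tags default to readable
        return None
    clone = ET.Element(element.tag, element.attrib)
    clone.text = element.text
    for child in element:
        kept = prune(child, role)
        if kept is not None:
            clone.append(kept)
    return clone

doc = ET.fromstring("<patient><name>Ada</name><diagnosis>flu</diagnosis></patient>")
view = prune(doc, "nurse")
print(ET.tostring(view).decode())  # <patient><name>Ada</name></patient>
```
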
  • Privacy risks in recommender systems

    Publication Year: 2001 , Page(s): 54 - 63
    Cited by:  Papers (20)  |  Patents (18)
    PDF (220 KB)

    Recommender system users who rate items across disjoint domains face a privacy risk analogous to the one that occurs with statistical database queries.

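A small sketch of the kind of linkage risk the abstract alludes to (not the paper's analysis): a pseudonymous rating vector leaked from one domain can sometimes be matched to a named profile from another when the rating pattern is distinctive. All data here is hypothetical.

```python
# Illustrative linkage risk: match a pseudonymous rating vector against named
# profiles by comparing ratings on the items they have in common.
named_profiles = {  # hypothetical data from domain A
    "alice": {"book1": 5, "book2": 1, "book3": 4},
    "bob":   {"book1": 2, "book2": 5, "book3": 2},
}
anonymous = {"book1": 5, "book3": 4}  # pseudonymous ratings leaked from domain B

def overlap_distance(profile, target):
    common = set(profile) & set(target)
    if not common:
        return float("inf")
    return sum(abs(profile[i] - target[i]) for i in common) / len(common)

best = min(named_profiles, key=lambda u: overlap_distance(named_profiles[u], anonymous))
print("closest named profile:", best)  # likely re-identifies "alice"
```
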
  • Personalization and privacy

    Publication Year: 2001 , Page(s): 29 - 31
    Cited by:  Papers (3)
    PDF (373 KB)

    Personalization has been a hot topic for nearly a decade now, and many new products and advanced algorithms have emerged in that time. Several companies now sell tools such as recommender systems, which take input about users and products and generate recommendations about which products the users will like best. At their best, recommenders can be wonderful tools for users, helping them sort through the myriad items they could read, buy, or watch to select the few that are most valuable to them. The algorithms that power these systems have evolved dramatically, and the best can produce rapid recommendations over data sets of millions of users and hundreds of thousands of products. The other edge of the sword is that recommender systems provide perfect tools for marketers and others to invade users' privacy. After all, recommenders seek to learn everything about our preferences, including what we like to read, what we like to buy, how much money we spend, and what influences us to spend it. How a recommender deals with privacy determines whether its users view it as a boon or a bane. If the recommender only uses this information to help us find items to purchase on a Web site, we will probably value the feature - it might even bring us back to shop there again. On the other hand, if the Web site sells our information to other companies so they can more effectively bother us with phone calls at dinnertime, we'll probably feel our privacy has been invaded. Privacy is a critical issue for recommender systems. In the end, personalization is an important factor in developing effective Web sites because it creates a user experience that is both compelling and sticky. The experience is compelling because it helps users find exactly the information, products, and services they need. It is sticky because a personalized Web site trains itself over time to serve its users better, which makes those users less likely to go to a new site that they would have to train all over again.

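For readers unfamiliar with how such recommenders work, here is a minimal user-based collaborative filtering sketch: predict an unseen rating as a similarity-weighted average of neighbors' ratings. It is not any particular product's algorithm, and the ratings are invented.

```python
# Minimal user-based collaborative filtering sketch (illustrative only).
from math import sqrt

ratings = {  # hypothetical user -> item -> rating
    "u1": {"A": 5, "B": 3, "C": 4},
    "u2": {"A": 4, "B": 2, "C": 5, "D": 4},
    "u3": {"A": 1, "B": 5, "D": 2},
}

def cosine(r1, r2):
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in common)
    return dot / (sqrt(sum(v * v for v in r1.values())) *
                  sqrt(sum(v * v for v in r2.values())))

def predict(user, item):
    # Weight each neighbor's rating of `item` by how similar the neighbor is.
    num = den = 0.0
    for other, r in ratings.items():
        if other != user and item in r:
            w = cosine(ratings[user], r)
            num += w * r[item]
            den += abs(w)
    return num / den if den else None

print(predict("u1", "D"))  # estimate of how much u1 would like item D
```
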
  • SCTP: new transport protocol for TCP/IP

    Publication Year: 2001 , Page(s): 64 - 69
    Cited by:  Papers (30)  |  Patents (5)
    PDF (141 KB)

    For the past 20 years (1980-2000), applications and end users of the TCP/IP suite have employed one of two protocols: the transmission control protocol or the user datagram protocol. Yet some applications already require greater functionality than what either TCP or UDP has to offer, and future applications might require even more. To extend transport layer functionality, the Internet Engineering Task Force approved the stream control transmission protocol (SCTP) as a proposed standard in October 2000. SCTP was spawned from an effort started in the IETF Signaling Transport (Sigtran) working group to develop a specialized transport protocol for call control signaling in voice-over-IP (VoIP) networks. Recognizing that other applications could use some of the new protocol's capabilities, the IETF now embraces SCTP as a general-purpose transport layer protocol, joining TCP and UDP above the IP layer. Like TCP, SCTP offers a point-to-point, connection-oriented, reliable delivery transport service for applications communicating over an IP network.

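As a rough sketch of how an application can reach SCTP alongside TCP and UDP, the snippet below opens a one-to-one style SCTP socket from Python. This assumes a Linux kernel with SCTP support; the IPPROTO_SCTP constant is not exposed on every platform, so the IANA protocol number 132 is used as a fallback, and the addresses are illustrative.

```python
# Sketch: one-to-one style SCTP socket on Linux (requires kernel SCTP support).
import socket

IPPROTO_SCTP = getattr(socket, "IPPROTO_SCTP", 132)  # 132 is the IANA number for SCTP

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_SCTP)
server.bind(("127.0.0.1", 5000))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_SCTP)
client.connect(("127.0.0.1", 5000))
client.sendall(b"hello over SCTP")

conn, _ = server.accept()
print(conn.recv(1024))  # reliable, connection-oriented delivery, much like TCP
```
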
  • Inferring user interest

    Publication Year: 2001 , Page(s): 32 - 39
    Cited by:  Papers (20)  |  Patents (23)
    PDF (465 KB)

    As the World Wide Web continues to grow, people find it impossible to access even a small portion of the information generated in a day from Usenet news, e-mail, and Web postings. Automated filters help us to prioritize and access only the information in which we're interested. Because opinions differ about the importance or relevance of information, people need personalized filters. Implicit indicators captured while users browse the Web can be as predictive of interest levels as explicit ratings.

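To make the notion of an implicit indicator concrete, here is a toy scoring function that folds a few browsing signals into an interest estimate. The specific indicators, weights, and normalization are assumptions for illustration, not the model reported in the article.

```python
# Illustrative only: combine implicit browsing indicators into an interest score.
def implicit_interest(seconds_on_page, scroll_fraction, mouse_events, followed_link):
    score = 0.0
    score += 0.4 * min(seconds_on_page / 120.0, 1.0)   # dwell time, capped at 2 minutes
    score += 0.3 * scroll_fraction                      # how much of the page was read
    score += 0.2 * min(mouse_events / 50.0, 1.0)        # pointer/keyboard activity
    score += 0.1 * (1.0 if followed_link else 0.0)      # clicked through to more content
    return score  # 0.0 (no interest) .. 1.0 (strong interest)

print(implicit_interest(seconds_on_page=90, scroll_fraction=0.8,
                        mouse_events=35, followed_link=True))
```
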
  • Probability and agents

    Publication Year: 2001 , Page(s): 77 - 79
    Cited by:  Papers (1)
    PDF (413 KB)

    To make sense of the information that agents gather from the Web, they need to reason about it. If the information is precise and correct, they can use engines such as theorem provers to reason logically and derive correct conclusions. Unfortunately, the information is often imprecise and uncertain, which means they will need a probabilistic approach. More than 150 years ago, George Boole presented the logic that bears his name. There is concern that classical logic is not sufficient to model how people do or should reason. Adopting a probabilistic approach in constructing software agents and multiagent systems simplifies some thorny problems and exposes some difficult issues that you might overlook if you used purely logical approaches or (worse!) let procedural matters monopolize design concerns. Assessing the quality of the information received from another agent is a major problem in an agent system. The authors describe Bayesian networks and illustrate how you can use them for information quality assessment.

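A two-variable Bayesian sketch of information-quality assessment, in the spirit of the column but not the authors' networks: an agent updates its belief that a source is reliable as it observes whether the source's reports check out. The priors and likelihoods are assumed values.

```python
# Tiny Bayesian update for source reliability (priors/likelihoods are assumptions).
P_RELIABLE = 0.7                      # prior belief the source is reliable
P_CORRECT_GIVEN_RELIABLE = 0.9        # reliable sources are usually right
P_CORRECT_GIVEN_UNRELIABLE = 0.4      # unreliable sources are right less often

def update(prior, report_was_correct):
    like_r = P_CORRECT_GIVEN_RELIABLE if report_was_correct else 1 - P_CORRECT_GIVEN_RELIABLE
    like_u = P_CORRECT_GIVEN_UNRELIABLE if report_was_correct else 1 - P_CORRECT_GIVEN_UNRELIABLE
    return (like_r * prior) / (like_r * prior + like_u * (1 - prior))

belief = P_RELIABLE
for outcome in [True, True, False, True]:   # outcomes of checking four reports
    belief = update(belief, outcome)
    print(round(belief, 3))                  # belief drifts with the evidence
```
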
  • Content-independent task-focused recommendation

    Publication Year: 2001 , Page(s): 40 - 47
    Cited by:  Papers (28)
    PDF (282 KB)

    A technique that correlates database items to a task adds content-independent context to a recommender system based solely on user interest ratings. In this article, we present a task-focused approach to recommendation that is entirely independent of the type of content involved. The approach leverages robust, high-performance, commercial software. We have implemented it in a live movie recommendation site and validated it with empirical results from user studies.

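A rough sketch of how task-item correlation can work without any item content (this is not the authors' commercial implementation): count how often items co-occur with a task and rank items by how specific they are to it. The log and task names are invented.

```python
# Content-independent, task-conditioned recommendation from co-occurrence counts.
from collections import Counter, defaultdict

# Hypothetical log of (task, item) pairs, e.g. movies picked for a given occasion.
log = [("date", "m1"), ("date", "m2"), ("date", "m1"),
       ("family", "m3"), ("family", "m1"), ("family", "m3")]

by_task = defaultdict(Counter)
overall = Counter()
for task, item in log:
    by_task[task][item] += 1
    overall[item] += 1

def recommend(task, k=2):
    # Rank items by how much more often they appear with this task than overall.
    scores = {item: count / overall[item] for item, count in by_task[task].items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("family"))  # items most specific to the "family" task
```
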
  • The virtual world gets physical: perspectives on personalization

    Publication Year: 2001 , Page(s): 48 - 53
    Cited by:  Papers (2)  |  Patents (12)
    PDF (291 KB)

    MusicFX and GroupCast illustrate some benefits possible from extending the personalization of electronic content in the virtual world to applications in the physical world. Utilizing individual preferences in the physical world, particularly in public spaces, infringes on people's privacy more than it does in the virtual world, where it is easier to maintain different addresses and aliases that can shield or mask personal details from online interactions. However, the use of these preferences in a group context, where some degree of plausible deniability exists, may diminish people's concerns. If sufficient benefits are provided - think of a world without "elevator music" - people might even embrace the technologies that will make adaptive environments possible.

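In the spirit of a MusicFX-like adaptive environment (this is a generic sketch, not the published MusicFX algorithm), a shared space can aggregate individual preferences while penalizing choices that anyone strongly dislikes. The ratings and weighting rule are assumptions.

```python
# Group-preference aggregation sketch for a shared physical space.
preferences = {   # hypothetical ratings, -2 (hate) .. +2 (love), per person
    "p1": {"jazz": 2, "pop": 1, "metal": -2},
    "p2": {"jazz": 1, "pop": 2, "metal": 0},
    "p3": {"jazz": 0, "pop": 1, "metal": 2},
}

def group_score(genre):
    scores = [prefs.get(genre, 0) for prefs in preferences.values()]
    return sum(scores) + 2 * min(scores)   # extra weight on the unhappiest listener

genres = {g for prefs in preferences.values() for g in prefs}
print(max(genres, key=group_score))  # the choice the room as a whole can live with
```
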
  • XML and data integration

    Publication Year: 2001 , Page(s): 75 - 76
    Cited by:  Papers (15)
    PDF (110 KB)

    XML is rapidly becoming a standard for data representation and exchange. It provides a common format for expressing both data structures and contents. As such, it can help in integrating structured, semistructured, and unstructured data over the Web. Still, it is well recognized that XML alone cannot provide a comprehensive solution to the articulated problem of data integration. There are still several challenges to face, including: developing a formal foundation for Web metadata standards; developing techniques and tools for the creation, extraction, and storage of metadata; investigating the area of semantic interoperability frameworks; and developing semantic-based tools for knowledge discovery.

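A tiny illustration of XML as a common exchange format: two sources with different native structures are mapped onto one shared element layout before integration. The target layout and sample records are invented for the example.

```python
# Map heterogeneous source records onto a shared XML layout (illustrative only).
import xml.etree.ElementTree as ET

relational_row = {"id": "42", "name": "Ada Lovelace"}           # structured source
semistructured = {"person": "Grace Hopper", "note": "pioneer"}  # semistructured source

def to_common(record, id_key=None, name_key=None):
    person = ET.Element("person")
    if id_key and id_key in record:
        person.set("id", record[id_key])
    ET.SubElement(person, "name").text = record.get(name_key, "")
    return person

catalog = ET.Element("people")
catalog.append(to_common(relational_row, id_key="id", name_key="name"))
catalog.append(to_common(semistructured, name_key="person"))
print(ET.tostring(catalog).decode())
```
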
  • Internet traffic measurement

    Publication Year: 2001 , Page(s): 70 - 74
    Cited by:  Papers (43)
    PDF (135 KB)

    The Internet's evolution over the past 30 years (1971-2001) has been accompanied by the development of various network applications. These applications range from early text-based utilities such as file transfer and remote login to the more recent advent of the Web, electronic commerce, and multimedia streaming. For most users, the Internet is simply a connection to these applications. They are shielded from the details of how the Internet works through the information-hiding principles of the Internet protocol stack, which dictates how user-level data is transformed into network packets for transport across the network and put back together for delivery at the receiving application. For many networking researchers, however, the protocols themselves are of interest. Using specialized network measurement hardware or software, these researchers collect information about network packet transmissions. With detailed packet-level measurements and some knowledge of the IP stack, they can use reverse engineering to gather significant information about both the application structure and user behavior, which can be applied to a variety of tasks such as network troubleshooting, protocol debugging, workload characterization, and performance evaluation and improvement. Traffic measurement technologies have scaled up to provide insight into fundamental behavior properties of the Internet, its protocols, and its users. The author introduces the tools and methods for measuring Internet traffic and offers highlights from research results.

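A small sketch of the aggregation step described above: turn per-packet header records into flows and per-protocol byte counts. The packet records here are hypothetical; a real study would capture them with tcpdump/libpcap or dedicated measurement hardware.

```python
# Aggregate per-packet header records into flows and a protocol traffic mix.
from collections import Counter, defaultdict

packets = [  # hypothetical (src, dst, protocol, dst_port, bytes) records
    ("10.0.0.1", "10.0.0.9", "TCP", 80, 1500),
    ("10.0.0.1", "10.0.0.9", "TCP", 80, 1500),
    ("10.0.0.2", "10.0.0.9", "UDP", 53, 120),
    ("10.0.0.1", "10.0.0.9", "TCP", 80, 400),
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
proto_bytes = Counter()
for src, dst, proto, port, size in packets:
    key = (src, dst, proto, port)          # a simple 4-field flow key
    flows[key]["packets"] += 1
    flows[key]["bytes"] += size
    proto_bytes[proto] += size

print(dict(proto_bytes))                   # traffic mix by protocol
for key, stats in flows.items():
    print(key, stats)                      # per-flow packet and byte counts
```
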

Aims & Scope

IEEE Internet Computing provides journal-quality evaluation and review of emerging and maturing Internet technologies and applications.

Meet Our Editors

Editor-in-Chief
M. Brian Blake
University of Miami