IEEE Journal on Selected Areas in Communications

Issue 8 • October 1995

Displaying Results 1 - 12 of 12
  • TCP Vegas: end to end congestion avoidance on a global Internet

    Publication Year: 1995, Page(s): 1465-1480
    Cited by: Papers (393) | Patents (12)

    Vegas is an implementation of TCP that achieves between 37 and 71% better throughput on the Internet, with one-fifth to one-half the losses, compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study of the Vegas and Reno implementations of TCP, using both simulations and measurements on the Internet.
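
One of Vegas's key techniques is its congestion-avoidance mechanism, which compares the expected throughput (window/BaseRTT) with the actual throughput (window/RTT) and keeps the difference between two thresholds. A minimal sketch, in which the threshold values and unit conventions are illustrative assumptions rather than the paper's:

```python
def vegas_update(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """Return the new congestion window (in segments) after one RTT.

    alpha/beta are hypothetical thresholds on the estimated number of
    extra segments queued in the network.
    """
    expected = cwnd / base_rtt              # best-case throughput
    actual = cwnd / rtt                     # measured throughput
    diff = (expected - actual) * base_rtt   # extra segments buffered in the path
    if diff < alpha:                        # too little data in flight: grow
        return cwnd + 1
    if diff > beta:                         # queues building up: back off
        return cwnd - 1
    return cwnd                             # within the target band: hold
```

The point of the rule is that Vegas reacts to incipient queueing (RTT inflation) rather than waiting for loss, which is consistent with the lower loss rates reported above.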

  • Security, payment, and privacy for network commerce

    Publication Year: 1995, Page(s): 1523-1531
    Cited by: Papers (6) | Patents (9)

    As the Internet is used to a greater extent in business, issues of protection and privacy will have more importance. Users and organizations must have the ability to control reads and writes to network-accessible information, they must be assured of the integrity and confidentiality of the information accessed over the net, and they must have a means to determine the security, competence, and honesty of the commercial service providers with which they interact. They must also be able to pay for purchases made on the network, and they should be free from excessive monitoring of their activities. This paper discusses characteristics of the Internet that make it difficult to provide such assurances and surveys some of the techniques that can be used to protect users of the network.

  • A parameterizable methodology for Internet traffic flow profiling

    Publication Year: 1995, Page(s): 1481-1494
    Cited by: Papers (90) | Patents (7)

    We present a parameterizable methodology for profiling Internet traffic flows at a variety of granularities. Our methodology differs from many previous studies that have concentrated on end-point definitions of flows in terms of state derived from observing the explicit opening and closing of TCP connections. Instead, our model defines flows based on traffic satisfying various temporal and spatial locality conditions, as observed at internal points of the network. This approach to flow characterization helps address some central problems in networking based on the Internet model. Among them are route caching, resource reservation at multiple service levels, usage-based accounting, and the integration of IP traffic over an ATM fabric. We first define the parameter space and then concentrate on metrics characterizing both individual flows and the aggregate flow profile. We consider various granularities of the definition of a flow, such as by destination network, host-pair, or host and port quadruple. We include some measurements based on case studies we undertook, which yield significant insights into some aspects of Internet traffic, including demonstrating (i) the brevity of a significant fraction of IP flows at a variety of traffic aggregation granularities, (ii) that the number of host-pair IP flows is not significantly larger than the number of destination network flows, and (iii) that schemes for caching traffic information could significantly benefit from using application information.
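
The timeout-based flow model above can be sketched as follows. The 64-second idle timeout, the crude /24-style destination-network aggregation, and the field names are illustrative assumptions, not the paper's parameters:

```python
TIMEOUT = 64.0  # seconds of inactivity after which a flow is considered ended

def flow_key(pkt, granularity):
    """Map a packet to a flow identifier at the chosen granularity."""
    if granularity == "dst-network":
        return pkt["dst"].rsplit(".", 1)[0]   # crude /24-style aggregation
    if granularity == "host-pair":
        return (pkt["src"], pkt["dst"])
    if granularity == "quadruple":
        return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
    raise ValueError(granularity)

def count_flows(packets, granularity):
    """Count flows: a new flow starts when a key has been idle > TIMEOUT."""
    last_seen, flows = {}, 0
    for pkt in packets:                       # packets sorted by timestamp
        key = flow_key(pkt, granularity)
        if key not in last_seen or pkt["t"] - last_seen[key] > TIMEOUT:
            flows += 1
        last_seen[key] = pkt["t"]
    return flows
```

Running the same trace through different `granularity` values illustrates the aggregation effect the abstract measures, e.g. host-and-port flows collapsing into fewer destination-network flows.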

  • ATM cell delay and loss for best-effort TCP in the presence of isochronous traffic

    Publication Year: 1995, Page(s): 1457-1464
    Cited by: Papers (6)

    This paper reports the findings of a simulation study of the queueing behavior of “best-effort” traffic in the presence of constant bit-rate and variable bit-rate isochronous traffic. In this study, best-effort traffic refers to ATM cells that support communications between host end systems executing various applications and exchanging information using TCP/IP. The performance measures considered are TCP cell loss, TCP packet loss, mean cell queueing delay, and mean cell queue length. Our simulation results show that, under certain conditions, best-effort TCP traffic may experience as much as 2% cell loss. Our results also show that the probability of cell and packet loss decreases logarithmically with increased buffer size.

  • Type-of-service routing in datagram delivery systems

    Publication Year: 1995, Page(s): 1411-1425
    Cited by: Papers (7)

    The Internet is expected to support various services, including best-effort services and guaranteed services. For best-effort services, we propose a new approach to achieving type-of-service (TOS) classes with adaptive next-hop routing. We consider two TOS classes, namely, delay-sensitive and throughput-sensitive. As in routing protocols such as OSPF and integrated IS-IS, each node has a different next-hop for each destination and TOS class. Traditionally, a node has a single FCFS queue for each outgoing link, and the next-hops are computed using link measurements. In our approach, we attempt to isolate the two traffic classes by using for each outgoing link a separate FCFS queue for each TOS class; the link is shared cyclicly between its TOS queues. The next-hops for the delay-sensitive traffic adapt to link delays of that traffic. The next-hops for the throughput-sensitive traffic adapt to overall link utilizations. We compare our approach with the traditional approach using discrete-event simulation and Lyapunov analysis (for stability of routes). Our approach offers lower end-to-end delay to the delay-sensitive traffic. A related property is that the routes for the delay-sensitive traffic are more stable, i.e., exhibit fewer oscillations. An unexpected property is that the overall end-to-end delay is lower, because the throughput-sensitive traffic moves away to under-utilized routes.
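
The per-link queueing discipline described here, one FCFS queue per TOS class with the link shared cyclically between the queues, can be sketched as a simple round-robin server. Class names and the packet representation are illustrative:

```python
from collections import deque

class TOSLink:
    """One outgoing link: a FCFS queue per TOS class, served round-robin."""

    def __init__(self, classes=("delay", "throughput")):
        self.queues = {c: deque() for c in classes}
        self.order = list(classes)
        self.turn = 0  # which class gets the next transmission opportunity

    def enqueue(self, tos_class, packet):
        self.queues[tos_class].append(packet)

    def transmit(self):
        """Send one packet, cycling between the TOS queues; skip empty ones."""
        for _ in range(len(self.order)):
            q = self.queues[self.order[self.turn]]
            self.turn = (self.turn + 1) % len(self.order)
            if q:
                return q.popleft()
        return None  # all queues empty
```

The cyclic sharing is what isolates the classes: a burst of throughput-sensitive traffic cannot monopolize the link ahead of queued delay-sensitive packets.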

  • Electronic marking and identification techniques to discourage document copying

    Publication Year: 1995, Page(s): 1495-1504
    Cited by: Papers (86) | Patents (29)

    Modern computer networks make it possible to distribute documents quickly and economically by electronic means rather than by conventional paper means. However, the widespread adoption of electronic distribution of copyrighted material is currently impeded by the ease of unauthorized copying and dissemination. In this paper we propose techniques that discourage unauthorized distribution by embedding each document with a unique codeword. Our encoding techniques are indiscernible to readers, yet enable us to identify the sanctioned recipient of a document by examining a recovered document. We propose three coding methods, describe one in detail, and present experimental results showing that our identification techniques are highly reliable, even after documents have been photocopied.
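
The abstract does not detail the three coding methods, so the following is a purely illustrative sketch in the spirit of this line of work: a recipient-specific codeword is embedded by slightly shifting line baselines, and recovered by comparison with the original. The shift magnitude and representation are assumptions:

```python
SHIFT = 0.5  # points; small enough to be indiscernible to readers (assumption)

def embed(baselines, codeword_bits):
    """Shift line baselines up/down slightly to encode a per-recipient codeword."""
    marked = list(baselines)
    for i, bit in enumerate(codeword_bits):
        marked[i] += SHIFT if bit else -SHIFT
    return marked

def recover(marked, original):
    """Recover the codeword by comparing a recovered copy against the original."""
    return [1 if m > o else 0 for m, o in zip(marked, original)]
```

Because the decoder only needs the sign of each shift, a scheme like this can tolerate the uniform distortions introduced by photocopying, which is the robustness property the experiments above measure.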

  • Distributed multicast address management in the global Internet

    Publication Year: 1995, Page(s): 1445-1456
    Cited by: Papers (7) | Patents (1)

    We describe a distributed architecture for managing multicast addresses in the global Internet. A multicast address space partitioning scheme is proposed, based on the unicast host address and a per-host address management entity. By noting that port numbers are an integral part of end-to-end multicast addressing, we present a single, unified solution to the two problems of dynamic multicast address management and port resolution. We then present a framework for the evaluation of multicast address management schemes, and use it to compare our design with three approaches, as well as a random allocation strategy. The criteria used for the evaluation are blocking probability and consistency, address acquisition delay, the load on address management entities, robustness against failures, and processing and communications overhead. With the distributed scheme the probability of blocking for address acquisition is reduced by several orders of magnitude, to insignificant levels, while consistency is maintained. At the same time, the address acquisition delay is reduced to a minimum by serving the request within the host itself. It is also shown that the scheme generates much less control traffic, is more robust against failures, and puts much less load on address management entities as compared with the other three schemes. The random allocation strategy is shown to be attractive primarily due to its simplicity, although it does have several drawbacks stemming from its lack of consistency (addresses may be allocated more than once).
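
The partitioning idea, dividing the multicast address space by unicast host address so that each host's management entity allocates from its own disjoint block, can be sketched minimally. The block size and numeric layout here are assumptions for illustration, not the paper's encoding:

```python
BLOCK_SIZE = 256  # multicast addresses per host block (illustrative assumption)

class HostAddressManager:
    """Per-host address management entity allocating from a private partition."""

    def __init__(self, host_id):
        self.base = host_id * BLOCK_SIZE  # this host's slice of the space
        self.next_free = 0

    def acquire(self):
        """Allocate a multicast address locally, with no network round trip."""
        if self.next_free >= BLOCK_SIZE:
            raise RuntimeError("local block exhausted")
        addr = self.base + self.next_free
        self.next_free += 1
        return addr
```

Because blocks are disjoint by construction, two hosts can never hand out the same address (consistency), and serving requests locally is what drives both the blocking probability and the acquisition delay toward zero.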

  • The viewserver hierarchy for interdomain routing: protocols and evaluation

    Publication Year: 1995, Page(s): 1396-1410
    Cited by: Papers (8) | Patents (12)

    We present an interdomain routing protocol based on a new hierarchy, referred to as the viewserver hierarchy. The protocol satisfies policy and type of service (ToS) constraints, adapts to dynamic topology changes including failures that partition domains, and scales well to a large number of domains without losing detail (unlike the usual scaling technique of aggregating domains into superdomains). Domain-level views are maintained by special nodes called viewservers. Each viewserver maintains a view of a surrounding precinct. Viewservers are organized hierarchically. To obtain domain-level source routes, the views of one or more viewservers are merged (up to a maximum of twice the levels in the hierarchy). We also present a model for evaluating interdomain routing protocols, and apply this model to compare our viewserver hierarchy against the simple approach where each node maintains a domain-level view of the entire internetwork. Our results indicate that the viewserver hierarchy finds many short valid paths and reduces the memory requirement by two orders of magnitude.

  • Networked information resource discovery: an overview of current issues

    Publication Year: 1995, Page(s): 1505-1522
    Cited by: Papers (3)

    Users need a new class of information retrieval systems to help them utilize effectively the increasingly vast selection of networked information resources becoming available on the Internet. These systems, usually called network information discovery and retrieval (NIDR) systems, must operate in a highly demanding, very large-scale distributed environment that encompasses huge numbers of autonomously managed and extremely heterogeneous resources. The design of successful NIDR systems demands a synthesis of technologies and practices from computer science, computer-communications networking, information science, librarianship, and information management. This paper discusses the range of potential functional requirements for information resource discovery and selection, issues involved in describing and classifying network resources to support discovery and selection processes, and architectural frameworks for collecting and managing the information bases involved. It also includes a survey and analysis of selected operational prototypes and production systems.

  • An empirical evaluation of virtual circuit holding time policies in IP-over-ATM networks

    Publication Year: 1995, Page(s): 1371-1382
    Cited by: Papers (8) | Patents (4)

    When carrying Internet protocol (IP) traffic over an asynchronous transfer mode (ATM) network, the ATM adaptation layer must determine how long to hold a virtual circuit opened to carry an IP datagram. We present a formal statement of the problem and carry out a detailed empirical examination of various holding time policies taking into account the issue of network pricing. We offer solutions for two natural pricing models, the first being a likely pricing model of future ATM networks, while the second is based on characteristics of current networks. For each pricing model, we study a variety of simple nonadaptive policies as well as easy to implement policies that adapt to the characteristics of the IP traffic. We simulate our policies on actual network traffic, and find that policies based on least recently used (LRU) perform well, although the best adaptive policies provide a significant improvement over LRU.
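
An LRU holding-time policy of the kind evaluated here can be sketched with an ordered map: keep a bounded number of virtual circuits open, and when a datagram needs a circuit to a destination that is not open, open one, closing the least recently used circuit if the table is full. The capacity bound and setup counter are illustrative assumptions:

```python
from collections import OrderedDict

class LRUCircuitTable:
    """Hold at most `capacity` open VCs; evict the least recently used."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.open_vcs = OrderedDict()  # destination -> circuit, MRU at the end
        self.setups = 0                # circuits opened: a proxy for setup cost

    def send(self, destination):
        """Send a datagram, reusing or opening a VC to `destination`."""
        if destination in self.open_vcs:
            self.open_vcs.move_to_end(destination)  # mark most recently used
            return
        if len(self.open_vcs) >= self.capacity:
            self.open_vcs.popitem(last=False)       # close the LRU circuit
        self.open_vcs[destination] = object()       # stand-in for a real VC
        self.setups += 1
```

Counting `setups` against a per-circuit price and per-second holding cost is the kind of trade-off the paper's pricing models formalize.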

  • A flexible network architecture for data multicasting in “multiservice networks”

    Publication Year: 1995, Page(s): 1426-1444
    Cited by: Papers (7)

    The paper describes a canonical model of data-transport architecture that offers a flexible framework for implementations of data multicasting on backbone networks to support multiservice applications (e.g., videoconferencing, digital TV broadcast). The architecture is based on acyclic graph structured communication channels that provide connectivity among data sources and destinations through switches and links in a backbone network. The paper adopts a network-wide logical addressing of communication channels, which allows data multicasting to be realized on specific backbone networks by establishing local bindings between a logical address and the information on network-specific routing of data over switches and links. The approach allows various sources to share the switches and links in a multicast path connecting to destinations. This is a desirable feature in view of the significant reduction in network routing control costs and data transfer costs when dealing with high-volume multisource data (say, in videoconferencing). In addition, logical addressing allows grouping of selected destinations to overlay different “virtual networks” on a base-level multicast channel (e.g., private discussion groups in a conference). As a demonstration of architectural flexibility, the paper describes the embedding of our multicast model on sample backbone networks capable of supporting multiservice applications: interconnected LANs, ATM networks, and high-speed public data networks (viz., SMDS networks).

  • Distributed, scalable routing based on vectors of link states

    Publication Year: 1995, Page(s): 1383-1395
    Cited by: Papers (29) | Patents (16)

    We present a new method for distributed routing in computer networks and internets using link-state information. Link vector algorithms (LVAs) are introduced for the distributed maintenance of routing information in large networks and internets. According to an LVA, each router maintains a subset of the topology that corresponds to its adjacent links and those links used by its neighbor routers in their preferred paths to known destinations. Based on that subset of topology information, the router derives its own preferred paths and communicates the corresponding link-state information to its neighbors. An update message contains a vector of updates; each such update specifies a link and its parameters. LVAs can be used for different types of routing. The correctness of LVAs is verified for arbitrary types of routing when correct and deterministic algorithms are used to select preferred paths at each router and each router is able to differentiate old updates from new. LVAs are shown to have better performance than the ideal link-state algorithm based on flooding and the distributed Bellman-Ford algorithm.
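
The maintenance rule, in which each router keeps only its adjacent links plus the links its neighbors report on their preferred paths, and computes paths over that subset, can be sketched as follows. The graph encoding and function names are illustrative, and shortest-path selection stands in for the generic "preferred path" computation:

```python
import heapq

def dijkstra(links, source):
    """Shortest-path tree over a set of directed links {(u, v): cost}."""
    adj = {}
    for (u, v), cost in links.items():
        adj.setdefault(u, []).append((v, cost))
    dist, prev = {source: 0}, {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in adj.get(u, []):
            if d + cost < dist.get(v, float("inf")):
                dist[v], prev[v] = d + cost, u
                heapq.heappush(heap, (d + cost, v))
    return dist, prev

def preferred_links(links, source):
    """The link vector a router reports: links lying on its preferred paths."""
    _, prev = dijkstra(links, source)
    return {(u, v) for v, u in prev.items()}

def topology_subset(adjacent, neighbor_vectors):
    """A router's maintained topology: adjacent links plus neighbors' vectors."""
    subset = dict(adjacent)
    for vector in neighbor_vectors:
        subset.update(vector)
    return subset
```

The contrast with flooding is that a router propagates only the links in its `preferred_links` vector, not every link-state record it has ever heard, which is where the performance advantage claimed above comes from.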


Aims & Scope

IEEE Journal on Selected Areas in Communications focuses on all telecommunications, including telephone, telegraphy, facsimile, and point-to-point television, by electromagnetic propagation.

Meet Our Editors

Editor-in-Chief
Muriel Médard
MIT