IEEE/ACM Transactions on Networking

Issue 4 • Aug. 1997

  • Efficient computation of unique input/output sequences in finite state machines

    Page(s): 585 - 599

    This paper makes two contributions toward computing unique input/output (UIO) sequences in finite-state machines. Our first contribution is to compute all UIO sequences of minimal lengths in a finite-state machine. Our second contribution is to present a generally efficient algorithm to compute a UIO sequence for each state, if it exists. We begin by defining a path vector, vector perturbation, and UIO tree. The perturbation process allows us to construct the complete UIO tree for a machine. Each sequence of input/output from the initial vector of a UIO tree to a singleton vector represents a UIO sequence. Next, we define the idea of an inference rule that allows us to infer UIO sequences of a number of states from the UIO sequence of some state. That is, for a large class of machines, it is possible to compute UIO sequences for all possible states from a small set of initial UIOs. We give a modified depth-first algorithm, called the hybrid approach, that computes a partial UIO tree, called an essential subtree, from which UIO sequences of all possible states can be inferred. Using the concept of projection machines, we show that sometimes it is unnecessary to construct even a partial subtree. We prove that if a machine remains strongly connected after deleting all the converging transitions, then all of the states have UIO sequences. To demonstrate the effectiveness of our approach, we develop a tool to perform experiments using both small and large machines.
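
    A concrete, if simplified, illustration of what a UIO sequence is: the brute-force Python sketch below searches, for each state of a toy Mealy machine, for the shortest input sequence whose output distinguishes that state from every other state. The machine, the length bound, and the exhaustive search are illustrative assumptions; the paper's path-vector/UIO-tree and inference-rule algorithms are far more efficient.

```python
from itertools import product

# Toy, fully specified Mealy machine: (state, input) -> (next_state, output).
# This machine is a made-up example, not one from the paper.
T = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "1"), ("s2", "b"): ("s1", "0"),
    ("s3", "a"): ("s1", "0"), ("s3", "b"): ("s2", "0"),
}
STATES = {s for s, _ in T}
INPUTS = {i for _, i in T}

def output_string(state, inputs):
    """Outputs produced when `inputs` is applied starting in `state`."""
    out = []
    for i in inputs:
        state, o = T[(state, i)]
        out.append(o)
    return "".join(out)

def find_uio(state, max_len=4):
    """Shortest input sequence whose output from `state` differs from the
    output it produces from every other state (brute force, bounded length)."""
    for length in range(1, max_len + 1):
        for seq in product(sorted(INPUTS), repeat=length):
            mine = output_string(state, seq)
            if all(output_string(s, seq) != mine for s in STATES if s != state):
                return seq
    return None  # no UIO sequence found up to max_len

for s in sorted(STATES):
    print(s, find_uio(s))
```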

  • The GBAR source model for VBR videoconferences

    Page(s): 554 - 560

    Heyman (1992) examined three sequences giving the number of cells per frame of a VBR encoding of videoconferences (talking heads); these sequences were produced by hardware encoders using different coding algorithms. Each sequence had a gamma marginal distribution, and the autocorrelation function was geometric up to lags of at least 3 s, which includes all autocorrelation values larger than 0.1. We present an easy-to-simulate autoregressive process that has these properties. The model is tested by comparing the cell-loss rate produced when the data trace was used as the sole source in a simulation of an ATM switch to the cell-loss rates produced when traces generated by the model were used as the source.
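
    For readers who want to reproduce such a source, a gamma-marginal process with geometric autocorrelation can be generated with a Beta-thinned autoregressive recursion of the general Gamma-Beta (GBAR) form. The sketch below is a minimal version; the shape, scale, and correlation values are arbitrary placeholders, not the paper's fitted videoconference parameters.

```python
import numpy as np

def gbar(n, shape=10.0, scale=50.0, rho=0.98, seed=0):
    """Generate n samples of a Gamma(shape, scale)-marginal AR(1) process with
    geometric autocorrelation rho**k (Gamma-Beta autoregressive construction).
    Parameter values are illustrative assumptions, not fitted values."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.gamma(shape, scale)                        # start in the stationary marginal
    for t in range(1, n):
        a = rng.beta(rho * shape, (1.0 - rho) * shape)    # Beta "thinning" factor, E[a] = rho
        b = rng.gamma((1.0 - rho) * shape, scale)         # independent Gamma innovation
        x[t] = a * x[t - 1] + b                           # preserves the Gamma marginal
    return x

cells_per_frame = np.rint(gbar(5000))                     # hypothetical cells-per-frame trace
print(cells_per_frame.mean(), cells_per_frame.std())
```

    Rounding the samples to integers yields a cells-per-frame trace that can drive an ATM multiplexer simulation of the kind used in the paper's cell-loss comparison.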

  • Generating efficient protocol code from an abstract specification

    Page(s): 514 - 524

    A protocol compiler takes as input an abstract specification of a protocol and generates an implementation of that protocol. Protocol compilers usually produce inefficient code, both in terms of code speed and code size. We show that the combination of two techniques makes it possible to build protocol compilers that generate efficient code. These techniques are: (i) the use of a compiler that generates from the specification a single tree-shaped automaton (rather than multiple independent automata) and (ii) the use of optimization techniques applied at the automaton level, i.e., on the branches of the tree. We have developed a protocol compiler that uses both these techniques. The compiler takes as input a protocol specification written in the synchronous language Esterel. The specification is compiled into a single automaton by the Esterel front-end compiler. The automaton is then optimized and converted into C code by our protocol optimizer, called HIPPCO. HIPPCO improves code performance and reduces code size by simultaneously optimizing the performance of the common path and optimizing the size of the uncommon path. We evaluate the gain expected with our approach on a real-life example, namely a working subset of the TCP protocol generated from an Esterel specification. We compare the protocol code generated with our approach to that derived from the standard BSD TCP implementation. The results are very encouraging: HIPPCO-generated code executes up to 25% fewer instructions than the BSD code for input packet processing while increasing the code size by only 25%.

  • Estimation of the cell loss ratio in ATM networks with a fuzzy system and application to measurement-based call admission control

    Page(s): 572 - 584

    An important parameter in asynchronous transfer mode (ATM)-based network design and management is the cell loss ratio (CLR) in ATM multiplexers. It is a key parameter for many vital functions in the network, such as call admission control (CAC) and bandwidth allocation. However, the CLR usually depends on many unknown and unpredictable traffic parameters, such as input traffic correlations. In this paper, we propose a simple and robust fuzzy-based algorithm to predict the CLR in large systems from a small amount of information about small systems and from the asymptotic behavior of very large systems. Unlike model-based approaches, our approximation avoids assuming any traffic parameters or arrival process. This algorithm is combined with real-time traffic measurements to yield an effective measurement-based call admission control framework for ATM networks.
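
    The paper's fuzzy predictor is not reproduced here, but its flavor can be sketched: estimate log10(CLR) for a given system size as a membership-weighted blend of anchor values, some measured on small systems and one taken from the large-system asymptote. Everything in the sketch below (the triangular memberships, the anchor sizes, and the log-CLR values) is a hypothetical illustration, not data or rules from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_log_clr(n_sources, anchors):
    """Membership-weighted (defuzzified) estimate of log10(CLR) for a
    multiplexer fed by n_sources inputs. `anchors` maps a fuzzy size label
    to (membership params, log10 CLR at that size) -- illustrative values."""
    num = den = 0.0
    for (a, b, c), log_clr in anchors.values():
        w = triangular(n_sources, a, b, c)
        num += w * log_clr
        den += w
    return num / den if den else None

# Hypothetical anchors: measured on small systems, asymptotic for large ones.
anchors = {
    "small":  ((0,   20,  60),  -3.0),   # measured on a small system
    "medium": ((20,  60,  150), -5.0),   # measured on a medium system
    "large":  ((60,  150, 400), -7.5),   # large-system asymptotic estimate
}
print(estimate_log_clr(100, anchors))    # interpolated log10(CLR)
```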

  • Adaptive rate-controlled scheduling for multimedia applications

    Page(s): 475 - 488

    We present a framework for integrated scheduling of continuous media (CM) and other applications. The framework, called ARC scheduling, consists of a rate-controlled on-line CPU scheduler, an admission control interface, a monitoring module, and a rate adaptation interface. ARC scheduling allows threads to reserve CPU time for guaranteed progress. It provides firewall protection between threads such that the progress guarantee to a thread is independent of how other threads actually make scheduling requests. Rate adaptation allows a CM application to adapt its rate to changes in its execution environment. We have implemented the framework as an extension to Solaris 2.3. We present experimental results that show that ARC scheduling is highly effective for integrated scheduling of CM and other applications in a general-purpose workstation environment. ARC scheduling is a key component of an end-system architecture we have designed and implemented to support networking with quality-of-service guarantees. In particular, it enables protocol threads to make guaranteed progress.
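
    To make the reservation idea concrete, here is a minimal sketch of the kind of admission test and rate adaptation a rate-controlled CPU reservation scheme implies. The class, its method names, and the fixed utilization bound are assumptions for illustration; they are not the paper's Solaris 2.3 interface.

```python
class CpuReservations:
    """Track per-thread CPU rate reservations and admit new ones only while
    the total reserved fraction stays under a utilization bound."""

    def __init__(self, utilization_bound=1.0):
        self.bound = utilization_bound
        self.reserved = {}          # thread id -> reserved CPU fraction

    def admit(self, thread_id, fraction):
        """Reserve `fraction` of the CPU (e.g. 0.2 = 20%) if it still fits."""
        if fraction <= 0 or sum(self.reserved.values()) + fraction > self.bound:
            return False            # admission control rejects the request
        self.reserved[thread_id] = fraction
        return True

    def adapt(self, thread_id, new_fraction):
        """Rate adaptation: try to change an existing reservation."""
        old = self.reserved.pop(thread_id, 0.0)
        if self.admit(thread_id, new_fraction):
            return True
        if old:                     # roll back if the new rate does not fit
            self.reserved[thread_id] = old
        return False

sched = CpuReservations(utilization_bound=0.9)   # keep 10% for best-effort work
print(sched.admit("video_decoder", 0.30))        # True
print(sched.admit("audio", 0.25), sched.adapt("video_decoder", 0.45))
```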

  • Study of delay jitter with and without peak rate enforcement

    Page(s): 543 - 553

    In this paper, the modification of a tagged traffic stream due to statistical multiplexing is studied by means of a numerical approach for calculating the tagged delay-jitter and interdeparture processes. Both the single-node and multiple-node cases are considered. Unlike past work, the developed approach is applicable under both the standard first-come first-served (FCFS) policy and a peak-rate-enforcing multiplexing policy. The latter policy can be adopted to reshape the tagged traffic stream within the network. Restoring a shaped traffic profile within the network may be necessary to obtain some of the benefits for which the original shaping is carried out at the network edge. This study also provides results and insight regarding the potential gains of reregulation within the network.
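
    The difference between the two policies can be illustrated with a toy fluid-time model: under FCFS a cell departs as soon as the server is free, while a peak-rate-enforcing spacer additionally keeps consecutive departures of the tagged stream at least 1/peak_rate apart. The sketch below is an illustrative assumption of that behavior, not the paper's numerical method.

```python
def fcfs_departures(arrivals, service_time):
    """FCFS: each cell departs once the previous one frees the server."""
    deps, free_at = [], 0.0
    for a in arrivals:
        start = max(a, free_at)
        free_at = start + service_time
        deps.append(free_at)
    return deps

def peak_rate_enforced(arrivals, service_time, peak_rate):
    """Spacer: additionally keep tagged departures at least 1/peak_rate apart,
    re-imposing the peak-rate profile inside the network."""
    deps, free_at, last_dep = [], 0.0, float("-inf")
    for a in arrivals:
        start = max(a, free_at, last_dep + 1.0 / peak_rate - service_time)
        free_at = start + service_time
        last_dep = free_at
        deps.append(free_at)
    return deps

arrivals = [0.0, 0.1, 0.2, 3.0]          # a clumped burst, then a gap
print(fcfs_departures(arrivals, 0.05))
print(peak_rate_enforced(arrivals, 0.05, peak_rate=2.0))  # >= 0.5 s spacing
```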

  • ASN.1 protocol specification for use with arbitrary encoding schemes

    Page(s): 502 - 513

    Abstract Syntax Notation One (ASN.1) is a language for describing structured information. It is generally used in the specification of communications protocols. To date, it has primarily been used in the specification of OSI application-layer protocols and protocols which use the basic encoding rules (BER) and their derivatives as encoding schemes. While it is difficult to understand why ASN.1 has not received more widespread use for protocol specification, one possible explanation is its perceived reliance on BER and its derivatives as encoding schemes. This work attempts to address this misconception by providing mappings for ASN.1 constructs directly onto encoding schemes other than BER. Using the techniques described here, ASN.1 can describe new and existing protocols regardless of their encoding scheme. The effectiveness of our techniques is demonstrated using a protocol analyzer (sometimes called a packet analyzer). Protocol analyzers decode and display protocol data units (PDUs). A major limitation of many protocol analyzers is their inability to dynamically incorporate new and proprietary PDUs. A “proof of concept” for the solutions suggested in this paper is provided via a protocol analyzer capable of processing protocols directly from an ASN.1 specification.

  • Iterative approach to optimizing convergence routing priorities

    Page(s): 530 - 542

    This paper shows how to optimize the routing decisions in a nondeterministic routing algorithm called convergence routing, in which routes may change depending on the traffic conditions. The routing algorithm guarantees loss-free delivery of data packets from bursty sources and a deterministic bound on the route length in arbitrary-topology networks. The routing decisions are based on assigning routing priorities to the links such that a packet is forwarded to the highest-priority link that is not blocked. Routing priorities are assigned using a local-greedy metric which minimizes the distance (number of hops) to the destination. This work shows that routing decisions using a local-greedy metric are not optimal and that the performance of the algorithm can be improved substantially by using new measures. Thus, various look-ahead metrics are suggested that take into account the potential gain at the other switching nodes on the way to a packet's destination. The contributions of this work are: (1) a new analytical model to capture the behavior of a switching node; (2) an iterative optimization technique to set routing priorities according to various look-ahead measures; and (3) heuristics to ensure the stability of the routing priorities. The optimization objective is to maximize the throughput by minimizing the maximum total flow carried on a link in the network under a static traffic model. The performance is studied computationally on various networks and traffic matrices. It is shown that up to a 50% performance increase can be obtained by optimizing the routing priorities.
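
    As a small, self-contained illustration of the gap between a local-greedy metric and a look-ahead metric, the sketch below ranks a node's outgoing links toward a destination first by hop count alone and then by hop count with a simple one-step look-ahead bonus for next hops that keep more progress options open. The topology, the metric, and the tie-breaking are illustrative assumptions, not the paper's analytical model or iterative optimization.

```python
from collections import deque

# Hypothetical switch topology as an undirected adjacency list.
G = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C", "F"],
    "E": ["C", "F"],
    "F": ["D", "E"],
}

def hop_dist(dest):
    """BFS hop counts from every node to `dest`."""
    dist = {dest: 0}
    q = deque([dest])
    while q:
        u = q.popleft()
        for v in G[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def link_priorities(node, dest, look_ahead=False):
    """Rank `node`'s outgoing links toward `dest`.
    Greedy: fewest hops from the next node.
    Look-ahead: also reward next nodes with many onward links that still make
    progress, since a blocked link forces a lower-priority choice later."""
    d = hop_dist(dest)
    def score(nbr):
        if not look_ahead:
            return (d[nbr],)
        onward = sum(1 for w in G[nbr] if d[w] < d[nbr])
        return (d[nbr], -onward)
    return sorted(G[node], key=score)

print(link_priorities("A", "F"))                   # greedy ordering
print(link_priorities("A", "F", look_ahead=True))  # look-ahead prefers C
```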

  • A direct signaling system for flexible access and deployment of telecommunication services

    Page(s): 489 - 501

    In current telecommunication networks, the value-added services available to subscribers are almost exclusively offered by the carrier providing local telephony service. This results from two factors: the processing for services is either based in or triggered from software executing on the access telecommunication switches, and the signaling links over which subscribers request services are terminated on these access switches. In a direct signaling system, the signaling link from a subscriber does not necessarily terminate on an access switch, but rather on an intelligent server we call a call server. The call server invokes service functions and coordinates their interactions. This direct signaling system may be overlaid on various types of access networks, including circuit-switched telephony, packet telephony, wireless local loop, cable, etc. We discuss the design, implementation, and performance of a direct signaling system, including procedures that provide varying amounts of integration with current telecommunication systems. Our results show that a direct signaling system using a low-delay direct signaling link and in which no changes are made to existing telecommunication switches incurs approximately 100 ms higher postdial delay than a standard ISDN system. A direct signaling system in which telecommunication switches provide open interfaces and coordinate processing with direct signaling servers incurs approximately 100 ms lower postdial delay than a standard ISDN system.

  • Generalized guaranteed rate scheduling algorithms: a framework

    Page(s): 561 - 571

    In this paper, we define a class of generalized guaranteed rate (GR) scheduling algorithms that includes algorithms which allocate a variable rate to the packets of a flow. We define work-conserving generalized virtual clock, packet-by-packet generalized processor sharing, and self-clocked fair queueing scheduling algorithms that can allocate a variable rate to the packets of a flow. We also define scheduling algorithms suitable for servers where packet fragmentation may occur. We demonstrate that if a class of rate controllers is employed for a flow in conjunction with any scheduling algorithm in GR, then the resulting non-work-conserving algorithm also belongs to GR. This leads to the definition of several non-work-conserving algorithms. We then present a method for deriving the delay guarantee of a network of servers when: (1) different rates are allocated to packets of a flow at different servers along the path and the bottleneck server for each packet may be different, and (2) packet fragmentation and/or reassembly may occur. This delay guarantee enables a network to provide various service guarantees to flows conforming to any specification. We illustrate this by using the delay guarantee to derive delay bounds for flows conforming to leaky bucket, exponentially bounded burstiness, and flow specification. Our method for determining these bounds is valid in internetworks and leads to tighter results.
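
    Guaranteed-rate frameworks of this kind revolve around a per-flow clock recursion of roughly the following shape: each packet's guarantee time is the larger of its arrival time and the previous packet's guarantee time, plus its length divided by the rate allocated to that packet. The Python sketch below computes those values for one flow with per-packet rates; the function name, the example numbers, and the exact recursion are assumptions for illustration rather than the paper's definitions.

```python
def guarantee_times(packets):
    """Per-packet guarantee (virtual clock) times for one flow.

    `packets` is a list of (arrival_time, length_bits, rate_bps) tuples;
    letting the rate vary per packet is the "generalized" part.
    GRC(1) = A(1) + L1/r1;  GRC(j) = max(A(j), GRC(j-1)) + Lj/rj.
    """
    grc, times = 0.0, []
    for j, (arrival, length, rate) in enumerate(packets):
        start = arrival if j == 0 else max(arrival, grc)
        grc = start + length / rate
        times.append(grc)
    return times

# Illustrative flow: the third packet is granted a higher rate.
flow = [(0.00, 8000, 1e6), (0.001, 8000, 1e6), (0.002, 8000, 2e6)]
print(guarantee_times(flow))   # [0.008, 0.016, 0.02]
```

    A GR scheduler's per-server delay bound is then, roughly, this guarantee time plus a scheduler-dependent constant (for example, a maximum packet transmission time).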

  • Alternative specification and verification of a periodic state exchange protocol

    Page(s): 525 - 529

    The formal analysis of a data-transfer protocol based on the novel idea of periodic state exchange was presented by Gouda et al. (IEEE Trans. Commun., vol. 43, no. 9, pp. 2475-2484, 1995). In certain environments, such as the Internet, the requirements for the correctness of the protocol may impose impractical constraints. We present an alternative specification of the periodic state-exchange protocol based on the observation that the protocol can be treated as a special sliding-window protocol. Although our protocol behaves better in an Internet-like environment, it has other shortcomings with respect to the original. The comparison of the alternatives reveals the trade-offs in their design.

  • ASHs: application-specific handlers for high-performance messaging

    Page(s): 460 - 474

    Application-specific safe message handlers (ASHs) are designed to provide applications with hardware-level network performance. ASHs are user-written code fragments that safely and efficiently execute in the kernel in response to message arrival. ASHs can direct message transfers (thereby eliminating copies) and send messages (thereby reducing send-response latency). In addition, the ASH system provides support for dynamic integrated layer processing (thereby eliminating duplicate message traversals) and dynamic protocol composition (thereby supporting modularity). ASHs offer this high degree of flexibility while still providing network performance as good as, or (if they exploit application-specific knowledge) even better than, hard-wired in-kernel implementations. A combination of user-level microbenchmarks and end-to-end system measurements using TCP demonstrates the benefits of the ASH system.

  • Increasing the portability and re-usability of protocol code

    Page(s): 445 - 459

    Deploying protocols is an expensive and time-consuming process today. One reason is the high cost of developing, testing, and installing protocol implementations. To reduce this difficulty, protocols are developed and executed within environments called protocol subsystems, and protocol software is often ported instead of being coded from scratch. Unfortunately, today a variety of protocol subsystems offer a plethora of features, functionality, and drawbacks; the differences among them often reduce the portability and reusability of protocol code, and therefore present barriers to the deployment of new protocols. In this paper, we consider differences in subsystems and their effect on the portability and reusability of protocols and protocol implementations. We then propose two different approaches, each optimized for a different situation, that allow protocol code implemented in one subsystem to be used without modification within other subsystems, and thus reduce the barriers to protocol deployment. We relate our experiences designing, implementing, and measuring the performance of each approach using, as a baseline, an AppleTalk protocol stack we have developed.

Aims & Scope

The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking.

Meet Our Editors

Editor-in-Chief
R. Srikant
Dept. of Electrical & Computer Engineering
Univ. of Illinois at Urbana-Champaign