
Proceedings of the 11th IEEE International Conference on Network Protocols (ICNP 2003)

Date: 4-7 Nov. 2003


Results 1-25 of 32
  • Packet classification using extended TCAMs

    Publication Year: 2003, Page(s): 120-131
    Cited by: Papers (62) | Patents (2)

    TCAMs are the most popular practical method for implementing packet classification in high performance routers. Their principal drawbacks are high power consumption and inefficient representation of filters with port ranges. A recent paper [Narlikar, et al., 2003] showed how partitioned TCAMs could be used to implement IP route lookup with dramatically lower power consumption. We extend the ideas in [Narlikar, et al., 2003] to address the more challenging problem of general packet classification. We describe two extensions to the standard TCAM architecture. The first organizes the TCAM as a two-level hierarchy in which an index block is used to enable/disable the querying of the main storage blocks. The second incorporates circuits for range comparisons directly within the TCAM memory array. Extended TCAMs can deliver high performance (100 million lookups per second) for large filter sets (100,000 filters), while reducing power consumption by a factor of ten and improving space efficiency by a factor of three.

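    The two-level lookup described above can be sketched in software (a hypothetical, much-simplified model; class and field names are ours, not the paper's): an index stage decides which storage blocks are enabled, so only a fraction of the entries is searched per lookup.

```python
def matches(key, value, mask):
    """Ternary match: bits where mask is 0 are wildcards."""
    return (key & mask) == (value & mask)

class IndexedTCAM:
    def __init__(self, index_bits):
        self.index_bits = index_bits   # high-order bits used by the index stage
        self.blocks = {}               # index pattern -> list of (value, mask, action)

    def add(self, value, mask, action, width=32):
        shift = width - self.index_bits
        idx = (value >> shift, mask >> shift)   # index pattern of this filter
        self.blocks.setdefault(idx, []).append((value, mask, action))

    def lookup(self, key, width=32):
        hi = key >> (width - self.index_bits)
        # Only blocks whose index pattern matches the key are "enabled";
        # the others are never searched (the power saving in hardware).
        for (iv, im), entries in self.blocks.items():
            if (hi & im) == (iv & im):
                for value, mask, action in entries:
                    if matches(key, value, mask):
                        return action
        return None
```

    In hardware the index stage gates the power to whole storage blocks; here it merely prunes the search, but the control flow is the same.
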
  • Resilient peer-to-peer streaming

    Publication Year: 2003, Page(s): 16-27
    Cited by: Papers (89) | Patents (8)

    We consider the problem of distributing "live" streaming media content to a potentially large and highly dynamic population of hosts. Peer-to-peer content distribution is attractive in this setting because the bandwidth available to serve content scales with demand. A key challenge, however, is making content distribution robust to peer transience. Our approach to providing robustness is to introduce redundancy, both in network paths and in data. We use multiple, diverse distribution trees to provide redundancy in network paths and multiple description coding (MDC) to provide redundancy in data. We present a simple tree management algorithm that provides the necessary path diversity and describe an adaptation framework for MDC based on scalable receiver feedback. We evaluate these using MDC applied to real video data coupled with real usage traces from a major news site that experienced a large flash crowd for live streaming content. Our results show very significant benefits in using multiple distribution trees and MDC, with a 22 dB improvement in PSNR in some cases.

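    The path-diversity argument admits a toy illustration (our own sketch, not the paper's algorithm): each MDC description travels down its own tree, so a failed peer degrades playback rather than interrupting it.

```python
def surviving_descriptions(paths, failed):
    """paths[i] lists the ancestor peers of one receiver in tree i; description
    i arrives iff no ancestor on that path has failed."""
    return [i for i, path in enumerate(paths) if not set(path) & failed]

def playback_quality(received, total):
    """MDC degrades gracefully: quality grows with each extra description."""
    return len(received) / total
```

    With a single tree, any ancestor failure would drop every description at once; with diverse trees, the same failure costs at most one.
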
  • Non-uniform information dissemination for sensor networks

    Publication Year: 2003, Page(s): 295-304
    Cited by: Papers (22)

    Future smart environments are characterized by multiple nodes that sense, collect, and disseminate information about environmental phenomena through a wireless network. In this paper, we define a set of applications that require a new form of distributed knowledge about the environment, referred to as non-uniform information granularity. By non-uniform information granularity we mean that the required accuracy or precision of information is proportional to the distance between a source node (information producer) and the current sink node (information consumer). That is, as the distance between the source node and sink node increases, loss in information precision is acceptable. Applications that can benefit from this type of knowledge range from battlefield scenarios to rescue operations. The main objectives of this paper are twofold: first, we precisely define non-uniform information granularity; second, we describe different protocols that achieve non-uniform information dissemination and analyze these protocols based on complexity, energy consumption, and accuracy of information.

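    The definition lends itself to a small numeric sketch (the quantization scheme here is our hypothetical illustration): the farther the sink is from the source, the coarser the value it needs to receive.

```python
def granularity(distance, base=1.0, factor=2.0):
    """Acceptable error grows with source-to-sink distance (in hops)."""
    return base * factor ** distance

def disseminated_value(value, distance):
    """Quantize a reading onto a grid that coarsens with distance, so distant
    sinks can be served with fewer, less precise updates."""
    g = granularity(distance)
    return round(value / g) * g
```
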
  • The AD-MIX protocol for encouraging participation in mobile ad hoc networks

    Publication Year: 2003, Page(s): 156-167
    Cited by: Papers (5)

    Mobile ad hoc networks are autonomous self-organized networks in which each node relies on the other nodes in the network to perform routing on its behalf. Proper functioning of the network is dependent on participation and cooperation of the nodes in routing and packet forwarding. Unfortunately, providing these services may not be in the best interest of a mobile node, since it results in the depletion of the node's resources. Selfish behavior by a node may result in degraded network performance due to denial of service, decrease in network throughput and partitioning of the network. Because it is in a node's interest not to forward traffic, nodes should be given some form of incentive for the services they provide. In this paper, we address the problem of selfishness in mobile ad hoc networks by proposing a protocol called AD-MIX that encourages participation. AD-MIX discourages selfishness by concealing the true destination of packets from intermediate nodes along the path, forcing a node to participate or risk dropping packets destined for itself. Simulation results show that employing AD-MIX encourages participation without a significant increase in overhead. In addition to encouraging participation, AD-MIX also facilitates anonymization and secure communication between nodes.

  • A file-centric model for peer-to-peer file sharing systems

    Publication Year: 2003, Page(s): 28-37
    Cited by: Papers (2)

    Peer-to-peer systems have quickly become a popular way for file sharing and distribution. In this paper, we focus on the subsystem consisting of peers and their actions relative to a specific file and develop a simple theoretical file-centric model for the subsystem. We begin with a detailed model that tracks the complete system state. To deal with the large system state space, we investigate a decomposed model, which not only greatly reduces the complexity of solving the system, but also provides a flexible framework for modeling multiple classes of peers and new system features. Using the model, we can study performance measures of a system, such as throughput, success probability of a file search, and number of file replicas in the system. Our model can also be used to understand the impact of user behavior and new system features. As examples, we investigate the effect of freeloaders, holding-enabled downloading, and decoys in the paper.

  • Power adaptive broadcasting with local information in ad hoc networks

    Publication Year: 2003, Page(s): 168-178
    Cited by: Papers (10) | Patents (2)

    Network wide broadcasting is an energy intensive function. In this paper we propose a new method that performs transmission power adaptations based on information available locally, to reduce the overall energy consumed per broadcast. In most of the prior work on energy efficient broadcasting it is assumed that the originator of the broadcast has global network information (both topology information as well as the geographical distance between nodes). This can be prohibitive in terms of the consumed overhead. In our protocol, each node attempts to tune its transmit power based on local information (of up to two hops from the transmitting node). We perform extensive simulations to evaluate our protocol. Our simulations take into account the possible loss of packets due to collision effects and the additional re-broadcasts that are necessary due to lower power transmissions. We show that our protocol achieves almost the same coverage as other non-power-adaptive broadcast schemes but with a reduction of approximately 40% in consumed power as compared to a scheme that does not adapt its power.

  • On the cost-quality tradeoff in topology-aware overlay path probing

    Publication Year: 2003, Page(s): 268-279
    Cited by: Papers (10) | Patents (3)

    Path probing is essential to maintaining an efficient overlay network topology. However, the cost of a full-scale probing is as high as O(n²), which is prohibitive in large-scale overlay networks. Several methods have been proposed to reduce probing overhead, although at a cost in terms of probing completeness. In this paper, an orthogonal solution is proposed that trades probing overhead for estimation accuracy in sparse networks such as the Internet. The proposed solution uses network-level path composition information (for example, as provided by a topology server) to infer path quality without full-scale probing. The inference metrics include latency, loss rate and available bandwidth. This approach is used to design several probing algorithms, which are evaluated through analysis and simulation. The results show that the proposed method can significantly reduce probing overhead while providing bounded quality estimates for all n × (n - 1) overlay paths. The solution is well suited to medium-scale overlay networks in the Internet. In other environments, it can be combined with extant probing algorithms to further improve performance.

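    The path-composition inference can be sketched directly (our simplification, assuming independent link metrics): an overlay path's latency is the sum over its network-level links, its loss rate composes multiplicatively, and its available bandwidth is the minimum over links.

```python
def path_latency(links, m):
    """Latency of an overlay path is additive over its underlying links."""
    return sum(m[l]["lat"] for l in links)

def path_loss(links, m):
    """A packet survives the path iff it survives every link (independence)."""
    ok = 1.0
    for l in links:
        ok *= 1.0 - m[l]["loss"]
    return 1.0 - ok

def path_bandwidth(links, m):
    """Available bandwidth is set by the tightest link on the path."""
    return min(m[l]["bw"] for l in links)
```

    Given link metrics from a modest number of probes, every one of the n × (n - 1) overlay paths can then be estimated without being probed itself.
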
  • Exploiting routing redundancy via structured peer-to-peer overlays

    Publication Year: 2003, Page(s): 246-257
    Cited by: Papers (10)

    Structured peer-to-peer overlays provide a natural infrastructure for resilient routing via efficient fault detection and precomputation of backup paths. These overlays can respond to faults in a few hundred milliseconds by rapidly shifting between alternate routes. In this paper, we present two adaptive mechanisms for structured overlays and illustrate their operation in the context of Tapestry, a fault-resilient overlay from Berkeley. We also describe a transparent, protocol-independent traffic redirection mechanism that tunnels legacy application traffic through overlays. Our measurements of a Tapestry prototype show it to be a highly responsive routing service, effective at circumventing a range of failures while incurring reasonable cost in maintenance bandwidth and additional routing latency.

  • Mobile distributed information retrieval for highly-partitioned networks

    Publication Year: 2003, Page(s): 38-47
    Cited by: Papers (3)

    We propose and evaluate a mobile, peer-to-peer information retrieval system. Such a system can, for example, support medical care in a disaster by allowing access to a large collection of medical literature. In our system, documents in a collection are replicated in an overlapping manner at mobile peers. This provides resilience in the face of node failures, malicious attacks, and network partitions. We show that our design manages the randomness of node mobility. Although nodes contact only direct neighbors (who change frequently) and do not use any ad hoc routing, the system maintains good IR performance. This makes our design applicable to mobility situations where routing partitions are common. Our evaluation shows that our scheme provides significant savings in network costs, and increased access to information over ad-hoc routing-based approaches; nodes in our system require only a modest amount of additional storage on average.

  • The temporal and topological characteristics of BGP path changes

    Publication Year: 2003, Page(s): 190-199
    Cited by: Papers (14) | Patents (1)

    BGP has been deployed in the Internet for more than a decade. However, the events that cause BGP topological changes are not well understood. Although large traces of routing updates seen in BGP operation are collected by RIPE RIS and University of Oregon RouteViews, previous work examines this data set as individual routing updates. This paper describes methods that group routing updates into events. Since one event (a policy change or peering failure) results in many update messages, we cluster updates both temporally and topologically (based on the path vector information). We propose a new approach to analyzing the update traces, classifying the topological impact of routing events, and approximating the distance to the autonomous system originating the event. Our analysis provides some insight into routing behavior: First, at least 45% of path changes are caused by events on transit peerings. Second, a significant number (23-37%) of path changes are transient, in that routing updates indicate temporary path changes, but they ultimately converge on a path that is identical to the previously stable path. These observations suggest that a content provider cannot guarantee end-to-end routing stability based solely on its relationship with its immediate ISP, and that better detection of transient changes may improve routing stability.

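    The temporal half of the clustering can be sketched as a simple gap-based grouping (the 70-second gap below is our illustrative choice, not the paper's parameter): consecutive updates for a prefix belong to one event until the stream goes quiet.

```python
def cluster_updates(timestamps, gap=70):
    """Group a sorted stream of per-prefix update times into events: a new
    event starts whenever the inter-update gap exceeds `gap` seconds."""
    events, current = [], []
    for t in timestamps:
        if current and t - current[-1] > gap:
            events.append(current)
            current = []
        current.append(t)
    if current:
        events.append(current)
    return events
```

    The topological half would then compare the path vectors within each event, e.g. to flag an event as transient when its first and last stable paths coincide.
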
  • Dynamic clustering for acoustic target tracking in wireless sensor networks

    Publication Year: 2003, Page(s): 284-294
    Cited by: Papers (26)

    In this paper, we devise and evaluate a fully decentralized, light-weight, dynamic clustering algorithm for target tracking. Instead of assuming the same role for all the sensors, we envision a hierarchical sensor network that is composed of (a) a static backbone of sparsely placed high-capability sensors which assume the role of a cluster head (CH) upon being triggered by certain signal events; and (b) moderately to densely populated low-end sensors whose function is to provide sensor information to CHs upon request. A cluster is formed and a CH becomes active when the acoustic signal strength detected by the CH exceeds a pre-determined threshold. The active CH then broadcasts an information solicitation packet, asking sensors in its vicinity to join the cluster and provide their sensing information. We address and devise solution approaches (with the use of the Voronoi diagram) to realize dynamic clustering: (I1) how CHs cooperate with one another to ensure that most of the time only one CH (preferably the CH that is closest to the target) is active; (I2) when the active CH solicits sensor information, instead of having all the sensors in its vicinity reply, only a sufficient number of sensors respond with non-redundant, essential information to determine the target location; and (I3) packets with which sensors respond to their CHs and packets that CHs report to subscribers do not incur significant collision. Through both probabilistic analysis and ns-2 simulation, we show that, with the use of the Voronoi diagram, the CH that is usually closest to the target is (implicitly) selected as the leader, and that the proposed dynamic clustering algorithm effectively eliminates contention among sensors and renders more accurate estimates of target locations as a result of better quality data collected and less collision incurred.

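    The leader-selection idea reduces to a nearest-CH rule (the inverse-square signal model below is our toy stand-in for real acoustics): among CHs whose detected strength exceeds the threshold, the one whose Voronoi cell contains the target, i.e. the nearest one, should stay active.

```python
import math

def active_cluster_head(chs, target, threshold):
    """Return the CH that should lead tracking of `target`: among CHs whose
    sensed signal exceeds the threshold, pick the nearest (Voronoi winner)."""
    def strength(ch):
        d = math.dist(ch, target)       # toy model: strength decays with distance
        return 1.0 / (1.0 + d * d)
    candidates = [ch for ch in chs if strength(ch) > threshold]
    return min(candidates, key=lambda ch: math.dist(ch, target), default=None)
```
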
  • Characterizing overlay multicast networks

    Publication Year: 2003, Page(s): 61-70
    Cited by: Papers (8)

    Overlay networks among cooperating hosts have recently emerged as a viable solution to several challenging problems, including multicasting, routing, content distribution, and peer-to-peer services. Application-level overlays, however, incur a performance penalty over router level solutions. This paper characterizes this performance penalty for overlay multicast trees via experimental data, simulations, and theoretical models. Experimental data and simulations illustrate that (i) the average delay and the number of hops between parent and child hosts in overlay trees generally decrease, and (ii) the degree of hosts generally decreases, as the level of the host in the overlay tree increases. Overlay multicast routing strategies, together with power-law and small-world Internet topology characteristics, are causes of the observed phenomena. We compare three overlay multicast protocols with respect to latency, bandwidth, router degrees, and host degrees. We also quantify the overlay tree cost. Results reveal that L(n)/U(n) ∝ n^0.9 for small n, where L(n) is the total number of hops in all overlay links, U(n) is the average number of hops on the source to receiver unicast paths, and n is the number of members in the overlay multicast session.

  • On the utility of distributed cryptography in P2P and MANETs: the case of membership control

    Publication Year: 2003, Page(s): 336-345
    Cited by: Papers (21) | Patents (2)

    Peer-to-peer systems enable efficient resource aggregation and are inherently scalable since they do not depend on any centralized authority. However, lack of a centralized authority prompts many security-related challenges. Providing efficient security services in these systems is an active research topic which is receiving much attention in the security research community. In this paper, we explore the use of threshold cryptography in peer-to-peer settings (both Internet- and MANET-based) to provide, in a robust and fault tolerant fashion, security services such as authentication, certificate issuance and access control. Threshold cryptography provides high availability by distributing trust throughout the group and is, therefore, an attractive solution for secure peer-groups. Our work investigates the applicability of threshold cryptography for membership control in peer-to-peer systems. In the process, we discover that one interesting proposed scheme contains an unfortunate (yet serious) flaw. We then present an alternative solution and its performance measurements. More importantly, our preliminary work casts a certain degree of skepticism on the practicality and even viability of using (seemingly attractive) threshold cryptography in certain peer-to-peer settings.

  • Network time synchronization using clock offset optimization

    Publication Year: 2003, Page(s): 212-221
    Cited by: Papers (7) | Patents (2)

    Time synchronization is critical in distributed environments. A variety of network protocols, middleware and business applications rely on proper time synchronization across the computational infrastructure and depend on the clock accuracy. The "network time protocol" (NTP) is the current widely accepted standard for synchronizing clocks over the Internet. NTP uses a hierarchical scheme in order to synchronize the clocks in the network. In this paper we present a novel non-hierarchical peer-to-peer approach for time synchronization termed CTP (classless time protocol). This approach exploits convex optimization theory in order to evaluate the impact of each clock offset on the overall objective function. We define the clock offset problem as an optimization problem and derive its optimal solution. Based on the solution we develop a distributed protocol that can be implemented over a communication network and prove its convergence to the optimal clock offsets. For compatibility, the CTP may use the exact format and number of messages used by NTP. We also present methodology and numerical results for evaluating and comparing the accuracy of time synchronization schemes. We show that the CTP substantially outperforms hierarchical schemes such as NTP in the sense of clock accuracy with respect to a universal clock, without increasing complexity.

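    The offset-optimization formulation can be sketched as a tiny least-squares problem (centralized gradient descent here is our stand-in for the paper's distributed protocol): minimize the sum of squared residuals (o_i - o_j - m_ij)² over clock offsets, with node 0 pinned as the reference.

```python
def solve_offsets(n, measurements, iters=2000, lr=0.1):
    """Minimize sum((o[i] - o[j] - m_ij)^2) over clock offsets o, given pairwise
    offset measurements (i, j, m_ij). Node 0 is the reference, o[0] = 0."""
    o = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for i, j, m in measurements:
            e = o[i] - o[j] - m       # residual of this measurement
            grad[i] += 2 * e
            grad[j] -= 2 * e
        for k in range(1, n):         # node 0 stays fixed at zero
            o[k] -= lr * grad[k]
    return o
```

    With consistent measurements the optimum reproduces the true offsets exactly; with noisy ones it spreads the error over all links instead of accumulating it along a hierarchy, which is the intuition behind CTP's advantage over tree-structured NTP.
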
  • Matchmaker: signaling for dynamic publish/subscribe applications

    Publication Year: 2003, Page(s): 222-233
    Cited by: Papers (6) | Patents (1)

    The publish/subscribe (pub/sub) paradigm provides content-oriented data dissemination in which communication channels are established between content publishers and content subscribers based on matching subscribers' interests with the published content - a process we refer to as "matchmaking". Once an interest match has been made, content forwarding state can be installed at intermediate nodes (e.g., active routers, application-level relay nodes) on the path between a content provider and an interested subscriber. In dynamic pub/sub applications, where published content and subscriber interest change frequently, the signaling needed to perform matchmaking can be a significant overhead. We first formalize the matchmaking process as an optimization problem, with the goal of minimizing the amount of matchmaking signaling messages. We consider this problem for both shared and per-source multicast data (content) distribution topologies. We characterize the fundamental complexity of the problem, and then describe several efficient solution approaches. The insights gained through our analysis are then embodied in a novel active matchmaker signaling protocol (AMSP). AMSP dynamically adapts to applications' changing publication and subscription requests through a link-marking approach. We simulate AMSP and two existing broadcast-based approaches for conducting matchmaking, and find that AMSP significantly reduces signaling overhead.

  • The impact of false sharing on shared congestion management

    Publication Year: 2003, Page(s): 84-94
    Cited by: Papers (6)

    Several recent proposals for sharing congestion information across concurrent flows between end-systems overlook an important problem: two or more flows sharing congestion state may in fact not share the same bottleneck. In this paper, we categorize the origins of this false sharing into two distinct cases: (i) networks with QoS enhancements such as differentiated services, where a flow classifier segregates flows into different queues, and (ii) networks with path diversity where different flows to the same destination address are routed differently. We evaluate the impact of false sharing on flow performance and investigate how false sharing can be detected by a sender. We discuss how a sender must respond upon detecting false sharing. Our results show that persistent overload can be avoided with window-based congestion control even for extreme false sharing, but higher bandwidth flows run at a slower rate. We find that delay and reordering statistics can be used to develop robust detectors of false sharing and are superior to those based on loss patterns. We also find that it is markedly easier to detect and react to false sharing than it is to start by isolating flows and merge their congestion state afterward.

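    One delay-based detector can be caricatured as a correlation test (plain Pearson correlation; the 0.7 threshold is our illustrative choice, not the paper's detector): flows that truly share a bottleneck see correlated delay samples, so low correlation suggests false sharing.

```python
import math

def false_sharing_suspected(delays_a, delays_b, threshold=0.7):
    """Pearson correlation of two flows' delay samples; weak correlation
    hints that the flows do not share a bottleneck queue."""
    n = len(delays_a)
    ma, mb = sum(delays_a) / n, sum(delays_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(delays_a, delays_b))
    var_a = sum((a - ma) ** 2 for a in delays_a)
    var_b = sum((b - mb) ** 2 for b in delays_b)
    return cov / math.sqrt(var_a * var_b) < threshold
```
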
  • A bidding protocol for deploying mobile sensors

    Publication Year: 2003, Page(s): 315-324
    Cited by: Papers (24)

    In some harsh environments, manually deploying sensors is impossible. Alternative methods may lead to imprecise placement resulting in coverage holes. To provide the required high coverage in these situations, we propose to deploy sensor networks composed of a mixture of mobile and static sensors in which mobile sensors can move from dense areas to sparse areas to improve the overall coverage. This paper presents a bidding protocol to assist the movement of mobile sensors. In the protocol, static sensors detect coverage holes locally by using Voronoi diagrams, and bid for mobile sensors based on the size of the detected hole. Mobile sensors choose coverage holes to heal based on the bid. Simulation results show that our algorithm provides a suitable tradeoff between coverage and sensor cost.

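    The bidding loop can be sketched as a greedy matching (a simplification of ours; the bid units and "base price" semantics are illustrative): each mobile sensor services the largest outstanding hole whose bid exceeds the value of the sensor's current position.

```python
def assign_mobile_sensors(holes, mobiles):
    """holes: {static_sensor_id: hole-size bid}; mobiles: list of base prices
    (value of each mobile sensor's current coverage). Each mobile sensor takes
    the highest outstanding bid that beats its base price."""
    assignment = {}
    open_bids = dict(holes)
    for idx, base in enumerate(mobiles):
        if not open_bids:
            break
        best = max(open_bids, key=open_bids.get)
        if open_bids[best] > base:    # moving must improve overall coverage
            assignment[best] = idx
            del open_bids[best]
    return assignment
```

    The base-price check is what keeps mobile sensors from oscillating: a move happens only when the healed hole is worth more than the coverage given up.
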
  • Planned object duplication strategies in dynamic PRR meshes

    Publication Year: 2003, Page(s): 50-60
    Cited by: Patents (1)

    In recent years there has been considerable research on new distributed hash tables (DHTs), improvements on existing DHTs, and DHT-enabled systems. However, little of it focuses on their differences [M. Castro et al., 2002]. To this end we introduce a simple modeling framework that allows us to mathematically model the search costs of most classes of DHTs. To illustrate the usefulness of this framework, we examine a class of DHTs, which includes Tapestry and Pastry, that we call dynamic PRR meshes (DPMs). In particular we examine how planned object duplication (POD) strategies affect the search costs of DPMs that employ them. We introduce three new DPMs that employ different POD strategies and compare them with the POD strategies that Tapestry and Pastry use. Through our model we discover cyclic behaviors in search costs over the number of nodes present in the DPM, examine the effects of variability in the underlying network, and provide comparisons of the performance of all five DPMs.

  • Improving TCP startup performance using active measurements: algorithm and evaluation

    Publication Year: 2003, Page(s): 107-118
    Cited by: Papers (6)

    TCP slow start exponentially increases the congestion window size to detect the proper congestion window for a network path. This often results in significant packet loss, while breaking off slow start using a limited slow start threshold may lead to an overly conservative congestion window size. This problem is especially severe in high speed networks. In this paper we present a new TCP startup algorithm, called paced start, that incorporates an available bandwidth probing technique into the TCP startup algorithm. Paced start is based on the observation that when we view the TCP startup sequence as a sequence of packet trains, the difference between the data packet spacing and the acknowledgement spacing can yield valuable information about the available bandwidth. Slow start ignores this information, while paced start uses it to quickly estimate the proper congestion window for the path. For most flows, paced start transitions into congestion avoidance mode faster than slow start, has a significantly lower packet loss rate, and avoids the timeout that is often associated with slow start. This paper describes the paced start algorithm and uses simulation and real system experiments to characterize its properties.

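    The packet-train observation reduces to a one-line estimate (a caricature of ours; real paced start refines this iteratively): if ACKs return more spread out than the data was sent, the bottleneck has paced the train, and the ACK spacing reveals its rate.

```python
def estimate_bandwidth(pkt_bits, send_gap_s, ack_gap_s):
    """Available bandwidth ~ packet size over the larger of the send and ACK
    spacing: a widened ACK gap exposes the bottleneck rate."""
    return pkt_bits / max(send_gap_s, ack_gap_s)

def initial_cwnd(pkt_bits, rtt_s, send_gap_s, ack_gap_s):
    """Congestion window sized to the estimated bandwidth-delay product,
    in packets; slow start would instead grope for this value by loss."""
    bw = estimate_bandwidth(pkt_bits, send_gap_s, ack_gap_s)
    return max(1, round(bw * rtt_s / pkt_bits))
```
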
  • Distributed, self-stabilizing placement of replicated resources in emerging networks

    Publication Year: 2003, Page(s): 6-15
    Cited by: Papers (8)

    Emerging large scale distributed networking systems, such as P2P file sharing systems, sensor networks, and ad hoc wireless networks, require replication of content, functionality, or configuration to enact or optimize communication tasks. The placement of these replicated resources can significantly impact performance. We present a novel self-stabilizing, fully distributed, asynchronous, scalable protocol that can be used to place replicated resources such that each node is "close" to some copy of any object. We describe our protocol in the context of a graph with colored nodes, where a node's color indicates the replica/task that it is assigned. Our combination of theoretical results and simulation proves stabilization of the protocol and evaluates its performance in terms of convergence time, message transmissions, and color distance. Our results show that the protocol generates colorings that are close to the optimal under a set of metrics, making such a protocol ideal for emerging networking systems.

  • Reliability-aware IBGP route reflection topology design

    Publication Year: 2003, Page(s): 180-189
    Cited by: Patents (1)

    In the internal border gateway protocol (IBGP), route reflection is widely used as an alternative to full mesh IBGP sessions inside an AS for scalability reason. However, some important issues, such as the impact of route reflection on the reliability of IBGP and the construction of reliable reflection topology with unreliable routers or links, have not been well investigated. This paper addresses the problem of finding reliable route reflection topologies for IBGP networks, which is of great importance to increase the robustness of IBGP operations. We first present a novel reliability model and two new metrics (IBGP expected lifetime and expected session loss) to evaluate the reliability of reflection topologies, and further to investigate the design problem. After studying the solvability conditions under the router capacity constraints, we prove the NP-hardness of the problem, and then design and implement three heuristic solutions using randomization techniques: heuristic selection, greedy search and simulated annealing. Our extensive computational experiments show that the reliability of IBGP reflection network can be significantly improved by our solutions. View full abstract»

  • Data dissemination with ring-based index for wireless sensor networks

    Publication Year: 2003 , Page(s): 305 - 314
    Cited by:  Papers (15)

    In current sensor networks, sensor nodes are capable not only of measuring real-world phenomena, but also of storing, processing, and transferring these measurements. Many data dissemination techniques have been proposed for sensor networks. However, these techniques may not work well in a large-scale sensor network where a huge amount of sensing data is generated but only a small portion of it is queried. In this paper, we propose an index-based data dissemination scheme to address the problem. The scheme is based on the idea that sensing data are collected, processed, and stored at nodes close to the detecting nodes, and the location information of these storing nodes is pushed to index nodes, which act as rendezvous points for sinks and sources. We further extend the scheme with an adaptive ring-based index (ARI) technique, in which the index nodes for one event type form a ring surrounding a location determined by the event type, and the ring can be dynamically reconfigured for fault tolerance and load balance. Analysis and simulations are conducted to evaluate the performance of the proposed index-based scheme. The results show that the index-based scheme outperforms the external storage-based scheme, the DCS scheme, and local storage-based schemes with flood-response style. The results also show that ARI can tolerate clustering failures and achieve load balance.
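    The ring construction can be sketched as below. The hash-to-location mapping and the ring parameters are assumptions for illustration; the scheme only requires a mapping from event type to a field location that both sources and sinks can compute.

    ```python
    import hashlib
    import math

    def event_location(event_type, width, height):
        """Hash an event type to a rendezvous point in the sensing field
        (a hypothetical mapping)."""
        digest = int(hashlib.sha256(event_type.encode()).hexdigest(), 16)
        return (digest % width, (digest // width) % height)

    def ring_index_nodes(node_positions, event_type, width, height,
                         radius, band):
        """Nodes whose distance to the rendezvous point falls within
        [radius - band, radius + band] serve as the index ring for this
        event type; adjusting `radius` or `band` reconfigures the ring
        for fault tolerance and load balance."""
        cx, cy = event_location(event_type, width, height)
        return [node for node, (x, y) in node_positions.items()
                if abs(math.hypot(x - cx, y - cy) - radius) <= band]
    ```

    A source stores data near the detection site and pushes the storing node's location to the ring nodes; a sink computes the same rendezvous point from the event type and queries the ring.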

  • Optimal resource allocation in overlay multicast

    Publication Year: 2003 , Page(s): 71 - 81
    Cited by:  Papers (11)

    The paper targets the problem of optimal resource allocation in overlay multicast, which poses both theoretical and practical challenges. Theoretically, resource allocation among overlay flows is subject not only to the network capacity constraint but also to the data constraint, mainly due to the dual role of end hosts as both receivers and senders. Practically, existing distributed resource allocation schemes assume that network links can measure flow rates and calculate and communicate price signals, none of which is actually possible in the Internet today. We address these challenges as follows. First, we formalize the problem using nonlinear optimization theory, incorporating both the network constraint and the data constraint. Based on our theoretical framework, we propose a distributed algorithm, which is proved to converge to the optimal point, where the aggregate utility of all receivers is maximized. Second, we propose an end-host-based solution, which relies on the coordination of end hosts to accomplish tasks originally assigned to network links. Our solution can be directly deployed without any changes to the existing network infrastructure.
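    The style of price-based allocation involved can be sketched with a minimal dual-gradient loop, assuming a single shared capacity and log utilities (the paper's formulation adds the data constraint, multiple links, and end-host price computation):

    ```python
    def allocate_rates(capacity, n_receivers, step=0.01, iters=4000):
        """Dual-gradient sketch: a 'price' rises when aggregate demand
        exceeds capacity, and each receiver picks the rate maximizing
        log(x) - price * x.  At the fixed point, the capacity is shared
        equally and aggregate log-utility is maximized."""
        price = 1.0
        for _ in range(iters):
            rate = 1.0 / price              # argmax of log(x) - price * x
            excess = n_receivers * rate - capacity
            price = max(1e-9, price + step * excess)
        return [rate] * n_receivers
    ```

    With capacity 10 and 5 receivers, the iteration converges to a rate of 2.0 per receiver, i.e., the allocation maximizing the sum of log utilities subject to the capacity constraint.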

  • Establishing pairwise keys for secure communication in ad hoc networks: a probabilistic approach

    Publication Year: 2003 , Page(s): 326 - 335
    Cited by:  Papers (60)  |  Patents (1)

    A prerequisite for secure communication between two nodes in an ad hoc network is that the nodes share a key to bootstrap their trust relationship. In this paper, we present a scalable and distributed protocol that enables two nodes to establish a pairwise shared key on the fly, without requiring any on-line key distribution center. The design of our protocol is based on a novel combination of two techniques: probabilistic key sharing and threshold secret sharing. Our protocol is scalable, since every node only needs to possess a small number of keys, independent of the network size, and it is computationally efficient, because it relies only on symmetric-key cryptographic operations. We show that a pairwise key established between two nodes using our protocol is secure against collusion by up to a certain number of compromised nodes. We also show through a set of simulations that our protocol can be parameterized to meet the desired levels of performance, security, and storage for the application under consideration.
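    The probabilistic key-sharing half of such a design can be sketched as follows; the pool and ring sizes are hypothetical parameters, and the threshold secret-sharing layer the paper combines with it is omitted.

    ```python
    import hashlib
    import random

    POOL_SIZE, RING_SIZE = 1000, 100    # hypothetical parameters

    def make_pool(rng):
        """The off-line key distribution center's pool of symmetric keys."""
        return {i: rng.getrandbits(128).to_bytes(16, "big")
                for i in range(POOL_SIZE)}

    def key_ring(pool, rng):
        """Preload each node with a small random subset of the pool,
        independent of the network size."""
        ids = rng.sample(range(POOL_SIZE), RING_SIZE)
        return {i: pool[i] for i in ids}

    def pairwise_key(ring_a, ring_b):
        """Derive a pairwise key by hashing every pool key the two rings
        share (key *identifiers* can be exchanged in the clear); an
        adversary must hold all shared keys to reconstruct it."""
        shared = sorted(set(ring_a) & set(ring_b))
        if not shared:
            return None                  # no direct key material
        material = b"".join(ring_a[i] for i in shared)
        return hashlib.sha256(material).hexdigest()
    ```

    Because both rings draw from the same pool, the derivation is symmetric: each side computes the same key from the intersection of the two rings.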

  • An efficient algorithm for OSPF subnet aggregation

    Publication Year: 2003 , Page(s): 200 - 209

    Multiple addresses within an OSPF area can be aggregated and advertised together to other areas. This process, known as address aggregation, is used to reduce router computational overhead and memory requirements, and to reduce the network bandwidth consumed by OSPF messages. The downside of address aggregation is that it leads to information loss and, consequently, sub-optimal (non-shortest-path) routing of data packets. The resulting difference (path selection error) between the length of the actual forwarding path and the shortest path varies between different sources and destinations. This paper proves that the path selection error from any source to any destination can be bounded using only parameters describing the destination area. Based on this, the paper presents an efficient algorithm that generates the minimum number of aggregates subject to a maximum allowed path selection error. A major operational benefit of our algorithm is that network administrators can select aggregates for an area based solely on the topology of that area, without worrying about the remaining areas of the OSPF network. The other benefit is that the algorithm enables trade-offs between the number of aggregates and the bound on the path selection error. The paper also evaluates the algorithm's performance on random topologies. Our results show that, in some cases, the algorithm reduces the number of aggregates by as much as 50% while introducing only a relatively small maximum path selection error.
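    The zero-error base case of aggregation — merging sibling prefixes when both halves are present — can be sketched as below; the paper's algorithm goes further by also admitting lossy aggregates as long as the path selection error stays under the configured bound.

    ```python
    def aggregate(prefixes):
        """Repeatedly merge sibling prefixes when both halves are present.
        Prefixes are (address, length) pairs with `address` a 32-bit int.
        This is exact aggregation only: it introduces no path selection
        error."""
        pset = set(prefixes)
        merged = True
        while merged:
            merged = False
            # longest prefixes first, so merges can cascade upward
            for addr, plen in sorted(pset, key=lambda p: -p[1]):
                if plen == 0 or (addr, plen) not in pset:
                    continue
                sibling = addr ^ (1 << (32 - plen))   # flip last prefix bit
                if (sibling, plen) in pset:
                    pset -= {(addr, plen), (sibling, plen)}
                    pset.add((min(addr, sibling), plen - 1))
                    merged = True
        return sorted(pset)
    ```

    For example, 10.0.0.0/24 and 10.0.1.0/24 merge into 10.0.0.0/23, while 10.0.2.0/24 survives unmerged because its sibling 10.0.3.0/24 is absent.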
