
Tenth International Conference on Parallel and Distributed Systems (ICPADS 2004), Proceedings

Date: 7-9 July 2004


Displaying Results 1 - 25 of 97
  • Improving cache performance in mobile computing networks through dynamic object relocation

    Publication Year: 2004 , Page(s): 37 - 45

    Caching improves the performance of Web servers by placing frequently accessed data at intermediate nodes close to Web clients. Similarly, in a mobile network, access delay can be reduced by caching data objects near the mobile clients. Existing caching techniques used for Web servers are unsuitable for mobile networks because they do not deal with the issue of client mobility. To ensure cache performance is not affected by client movement, object relocation techniques can be used to dynamically relocate data objects so they remain close to the moving client. Existing relocation techniques rely on path predictions to help make relocation decisions. However, the inaccuracy of path prediction techniques results in high relocation overhead and increased access delay after each handover. In this paper, we propose two new object relocation techniques to deal with the problems of poor path predictions and high object relocation overhead. The first technique, called 2PR (two-phase relocation), compensates for the inaccuracy of existing path prediction algorithms by temporarily relocating data objects to a common parent node until the client's location is confirmed. The second technique, called ROLP (return-path object-list passing), reduces the traffic overhead associated with object relocation by using coordination messages between nodes. Test results show that 2PR and ROLP reduce the penalty of poor path predictions and significantly reduce the overhead associated with cache relocation compared to existing schemes.
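
The two-phase staging idea can be sketched as a toy: park the object at the deepest node that is an ancestor of every predicted destination, and move it down only once the client's position is confirmed. The cache hierarchy, node names, and the LCA-based staging rule below are illustrative assumptions, not the paper's actual protocol.

```python
# Toy sketch of the 2PR phase-1 decision: instead of committing a cached
# object to one predicted base station, stage it at the lowest common
# ancestor (LCA) of all predicted stations. Tree layout and names invented.

def ancestors(parent, node):
    """Path from node up to the root, inclusive. parent maps child -> parent."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def staging_node(parent, predicted):
    """Phase 1: deepest node that is an ancestor of every predicted station."""
    common = set(ancestors(parent, predicted[0]))
    for station in predicted[1:]:
        common &= set(ancestors(parent, station))
    # the deepest common ancestor has the longest path to the root
    return max(common, key=lambda n: len(ancestors(parent, n)))

# Example hierarchy: three base stations under two switches under a gateway.
EXAMPLE_TREE = {'bs1': 'sw1', 'bs2': 'sw1', 'bs3': 'sw2',
                'sw1': 'gw', 'sw2': 'gw'}
```

If the predicted cells share a switch the object stages there; if the prediction straddles switches it stages at the gateway, so a wrong guess costs at most one extra hop down.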

  • Exploiting network locality in a decentralized read-write peer-to-peer file system

    Publication Year: 2004 , Page(s): 289 - 296
    Cited by:  Papers (1)

    We have developed a completely decentralized multiuser read-write peer-to-peer file system with good locality properties. In our system all data is contained in blocks stored using the Past distributed hash table (DHT), thus taking advantage of the fault tolerance and locality properties of Past and Pastry. We have also introduced a modification to the Past DHT which allows us to further increase performance when using a relaxed but nevertheless useful consistency model. Authentication and integrity are assured using standard cryptographic mechanisms. We have developed a prototype in order to evaluate the performance of our design. Our prototype is programmed in Java and uses the FreePastry open-source implementation of Past and Pastry. It allows applications to choose between two degrees of consistency. Preliminary results obtained through simulation suggest that our system is approximately twice as slow as NFS. In comparison, Ivy and Oceanstore are two to three times slower than NFS.

  • Enhancing locality in structured peer-to-peer networks

    Publication Year: 2004 , Page(s): 25 - 34
    Cited by:  Papers (2)

    Distributed hash tables (DHTs), used in a number of structured peer-to-peer systems, provide efficient mechanisms for resource location. A key distinguishing feature of current DHT systems such as Chord, Pastry, and Tapestry is the way they handle locality in the underlying network. Topology-based node identifier assignment, proximity routing, and proximity neighbor selection are examples of heuristics used to minimize message delays in the underlying network. While these heuristics are sometimes effective, they rely on a single global overlay that may install the key of a popular object at a node far from most of the nodes accessing it. Furthermore, a response to a lookup does not contain any locality information about the nodes holding a copy of the object. We address these issues by proposing a novel two-level overlay peer-to-peer architecture. In our architecture, local overlays act as locality-aware caches for the global overlay, grouping nodes close together in the underlying network. Local overlays are constructed by exploiting the structure of the Internet as autonomous systems. We present detailed experimental results demonstrating the practicality of the system, and showing performance gains in response time of up to 60% compared to a single global overlay with state-of-the-art localization schemes. We also present efficient distributed algorithms for maintaining local overlays in the presence of node arrivals and departures.

  • Cegor: an adaptive distributed file system for heterogeneous network environments

    Publication Year: 2004 , Page(s): 145 - 152
    Cited by:  Papers (2)

    Distributed file systems have been extensively studied in the past, but they are still far from wide acceptance over heterogeneous network environments. Most traditional network file systems target tightly-coupled high-speed networks only, and do not work well in the wide-area setting. Several communication optimization techniques have been proposed in the context of wide-area file systems, but these approaches do not take into consideration the file characteristics and may instead introduce extra computing overhead when the network condition is good. We envision that the capability of providing adaptive, seamless file access to personal documents across diverse network connections plays an important role in the success of future distributed file systems. In this paper, we propose to build an adaptive distributed file system which provides the "close and go, open and resume" (Cegor) semantics across heterogeneous network connections, ranging from high-bandwidth local area networks to low-bandwidth dial-up connections. Our approach relies on a set of new techniques for managing adaptive access to remote files, comprising three components: system support for secure, transparent reconnection at different places, semantic-view based caching to reduce communication frequencies in the system, and type-specific communication optimization to minimize the bandwidth requirement of synchronizations between clients and servers.

  • Efficient secure multicast with well-populated multicast key trees

    Publication Year: 2004 , Page(s): 215 - 222
    Cited by:  Papers (3)  |  Patents (1)

    Secure group communication is the basis for many recent multimedia and Web technologies. In order to maintain secure and efficient communications within a dynamic group, it is essential that the generation and management of group key(s) be secure and efficient with real-time response. Typically, a logical key hierarchy is used for distribution of group keys to users so that whenever users leave or join the group, new keys are generated and distributed using the key hierarchy. In this paper, we propose the well-populated multicast key tree (WPMKT), an efficient technique to handle group dynamics in the key tree and keep the tree balanced with minimal cost. In WPMKT, subtrees are swapped in a way that keeps the key tree balanced and well populated. At the same time, rekeying overhead due to reorganization is kept at a minimum. Another advantage of WPMKT is that rebalancing has no effect on the internal key structure of the swapped subtrees. Results from simulation studies show that under random user deletion, our approach incurs an order of magnitude less overhead than existing approaches. Under clustered sequential user deletion, our approach achieves almost linear growth with tree size under individual rebalancing. For periodic rebalancing, we achieve almost half the overhead introduced by other approaches.
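
The logical key hierarchy that WPMKT builds on can be sketched as follows; the subtree-swapping that is WPMKT's actual contribution is not modelled, and the heap-ordered complete binary tree is an illustrative assumption.

```python
# Minimal sketch of rekeying in a logical key hierarchy. In a complete
# binary key tree stored in heap order (root = 1, leaves = n..2n-1, with n
# a power of two), a member's departure forces regeneration of every key on
# the path from its leaf's parent up to the group key at the root --
# O(log n) keys, which is why keeping the tree balanced matters.

def rekey_nodes(n, leaf_index):
    """Indices of the keys to regenerate when the member at leaf_index leaves."""
    node = n + leaf_index          # heap index of the departing leaf
    path = []
    while node > 1:
        node //= 2                 # step up to the parent key
        path.append(node)
    return path
```

An unbalanced tree can stretch this path well past log n, which is the overhead WPMKT's swapping keeps in check.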

  • Causally ordered delivery for a hierarchical group

    Publication Year: 2004 , Page(s): 453 - 460
    Cited by:  Papers (5)

    A large number of peer processes cooperate in peer-to-peer systems. In this paper, we discuss a hierarchical group protocol aimed at reducing communication and computation overheads for a scalable group of peer processes. A hierarchical group is composed of subgroups, each of which is in turn composed of subgroups. Even if messages are causally ordered in one subgroup, the messages may not need to be causally ordered across the whole group. We discuss how to globally causally order messages using the ordering mechanisms in each subgroup.
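
Within a single subgroup, causal ordering is typically enforced with the standard vector-clock delivery condition sketched below; the paper's contribution is composing such per-subgroup mechanisms hierarchically, which this sketch does not attempt to reproduce.

```python
# Standard vector-clock condition for causally ordered delivery in one
# group of N processes. A message carries the sender's vector clock; the
# receiver delays delivery until the message is the next one expected from
# its sender and everything it causally depends on has been delivered.

def deliverable(msg_vc, sender, local_vc):
    """True iff a message stamped msg_vc from sender may be delivered now."""
    if msg_vc[sender] != local_vc[sender] + 1:
        return False               # not the next message from this sender
    return all(msg_vc[k] <= local_vc[k]
               for k in range(len(local_vc)) if k != sender)
```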

  • On providing anonymity in wireless sensor networks

    Publication Year: 2004 , Page(s): 411 - 418
    Cited by:  Papers (8)

    Securing wireless sensor networks against denial of service (DoS) attacks that disrupt communications or target nodes serving key roles in the network, e.g. sinks or routers, is instrumental to network availability and performance. Particularly vulnerable to these attacks are the components of any communications or operation infrastructure in the network. In this paper, we address a class of wireless sensor networks where network protocols leverage a dynamic general-purpose virtual infrastructure; the core components of that infrastructure are a coordinate system, a cluster structure, and a routing structure. Since knowledge of this virtual infrastructure enables 'smart' cost-effective DoS attacks on the network, maintaining the anonymity of the virtual infrastructure is a primary security concern. The main contribution of this work is to propose an energy-efficient protocol for maintaining the anonymity of the network virtual infrastructure. Specifically, our solution defines schemes for randomizing communications such that the coordinate system, cluster structure, and routing structure remain invisible to an external observer of network traffic during the setup phase of the network.

  • Load-balanced anycast routing

    Publication Year: 2004 , Page(s): 701 - 708
    Cited by:  Papers (2)

    For fault-tolerance and load-balancing purposes, many modern Internet applications require a group of replicated servers dispersed widely over the world. Anycast, a new communication style defined in IPv6, provides the capability to route packets to the nearest server, and better quality of service (QoS) can be achieved by this kind of computing paradigm. DNS, Web services, and distributed database systems are three of the best-known examples. However, before anycasting can be realized, more research needs to be done; the anycast routing scheme is one of the most important issues. In this paper, we propose a load-balanced anycast routing scheme based on the WRS (weighted random selection) method. We suggest that the server capability should be propagated along with other fields in the routing tables: an anycast routing algorithm should take into account the network transmission capability as well as the server processing capability when selecting a target server. Three weight determination strategies are given. We also develop a simple algorithm to calculate the weights of WRS to achieve optimization under both heavy and light system traffic. Our approach is locally optimized to minimize the average total delay and keeps the server load well balanced.
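
The weighted random selection step itself can be sketched in a few lines. The weight formula below is an invented placeholder: the paper derives weights from both server processing capability and network transmission capability via three concrete strategies that are not reproduced here.

```python
import random

# Sketch of weighted random selection (WRS) for picking an anycast target:
# each candidate server is chosen with probability proportional to its
# weight, which spreads load instead of always hitting the "nearest" server.

def wrs_pick(servers, weights, rng=random):
    """Pick one server with probability proportional to its weight."""
    r = rng.uniform(0, sum(weights))
    for server, w in zip(servers, weights):
        if w > 0 and r <= w:
            return server
        r -= w
    return servers[-1]             # guard against floating-point rounding

def weight(capacity, delay):
    """Placeholder weight: favour capable servers over short paths."""
    return capacity / delay
```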

  • Role ordering scheduler for concurrency control in distributed objects

    Publication Year: 2004 , Page(s): 485 - 492

    Concepts of roles are significant for designing and implementing access control models in secure information systems. A role represents a job function in an enterprise. In addition to keeping systems secure, objects have to be kept consistent in the presence of multiple transactions. Traditional locking protocols and timestamp ordering schedulers make multiple transactions serializable based on the principles of "first-comer-winner" and "timestamp order", respectively. Each transaction is associated with a role. We define the significance of a role, which shows how significant each role is in an enterprise. We discuss a novel type of scheduler for concurrency control so that multiple conflicting transactions are serializable in the order of the significance of their roles.

  • The net worth of an object-oriented pattern: practical implications of Java RMI

    Publication Year: 2004 , Page(s): 385 - 391

    Debugging distributed applications in their run-time environment is notoriously hard, and development and testing of the application logic must be completed ahead of this step. Using Java RMI allows a developer to separate the two stages (development of the application logic from deployment of the application in its distributed run-time environment), but the developer must adopt a specific pattern from the outset. We present this pattern: it allows the application to be fully developed in an isolated run-time environment (no network) and makes the switch to a true networked run-time environment completely transparent.

  • An adaptive scheme for vertical handoff in wireless overlay networks

    Publication Year: 2004 , Page(s): 541 - 548
    Cited by:  Papers (22)  |  Patents (2)

    Vertical handoff is the switching process between heterogeneous wireless networks. Discovering the reachable wireless networks is the first step for vertical handoff. After discovering the reachable candidate networks, the mobile terminal decides whether to perform handoff or not. We present an adaptive scheme for vertical handoff in wireless overlay networks. Our system discovery method effectively discovers the candidate networks for the mobile terminal. Moreover, we propose two adaptive evaluation methods for the mobile terminal to determine the handoff time based on the candidate networks' resources and the running applications. The simulation results show that the proposed system discovery method can balance the power consumption and the system discovery time. Furthermore, the proposed handoff decision method can decide the appropriate time to perform handoff.

  • Parallelization of Bayesian network based SNPs pattern analysis and performance characterization on SMP/HT

    Publication Year: 2004 , Page(s): 315 - 322
    Cited by:  Patents (3)

    Single nucleotide polymorphisms (SNPs) are subtle variations in the genomic DNA sequences of individuals of the same species. They play a key role in the pharmaceutical industry in understanding variations in drug treatment responses between individuals at the molecular level. Discovering patterns around SNP loci is very important for better understanding the possible origin of SNPs in evolution. Bayesian networks have been applied to this problem with promising results. Since Bayesian network based SNP pattern analysis has high computational complexity, we parallelized this workload on Intel Xeon SMP systems, exploiting task-level parallelism. Experimental results show that memory is the bottleneck: on an 8-way Xeon SMP system with hyper-threading enabled, system memory bandwidth is fully saturated and memory access latency is roughly 50% longer than on a single-processor system. Another interesting result is that Intel's hyper-threading technology improves the multithreaded workload's performance, yielding a 1.6X speedup. Workload profiling shows that the parallel workload's data-sharing behavior matches hyper-threading's cache sharing mechanism, and thus greatly reduces cache coherency protocol traffic on the shared front side bus. Scalability analysis shows that imbalance and locks are the two major factors that may limit the parallel speedup on platforms with more processors.

  • Decentralized reactive clustering for collaborative processing in sensor networks

    Publication Year: 2004 , Page(s): 54 - 61
    Cited by:  Papers (4)

    A sensor network forms a loosely-coupled distributed environment where collaborative processing among multiple sensor nodes is essential in order to compensate for the limitations of each sensor node in its processing capability, sensing capability, and energy usage, as well as to improve the degree of fault tolerance. Due to the sheer number of nodes deployed, collaboration is usually carried out among nodes within the same cluster. Different clustering protocols can affect the performance of the network to a great extent. Most existing clustering protocols either do not adequately address the energy-constraint problem or derive clusters proactively, which may not be suitable for event-driven collaborative processing in sensor networks. This paper focuses on the design of clustering protocols for collaborative processing. We propose a decentralized reactive clustering (DRC) protocol in which the clustering procedure is initiated only when events are detected, and which uses power control techniques to minimize energy usage in forming clusters. We compare the performance of DRC with another popular clustering algorithm, LEACH. Simulation results show that DRC achieves considerable improvement over LEACH in energy conservation and network lifetime.

  • Routing permutations on optical baseline networks with node-disjoint paths

    Publication Year: 2004 , Page(s): 65 - 72
    Cited by:  Papers (1)

    Permutation is a frequently-used communication pattern in parallel and distributed computing systems and telecommunication networks. Node-disjoint routing has important applications in guided wave optical interconnects, where the optical "crosstalk" between messages passing the same switch should be avoided. In this paper, we consider routing arbitrary permutations on an optical baseline network (or reverse baseline network) with node-disjoint paths. We first prove the equivalence between the set of admissible permutations (or semi-permutations) of a baseline network and that of its reverse network based on a step-by-step permutation routing. We then show that an arbitrary permutation can be realized in a baseline network (or a reverse baseline network) with node-disjoint paths in four passes, which beats the existing results (M. Vaez and C.-T. Lea, 2000 and G. Maier and A. Pattavina, 2001) that a permutation can be realized in an n × n banyan network with node-disjoint paths in O(n^(1/2)) passes. This represents the currently best-known result for the number of passes required for routing an arbitrary permutation with node-disjoint paths in unique-path multistage networks. Unlike other unique-path MINs (such as omega networks or banyan networks), only baseline networks have been found to possess this four-pass routing property. We present routing algorithms in both self-routing style and central-controlled style. Unlike recent work that also gave a four-pass node-disjoint routing algorithm for permutations, our new algorithm is efficient in transmission time for messages of any length, whereas the earlier algorithm works efficiently only for long messages. Comparisons with previous results demonstrate that routing in a baseline network as proposed in this paper can be a better choice for routing permutations due to its lower hardware cost and near-optimal transmission time.

  • Divisible load scheduling on arbitrary distributed networks via virtual routing approach

    Publication Year: 2004 , Page(s): 161 - 168
    Cited by:  Papers (1)

    In this paper, we propose a distributed algorithm for scheduling divisible loads originating from a single site on arbitrary networks. We first propose a mathematical model and formulate the scheduling problem as an optimization problem with the objective of minimizing the processing time of the loads. A number of theoretical results on the solution of the optimization problem are derived. On the basis of these results, we propose our algorithm using the concept of virtual routing. The proposed algorithm has three attractive features: a distributed working style, a simple structure that eases implementation, and a generalized approach for handling divisible load scheduling on any network topology. This is the first time in the divisible load scheduling literature that a distributed strategy has been attempted.

  • Transient performance model for parallel and distributed systems

    Publication Year: 2004 , Page(s): 513 - 520

    In studying or designing parallel and distributed systems, one should have available a robust analytical model that includes the major parameters that determine the system performance. Jackson networks have been very successful in modeling parallel and distributed systems. However, Jackson networks have their limitations. In particular, the product-form solution of Jackson networks assumes steady state and exponential service centers with certain specialized queueing disciplines. In this paper, we present a performance model that can be used to study the transient behavior of parallel and distributed systems with finite workload. When the number of tasks to be executed is large enough, the model approaches the product form of Jackson networks (the steady-state solution). We show how to use the model to analyze the performance of parallel and distributed systems, and use it to show to what extent the product-form solution of Jackson networks can be applied.
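
For reference, the steady-state product form that the transient model converges to is easy to state in code: for an open Jackson network with per-node utilisations rho_i < 1, the joint queue-length distribution factorises into independent geometric terms. This sketch shows only that limiting form, not the paper's transient model.

```python
# Product-form steady-state solution of an open Jackson network:
#   P(n_1, ..., n_k) = prod_i (1 - rho_i) * rho_i ** n_i
# where rho_i is the utilisation (arrival rate / service rate) of node i.

def product_form(rhos, ns):
    """Joint probability of queue lengths ns under utilisations rhos."""
    p = 1.0
    for rho, n in zip(rhos, ns):
        p *= (1.0 - rho) * rho ** n
    return p
```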

  • Distributed and dynamic voronoi overlays for coverage detection and distributed hash tables in ad-hoc networks

    Publication Year: 2004 , Page(s): 549 - 556
    Cited by:  Papers (5)

    In this paper we study two important problems in ad-hoc wireless networks: coverage-boundary detection and implementing distributed hash tables. These problems frequently arise in service location and relocation in wireless networks. For the centralized coverage-boundary problem we prove an Ω(n log n) lower bound for n devices. We show that both problems can be effectively reduced to the problem of computing Voronoi overlays and maintaining these overlays dynamically. Since the computation of Voronoi diagrams requires O(n log n) time, our solution is optimal for the computation of the coverage-boundary. We present efficient distributed algorithms for computing and dynamically maintaining Voronoi overlays, and prove stability properties for the latter; i.e., if the nodes stop moving, the overlay stabilizes to the correct Voronoi overlay. Finally, we present experimental results in the context of the two selected applications, which validate the performance of our distributed and dynamic algorithms.

  • Analysis of an energy efficient optimistic TMR scheme

    Publication Year: 2004 , Page(s): 559 - 568
    Cited by:  Papers (7)

    For mission-critical real-time applications, such as satellite and surveillance systems, a high level of reliability is desired as well as low energy consumption. In this paper, we propose a general system power model and explore the optimal speed setting to minimize system energy consumption for an optimistic TMR (OTMR) scheme. The performance of OTMR is compared with that of TMR (triple modular redundancy) and duplex with respect to energy and reliability. The results show that OTMR is always better than TMR, achieving higher levels of reliability while consuming less energy. With checkpoint overhead and recovery, duplex is not applicable when system load is high. However, duplex may be more energy efficient than OTMR depending on system static power and checkpointing overhead. Moreover, with one recovery section, duplex achieves levels of reliability comparable to those of OTMR.
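
For context, the baseline TMR vote is shown below. OTMR's optimism, i.e. adjusting execution speed and deferring redundant work when early results agree, is a scheduling policy layered on top of this vote and is not modelled here.

```python
# Baseline triple modular redundancy (TMR): run the computation three times
# and accept the majority result, masking a single faulty execution.

def tmr_vote(r1, r2, r3):
    """Return the majority result of three redundant executions."""
    if r1 == r2 or r1 == r3:
        return r1
    if r2 == r3:
        return r2
    raise ValueError("no two results agree: unrecoverable fault")
```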

  • Evaluation of a pre-reckoning algorithm for distributed virtual environments

    Publication Year: 2004 , Page(s): 445 - 452
    Cited by:  Papers (2)

    The recently introduced pre-reckoning algorithm is an alternative to the traditional dead reckoning algorithm used in DIS-compliant distributed virtual environments. Before the algorithm can be applied to distributed virtual environments, a detailed evaluation is needed. The pre-reckoning algorithm and the general dead reckoning algorithm are implemented within a simple distributed virtual environment system, CUBE. A comparison between the two algorithms is provided, and an evaluation of the performance and accuracy of the two algorithms is presented. The results indicate that the pre-reckoning algorithm achieves a much more accurate model of the actual trajectory of an entity controlled by a remote host than the general dead reckoning algorithm. Moreover, the pre-reckoning algorithm improves accuracy while generating fewer state update packets. Based on the demonstrated performance, pre-reckoning has the potential to improve the scalability of distributed virtual environments and enhance the consistency of users' view of the dynamic shared state.
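
The dead reckoning baseline being compared against can be sketched in a few lines: remote hosts extrapolate an entity's last known state, and the owning host sends an update only when the extrapolation drifts past a threshold. The threshold value is an arbitrary example, and the pre-reckoning refinements are not modelled.

```python
import math

# Plain dead reckoning: straight-line extrapolation from the last known
# position and velocity, plus the owner's decision of when to transmit a
# corrective state update.

def extrapolate(pos, vel, dt):
    """Predicted position after dt seconds of straight-line motion."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def update_needed(true_pos, predicted_pos, threshold=1.0):
    """The owning host transmits a state update when drift exceeds threshold."""
    return math.dist(true_pos, predicted_pos) > threshold
```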

  • Optimal parallel block access for range queries

    Publication Year: 2004 , Page(s): 119 - 126

    Allocation schemes for range queries have been widely used in parallel storage systems to allow fast access to multidimensional data. An allocation scheme distributes data blocks among several devices (e.g. disks) so that the number of parallel block accesses needed per query is minimized. Given a system of k disks, a query that accesses m blocks needs a number of parallel block accesses that is at least OPT = ⌈m/k⌉. In 2000, Atallah and Prabhakar described an allocation scheme with a guaranteed worst-case performance of OPT + O(log k) parallel block accesses for two dimensions. In this paper, we prove that the scheme of Atallah and Prabhakar has, in fact, guaranteed worst-case performance within an additive constant deviation from OPT: within OPT + 3 parallel block accesses for two dimensions. Also, we identify the type of queries for which the worst-case performance of the scheme is OPT + 1 parallel block accesses.
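
The bounds from the abstract are simple enough to state directly in code: with k disks and a query touching m blocks, at least ⌈m/k⌉ parallel accesses are needed, and the paper shows the two-dimensional scheme stays within three of that.

```python
# Lower bound and proven worst case for parallel block accesses.

def opt_accesses(m, k):
    """Minimum number of parallel block accesses: ceil(m / k)."""
    return -(-m // k)              # ceiling division without math.ceil

def worst_case_bound(m, k):
    """Worst case proven for the two-dimensional Atallah-Prabhakar scheme."""
    return opt_accesses(m, k) + 3
```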

  • Resource estimation and task scheduling for multithreaded reconfigurable architectures

    Publication Year: 2004 , Page(s): 323 - 330
    Cited by:  Papers (2)

    Reconfigurable computing is an emerging paradigm of research that offers cost-effective solutions for computationally intensive applications through hardware reuse. There is a growing need in this domain for techniques to exploit parallelism inherent in the target application and to schedule the parallelized application. This paper proposes a method to estimate the optimal number of resources through critical path analysis while keeping resource utilization near optimal. We also propose an algorithm to optimally schedule the parallel threads of execution in linear time. Our algorithm is based on the idea of enhanced partial critical path (ePCP) and handles memory latencies and reconfiguration overheads. Results obtained show the effectiveness of our approach over other critical path based methods.
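
The critical-path analysis such estimation relies on can be sketched as a longest-path computation over the task DAG; the ePCP refinements (memory latencies, reconfiguration overheads) are the paper's contribution and are not modelled in this sketch.

```python
# Critical-path length of a task DAG: the longest duration-weighted path,
# which lower-bounds any schedule and guides how many parallel resources
# are worth allocating.

def critical_path(durations, deps):
    """Length of the longest path through a task DAG.

    durations: task -> execution time; deps: task -> list of predecessors.
    """
    memo = {}
    def finish(task):
        if task not in memo:
            memo[task] = durations[task] + max(
                (finish(p) for p in deps.get(task, [])), default=0)
        return memo[task]
    return max(finish(t) for t in durations)
```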

  • QoS and dynamic systems workshop

    Publication Year: 2004 , Page(s): 617
  • A novel packet marking scheme for IP traceback

    Publication Year: 2004 , Page(s): 195 - 202
    Cited by:  Papers (5)

    Recently, several schemes have been proposed for IP traffic source identification for tracing denial-of-service (DoS) attacks. Most of these schemes require a very large number of packets to conduct the traceback process, which results in a lengthy and complicated procedure. In this paper, we address this issue by proposing a scheme, called probabilistic pipelined packet marking (PPPM), which employs the concept of a "pipeline" for propagating marking information from one marking router to another so that it eventually reaches the destination. The key benefit of this pipeline lies in drastically reducing the number of packets required for the traceback process. We evaluate the effectiveness of the proposed scheme for various performance metrics through a combination of analytical and simulation studies. Our studies show that the proposed scheme offers a high attack-source detection percentage and an attack-source localization distance of less than two hops under different attack scenarios.
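    The packet-count problem PPPM targets is easy to see in the classic probabilistic packet marking (PPM) baseline, where each router independently overwrites a packet's single mark field with some probability p, so the victim must observe many packets before every router on the path has surfaced as a mark. A simulation sketch of that baseline (not of PPPM itself) follows; the path names and probability are illustrative:

    ```python
    import random

    def traverse(path, p, mark=None):
        """PPM baseline: each router on the attack path overwrites the
        packet's mark field with probability p, so later routers tend
        to mask earlier ones."""
        for router in path:
            if random.random() < p:
                mark = router
        return mark

    def packets_until_full_path(path, p, seed=0):
        """Count packets the victim must see before every router on
        the path has appeared as a mark at least once."""
        random.seed(seed)
        seen, count = set(), 0
        while len(seen) < len(path):
            count += 1
            mark = traverse(path, p)
            if mark is not None:
                seen.add(mark)
        return count
    ```

    Since each packet carries at most one mark, reconstructing a d-router path takes at least d packets, and in practice far more, because routers near the victim keep overwriting marks from routers near the source.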

  • A new look at egocentric algorithms

    Publication Year: 2004 , Page(s): 333 - 340

    Several categories of voting algorithms have been developed for approximate agreement. This research presents an approach to one family of algorithms, called egocentric algorithms, and analyzes the conditions under which the approach performs better than existing egocentric voting algorithms. In addition, the approach provides some insight into the voting process of such algorithms.
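    For readers unfamiliar with the setting, a standard approximate-agreement voting step is the fault-tolerant mean: discard the t smallest and t largest received values, which bounds the influence of up to t faulty nodes, and average the rest. The sketch below shows that generic voter only; it is not an egocentric algorithm, which would additionally treat the voter's own value specially:

    ```python
    def fault_tolerant_mean(values, t):
        """Generic approximate-agreement voting step: drop the t
        lowest and t highest values, then average the survivors.
        With at most t faulty inputs, every surviving value lies
        within the range of the correct ones."""
        if len(values) <= 2 * t:
            raise ValueError("need more than 2t values to tolerate t faults")
        trimmed = sorted(values)[t:len(values) - t]
        return sum(trimmed) / len(trimmed)
    ```

    For example, with one tolerated fault (t = 1), the outlier 100 in [1, 2, 3, 100] is discarded along with the minimum, and the vote is (2 + 3) / 2 = 2.5.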

  • Scaling unstructured peer-to-peer networks with multi-tier capacity-aware overlay topologies

    Publication Year: 2004 , Page(s): 17 - 24
    Cited by:  Papers (3)

    Peer-to-peer (P2P) file-sharing systems such as Gnutella have been widely acknowledged as the fastest-growing Internet applications ever. The P2P model has many potential advantages owing to the design flexibility of overlay networks and the server-less management of cooperative sharing of information and resources. However, these systems suffer from a well-known performance mismatch between the randomly constructed overlay network topology and the underlying IP-layer topology used for packet routing. This paper proposes to structure the P2P overlay using a capacity-aware multi-tier topology to better balance load among peers with heterogeneous capacities and to prevent low-capacity nodes from degrading the performance of the system. To study the benefits and cost of the multi-tier capacity-aware topology with respect to basic and advanced routing protocols, we also develop a probabilistic broadening scheme for efficient routing, which further exploits capacity awareness to enhance the P2P routing performance of the system. We evaluate our design through simulations. The results show that our multi-tier topologies alone provide an eight- to ten-fold improvement in messaging cost, a two to three orders-of-magnitude improvement in load-balancing characteristics, and seven to eight times lower topology construction and maintenance costs compared to Gnutella's random power-law topology.
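    The core idea of capacity-aware forwarding can be sketched in a few lines: instead of flooding every neighbor Gnutella-style, a peer forwards a query to a small fanout of neighbors chosen with probability proportional to their advertised capacity, so high-capacity peers absorb more of the routing load. This is an illustration of the general idea under assumed peer names and capacities, not the paper's exact probabilistic broadening protocol:

    ```python
    import random

    def forward_targets(neighbors, fanout, seed=42):
        """Capacity-weighted neighbor selection: pick `fanout` distinct
        neighbors, each drawn with probability proportional to its
        advertised capacity.  `neighbors` maps peer id -> capacity."""
        rng = random.Random(seed)
        peers = list(neighbors)
        weights = [neighbors[p] for p in peers]
        chosen = set()
        while len(chosen) < min(fanout, len(peers)):
            chosen.add(rng.choices(peers, weights=weights)[0])
        return chosen
    ```

    With neighbors {'a': 1, 'b': 10, 'c': 10} and a fanout of 2, the high-capacity peers b and c are selected far more often than a, which is what shields low-capacity nodes from traffic they cannot handle.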
