25th IEEE International Conference on Distributed Computing Systems Workshops (2005)

Date: 6-10 June 2005

  • Proceedings. 25th IEEE International Conference on Distributed Computing Systems Workshops

  • 25th IEEE International Conference on Distributed Computing Systems Workshops - Title Page

    Page(s): i - iii
  • 25th IEEE International Conference on Distributed Computing Systems Workshops - Copyright Page

    Page(s): iv
  • 25th IEEE International Conference on Distributed Computing Systems Workshops - Table of contents

    Page(s): v - xiv
  • Message from the Workshops Chair

    Page(s): xv
  • Message from the ADSN Chairs

    Page(s): xvi
  • Workshop Committee Members

    Page(s): xxv - xxxii
  • Adding confidentiality to application-level multicast by leveraging the multicast overlay

    Page(s): 5 - 11

    While scalability, routing, and performance are core issues for application-level multicast (ALM) protocols, an important but less studied problem is security. In particular, ALM protocols need confidentiality (i.e., data secrecy, achieved through data encryption). Key management schemes must be simple and scalable, and must not degrade the performance of the ALM protocol. We explore three key management schemes that leverage the underlying overlay to distribute the key(s) and secure ALM. We evaluate their impact on three well-known ALM protocols: ESM, ALMI, and NICE. Through analysis and simulations, we show that utilizing the ALM overlay to distribute key(s) is feasible. For a given ALM protocol, the choice of the best key management scheme depends on the application's needs: minimizing rekeying latency or minimizing data multicasting latency.
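
    The common thread in the three schemes is that key material travels along the multicast overlay itself rather than from a separate key server. A minimal sketch of that general idea follows, assuming a static overlay tree and pairwise-secured hops; it is not a rendering of any of the paper's specific schemes, and all names are illustrative.

    ```python
    # Illustrative sketch: push a group key down an ALM overlay tree so
    # that no separate key server must contact every member directly.
    # In a real scheme each hop would be protected by a pairwise key.

    overlay = {                      # hypothetical overlay tree: node -> children
        "root": ["a", "b"],
        "a": ["c", "d"],
        "b": ["e"],
        "c": [], "d": [], "e": [],
    }

    def distribute_key(node, group_key, keys):
        """Store the key at `node`, then forward it to each child."""
        keys[node] = group_key
        for child in overlay[node]:
            distribute_key(child, group_key, keys)

    keys = {}
    distribute_key("root", "K1", keys)
    assert all(k == "K1" for k in keys.values())   # every member rekeyed
    ```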

  • A Byzantine fault-tolerant mutual exclusion algorithm and its application to Byzantine fault-tolerant storage systems

    Page(s): 12 - 19

    This paper presents a new distributed mutual exclusion protocol that can tolerate Byzantine faults. We use the protocol to create Byzantine fault-tolerant storage systems. We show a necessary and sufficient condition for distributed Byzantine fault-tolerant mutual exclusion: n ≥ 3f+1, where n is the number of servers and f is the number of Byzantine faulty servers; this matches the bound achieved by Martin et al.'s Byzantine fault-tolerant storage algorithm. The message complexity of Martin et al.'s algorithm is 3n for write operations and 3n+cn for read operations, where c is the number of writes concurrent with the read. Our protocol requires (3+3c')(n+3f+1)/2 messages per read or write operation, where c' is the number of concurrent conflicting operations; c' is at most one for read requests. Thus, when the number of operations concurrent with a write request is small and the number of faults is small, our protocol is more efficient than Martin et al.'s.
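
    The quoted message counts can be compared directly with a few lines of arithmetic. The parameter values below are our own illustrative choices, not figures from the paper:

    ```python
    # Back-of-the-envelope check of the message counts quoted above.
    # Parameter values are our own illustrative choices.

    def martin_write(n):          return 3 * n
    def martin_read(n, c):        return 3 * n + c * n
    def proposed(n, f, c_prime):  return (3 + 3 * c_prime) * (n + 3 * f + 1) / 2

    n, f = 10, 1                  # the condition n >= 3f + 1 must hold
    assert n >= 3 * f + 1
    print(martin_write(n))        # 30 messages per write
    print(proposed(n, f, 0))      # 21.0 with no concurrent conflicts
    print(proposed(n, f, 1))      # 42.0 once one conflicting op appears
    ```

    So the advantage over Martin et al.'s 3n-message writes appears exactly when conflicts and faults are rare, as the abstract states.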

  • Zmail: zero-sum free market control of spam

    Page(s): 20 - 26

    The problem of spam is a classic "tragedy of the commons" (G. Hardin, 1968). We propose the Zmail protocol as a way to preserve email as a "free" common resource for most users, while imposing enough cost on bulk mailers that market forces control the volume of spam. A key idea behind Zmail is that the most important resource consumed by email is not the transmission process but the end user's attention. Zmail requires the sender of an email to pay a small amount (an "e-penny") directly to the receiver of the email. Zmail is thus a "zero-sum" email protocol. Users who receive as much email as they send, on average, neither pay for nor profit from email, once they have set up initial balances with their ESPs (email service providers) to buffer the fluctuations. Spammers incur costs that moderate their behavior. Importantly, Zmail requires no definition of what is and is not spam, so spammers' efforts to evade such definitions become irrelevant. We describe methods within Zmail for supporting "free" goods such as volunteer mailing lists, and for limiting exploitation by email viruses and zombies. Zmail is not a micro-payment scheme among end users; it is an accounting relationship among "compliant ESPs", which reconcile payments to and from their users. Zmail can be implemented on top of the existing SMTP email protocol. We describe an incremental deployment strategy that can start with as few as two compliant ESPs and provides positive feedback for growth as Zmail spreads over the Internet.
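
    The zero-sum bookkeeping is easy to see in miniature. A toy sketch, assuming a single shared ledger rather than the per-ESP reconciliation the paper actually describes:

    ```python
    # Toy zero-sum ledger: every e-penny debited from a sender is
    # credited to the receiver, so the total balance is invariant.
    # A single dict stands in for the per-ESP accounting of the paper.

    balances = {"alice": 100, "bob": 100, "spammer": 100}
    EPENNY = 1

    def send_mail(sender, receiver):
        balances[sender] -= EPENNY
        balances[receiver] += EPENNY

    send_mail("spammer", "alice")
    send_mail("alice", "bob")
    send_mail("bob", "alice")
    assert sum(balances.values()) == 300   # zero-sum invariant
    print(balances)   # heavy senders pay; balanced users break even
    ```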

  • State checksum and its role in system stabilization

    Page(s): 29 - 34

    Although a self-stabilizing system that suffers a transient fault is guaranteed to converge to a legitimate state after a finite number of steps, the convergence can be slow if the harmful effects of the fault are allowed to propagate into many processes in the system. Moreover, some safety properties of the system may be violated during the convergence. To address these problems, we propose in this paper the concept of a state checksum - a redundancy that can be added to the state of a self-stabilizing system so that some classes of faults become visible to the system, which can then limit the propagation of their harmful effects and maintain its safety properties during the convergence. To make these concepts concrete, we discuss the case study of a token ring and show how to use fault-detecting and fault-correcting checksums to detect visible faults, limit the propagation of their harmful effects, and ensure that the safety properties of the ring are maintained during the convergence from these faults.
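
    A toy example of the fault-detecting case, using a simple modular sum rather than the paper's token-ring checksums, shows how a transient fault becomes visible before it can propagate:

    ```python
    # Toy fault-detecting checksum over a process state. A transient
    # bit flip leaves the stored checksum stale, so the fault is
    # "visible" and need not propagate to neighbors.

    def checksum(state):
        return sum(state) % 251        # simple modular sum, illustrative

    state = [3, 1, 4, 1, 5]
    saved = checksum(state)

    state[2] = 9                       # a transient fault corrupts the state
    print("fault visible:", checksum(state) != saved)   # True
    ```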

  • An optimal snap-stabilizing multi-wave algorithm

    Page(s): 35 - 41

    In real-time systems, correct information is needed fast, so developing fast and accurate algorithms is a must. Algorithms must be resilient to transient faults and topology changes, and the capability to adapt to heterogeneous and changing requirements is at the core of assurance in distributed systems. A snap-stabilizing algorithm, starting from an arbitrary system configuration, always behaves according to its specification. In this paper, we propose a snap-stabilizing k-wave algorithm (called kW) implementing k distinct consecutive waves (k > 2) for trees, with O(h) rounds of delay and at most k+4 states per process; the leaf nodes use only four states. The algorithm is optimal with respect to its time and state space complexity, and it can be generalized to arbitrary networks using any of the existing self-stabilizing spanning tree construction algorithms.
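
    The building block being repeated k times is a broadcast-and-feedback wave over the tree. The sketch below is an idealized, fault-free rendering of such a wave with a made-up topology; it is not the snap-stabilizing kW protocol itself:

    ```python
    # Idealized broadcast-and-feedback wave on a rooted tree, the unit a
    # multi-wave algorithm repeats k times. Fault-free and centralized,
    # so it is only a picture of kW's behavior, not the protocol.

    tree = {"r": ["a", "b"], "a": ["c"], "b": [], "c": []}

    def wave(node):
        """Broadcast reaches `node`; feedback returns the subtree height."""
        if not tree[node]:
            return 0
        return 1 + max(wave(child) for child in tree[node])

    for i in range(1, 4):              # k = 3 consecutive waves
        print(f"wave {i} done; delay grows with tree height h = {wave('r')}")
    ```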

  • Reconciling the theory and practice of (un)reliable wireless broadcast

    Page(s): 42 - 48

    Theorists and practitioners have fairly different perspectives on how wireless broadcast works. Theorists think about synchrony; practitioners think about backoff. Theorists assume reliable communication; practitioners worry about collisions. The examples are endless. Our goal is to begin to reconcile the theory and practice of wireless broadcast, in the presence of failures. We propose new models for wireless broadcast and use them to examine what makes a broadcast model good. In the process, we pose some interesting questions that help to bridge the gap.

  • Dynamic load balancing using network transferable computer

    Page(s): 51 - 57

    This paper proposes a new dynamic load balancing (DLB) method for network traffic. In client-server systems, intense access to a particular server host often causes excessive traffic on a path connected to the server. Although mirror servers are used to balance host load, this may not be sufficient to balance network traffic. In the DLB method, a server has the capability to move to another network, so that the flows of packets toward and from the server change and some packets avoid the congested path. This reduction of traffic on the congested path achieves load balancing of network traffic. The DLB method is based on the network transferable computer (NTC) and mobile IP. A management system is also provided, with the following responsibilities: (1) analyzing packets destined for the server; (2) calculating the fluctuation rate of the volume of packets toward the server; (3) estimating the future volume of packets; and (4) deciding whether the server should move and, if necessary, selecting its new location. Simulations show that the method effectively reduces traffic on the target path.
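
    Responsibilities (2)-(4) amount to a forecast-and-threshold decision. A hypothetical sketch, with a linear extrapolation standing in for whatever estimator the paper uses and an invented capacity threshold:

    ```python
    # Hypothetical decision step for the management system: extrapolate
    # traffic toward the server and move it when the current path would
    # stay overloaded. Estimator and threshold are invented.

    CAPACITY = 1000                    # packets/s the current path can carry

    def estimate_future(samples):
        rate = (samples[-1] - samples[0]) / (len(samples) - 1)
        return samples[-1] + rate      # linear extrapolation one step ahead

    def should_move(samples):
        return estimate_future(samples) > CAPACITY

    print(should_move([700, 800, 900, 950]))   # True: relocate the server
    print(should_move([400, 420, 410, 400]))   # False: stay put
    ```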

  • Implementation issues of parallel downloading methods for a proxy system

    Page(s): 58 - 64

    Parallel downloading technology is expected to utilize broad bandwidth and fetch a file quickly by retrieving each piece from a different server in parallel. When users retrieve a file from mirror servers in parallel, it is important that the files on the mirror servers be identical. We have previously proposed a proxy system which can ensure this sameness of files for users. In this paper we combine parallel downloading with the proxy server technology in order to download files quickly and serve the latest files. However, our original parallel downloading method took into account neither the downloading order of the pieces of a file nor the buffer space required. In order to provide users with the required file in order, as a byte stream, the proxy server should reorder the pieces fetched from multiple servers and fill in delayed blocks as soon as possible. We therefore introduce "substituting download", which requests a delayed block from another server as well, so that downloading completes earlier. Through experiments on the Internet, we clarify the tradeoff between buffering time and the redundant traffic generated by duplicate requests to multiple servers. Adjustments that balance this tradeoff work well, and our method is shown both to download files quickly and to bound the buffer space. This technology can help ensure reliable and swift service in network assurance systems.
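
    The substituting-download tradeoff can be seen in a toy model: re-requesting a lagging block from a second mirror bounds the wait at the cost of a duplicate request. All delays and the timeout below are invented:

    ```python
    # Toy model of "substituting download": a block that exceeds the
    # timeout is also requested from a second mirror, and the first copy
    # to arrive is kept. All delays and the timeout are invented.

    mirror_delay = {"m1": {0: 1.0, 1: 8.0, 2: 1.2},    # block -> seconds
                    "m2": {0: 1.1, 1: 1.3, 2: 1.1}}
    TIMEOUT = 2.0

    def fetch(block, primary, backup):
        t = mirror_delay[primary][block]
        if t <= TIMEOUT:
            return t, primary
        # Substituting download: duplicate request once the timeout fires.
        return TIMEOUT + mirror_delay[backup][block], backup

    for blk in range(3):
        t, src = fetch(blk, "m1", "m2")
        print(f"block {blk}: {t:.1f}s via {src}")   # block 1 arrives at 3.3s, not 8.0s
    ```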

  • Voting multi-dimensional data with deviations for Web services under group testing

    Page(s): 65 - 71

    Web services (WS) need to be trustworthy to be used in critical applications. A technique called WS group testing has been proposed which can significantly reduce the cost of testing and ranking a large number of WS. A main feature of WS group testing is that it can establish test oracles for given test inputs from multiple WS and infer the oracles by plural voting. Efficient voting over complex, high-volume data is critical to the success of group testing, and current voting techniques are not designed for this situation. This paper presents efficient voting algorithms that determine the plural value over large volumes of multi-dimensional data. The algorithms use a clustering method to classify data into regions in order to identify the plural value. Experiments are designed and performed as a proof of concept for the algorithms and their application to group testing.
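
    A minimal sketch of clustering-based plural voting: outputs that fall within a distance tolerance of one another are grouped, and the centroid of the largest group is the voted value. The tolerance and data are illustrative, not the paper's:

    ```python
    # Illustrative clustering-based plural voting over 2-D outputs:
    # outputs within `tol` of a cluster's first member are grouped, and
    # the centroid of the largest cluster is the voted value.

    import math

    def plural_value(outputs, tol=0.5):
        clusters = []
        for out in outputs:
            for cluster in clusters:
                if math.dist(out, cluster[0]) <= tol:
                    cluster.append(out)
                    break
            else:
                clusters.append([out])
        winner = max(clusters, key=len)
        dim = len(winner[0])
        return tuple(sum(p[i] for p in winner) / len(winner) for i in range(dim))

    ws_outputs = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9), (5.0, 5.0)]  # one outlier
    print(plural_value(ws_outputs))    # ~(1.0, 2.0): majority cluster wins
    ```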

  • A new TCP congestion control method considering adaptability over satellite Internet

    Page(s): 75 - 81

    Most standard congestion control implementations perform poorly over the satellite Internet due to both a high bit error rate and a long propagation delay. This paper proposes a new TCP congestion control method called TCP-STAR to improve both TCP performance and adaptability to network conditions over the satellite Internet; performance and adaptability are among the most important metrics for assurance. TCP-STAR has three new mechanisms: congestion window setting based on available bandwidth (CWS), lift window control (LWC), and acknowledgement error notification (AEN). CWS avoids reducing the transmission rate when data losses are caused by bit errors. LWC increases the congestion window quickly based on the estimated available bandwidth. AEN avoids throughput reduction caused by mis-retransmission of data, which results from ack losses or delay. Simulation experiments show that TCP-STAR improves throughput compared with other TCP variants and adapts well to network conditions. Furthermore, the fairness of TCP-STAR is better than that of other TCP variants.
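
    Of the three mechanisms, CWS is the simplest to sketch: when a loss is attributed to bit error rather than congestion, the window is set from the estimated bandwidth-delay product instead of being halved. The classification and constants below are placeholders, not TCP-STAR's actual rules:

    ```python
    # Placeholder sketch of CWS only: on a loss attributed to bit error,
    # size the window from the estimated bandwidth-delay product instead
    # of halving it. Constants and classification are not TCP-STAR's.

    MSS = 1460                         # segment size in bytes

    def cwnd_after_loss(cwnd, est_bw_bps, rtt_s, congestion_loss):
        if congestion_loss:
            return cwnd / 2            # standard multiplicative decrease
        bdp_segments = est_bw_bps * rtt_s / (8 * MSS)
        return max(cwnd / 2, bdp_segments)   # keep the long pipe full

    # A 10 Mbit/s satellite link with a 550 ms round-trip time:
    print(cwnd_after_loss(80, 10e6, 0.55, congestion_loss=True))    # 40.0
    print(cwnd_after_loss(80, 10e6, 0.55, congestion_loss=False))   # ~471 segments
    ```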

  • Improving multipath reliability in topology-aware overlay networks

    Page(s): 82 - 88

    Use of multiple paths between node pairs can enable an overlay network to bypass Internet link failures. Selecting high-quality primary and backup paths is challenging, however. To maximize communication reliability, an overlay multipath routing protocol must account for both the failure probability of a single path and link sharing among multiple paths. We propose a practical solution that exploits physical topology information and end-to-end path quality measurements to select high-quality path pairs. Simulation results show the proposed approach is effective in achieving higher multipath reliability in overlay networks at reasonable communication cost.
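
    Why link sharing matters is visible in a small independence model: a link common to both paths can take down the pair at once. The failure probabilities below are invented for illustration:

    ```python
    # Independence model showing why shared links hurt a path pair.
    # Per-link failure probabilities are invented for illustration.

    from math import prod

    p_fail = {"e1": 0.01, "e2": 0.02, "e3": 0.01, "e4": 0.02, "s": 0.02}

    def path_fails(links):
        return 1 - prod(1 - p_fail[l] for l in links)

    def pair_fails(a, b):
        shared = set(a) & set(b)
        only_a = [l for l in a if l not in shared]
        only_b = [l for l in b if l not in shared]
        p_shared_up = prod(1 - p_fail[l] for l in shared)
        # The pair fails if a shared link fails, or both private parts fail.
        return 1 - p_shared_up * (1 - path_fails(only_a) * path_fails(only_b))

    print(pair_fails(["e1", "e2"], ["e3", "e4"]))   # ~0.0009 (disjoint pair)
    print(pair_fails(["e1", "s"], ["e3", "s"]))     # ~0.0201 (one shared link)
    ```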

  • Bandwidth clustering for reliable and prioritized network routing using split agent-based method

    Page(s): 89 - 94

    Recent research has highlighted the importance of developing networks with distributed problem-solving abilities, enhancing reliability while sharing network resources equally. While several centralized schemes have been proposed for efficient path marking and capacity reservation, the decentralized approach is one of the motivating reasons for employing the adaptive behavior of swarm-based agents. Algorithmically complex problems such as reliable network routing call for a dynamically adaptive approach. In this work the bandwidth clustering method is presented, developed using the split agent-based routing technique (SART). SART is applied in the network, performing path marking while activating the bandwidth clustering method, by which nodes along paths are clustered with respect to several levels of available bandwidth. A path freshness degree is estimated and modeled in order to ensure the reliability of data traffic flow. Performance, path survivability, and recoverability under the SART-bandwidth clustering scheme are thoroughly examined.
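
    The clustering step alone is easy to picture: nodes along a marked path are bucketed into discrete levels of available bandwidth. The sketch below invents the level boundaries and omits the split agents, path marking, and freshness modeling:

    ```python
    # Toy bucketing of nodes along a marked path into discrete levels of
    # available bandwidth. Boundaries are invented; the SART agents that
    # measure and mark paths are not modeled here.

    LEVELS = [(0, 10), (10, 50), (50, 155)]        # Mbit/s ranges, illustrative

    def level_of(bw):
        for lvl, (lo, hi) in enumerate(LEVELS):
            if lo <= bw < hi:
                return lvl
        return len(LEVELS) - 1

    path_bw = {"n1": 8.0, "n2": 42.0, "n3": 90.0, "n4": 47.0}   # node -> Mbit/s
    clusters = {}
    for node, bw in path_bw.items():
        clusters.setdefault(level_of(bw), []).append(node)
    print(clusters)    # {0: ['n1'], 1: ['n2', 'n4'], 2: ['n3']}
    ```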

  • Message from the SDCS Chairs

    Page(s): xvii
  • InFilter: predictive ingress filtering to detect spoofed IP traffic

    Page(s): 99 - 106

    Cyber-attackers often use incorrect source IP addresses in attack packets (spoofed IP packets) to achieve anonymity, reduce the risk of trace-back, and avoid detection. We present the predictive ingress filtering (InFilter) approach for network-based detection of spoofed IP packets near cyber-attack targets. Our InFilter hypothesis states that traffic entering an IP network from a specific source frequently uses the same ingress point. We have empirically validated this hypothesis by analyzing traceroutes to 20 Internet targets from 24 looking-glass sites, and 30 days of Border Gateway Protocol-derived path information for the same 20 targets. We have developed a system architecture and software implementation based on the InFilter approach that can be used at border routers of large IP networks to detect spoofed IP traffic. Our implementation had a detection rate of about 80% and a false positive rate of about 2% in testbed experiments using Internet traffic and real cyber-attacks.
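
    The InFilter hypothesis reduces to a learned table from source prefix to expected ingress point, with mismatches flagged as spoofing suspects. A toy version, far simpler than the deployed system's learning and thresholds:

    ```python
    # Toy version of the InFilter table: remember the ingress point a
    # source prefix normally uses and flag arrivals elsewhere. The real
    # system learns from traceroute and BGP data with proper thresholds.

    usual_ingress = {}                 # source prefix -> learned ingress router

    def observe(prefix, ingress):
        usual_ingress.setdefault(prefix, ingress)

    def suspicious(prefix, ingress):
        learned = usual_ingress.get(prefix)
        return learned is not None and learned != ingress

    observe("203.0.113.0/24", "border-east")
    print(suspicious("203.0.113.0/24", "border-east"))   # False: expected ingress
    print(suspicious("203.0.113.0/24", "border-west"))   # True: spoofing suspect
    ```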

  • Active timing-based correlation of perturbed traffic flows with chaff packets

    Page(s): 107 - 113

    Network intruders usually launch their attacks through a chain of intermediate stepping-stone hosts in order to hide their identities. Detecting such stepping-stone attacks is difficult because packet encryption, timing perturbations, and meaningless chaff packets can all be utilized by attackers to evade detection. In this paper, we propose a method based on packet matching and timing-based active watermarking that can successfully correlate interactive stepping-stone connections even in the presence of chaff packets and limited timing perturbations. We provide several algorithms that offer different trade-offs among detection rate, false positive rate, and computation cost. Our experimental evaluation with both real-world and synthetic data indicates that by integrating packet matching and active watermarking, our approach has overall better performance than existing schemes.
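
    The active-watermarking half of the approach can be caricatured in a few lines: selected inter-packet gaps are stretched to encode bits, and the same gaps are tested downstream. The sketch ignores chaff and perturbation, precisely the hard cases the paper handles:

    ```python
    # Caricature of timing-based active watermarking: stretch selected
    # inter-packet gaps to encode bits, then test the same gaps downstream.
    # Chaff and perturbation, the paper's actual concern, are ignored.

    DELTA = 0.4                        # seconds of delay encoding a 1-bit
    positions = [1, 3]                 # indices of watermark-carrying packets

    def embed(times, bits):
        out, shift = [], 0.0
        for i, t in enumerate(times):
            if i in positions and bits[positions.index(i)]:
                shift += DELTA         # delaying a packet shifts all later ones
            out.append(t + shift)
        return out

    def detect(orig, observed):
        return [observed[i] - orig[i] - (observed[i - 1] - orig[i - 1]) > DELTA / 2
                for i in positions]

    times = [0.0, 1.0, 2.0, 3.0, 4.0]
    marked = embed(times, [1, 0])
    print(detect(times, marked))       # [True, False]: watermark recovered
    ```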

  • Specifying information-flow controls

    Page(s): 114 - 120

    The core problem in risk analysis - determining exploitable paths between attackers and system assets - is essentially a problem of determining information flow. It is relatively straightforward to interpret design models for service-based distributed systems in information-flow terms, but the analysis results must be integrated into the system engineering process, and any resulting security controls must be meaningful to system practitioners as well as security analysts. The work reported here addresses these practical problems; it shows that information-flow analysis can be integrated into the requirements traceability process, ensuring that security controls are specific about the properties they require. Communication between the information-flow analyst and the system practitioner is also addressed by tuning the analysis to reflect the exploitability of threat paths, and by defining security controls as patterns of information-flow constraints rather than single predicates.
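
    The underlying observation, that exploitable paths are reachability in an information-flow graph, is straightforward to sketch. The graph below is hypothetical; in the paper's setting it would be derived from design models:

    ```python
    # Exploitable-path finding as reachability over an information-flow
    # graph. The graph is hypothetical; the paper derives such models
    # from service-based system designs.

    flows = {                          # directed information-flow edges
        "attacker": ["web_frontend"],
        "web_frontend": ["app_service"],
        "app_service": ["customer_db"],
        "customer_db": [],
    }

    def threat_paths(src, asset, path=None):
        path = (path or []) + [src]
        if src == asset:
            return [path]
        return [p for nxt in flows.get(src, [])
                  for p in threat_paths(nxt, asset, path)]

    for p in threat_paths("attacker", "customer_db"):
        print(" -> ".join(p))          # each hop is a candidate control point
    ```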

  • MAFIC: adaptive packet dropping for cutting malicious flows to push back DDoS attacks

    Page(s): 123 - 129

    In this paper, we propose a new approach called MAFIC (malicious flow identification and cutoff) to support adaptive packet dropping to fend off DDoS attacks. MAFIC works by judiciously issuing lightweight probes to flow sources to check whether they are legitimate. Through such probing, MAFIC drops malicious attack packets with high accuracy while minimizing the loss on legitimate traffic flows. Our NS-2-based simulations indicate that the MAFIC algorithm drops packets from unresponsive potential attack flows with an accuracy as high as 99% and reduces the loss on legitimate flows to less than 3%. Furthermore, the false positive and false negative rates are low, around 1% for the majority of cases.
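
    The probe-then-drop core loop, as the abstract describes it, fits in a few lines. The probe here is simulated, and the two drop rates merely echo the abstract's reported figures; they are not the algorithm's actual parameters:

    ```python
    # Probe-then-drop core loop as the abstract describes it. The probe
    # is simulated; the two drop rates merely echo the abstract's
    # reported 99%/3% figures and are not MAFIC's real parameters.

    flows = {"f1": {"responds": True}, "f2": {"responds": False}}

    def probe(flow):
        """Stand-in for a lightweight probe sent to the flow's source."""
        return flows[flow]["responds"]

    def drop_probability(flow):
        # Responsive (likely legitimate) flows keep a nominal drop rate;
        # flows that ignore probes are treated as attack traffic.
        return 0.03 if probe(flow) else 0.99

    for f in flows:
        print(f, drop_probability(f))  # f1 0.03, f2 0.99
    ```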

  • Performing BGP experiments on a semi-realistic Internet testbed environment

    Page(s): 130 - 136

    We have built a router testbed that is connected to the Deter/Emist experimental infrastructure. Our goal is to create a semi-realistic testbed on which to conduct BGP experiments and to measure and visualize their impact on network performance and stability. Such a testbed is also useful for evaluating different security countermeasures. Our testbed architecture includes four components: routing topology, background traffic, data analysis, and visualization. This paper describes how we launch two specific BGP attacks, (a) multiple-origin AS and (b) route flap damping attacks, and the lessons learned.
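
    Of the two attacks exercised, multiple-origin AS (MOAS) is the easier to illustrate: a prefix announced by more than one origin AS is a conflict worth flagging. A minimal detector sketch with fabricated updates:

    ```python
    # Minimal detector for the first attack, multiple-origin AS (MOAS):
    # the same prefix originated by more than one AS is flagged.
    # Update data is fabricated for the sketch.

    from collections import defaultdict

    origins = defaultdict(set)         # prefix -> set of origin AS numbers

    def on_update(prefix, origin_as):
        origins[prefix].add(origin_as)
        if len(origins[prefix]) > 1:
            print(f"MOAS conflict for {prefix}: {sorted(origins[prefix])}")

    on_update("198.51.100.0/24", 64500)   # legitimate announcement
    on_update("198.51.100.0/24", 64666)   # attacker originates the same prefix
    ```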
