21st International Conference on Distributed Computing Systems (ICDCS 2001)

Date: April 2001

  • A command and control support system using CORBA

    Page(s): 735 - 738

    A C4I (command, control, computers, communication and intelligence) support system spans a large variety of requirements and usually serves many users with diverse needs. Also, in order to properly display information to the decision-makers in a timely way, it must integrate data from other systems, not necessarily built with the same technology. The Operations Theater Surveillance System (SATO), presented in this paper, was designed in 1998 for the Mercury project, which is developing the Brazilian Navy's new C2S (Command and Control Support) system. This paper, as an experience report, focuses on the software constructs used in the system, with an architectural perspective. SATO displays and manages all the information presented to the users. In addition, as Mercury's main subsystem, it lays the foundation for integrating other subsystems and legacy systems. Data enters Mercury from all crisis control centers (CCCs) distributed throughout the country.

  • Proceedings 21st International Conference on Distributed Computing Systems

  • Support for speculative update propagation and mobility in Deno

    Page(s): 509 - 516

    This paper presents the replication framework of Deno, an object replication system specifically designed for mobile and weakly-connected environments. Deno uses weighted voting for availability and pair-wise, epidemic information flow for flexibility. This combination allows the protocols to operate with less than full connectivity, to easily adapt to changes in group membership, and to make few assumptions about the underlying network topology. Deno has been implemented and runs on top of Linux and Win32 platforms. We use the Deno prototype to characterize the performance of two versions of Deno's protocol. The first version enables globally serializable execution of update transactions. The second supports a weaker consistency level that still guarantees transactionally-consistent access to replicated data. We demonstrate that the incremental cost of providing global serializability is low, and that speculative dissemination of updates can significantly improve commit performance.
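
    The weighted-voting idea in the abstract can be illustrated with a toy quorum check: an update commits once the replicas voting for it hold a strict majority of the total weight. This is a minimal sketch under assumed names and a simple majority rule, not Deno's actual protocol.

```python
# Toy sketch of weighted voting: commit a candidate update once the
# votes gathered for it exceed half of the total weight. Names and the
# strict-majority rule are illustrative, not Deno's exact mechanism.

def committed(votes, weights):
    """votes: dict replica -> candidate update it voted for.
    Returns the winning update, or None if no weighted majority yet."""
    total = sum(weights.values())
    tally = {}
    for replica, candidate in votes.items():
        tally[candidate] = tally.get(candidate, 0) + weights[replica]
    winner = max(tally, key=tally.get)
    return winner if tally[winner] * 2 > total else None

weights = {"a": 3, "b": 1, "c": 1}
print(committed({"a": "u1", "b": "u2"}, weights))  # "u1" holds 3 of 5 -> commits
```

    Because votes flow pair-wise and epidemically, such a check can succeed without ever contacting every replica, which is the point of combining voting with epidemic propagation.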

  • Author index

    Page(s): 747 - 749
  • Self-stabilizing PIF algorithm in arbitrary rooted networks

    Page(s): 91 - 98

    We present a deterministic distributed Propagation of Information with Feedback (PIF) protocol in arbitrary rooted networks. The proposed algorithm does not use a preconstructed spanning tree. The protocol is self-stabilizing, meaning that starting from an arbitrary state (in response to an arbitrary perturbation modifying the memory state), it is guaranteed to behave according to its specification. Every PIF wave initiated by the root inherently creates a tree in the graph. So, the tree is dynamically created according to the progress of the PIF wave. This allows our PIF algorithm to take advantage of the relative speed of different components of the network. The proposed algorithm can be easily used to implement any self-stabilizing system which requires a (self-stabilizing) wave protocol running on an arbitrary network.
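
    The broadcast-then-feedback shape of a PIF wave can be sketched on a fixed rooted tree: the message propagates down from the root, and acknowledgements flow back up once every descendant has been reached. This centralized simulation assumes a static tree for illustration; the paper's contribution is that the tree is built dynamically by the wave itself and the protocol is self-stabilizing.

```python
# Minimal sketch of one Propagation of Information with Feedback (PIF)
# wave on a rooted tree. Assumption: the tree is given statically here,
# whereas the paper's algorithm constructs it as the wave progresses.

def pif_wave(tree, root):
    """Propagate from `root` down the tree, then collect feedback up.
    Returns the set of nodes that acknowledged the wave."""
    acked = set()
    def visit(node):
        for child in tree.get(node, []):
            visit(child)          # broadcast phase: reach the subtree
        acked.add(node)           # feedback phase: ack flows upward
    visit(root)
    return acked

tree = {"r": ["a", "b"], "a": ["c"], "b": []}
print(sorted(pif_wave(tree, "r")))  # every node acknowledges the wave
```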

  • Design and implementation of a composable reflective middleware framework

    Page(s): 644 - 653

    With the evolution of the global information infrastructure, service providers will need to provide effective and adaptive resource management mechanisms that can serve more concurrent clients and deal with applications that exhibit quality-of-service (QoS) requirements. Flexible, scalable and customizable middleware can be used as an enabling technology for next-generation systems that adhere to the QoS requirements of applications that execute in highly dynamic distributed environments. To enable application-aware resource management, we are developing a customizable and composable middleware framework called CompOSE|Q (Composable Open Software Environment with QoS), based on a reflective meta-model. In this paper, we describe the architecture and runtime environment for CompOSE|Q and briefly assess the performance overhead of the additional flexibility. We also illustrate how flexible communication mechanisms can be supported efficiently in the CompOSE|Q framework.

  • Unifying stabilization and termination in message-passing systems

    Page(s): 99 - 106

    We dispel the myth that it is impossible for any stabilizing message passing program to be terminating. We identify fixpoint-symmetry as a necessary condition for a message passing stabilizing program to be terminating. Our results do confirm that a number of well-known input-output problems (e.g., leader election and consensus) do not admit a terminating and stabilizing solution. On the flip side, they show that reactive problems such as mutual exclusion and reliable-transmission do admit such solutions. We go on to present stabilizing and terminating programs for both problems. Also, we describe a way to add termination to a stabilizing program, and demonstrate it in the context of our design of a solution to the reliable-transmission problem.

  • Design and evaluation of redistribution strategies for wide-area commodity distribution

    Page(s): 154 - 161

    The proliferation of e-commerce has enabled a new set of applications that allow globally distributed purchasing of commodities such as books, CDs, travel tickets, etc., over the Internet. These commodities can be represented online by tokens, which can be distributed among servers to enhance the performance and availability of such applications. There are two fundamental approaches for distributing such tokens: partitioning and replication. Partitioning-based approaches eliminate the need for tight quorum synchronization required by replication-based approaches. The effectiveness of partitioning, however, relies on token redistribution techniques that allow dynamic migration of tokens to where they are needed. We propose pair-wise token redistribution strategies to support applications that involve wide-area commodity distribution. Using a detailed simulation model and real Internet message traces, we investigate the performance of our redistribution strategies and a previously proposed replication-based scheme. Our results reveal that, for the types of applications and environment we address, partitioning-based approaches perform better, primarily due to their ability to provide higher server autonomy.
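
    A pair-wise redistribution step can be pictured as two servers rebalancing their token holdings toward recent demand. The proportional policy and names below are purely illustrative, a sketch of the idea rather than the paper's strategies.

```python
# Hypothetical sketch of pair-wise token redistribution: two servers
# exchange commodity tokens so each ends up holding a share proportional
# to its observed demand. The proportional rule is an assumption for
# illustration, not the paper's actual redistribution strategy.

def redistribute(tokens_a, tokens_b, demand_a, demand_b):
    total = tokens_a + tokens_b
    if demand_a + demand_b == 0:
        return tokens_a, tokens_b          # no demand signal: keep as-is
    share_a = round(total * demand_a / (demand_a + demand_b))
    return share_a, total - share_a        # tokens migrate to where needed

print(redistribute(90, 10, 1, 4))  # -> (20, 80): tokens flow to the busy server
```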

  • Token based group mutual exclusion for asynchronous rings

    Page(s): 691 - 694

    We propose a group mutual exclusion algorithm for unidirectional rings. Our algorithm does not require processes to have identifiers. Moreover, processes maintain no special data structures to implement any queues. The space requirement of each process depends only on the number of shared resources, and is equal to 4×log(m+1)+2 bits. The size of messages is only 2×log(m+1) bits. Every resource request generates O(n²) messages in the worst case, but zero messages in the best case.
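
    The stated bounds depend only on the number m of shared resources, and can be checked directly (assuming base-2 logarithms, which the abstract leaves implicit):

```python
# Quick check of the space bounds quoted in the abstract, assuming the
# logarithms are base 2 (the abstract does not state the base).
import math

def state_bits(m):
    return 4 * math.log2(m + 1) + 2     # per-process space requirement

def message_bits(m):
    return 2 * math.log2(m + 1)         # size of each message

print(state_bits(7), message_bits(7))   # m = 7 -> 14.0 and 6.0 bits
```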

  • Dynamic database management for PCS networks

    Page(s): 683 - 686

    This paper presents a dynamic database management method for location management of personal communications service (PCS) networks. The proposed method provides dynamic copies of user location information in the nearest home location register (HLR) database, which allows mobile users to access the system efficiently.

  • Dynamic migration algorithms for distributed object systems

    Page(s): 119 - 126

    Complex distributed object systems require dynamic migration algorithms that allocate and reallocate objects to respond to changes in the load or in the availability of the resources. We present the Cooling and Hot-Spot migration algorithms that reallocate objects when the load on a processor is high or when the latency of a task is high. The algorithms have been implemented as a feedback loop in the Eternal Resource Management System, where information obtained from monitoring the behavior of the objects and the usage of the processors' resources is used to dynamically balance the load on the processors and improve the latency of the tasks. The cost of moving an object is justified by amortization over many method invocations, and constrains the rate at which objects are moved. The experimental results show that our algorithms guarantee steady flow of operation for the tasks and gracefully migrate objects from the processors when processor overloads and high task latencies are detected.

  • Fast reconciliations in fluid replication

    Page(s): 449 - 458

    Mobile users can increasingly depend on high speed connectivity. Despite this, using distributed file services across the wide area is painful. Fast approaches sacrifice one or more of safety, visibility, and consistency in the name of performance. Instead, we propose fluid replication, the ability to create replicas where and when needed. These replicas, called WayStations, maintain consistency with home servers through periodic reconciliations. Two techniques make reconciliation fast; this is crucial to the success of fluid replication. First, we defer propagation of updates, and only invalidate files during a reconciliation. Second, rather than depend on operation logs, we provide the subtrees in which all updates have occurred. These subtrees, named by their least common ancestors, or LCAs, can be constructed incrementally, and reduce the burden of checking serializability during a reconciliation. While these techniques provide better performance, they are not without risk. Bulk invalidation can lead to false sharing, optimistic updates are subject to conflict, and deferred updates may cause performance problems if they are needed elsewhere. To address these concerns, we performed a trace-based evaluation of our algorithms.
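
    The LCA naming of updated subtrees can be sketched for '/'-separated file paths: the subtree containing a set of updated files is named by their longest common path prefix. This is a simplification for illustration (the paper constructs LCAs incrementally over the file system tree).

```python
# Sketch of naming an updated subtree by the least common ancestor (LCA)
# of the updated files. Assumption: files are identified by '/'-separated
# paths; the real system builds this incrementally, not in one pass.

def lca(paths):
    parts = [p.strip("/").split("/") for p in paths]
    prefix = []
    for components in zip(*parts):       # walk the paths level by level
        if len(set(components)) == 1:
            prefix.append(components[0]) # still in the common subtree
        else:
            break
    return "/" + "/".join(prefix)

print(lca(["/home/a/x.txt", "/home/a/b/y.txt"]))  # -> /home/a
```

    Reconciliation then only needs to examine the named subtrees for conflicting updates, rather than replaying a full operation log.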

  • Multiprocessor preprocessing algorithms for uniprocessor on-line scheduling

    Page(s): 219 - 226

    H. Chetto and M. Chetto (1989) presented an algorithm for the online admission control and run-time scheduling of aperiodic real-time jobs in preemptive uniprocessor environments that are executing systems of periodic hard real-time tasks. This algorithm requires a significant degree of preprocessing of the system of periodic tasks - in general, this preprocessing takes a time that is exponential in the representation of the periodic task system. In this paper, we develop techniques for speeding up the preprocessing phase of the Chetto & Chetto algorithm, by adapting it for implementation in parallel environments. We validate the effectiveness of our parallelization both by theoretical results and through simulation experiments.

  • Adaptive beacon placement

    Page(s): 489 - 498

    Beacon placement strongly affects the quality of spatial localization, a critical service for context-aware applications in wireless sensor networks; yet this aspect of localization has received little attention. Fixed beacon placement approaches such as uniform and very dense placement are not always viable and will be inadequate in very noisy environments in which sensor networks may be expected to operate (with high terrain and propagation uncertainties). We motivate the need for empirically adaptive beacon placement and outline a general approach based on exploration and instrumentation of the terrain conditions by a mobile human or robot agent. We design, evaluate and analyze three novel adaptive beacon placement algorithms using this approach for localization based on RF-proximity. In our evaluation, we find that beacon density, rather than noise level, has the more significant impact on beacon placement algorithms. Our beacon placement algorithms are applicable to a low (beacon) density regime of operation. Noise makes moderate-density regimes more amenable to improvement.

  • LIME: a middleware for physical and logical mobility

    Page(s): 524 - 533

    LIME is a middleware supporting the development of applications that exhibit physical mobility of hosts, logical mobility of agents, or both. LIME adopts a coordination perspective inspired by work on the Linda model. The context for computation, represented in Linda by a globally accessible, persistent tuple space, is represented in LIME by transient sharing of the tuple spaces carried by each individual mobile unit. Linda tuple spaces are also extended with a notion of location and with the ability to react to a given state. The hypothesis underlying our work is that the resulting model provides a minimalist set of abstractions that enable rapid and dependable development of mobile applications. In this paper, we illustrate the model underlying LIME, present its current design and implementation, and discuss initial lessons learned in developing applications that involve physical mobility.

  • RAD: a compile-time solution to buffer overflow attacks

    Page(s): 409 - 417

    Buffer overflow attacks can be mounted against almost arbitrary programs and constitute one of the most common vulnerabilities that can seriously compromise the security of a network-attached computer system. This paper presents a compiler-based solution to the notorious buffer overflow attack problem. Using this solution, users can prevent attackers from compromising their systems by changing the return address to execute injected code, which is the most common method used in buffer overflow attacks. Return address defender (RAD) is a simple compiler patch that automatically creates a safe area to store a copy of return addresses and automatically adds protection code into applications that it compiles to defend programs against buffer overflow attacks. Using it to protect a program does not require modifying the source code of the protected program. Moreover, RAD does not change the layout of stack frames, so the binary code it generates is compatible with existing libraries and other object files. Empirical performance measurements on a fully operational RAD prototype show that programs protected by RAD experience a slowdown factor of only between 1.01 and 1.31. In this paper we present the principle of buffer overflow attacks, a taxonomy of defense methods, the implementation details of RAD, and the performance analysis of the RAD prototype.
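
    The core RAD mechanism, saving a copy of each return address in a safe area and comparing it on function return, can be simulated in a few lines. This is a toy model of the check, not the compiler patch itself; the function names and addresses are illustrative.

```python
# Toy simulation of RAD's return-address check: the prologue saves a
# copy of the return address in a safe area (here, a shadow stack), and
# the epilogue compares the stack's return address against that copy.

shadow = []                          # the "safe area" holding RA copies

def on_call(ret_addr):
    shadow.append(ret_addr)          # prologue: save a protected copy

def on_return(stack_ret_addr):
    saved = shadow.pop()             # epilogue: compare against the copy
    if stack_ret_addr != saved:
        raise RuntimeError("return address tampered: attack detected")
    return stack_ret_addr

on_call(0x4005D0)
print(hex(on_return(0x4005D0)))      # intact frame returns normally
on_call(0x4005D0)
try:
    on_return(0xDEADBEEF)            # an overwritten return address is caught
except RuntimeError as e:
    print(e)
```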

  • Robust TCP congestion recovery

    Page(s): 199 - 206

    Presents a new robust TCP (Transmission Control Protocol) congestion recovery scheme to (1) handle bursty packet losses while preserving the self-clocking capability; (2) detect a TCP connection's new equilibrium during congestion recovery, thus improving both link utilization and effective throughput; and (3) make the TCP behavior during congestion recovery very close to that during congestion avoidance, thus "extending" the performance model for congestion avoidance to that for TCP loss recovery. Furthermore, the new recovery scheme requires only a slight modification to the sender side of the TCP implementation, thus making it widely deployable. The performance of the proposed scheme is evaluated for scenarios with many TCP flows under the drop-tail and RED (random early detection) gateways in the presence of bursty packet losses. The evaluation results show that the new scheme achieves at least as great a performance improvement as TCP SACK (Selective ACKnowledgments) and consistently outperforms TCP New-Reno. Moreover, its steady-state TCP behavior is close to the ideal TCP congestion behavior. Since the proposed scheme requires neither selective acknowledgments nor receiver modifications, its implementation is much simpler than TCP SACK.

  • A general resource allocation synchronization problem

    Page(s): 557 - 564

    We introduce a new synchronization problem called GRASP. We show that this problem is very general, in that it can provide solutions with strong properties to a wide range of previously studied and new problems. We present a shared-memory solution to this problem that is based on a new solution to the dining philosophers problem with constant failure locality. We use the powerful tool of wait-free transactions to simplify our solution without restricting concurrency.

  • Anonymous Gossip: improving multicast reliability in mobile ad-hoc networks

    Page(s): 275 - 283

    In recent years, a number of applications of ad-hoc networks have been proposed. Many of them are based on the availability of a robust and reliable multicast protocol. We address the issue of reliability and propose a scalable method to improve packet delivery of multicast routing protocols and decrease the variation in the number of packets received by different nodes. The proposed protocol works in two phases. In the first phase, any suitable protocol is used to multicast a message to the group, while in the second concurrent phase, the gossip protocol tries to recover lost messages. Our proposed gossip protocol is called Anonymous Gossip (AG) since nodes need not know the other group members for gossip to be successful. This is extremely desirable for mobile nodes, which have limited resources and for which knowledge of group membership is difficult to obtain. As a first step, anonymous gossip is implemented over MAODV without much overhead, and its performance is studied. Simulations show that the packet delivery of MAODV is significantly improved and the variation in the number of packets delivered is decreased.
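
    The recovery phase can be pictured as pair-wise exchanges of received-message sets: after an unreliable multicast, a node merges its set with that of a peer, recovering anything it missed. The deterministic exchange order below is for illustration only; in AG the peer is reached without knowing group membership.

```python
# Sketch of gossip-based loss recovery after an unreliable multicast:
# each exchange merges the received-message sets of two nodes, so
# missed packets spread back through the group. The fixed exchange
# order is illustrative; AG picks peers anonymously.

received = {1: {"m1"}, 2: {"m2"}, 3: set()}   # after a lossy multicast

for a, b in [(1, 2), (2, 3)]:                 # pair-wise gossip exchanges
    merged = received[a] | received[b]
    received[a] = received[b] = merged        # both nodes recover losses

print(received)  # every node now holds both m1 and m2
```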

  • Cost effective mobile agent planning for distributed information retrieval

    Page(s): 65 - 72

    The number of agents and the execution time are two significant performance factors in mobile agent planning (MAP). Fewer agents cause lower network traffic and consume less bandwidth. Regardless of the number of agents used, the execution time for a task must be kept minimal, which means that using the minimal number of agents must not unfavorably impact the execution time. As the population of the mobile agent application domain grows, the importance of these two factors also increases. After a careful review of these two factors, we propose two heuristic algorithms for finding the minimal number of traveling agents for retrieving information from a distributed computing environment, while keeping the latency minimal. Although agent planning, specifically MAP, is quite similar to the famous traveling salesman problem (TSP), agent planning has a different objective function from that of TSP. TSP deals with the optimal total routing cost, while MAP attempts to minimize the execution time to complete tasks of information retrieval. In this paper, we suggest two cost-effective MAP algorithms, BYKY1 (Baek-Yeo-Kim-Yeom 1) and BYKY2, which can be used in distributed information retrieval systems to optimize the factors mentioned above. At the end of each algorithm, 2OPT, a well-known TSP algorithm, is called to optimize each agent's local routing path. Experimental results show that BYKY2 produces near-optimal performance. These algorithms are more realistic and more directly applicable to the problem domains than those of previous works.

  • Adaptive approaches to relieving broadcast storms in a wireless multihop mobile ad hoc network

    Page(s): 481 - 488

    In a multihop mobile ad hoc network, broadcasting is an elementary operation to support many applications. In (Ni et al., 1999), it is shown that naively broadcasting by flooding may cause serious redundancy, contention, and collision in the network, which we refer to as the broadcast storm problem. Several threshold-based schemes are shown to perform better than flooding in (Ni et al., 1999). However, how to choose thresholds also poses a dilemma between reachability and efficiency under different host densities. We propose several adaptive schemes, which can dynamically adjust thresholds based on local connectivity information. Simulation results show that these adaptive schemes can offer better reachability as well as efficiency as compared to the results in (Ni et al., 1999).
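
    A counter-based scheme drops a rebroadcast once a host has overheard the same packet enough times; the adaptive idea is to derive that threshold from local connectivity. The adaptation rule and cutoff values below are hypothetical, meant only to illustrate the dense-versus-sparse trade-off.

```python
# Hypothetical sketch of adapting a counter-based rebroadcast threshold
# from local connectivity. The specific cutoffs (4, 10) and bounds are
# illustrative assumptions, not the paper's tuned parameters.

def counter_threshold(num_neighbors, lo=2, hi=5):
    if num_neighbors > 10:
        return lo          # dense neighborhood: rebroadcasts are redundant
    if num_neighbors < 4:
        return hi          # sparse: rebroadcast more to keep reachability
    return (lo + hi) // 2

def should_rebroadcast(times_heard, num_neighbors):
    # suppress the rebroadcast once the packet was overheard often enough
    return times_heard < counter_threshold(num_neighbors)

print(should_rebroadcast(3, 12), should_rebroadcast(3, 2))  # False True
```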

  • Distributed network simulations using the dynamic simulation backplane

    Page(s): 181 - 188

    Presents an approach for creating distributed, component-based simulations of communication networks by interconnecting models of sub-networks drawn from different network simulation packages. This approach supports the rapid construction of simulations for large networks by reusing existing models and software, and fast execution using parallel discrete event simulation techniques. A dynamic simulation backplane is proposed that provides a common format and protocol for message exchange, and services for transmitting data and synchronizing heterogeneous network simulation engines. In order to achieve plug-and-play interoperability, the backplane uses existing network communication standards and dynamically negotiates among the participant simulators to define a minimal subset of required information that each simulator must supply, as well as other optional information. The backplane then automatically creates a message format that can be understood by all participating simulators and dynamically creates the content of each message by using callbacks to the simulation engines. We describe our approach to interoperability as well as an implementation of the backplane. We present results that demonstrate the proper operation of the backplane by distributing a network simulation between two different simulation packages, ns2 and GloMoSim. Performance results show that the overhead for the creation of the dynamic messages is minimal. Although this work is specific to network simulations, we believe our methodology and approach can be used to achieve interoperability in other distributed computing applications as well.

  • The home model and competitive algorithms for load balancing in a computing cluster

    Page(s): 127 - 134

    Most implementations of a computing cluster (CC) use greedy-based heuristics to perform load balancing. In some cases, this is in contrast to theoretical results about the performance of online load balancing algorithms. We define the home model in order to better reflect the architecture of a CC. In this new theoretical model, we assume a realistic cluster structure in which every job has a "home" machine on which it prefers to be executed, e.g. due to I/O considerations or because it was created there. We develop several online algorithms for load balancing in this model. We first provide a theoretical worst-case analysis, showing that our algorithms achieve better competitive ratios and perform fewer reassignments than algorithms for the unrelated machines model, which is the best existing theoretical model to describe such clusters. We then present an empirical average-case performance analysis by means of simulations. We show that the performance of our algorithms is consistently better than that of several existing load balancing methods, e.g. the greedy and the opportunity cost methods, especially in a dynamic and changing CC environment.
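
    A home-model assignment rule can be sketched as: run the job on its home machine unless the home's load exceeds the least-loaded machine's by more than a migration threshold. The threshold value and unit job cost below are illustrative assumptions, not the paper's algorithms.

```python
# Toy sketch of home-model assignment: prefer the job's home machine
# (I/O locality), migrating only when the load gap exceeds a threshold.
# The threshold (2.0) and unit job load are illustrative assumptions.

def assign(job_home, load, threshold=2.0):
    target = min(load, key=load.get)           # least-loaded machine
    if load[job_home] - load[target] <= threshold:
        target = job_home                      # home is close enough: stay
    load[target] += 1.0                        # account for the new job
    return target

load = {"m1": 5.0, "m2": 1.0}
print(assign("m1", load))   # gap 4.0 > 2.0 -> migrate to m2
print(assign("m2", load))   # home is least loaded -> stay on m2
```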

  • Robust double auction protocol against false-name bids

    Page(s): 137 - 145

    Internet auctions have become an integral part of electronic commerce (EC) and a promising field for applying agent technologies. Although the Internet provides an excellent infrastructure for large-scale auctions, we must consider the possibility of a new type of cheating, i.e., a bidder trying to profit from submitting several bids under fictitious names (false-name bids). Double auctions are an important subclass of auction protocols that permit multiple buyers and sellers to bid to exchange a good, and have been widely used in stock, bond, and foreign exchange markets. If there exist no false-name bids, a double auction protocol called the PMD protocol has been proven to be dominant-strategy incentive compatible. On the other hand, if we consider the possibility of false-name bids, the PMD protocol is no longer dominant-strategy incentive compatible. We develop a new double auction protocol called the Threshold Price Double auction (TPD) protocol, which is dominant-strategy incentive compatible even if participants can submit false-name bids. The characteristic feature of the TPD protocol is that the number of trades and the prices of exchange are controlled by the threshold price. Simulation results show that this protocol can achieve a social surplus that is very close to being Pareto efficient.
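
    The role of the threshold price can be pictured with a toy matching rule: only buy bids at or above a threshold r and sell bids at or below r participate, so r directly controls how many trades occur. This is a hypothetical sketch of the idea described in the abstract, not the TPD protocol's actual pricing rules.

```python
# Hypothetical sketch: a threshold price r admits buy bids >= r and
# sell bids <= r, so r controls the number of trades. This illustrates
# the abstract's description only; it is not the TPD protocol itself.

def trades_at_threshold(buy_bids, sell_bids, r):
    buyers = [b for b in buy_bids if b >= r]    # willing to pay at least r
    sellers = [s for s in sell_bids if s <= r]  # willing to accept at most r
    return min(len(buyers), len(sellers))       # matched pairs at price r

print(trades_at_threshold([10, 8, 3], [2, 4, 9], 5))  # buyers {10,8}, sellers {2,4} -> 2
```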

  • Enforcing perfect failure detection

    Page(s): 350 - 357

    Perfect failure detectors can correctly decide whether a computer has crashed. However, it is impossible to implement a perfect failure detector in purely asynchronous systems. We show how to enforce perfect failure detection in timed distributed systems with hardware watchdogs. The two main system model assumptions are: each computer can measure time intervals with a known maximum error; and each computer has a watchdog that crashes the computer unless the watchdog is periodically updated. We have implemented a system that satisfies both assumptions using a combination of off-the-shelf software and hardware.
