
Proceedings of the Third IEEE International Symposium on Network Computing and Applications (NCA 2004)

Date: 30 Aug.–1 Sept. 2004


Displaying results 1–25 of 65
  • Performance analysis of cryptographic protocols on handheld devices

    Page(s): 169 - 174

    The past few years have witnessed explosive growth in the use of wireless mobile handheld devices as the enabling technology for accessing Internet-based services, as well as for personal communication needs in ad hoc networking environments. Most studies indicate that it is impossible to utilize strong cryptographic functions for implementing security protocols on handheld devices. Our work refutes this. Specifically, we present a performance analysis focused on three of the most commonly used security protocols for networking applications, namely SSL, S/MIME and IPsec. Our results show that the time taken to perform cryptographic functions is small enough not to significantly impact real-time mobile transactions, and that there is no obstacle to the use of quite sophisticated cryptographic protocols on handheld mobile devices.

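    The abstract does not include the benchmark harness; as a rough illustration of the kind of micro-benchmark such a study involves, here is a minimal sketch timing two symmetric primitives from Python's standard library (the iteration count and payload size are assumptions, not the authors' setup):

    ```python
    import hashlib
    import hmac
    import os
    import time

    def time_op(op, iterations=1000):
        """Return mean wall-clock time per call, in milliseconds."""
        start = time.perf_counter()
        for _ in range(iterations):
            op()
        return (time.perf_counter() - start) / iterations * 1e3

    key = os.urandom(32)
    msg = os.urandom(1024)  # 1 KB payload, typical of a small mobile transaction

    print("SHA-256 digest: %.4f ms" % time_op(lambda: hashlib.sha256(msg).digest()))
    print("HMAC-SHA-256:   %.4f ms" % time_op(lambda: hmac.new(key, msg, hashlib.sha256).digest()))
    ```
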
  • Traveling token for dynamic load balancing

    Page(s): 329 - 332

    Load distribution improves the performance of a computer network by transferring tasks from heavily loaded computers, where service is poor, to lightly loaded computers, where the tasks can take advantage of computing capacity that would otherwise go unused. In this work, a traveling token concept for a dynamic load balancing protocol is proposed. Instead of having each computer probe other computers, which often results in unsuccessful probes, we take a different approach that makes the identities of the heavily loaded and lightly loaded computers better known to other computers. The results of our simulation show that the proposed protocol outperforms the other protocols implemented in our simulator in almost all cases we tested.

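    The abstract gives the traveling-token idea but no message format or transfer rule; one plausible toy reading (the threshold and the half-the-imbalance rule below are assumptions) is a token that carries the identity of the lightest node seen so far, letting heavy nodes shed work without probing anyone:

    ```python
    import random

    class Node:
        def __init__(self, name, queue):
            self.name = name
            self.queue = queue  # number of outstanding tasks

    def circulate_token(nodes, threshold=5):
        """One tour of the token around the ring of computers."""
        lightest = None  # identity carried in the token
        for node in nodes:
            if lightest is not None and node.queue - lightest.queue > threshold:
                moved = (node.queue - lightest.queue) // 2
                node.queue -= moved
                lightest.queue += moved  # transfer tasks; no probing needed
            if lightest is None or node.queue < lightest.queue:
                lightest = node

    random.seed(7)
    nodes = [Node("n%d" % i, random.randint(0, 20)) for i in range(6)]
    circulate_token(nodes)
    print([(n.name, n.queue) for n in nodes])
    ```
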
  • The LC* assignment policy for cluster-based servers

    Page(s): 177 - 184

    A cluster-based server consists of a front-end dispatcher and multiple back-end servers. The dispatcher receives incoming jobs and decides how to assign them to back-end servers, which in turn serve the jobs according to some discipline. Cluster-based servers have been broadly deployed, as they combine good performance with low cost. Several assignment policies have been proposed for cluster-based servers, most of which aim to balance the load among back-end servers. There are two main strategies for load balancing: the first aims at balancing the amount of work at back-end servers, while the second aims at balancing the number of jobs assigned to back-end servers. Examples of policies using these strategies are JSQ (join shortest queue) and LC (least connected), respectively. We propose a policy, called LC*, which combines the two aforementioned strategies. The paper shows experimentally that when preemption is admitted (i.e., jobs are executed concurrently by back-end servers), LC* substantially outperforms both JSQ and LC. This improved performance is achieved using only information readily available to the dispatcher, making LC* a practical policy with regard to implementation.

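    The abstract states that LC* combines the two strategies but not how; one purely illustrative reading is to shortlist back-end servers by connection count (as LC does) and break ties by estimated pending work (as JSQ does). The tie-breaking rule below is an assumption, not the paper's definition:

    ```python
    def assign_lc_star(servers):
        """Pick a back-end: fewest connections first, least pending work second."""
        fewest = min(s["connections"] for s in servers)
        shortlist = [s for s in servers if s["connections"] == fewest]
        return min(shortlist, key=lambda s: s["work"])

    servers = [
        {"name": "b1", "connections": 3, "work": 40},
        {"name": "b2", "connections": 2, "work": 90},
        {"name": "b3", "connections": 2, "work": 10},
    ]
    print(assign_lc_star(servers)["name"])  # -> b3
    ```
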
  • VIPSEC: virtualized and pluggable security services infrastructure for adaptive grid computing

    Page(s): 362 - 365

    Large-scale distributed systems like the computational grid combine network access with multiple computing and storage units. The need for efficient and secure data transportation over potentially insecure channels creates new security and privacy issues, which are exacerbated by the heterogeneous nature of the collaborating resources. Traditional security approaches require substantial overhauling to address these new paradigms. We propose a new two-pronged approach to grid security. First, the virtualization of security services provides an abstraction layer on top of the security infrastructure, which harmonizes the heterogeneity of the underlying security mechanisms. Second, the pluggable nature of the various security services permits users and resource providers to configure the security architecture according to their requirements and desired level of assurance. This approach allows the security infrastructure to evolve with minimal impact on the grid resource management functionalities, which are themselves still being developed.

  • Downloading replicated, wide-area files - a framework and empirical evaluation

    Page(s): 89 - 96

    The challenge of efficiently retrieving files that are broken into segments and replicated across the wide area is of prime importance to wide-area, peer-to-peer, and grid file systems. Two differing algorithms addressing this challenge have been proposed and evaluated. While both have been successful in differing performance scenarios, there has been no unifying work that views both algorithms under a single framework. We define such a framework, in which download algorithms are characterized along four dimensions: the number of simultaneous downloads, the degree of work replication, the failover strategy, and the server selection algorithm. We then explore the impact of varying parameters along each of these dimensions.

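    The four dimensions lend themselves to a small configuration type; a sketch of how a point in that design space might be encoded (the field names and the example policy are illustrative, not the paper's notation):

    ```python
    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class DownloadAlgorithm:
        simultaneous_downloads: int   # segments fetched in parallel
        work_replication: int         # servers asked for the same segment
        failover: str                 # e.g. "retry-same" or "switch-server"
        select_server: Callable[[Sequence[str]], str]  # server selection policy

    # One concrete point in the space: 4-way parallel, no redundant work,
    # switch servers on failure, pick the first server in a sorted list.
    fastest_first = DownloadAlgorithm(
        simultaneous_downloads=4,
        work_replication=1,
        failover="switch-server",
        select_server=lambda servers: servers[0],
    )
    print(fastest_first.simultaneous_downloads, fastest_first.failover)
    ```
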
  • Resilient peer-to-peer multicast from the ground up

    Page(s): 351 - 355

    This work introduces Nemo, a novel peer-to-peer multicast protocol that aims at achieving the elusive goal of resilience. Based on two techniques, (1) co-leaders and (2) triggered negative acknowledgments (NACKs), Nemo's design emphasizes conceptual simplicity and minimal dependencies (Anderson et al., 2002), thus achieving, in a cost-effective manner, performance characteristics resilient to the natural instability of its target environment. Simulation-based and wide-area experiments show that Nemo can achieve high delivery ratios (up to 99.98%) and low end-to-end latency, similar to those of comparable protocols, while significantly reducing the cost in terms of duplicate packets (reductions > 85%) and control-related traffic, making the proposed algorithm a more scalable solution to the problem.

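    Triggered NACKs are the easier of the two techniques to illustrate: a receiver reports only the sequence-number gaps it detects, so no control traffic flows when delivery succeeds. A minimal sketch (Nemo's co-leader machinery is not modelled here):

    ```python
    def triggered_nacks(first_expected, received_seqs):
        """Return the missing sequence numbers to be NACKed upstream."""
        seen = set(received_seqs)
        return [s for s in range(first_expected, max(received_seqs) + 1)
                if s not in seen]

    print(triggered_nacks(0, [0, 1, 3, 4, 6]))  # -> [2, 5]: request only the gaps
    ```
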
  • A novel switch architecture for high-performance computing and signal processing networks

    Page(s): 215 - 222

    This work describes a low-latency switch architecture for high-performance packet-switched networks. The architecture combines input buffers capable of avoiding head-of-line blocking with an internal switch interconnect that allows different input ports to access a single output port simultaneously. The switch was designed for the RapidIO protocol, but provides improved performance in other switched fabrics as well. OPNET Modeler was used to develop models of the proposed switch architecture and to evaluate its performance for three different network topologies. Models of two standard switch architectures were also developed and simulated for comparison.

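    Input buffering that avoids head-of-line blocking is conventionally done with virtual output queues (VOQs): one queue per (input, output) pair, so a packet bound for a busy output never blocks packets behind it. A toy sketch of that part of the design (the RapidIO-specific interconnect is not modelled):

    ```python
    from collections import deque

    class VOQSwitch:
        def __init__(self, n_ports):
            self.n = n_ports
            # one virtual output queue per (input, output) pair
            self.voq = [[deque() for _ in range(n_ports)] for _ in range(n_ports)]

        def enqueue(self, in_port, out_port, packet):
            self.voq[in_port][out_port].append(packet)

        def forward_cycle(self):
            """Per the abstract, several inputs may reach one output at once."""
            delivered = []
            for out in range(self.n):
                for inp in range(self.n):
                    if self.voq[inp][out]:
                        delivered.append((inp, out, self.voq[inp][out].popleft()))
            return delivered

    sw = VOQSwitch(2)
    sw.enqueue(0, 1, "p0")
    sw.enqueue(1, 1, "p1")  # both inputs target output 1; neither blocks the other
    print(sw.forward_cycle())
    ```
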
  • ViSMI: software distributed shared memory for Infiniband clusters

    Page(s): 185 - 191

    This work describes ViSMI, a software distributed shared memory system for clusters connected via InfiniBand. ViSMI implements a home-based lazy release consistency protocol, which uses a multiple-writer coherence scheme to alleviate the traffic introduced by false sharing. For further performance gains, InfiniBand features and optimized page invalidation mechanisms are applied to reduce synchronization overhead. First experimental results show that ViSMI achieves performance comparable to that of similar software DSMs.

  • Araneola: a scalable reliable multicast system for dynamic environments

    Page(s): 5 - 14

    We present Araneola, a scalable reliable application-level multicast system for highly dynamic wide-area environments. Araneola supports multi-point to multi-point reliable communication in a fully distributed manner while incurring constant load on each node. For a tunable parameter k ≥ 3, Araneola constructs and dynamically maintains an overlay structure in which each node's degree is either k or k + 1, and roughly 90% of the nodes have degree k. Empirical evaluation shows that Araneola's overlay structure achieves three important mathematical properties of k-regular random graphs (i.e., random graphs in which each node has exactly k neighbors) with N nodes: (i) its diameter grows logarithmically with N; (ii) it is generally k-connected; and (iii) it remains highly connected following random removal of linear-size subsets of edges or nodes. The overlay is constructed at a very low cost: each join, leave, or failure is handled locally, and entails the sending of only about 3k messages in total. Given this overlay, Araneola disseminates multicast messages by gossiping over the overlay's links. We show that compared to a standard gossip-based multicast protocol, Araneola achieves substantial improvements in load, reliability, and latency. Finally, we present an extension to Araneola in which the basic overlay is enhanced with additional links chosen according to geographic proximity and available bandwidth. We show that this approach reduces the number of physical hops messages traverse without hurting the overlay's robustness.

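    The degree rule (each node's degree is k or k + 1) can be illustrated with a toy join operation; Araneola's actual join/leave/failure handling, with its roughly 3k-message cost, is more involved than this sketch:

    ```python
    import random

    def join(overlay, new_node, k=3):
        """Connect new_node to up to k peers whose degree is still below k + 1."""
        overlay.setdefault(new_node, set())
        candidates = [n for n in overlay
                      if n != new_node and len(overlay[n]) < k + 1]
        random.shuffle(candidates)
        while len(overlay[new_node]) < k and candidates:
            peer = candidates.pop()
            overlay[new_node].add(peer)
            overlay[peer].add(new_node)

    random.seed(3)
    overlay = {}
    for node in range(8):
        join(overlay, node)
    print({n: sorted(peers) for n, peers in overlay.items()})
    ```
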
  • An iterative switching algorithm with (possibly) one iteration

    Page(s): 223 - 231

    We present a new iterative switching algorithm, called π-RGA, for an input-queued switch. In an iterative switching algorithm, each iteration matches input ports to output ports for packet transmission, i.e., each iteration computes a matching; if input i is matched to output j, a packet (if any) is forwarded from i to j. The matching computed in one iteration is not necessarily maximal (more input and output ports could still be matched), and hence the size of the matching may grow with more iterations. Therefore, multiple iterations are generally performed in each matching phase of the switch to achieve high throughput. The reason an iteration computes a non-maximal matching is efficiency: the matching is computed in a distributed manner without a global view of the switch state. This is done using a request-grant-accept handshake in each iteration, and we restrict our attention in this work to this family of algorithms. Examples of such iterative switching algorithms in the literature include PIM (T. E. Anderson et al., 1993), iSLIP (N. McKeown, 1999), iLQF and iOCF (N. McKeown, 1995), DRR (Y. Li et al., 2000), and pDRR ("Fast scheduler solutions to the problems of priorities for polarized data traffic" by G. Damm et al.). The work on π-RGA is motivated by the assumption that the number of iterations is (possibly) limited to only one, and that high throughput must be maintained for an arbitrary traffic pattern even with that single iteration. The limit of one iteration stems from the need to make a matching phase as short as possible for the switch to scale to very high speeds. The key concept behind π-RGA is stabilization of the matching by keeping parts of the previously computed matching. This stabilization makes it possible to maintain a matching of good size even with only one iteration. Unlike other approaches, however, π-RGA does not maintain information about the quality of the matching. π-RGA provides high throughput in practice under uniform and non-uniform traffic patterns with one iteration. We also prove that π-RGA provides throughput and delay guarantees with a speedup of 2 and one iteration under a constant-burst traffic model.

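    The stabilization idea (keep the still-useful part of the previous matching, then fill in greedily within a single request-grant-accept round) can be sketched serially as follows; the real algorithm is distributed across port arbiters, so this is a simplified model, not the paper's specification:

    ```python
    def rga_iteration(requests, previous_matching):
        """requests: set of (input, output) pairs with queued packets.
        Returns a matching after one request-grant-accept round."""
        # 1. Stabilize: retain previous edges that still carry traffic.
        matching = {(i, o) for (i, o) in previous_matching if (i, o) in requests}
        used_in = {i for i, _ in matching}
        used_out = {o for _, o in matching}
        # 2. Grant/accept remaining requests, at most one match per port.
        for i, o in sorted(requests):
            if i not in used_in and o not in used_out:
                matching.add((i, o))
                used_in.add(i)
                used_out.add(o)
        return matching

    reqs = {(0, 1), (1, 1), (1, 2), (2, 0)}
    print(rga_iteration(reqs, previous_matching={(1, 1)}))  # keeps (1, 1)
    ```
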
  • Towards JMS compliant group communication - a semantic mapping

    Page(s): 131 - 140

    Group communication provides communication primitives with various semantics, and their use greatly simplifies the development of highly available services. However, despite tremendous advances in research and numerous prototypes, group communication remains confined to small niches and academic prototypes. In contrast, message-oriented middleware such as the Java Message Service (JMS) is widely used and has become a de facto standard. We believe that the lack of a well-defined and easily understandable standard is what hinders the deployment of group communication systems. Since JMS is a well-established technology, an interesting solution is to extend JMS with group communication primitives. Foremost, this requires extending the traditional semantics of group communication to take into account various features of JMS, e.g., durable/non-durable subscriptions and persistent/non-persistent messages. The resulting new group communication specification, together with the corresponding API, defines group communication primitives compatible with JMS.

  • Towards an approach for automatically repairing compromised network systems

    Page(s): 389 - 392

    The widely accepted method for repairing a compromised system is to wipe it clean and reinstall. We think there may be alternatives. Specifically, we envision systems that are capable of automatically recovering from compromises. Our proposed approach is a repair agent that resides in an isolated area of the system; we use a virtual machine to provide this isolation. The repair agent should roll back any undesirable changes, determine the point of entry, and prevent further compromise.

  • Intrusion tolerance for Internet applications

    Page(s): 35 - 36

    The Internet has become essential to most enterprises and many private individuals. However, both the network and the computer systems connected to it are still too vulnerable, and attacks are becoming ever more frequent. To face this situation, traditional security techniques are insufficient, and fault tolerance techniques are becoming increasingly cost-effective. Nevertheless, intrusions are very special faults, and this has to be taken into account when selecting fault tolerance techniques.

  • Request distribution-aware caching in cluster-based Web servers

    Page(s): 311 - 316

    This work presents a performance analysis of request distribution-aware caching in cluster-based Web servers. We use the Zipf-like request distribution curve to guide static Web document caching. A combination of cooperative caching and exclusive caching provides a cluster-wide caching system that avoids document replication across the cluster. We explore the benefits of cooperative caching algorithms that use request distribution information to steer their behavior over general-purpose cooperative caching algorithms. Exclusive caching exercises fine-grained control over the replication of data blocks across the cluster. The performance of the system has been assessed using the WebStone benchmark. Our cluster-based server employs Linux kernel-level implementations of cooperative caching and exclusive caching. Current results show that request distribution-aware caching outperforms general-purpose caching algorithms, makes up for the performance loss of non-replicated data solutions, and compares favorably to fully replicated solutions.

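    As a rough illustration of distribution-aware, replication-free caching: if documents are ranked by their Zipf-like popularity, a cluster can plan a single-copy cache by admitting the most popular documents until the cluster-wide budget is spent. This is a toy stand-in for the paper's cooperative plus exclusive caching machinery, with all parameters invented:

    ```python
    def plan_cluster_cache(docs_by_zipf_rank, sizes, capacity):
        """Admit documents in popularity order; keep exactly one copy of each."""
        cached, used = [], 0
        for doc in docs_by_zipf_rank:          # most popular first
            if used + sizes[doc] <= capacity:
                cached.append(doc)             # single copy across the cluster
                used += sizes[doc]
        return cached

    sizes = {"index.html": 2, "logo.gif": 1, "report.pdf": 8}
    ranked = ["index.html", "logo.gif", "report.pdf"]
    print(plan_cluster_cache(ranked, sizes, capacity=4))  # -> the two popular docs
    ```
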
  • Are processors free? Impact on RDMA and protocol off-load technologies


    The growing prevalence of multi-core and multi-threaded processors, combined with the rapid commoditization of hardware, raises numerous questions about how best to utilize this processing power. Will the cost of a processor core drop to a point where processors are considered free? Will such a cost point enable processor cores to be cost-effectively dedicated to platform-specific services such as storage and inter-processor communication infrastructures and applications? What impact might this trend have on RDMA (remote direct memory access) and protocol off-load technologies such as InfiniBand and RDMA/TCP (aka iWARP)? This presentation examines the potential impact and how these technologies might evolve in the coming years.

  • Towards flexible finite-state-machine-based protocol composition

    Page(s): 281 - 286

    We propose a novel approach to the composition of group communication protocols. In this approach, components are modelled as finite state machines communicating via signals. We introduce two building blocks, called adaptor and adaplexor, that ease the development and the composition of group communication protocol stacks, and we discuss how isolation can be achieved in this setting. To validate our architectural concepts, we have implemented the proposed group communication architecture in SDL.

  • GeoPeer: a location-aware peer-to-peer system

    Page(s): 39 - 46

    This work presents a novel peer-to-peer system that is particularly well suited to supporting context-aware computing. The system, called GeoPeer, aims to combine the advantages of peer-to-peer systems that implement distributed hash tables with the suitability of geographical routing for supporting location-constrained queries and information dissemination. GeoPeer comprises two fundamental components: a Delaunay triangulation used to build a connected lattice of nodes, and a mechanism for managing long-range contacts that allows good routing performance despite an unbalanced distribution of nodes.

  • The response to IT complexity: autonomic computing

    Page(s): 151 - 157

    Autonomic computing (AC) is an initiative that addresses the challenge of managing information technology (IT). The AC approach is to develop technologies and methodologies that make systems more self-managing and more resilient to changes in configurations, workloads and other factors. By so doing, AC reduces the total cost of ownership of IT systems, enables IT systems to deliver business value more rapidly, and increases the quality of service of IT systems. This work highlights the concepts of AC, the business drivers behind it, and how the domains of AC address the challenges in today's IT environments. It describes the principles of an overarching unified architecture, common building blocks for autonomic systems, some areas of progress, and the challenges ahead.

  • An adaptive admission control algorithm for bandwidth brokers

    Page(s): 243 - 250

    This work focuses on the operation of the bandwidth broker, an entity responsible for managing the bandwidth within a network domain and for communicating with the bandwidth brokers of neighboring domains. A very important aspect of the bandwidth broker is its admission control module, which determines whether bandwidth reservation requests will be accepted. We summarize the status of current research in this field and propose an architecture for the admission control module that aims at a satisfactory balance between maximizing resource utilization for the network provider and minimizing the overhead of the module. This is achieved by gathering and examining sets of book-ahead requests and by adapting the size of the set to be examined so that network utilization and computation overhead are appropriately balanced.

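    The batch-then-adapt loop can be sketched abstractly; the admission order, the step size, and the utilization target below are all assumptions rather than the paper's rules:

    ```python
    def admit_batch(requests_bw, capacity, reserved):
        """Examine one gathered set of book-ahead requests together."""
        admitted = []
        for bw in sorted(requests_bw):      # smallest first packs more requests in
            if reserved + bw <= capacity:
                admitted.append(bw)
                reserved += bw
        return admitted, reserved

    def adapt_batch_size(batch_size, utilization, target=0.9):
        """Grow the examined set when utilization falls short of the target;
        shrink it when we overshoot, to cut per-batch computation."""
        return batch_size + 1 if utilization < target else max(1, batch_size - 1)

    admitted, reserved = admit_batch([30, 10, 50, 20], capacity=100, reserved=10)
    print(admitted, reserved)                       # -> [10, 20, 30] 70
    print(adapt_batch_size(4, reserved / 100.0))    # -> 5: utilization below target
    ```
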
  • Implementation of distributed key generation algorithms using secure sockets

    Page(s): 393 - 398

    Distributed key generation (DKG) protocols are indispensable in the design of any cryptosystem used in communication networks. DKG is needed to generate public/private keys for signatures or, more generally, for encrypting and decrypting messages. One such DKG (due to Pedersen) has recently been generalized to a provably secure protocol by Gennaro et al. We propose and implement an efficient algorithm to compute the (group generator) parameter g required in the DKG protocol. We also implement the DKG due to Gennaro et al. on a network of computers using secure sockets, and we run tests that show the efficiency of the implementation.

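    The abstract does not reveal the authors' algorithm for g, but a standard way to obtain a generator of the order-q subgroup of Z_p* (when p = 2q + 1 is a safe prime) is exponentiation by the cofactor; the sketch below uses a toy prime and may well differ from the paper's method:

    ```python
    def find_subgroup_generator(p, q):
        """Return g generating the order-q subgroup of Z_p*, for p = 2q + 1."""
        for h in range(2, p):
            g = pow(h, (p - 1) // q, p)   # raise to the cofactor (here, 2)
            if g != 1:
                return g
        raise ValueError("no generator found")

    p, q = 23, 11                 # toy safe prime; real DKG uses 1024+ bit moduli
    g = find_subgroup_generator(p, q)
    print(g, pow(g, q, p))        # second value is 1: g's order divides q
    ```
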
  • Towards a quality of service aware public computing utility

    Page(s): 376 - 379

    This work describes a design for a quality of service aware public computing utility (PCU). The goal of the PCU is to utilize the idle capacity of shared public resources, augmenting that capacity with dedicated resources as necessary, to provide high quality of service to clients at the least cost. Our PCU design combines peer-to-peer (P2P) and grid computing ideas in a novel manner to construct a utility-based computing environment. In this work, we present the overall architecture and describe two major components: a P2P overlay substrate for connecting the resources in a global network, and a community-based decentralized resource management system.

  • On the relationship between packet size and router performance for heavy-tailed traffic

    Page(s): 235 - 242

    The problem of characterizing the relationship between packet size and network delay has received little attention in the field. Research in that area has been limited to either simulation studies or empirical observations that are detached from analytic traffic modeling. From a queuing viewpoint, it is simple to show that these variables are interrelated, which necessitates a more careful study. We present a traffic model of a router fed by ON/OFF-type sources with heavy-tailed burst sizes. The traffic model considered is consistent with the evidence that Web traffic is heavy-tailed. The analysis cases considered establish a quantitative characterization of the complex relationship among packet payload and header sizes, traffic burstiness, and router queuing delay.

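    An ON/OFF source with heavy-tailed burst sizes is easy to instantiate: Pareto-distributed ON bursts via inverse-transform sampling, exponential OFF gaps. The parameter values below are illustrative, not the paper's:

    ```python
    import random

    def pareto_burst(alpha=1.5, minimum=1.0):
        """Heavy-tailed burst size: P[X > x] = (minimum / x) ** alpha."""
        u = 1.0 - random.random()          # uniform in (0, 1]
        return minimum / u ** (1.0 / alpha)

    def on_off_source(n_cycles, mean_off=5.0):
        """Yield (ON burst in packets, OFF period) pairs for one source."""
        for _ in range(n_cycles):
            yield pareto_burst(), random.expovariate(1.0 / mean_off)

    random.seed(1)
    for burst, gap in on_off_source(3):
        print("ON %.1f packets, OFF %.1f time units" % (burst, gap))
    ```
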
  • Performance of the NAS parallel benchmarks on grid enabled clusters

    Page(s): 356 - 361

    As grids become more available and mature in real-world settings, users are faced with considerations regarding the efficiency of applications and their ability to utilize additional nodes distributed over a wide-area network. When both tightly coupled clusters and loosely coupled grids are available, a cost-effective organization will schedule applications that can execute with minimal performance degradation over wide-area networks on grids, while reserving clusters for applications with high communication costs. We analyze the performance of the NAS parallel benchmarks using both MPICH-G2 and MPICH with the ch_p4 device. We compare the results of these communication devices on both tightly and loosely coupled systems, and present an analysis of how parallel applications perform in real-world environments. We make recommendations as to where applications run most efficiently, and under what conditions.

  • Using a task-specific QoS for controlling sensing requests and scheduling

    Page(s): 269 - 276

    Typically, management of networked computational and sensing nodes is based upon a quality of service (QoS) metric derived from generic principles such as "be fair in allocating resources" or "utilize CPU capacity to the maximum". The consequences of accepting such a starting point are that (1) task-specific resource requirements are not taken into consideration, and (2) computational and communication resources are saturated without regard to whether such a high load is necessary. We describe some of our efforts to improve this situation. In particular, we discuss one of the approaches we are currently investigating, which can be summarized in the following three points. (1) We use a task-specific QoS (TS-QoS) as a variable that is controlled by our system. (2) Requests for resources are generated based upon the feedback provided by the TS-QoS, with the request generator's parameters adjusted using a simple PID controller. (3) A dynamic-programming-based algorithm is used for resource scheduling. Simulations using sensor resources show some of the advantages of the proposed approach.

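    The abstract names a simple PID controller for point (2); a textbook discrete PID loop driving the request rate toward a TS-QoS setpoint might look as follows (the gains, setpoint, and crude plant model are assumptions):

    ```python
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measured, dt=1.0):
            error = setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=0.5, ki=0.1, kd=0.05)
    request_rate, ts_qos = 10.0, 0.6
    for step in range(5):
        request_rate += pid.update(setpoint=0.9, measured=ts_qos)
        ts_qos = min(1.0, ts_qos + 0.01 * request_rate)  # crude stand-in plant
        print(step, round(request_rate, 2), round(ts_qos, 3))
    ```
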
  • Distance measurement in volunteer computing networks: a completely decentralized approach

    Page(s): 399 - 404

    Volunteer computing is a relatively new computing paradigm that is often the only feasible approach to large computational tasks. Due to the dynamic structure of the peer-to-peer systems that usually form the backbone of volunteer computing, obtaining accurate distance metrics using traditional techniques is infeasible. Thus, techniques that could improve system performance, such as dynamic load balancing, are very difficult to implement. We show that the master-worker paradigm used by the overwhelming majority of volunteer computing applications results in network topology and protocol characteristics that render a distance measurement method with zero network overhead both feasible and preferable. Accordingly, we present a simple distance measurement method based on passive monitoring of application-level traffic. We then test the method using simulation as well as a reference implementation. The paper concludes by summarizing the strong points and limitations of the proposed method and by proposing further research directions.
