
Network Computing and Applications, 2009. NCA 2009. Eighth IEEE International Symposium on

Date 9-11 July 2009


Displaying Results 1 - 25 of 64
  • [Front cover]

    Publication Year: 2009 , Page(s): C1
  • [Title page i]

    Publication Year: 2009 , Page(s): i
  • [Title page iii]

    Publication Year: 2009 , Page(s): iii
  • [Copyright notice]

    Publication Year: 2009 , Page(s): iv
  • Table of contents

    Publication Year: 2009 , Page(s): v - ix
  • Message from the NCA 2009 Chairs

    Publication Year: 2009 , Page(s): x
  • Conference organizers

    Publication Year: 2009 , Page(s): xi
  • Program Committee

    Publication Year: 2009 , Page(s): xii
  • Sustainability and the Office of CyberInfrastructure

    Publication Year: 2009 , Page(s): 1 - 3

    The National Science Foundation's Office of CyberInfrastructure (OCI) supports a broad set of infrastructure, including hardware, software, and services, to enable computational science in a variety of disciplines. Issues of sustainability and reusability in this context have become a priority. This paper addresses several approaches to sustainability and how the OCI's task forces are addressing this critical need.

  • An Experimental Study of Diversity with Off-the-Shelf AntiVirus Engines

    Publication Year: 2009 , Page(s): 4 - 11
    Cited by:  Papers (4)

    Fault tolerance in the form of diverse redundancy is well known to improve detection rates for both malicious and non-malicious failures. What is of interest to designers of security protection systems is the actual gain in detection rate that diversity may give. In this paper we provide an exploratory analysis of the potential gains in detection capability from using diverse AntiVirus products for the detection of self-propagating malware. The analysis is based on 1599 malware samples collected by the operation of a distributed honeypot deployment over a period of 178 days. We sent these samples to the signature engines of 32 different antivirus products, taking advantage of the VirusTotal service. The resulting dataset allowed us to analyze the effects of diversity on the detection capability of these components, as well as how their detection capability evolves over time.

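    The diversity analysis this abstract describes can be sketched as follows: given a boolean detection matrix (samples × engines), compare each engine's individual detection rate with the "1-out-of-2" rate of diverse pairs, where a sample counts as detected if at least one engine in the pair flags it. The data below is illustrative only, not drawn from the paper's 1599-sample set.

```python
from itertools import combinations

# rows: malware samples; columns: AV engines (True = engine detected the sample)
detections = [
    [True,  False, True ],
    [False, True,  True ],
    [True,  True,  False],
    [False, False, True ],
]

def single_rate(col):
    # detection rate of one engine across all samples
    hits = sum(row[col] for row in detections)
    return hits / len(detections)

def pair_rate(a, b):
    # a sample counts as detected if at least one engine in the pair flags it
    hits = sum(row[a] or row[b] for row in detections)
    return hits / len(detections)

n = len(detections[0])
print([round(single_rate(c), 2) for c in range(n)])        # per-engine rates
best_pair = max(combinations(range(n), 2), key=lambda p: pair_rate(*p))
print(best_pair, round(pair_rate(*best_pair), 2))          # best diverse pair
```

    On this toy matrix the best pair detects every sample even though no single engine does, which is the kind of diversity gain the paper quantifies on real data.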
  • Simulating Fixed Virtual Nodes for Adapting Wireline Protocols to MANET

    Publication Year: 2009 , Page(s): 12 - 19
    Cited by:  Papers (4)

    The virtual node layer (VNLayer) is a programming abstraction for mobile ad hoc networks (MANETs). It defines simple virtual servers at fixed locations in a network, addressing a central problem for MANETs: the absence of fixed infrastructure. Advantages of this abstraction are that persistent state is maintained in each region, even when mobile nodes move or fail, and that simple wireline protocols can be deployed on the infrastructure, thereby taming the difficulties inherent in the MANET setting. The major disadvantage is the messaging overhead for maintaining the persistent state. In this paper, we use simulation to determine the magnitude of the messaging overhead and its impact on protocol performance. The overhead of maintaining the servers and the persistent state is small in bytes, but the number of messages required is relatively large. In spite of this, the latency of address allocation is relatively small, and almost all mobile nodes have an address for 99 percent of their lifetime. Our ns-2 based simulation package (VNSim) implements the VNLayer using a leader-based state replication strategy to emulate the virtual nodes. VNSim efficiently simulates a virtual node system with up to a few hundred mobile nodes and can be used to simulate any VNLayer-based application.

  • QuoCast: A Resource-Aware Algorithm for Reliable Peer-to-Peer Multicast

    Publication Year: 2009 , Page(s): 20 - 27
    Cited by:  Papers (3)

    This paper presents QuoCast, a resource-aware protocol for reliable stream diffusion in unreliable environments, where processes may crash and communication links may lose messages. QuoCast is resource-aware in the sense that it takes into account memory, CPU, and bandwidth constraints. Memory constraints are captured by the limited knowledge each process has of its neighborhood. CPU and bandwidth constraints are captured by a fixed quota on the number of messages that a process can use for streaming. Both incoming and outgoing traffic are accounted for. QuoCast maximizes the probability that each streamed packet reaches all consumers while respecting their incoming and outgoing quotas. The algorithm is based on a tree-construction technique that dynamically distributes the forwarding load among processes and links, based on their reliabilities and on their available quotas. The evaluation results show that the adaptiveness of QuoCast to several constraints provides better reliability when compared to other adaptive approaches.

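    The core idea — weighting forwarder selection by both link reliability and remaining message quota — can be illustrated with a minimal sketch. The function name, scoring rule, and data here are our own illustration, not QuoCast's actual algorithm or API.

```python
def pick_forwarders(candidates, needed):
    """candidates: list of (node, reliability, remaining_quota) tuples.
    Returns up to `needed` nodes to forward the next packet to."""
    # nodes with no quota left cannot forward at all
    usable = [c for c in candidates if c[2] > 0]
    # prefer reliable nodes with spare quota; score = reliability * quota
    ranked = sorted(usable, key=lambda c: c[1] * c[2], reverse=True)
    return [node for node, _, _ in ranked[:needed]]

# "c" is very reliable but has exhausted its quota, so it is never chosen
peers = [("a", 0.9, 1), ("b", 0.5, 10), ("c", 0.99, 0), ("d", 0.8, 4)]
print(pick_forwarders(peers, 2))
```

    Note how the scoring trades reliability against quota: a moderately reliable peer with plenty of quota outranks a highly reliable one that is nearly exhausted, which spreads the forwarding load as the abstract describes.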
  • Seed Scheduling for Peer-to-Peer Networks

    Publication Year: 2009 , Page(s): 28 - 35
    Cited by:  Papers (1)

    The initial phase in a content distribution (file sharing) scenario is delicate due to the lack of global knowledge and the dynamics of the overlay. An unwise distribution of the pieces in this phase can cause delays in reaching steady state, thus increasing file download times. We devise a scheduling algorithm at the seed (the source peer with the full content), based on a proportional-fair approach, and we implement it in a real file-sharing client. In dynamic overlays, our solution improves the average download time of a standard BitTorrent-like protocol by up to 25%.

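    A proportional-fair chooser of the general kind the abstract mentions can be sketched in a few lines: serve the peer whose current request rate is highest relative to the service it has already received. This is our own minimal illustration of the proportional-fair principle, not the authors' seed scheduler.

```python
def choose_peer(peers):
    """peers: dict name -> (current_rate, total_served); total_served > 0.
    Proportional fairness: pick the peer maximizing rate / service received."""
    return max(peers, key=lambda p: peers[p][0] / peers[p][1])

# p1 requests fast but has already been served a lot; p2 has received little,
# so proportional fairness favors it despite its lower request rate
peers = {"p1": (10.0, 5.0), "p2": (4.0, 1.0), "p3": (8.0, 8.0)}
print(choose_peer(peers))
```

    In a seed, the same ratio would be updated after each uploaded piece, so service rotates toward under-served peers instead of letting the fastest requester monopolize the seed's upload capacity.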
  • Proximity-Aware Distributed Mutual Exclusion for Effective Peer-to-Peer Replica Management

    Publication Year: 2009 , Page(s): 36 - 43
    Cited by:  Papers (1)

    A distributed hash table (DHT) with replicated objects enjoys improved performance and fault-tolerance but calls for effective replica management. This paper deals with proximity-aware distributed mutual exclusion (PADME) for P2P replica management on a DHT. Three main components are involved in PADME: (1) a few nodes designated as the sink candidates for collecting and consolidating replica updates, (2) a node selected from sink candidates to execute gathered replica updates, and (3) a proximity-sorted replica list to guide propagating the updated result effectively and reliably across all replica holders. Simulation results demonstrate that PADME exhibits at least two orders of magnitude less update message traffic than known leading distributed mutual exclusion-based algorithms for DHT replica management (namely, Sigma and E2E) under various cases examined. As a result, PADME outperforms Sigma (or E2E) by an order of magnitude (or up to 50%) in terms of the update throughput, while drastically lowering its update latency by up to 3 orders (or an order) of magnitude.

  • Streamline: An Architecture for Overlay Multicast

    Publication Year: 2009 , Page(s): 44 - 51
    Cited by:  Papers (2)

    We propose Streamline, a two-layered architecture designed for media streaming in overlay networks. The first layer is a generic, customizable and lightweight protocol which is able to construct and maintain different types of meshes, exhibiting different properties. We discuss two types of overlay networks and explain how the first layer protocol builds these networks in a distributed manner. The second layer is responsible for data propagation to the nodes in the mesh by constructing an optimized diffusion tree. In order to cover the vulnerabilities of the diffusion tree, we propose a masking mechanism which enables the nodes to instantly switch to alternative data paths when necessary. Our simulations reveal that the structure and properties of the underlying mesh are key to the performance of the system, and that Streamline can tolerate high node churn without degrading the delivery rate.

  • Towards Improved Overlay Simulation Using Realistic Topologies

    Publication Year: 2009 , Page(s): 52 - 59

    Simulation of distributed applications and overlay networks is challenging: often the results generated in simulation do not match experimental results. Distributed testbeds like PlanetLab help to bridge the gap, but they do not offer enough nodes for an Internet-scale evaluation. In this paper we use a tool called TopDNS to generate realistic topologies for simulations, using PlanetLab to collect measurement data. We show that simulation results may differ significantly from earlier results obtained with synthesized topologies. We provide a data analysis to explain the observed results and to give a better understanding of latency between hosts in certain DNS name spaces.

  • Analysis of Round-Robin Implementations of Processor Sharing, Including Overhead

    Publication Year: 2009 , Page(s): 60 - 65

    It has been observed in recent years that in many applications service time demands are highly variable. Without foreknowledge of the exact service times of individual jobs, processor sharing is an effective theoretical strategy for handling such demands. In practice, however, processor sharing must be implemented by time-slicing with a round-robin discipline. In this paper, we investigate how round-robin performs when job-switching overhead is taken into account. Based on recent results, we assume that the best strategy is for new jobs to preempt the one in service. By analyzing time-slicing with overhead, we derive an effective utilization parameter and give a good approximation of the lower bound on the time-slice under a given system load and overhead. The simulation results show that for both exponential and non-exponential distributions, the system blowup points agree with what the effective utilization parameter predicts. Furthermore, when overhead is considered, an optimum time-slice value exists for a particular environment.

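    The "effective utilization" idea lends itself to a back-of-the-envelope sketch: if each time-slice of length q costs an extra overhead o on every switch, each unit of useful work occupies (q + o)/q units of processor time, so the system saturates when rho·(q + o)/q reaches 1. This is our reading of the abstract, not the paper's exact model.

```python
def effective_utilization(rho, q, o):
    """Offered load inflated by switching overhead: each slice of work q
    actually occupies q + o of processor time."""
    return rho * (q + o) / q

def min_timeslice(rho, o):
    """Smallest time-slice keeping the system stable:
    rho * (q + o) / q < 1  =>  q > rho * o / (1 - rho)."""
    assert 0 < rho < 1
    return rho * o / (1 - rho)

# at 80% offered load with overhead 1 per switch, a slice of 10 yields
# an effective utilization of 0.88; slices shorter than 4 would saturate
print(effective_utilization(0.8, 10.0, 1.0))
print(min_timeslice(0.8, 1.0))
```

    The bound shows why overhead makes very short time-slices dangerous: as q shrinks toward rho·o/(1 − rho), effective utilization climbs toward 1 and queueing delay blows up, consistent with the blowup points the abstract reports.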
  • Comparison of Price-Based Static and Dynamic Job Allocation Schemes for Grid Computing Systems

    Publication Year: 2009 , Page(s): 66 - 73
    Cited by:  Papers (1)

    Grid computing systems are a cost-effective alternative to traditional high-performance computing systems. However, the computing resources of a grid are usually far apart and connected by wide area networks, resulting in considerable communication delays. Hence, efficient allocation of jobs to computing resources for load balancing is essential in these grid systems. In this paper, two price-based dynamic job allocation schemes for computational grids are proposed whose objective is to minimize the execution cost of the grid users' jobs. One scheme tries to provide a system-optimal solution, so that the expected price for the execution of all the jobs in the grid system is minimized; the other tries to provide a job-optimal solution, so that all jobs of the same size are charged approximately the same expected price, independent of the computers allocated for their execution, to provide fairness. The performance of the proposed dynamic schemes is compared with static job allocation schemes using simulations.

  • A Rule Based Co-operative Approach for Cell Selection in High Speed Cellular Networks

    Publication Year: 2009 , Page(s): 74 - 81

    In this era of perpetual connectivity, with the introduction of new data-intensive real-time applications such as video-conferencing, video-on-demand, and interactive gaming, the need to do more with limited network resources has become greater than ever before. The continuous drive toward the notion of pervasive computing and communication requires a shift toward more adaptive and collaborative network resource control. In this paper, a rule-based collaborative approach using a Rete-based rule engine is applied to manage radio resources in high speed cellular networks. In particular, the cell selection problem in high speed downlink packet access (HSDPA) is addressed. As a prerequisite to this work, an extensive HSDPA module was developed in the OPNET simulator, along with several packet scheduling algorithms. To ensure the best user experience, our approach allows the network and the mobile device to cooperatively choose the best serving cell for an incoming user, incorporating factors such as QoS requirements, device capability, available network resources, and channel condition into the decision making. Our results show that this cooperative approach outperforms the traditional channel-condition-based approach by as much as 85%.

  • Sharing Private Information Across Distributed Databases

    Publication Year: 2009 , Page(s): 82 - 89
    Cited by:  Papers (1)

    In industries such as healthcare, there is a need to electronically share privacy-sensitive data across distinct organizations. We show how this can be done while allowing organizations to keep their legacy databases and maintain ownership of the data that they currently store. Without sending or mirroring data to any trusted, centralized entity, we demonstrate how queries can be answered in a distributed manner that preserves the privacy of the original data. This paper explains our distributed query execution engine, outlines how to bootstrap the system when only real world identifiers such as a name and date-of-birth are initially known, and offers details on the tradeoff between privacy and performance. We evaluate the scalability of this approach through simulation.

  • Energy-Aware Prefetching for Parallel Disk Systems: Algorithms, Models, and Evaluation

    Publication Year: 2009 , Page(s): 90 - 97
    Cited by:  Papers (5)

    Parallel disk systems consume a significant amount of energy due to the large number of disks. To design economically attractive and environmentally friendly parallel disk systems, in this paper we design and evaluate an energy-aware prefetching strategy for parallel disk systems consisting of a small number of buffer disks and a large number of data disks. By using buffer disks to temporarily handle requests for data disks, we can keep data disks in the low-power mode as long as possible. Our prefetching algorithm aims to group many small idle periods on data disks into large idle periods, which in turn allow data disks to remain in the standby state to save energy. To achieve this goal, we use buffer disks to aggressively fetch popular data from regular data disks, thereby putting data disks into the standby state for longer time intervals. A centerpiece of the prefetching mechanism is an energy-saving prediction model, on which we base the energy-saving calculation module invoked by the prefetching algorithm. We quantitatively compare our energy-aware prefetching mechanism against existing solutions, including the dynamic power management strategy. Experimental results confirm that buffer-disk-based prefetching can reduce energy consumption in parallel disk systems by up to 50 percent. In addition, we systematically investigate the impact that varying disk power parameters has on the energy efficiency of our prefetching algorithm.

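    Why grouping idle periods saves energy can be shown with a toy version of the energy-saving calculation the abstract mentions (parameter names and power values are illustrative, not the paper's): spinning a disk down only pays off when the idle period exceeds the break-even time, so merging many short gaps into one long gap is what unlocks the savings.

```python
def energy(idle_periods, p_active=10.0, p_standby=2.0, e_transition=30.0):
    """Energy consumed over the given idle periods (seconds -> joules).
    The disk spins down only when the gap is long enough to amortize the
    spin-down/spin-up transition energy."""
    total = 0.0
    breakeven = e_transition / (p_active - p_standby)   # here: 3.75 s
    for t in idle_periods:
        if t > breakeven:
            total += e_transition + p_standby * t       # worth spinning down
        else:
            total += p_active * t                       # stay active
    return total

fragmented = [3.0] * 20   # twenty 3 s gaps: each too short to spin down
merged = [60.0]           # the same 60 s of idleness as one long gap
print(energy(fragmented), energy(merged))
```

    With these illustrative numbers the fragmented schedule burns full active power throughout, while the merged gap pays one transition cost and then idles at standby power, the same effect the buffer-disk prefetching exploits at scale.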
  • From Auto-adaptive to Survivable and Self-Regenerative Systems Successes, Challenges, and Future

    Publication Year: 2009 , Page(s): 98 - 101
    Cited by:  Papers (1)

    This paper charts the course of adaptive behavior in intrusion tolerance, from pre-programmed and user-controlled reactive adaptation to highly sophisticated autonomic and cognitively driven adaptation. The goal of intrusion tolerance is to provide mission continuity even under sustained cyber attacks. We describe key themes of our previous work in adaptive cyber defense and the introduction of autonomic response capabilities, and discuss challenges that warrant further research. We also discuss the potential impact of new trends in distributed systems, e.g., service-oriented architecture and cloud computing, on future survivable systems, and point out new opportunities for developing sophisticated auto-adaptive capabilities for increased survivability.

  • Generic Danger Detection for Mission Continuity

    Publication Year: 2009 , Page(s): 102 - 107

    Mobile ad-hoc networks (MANETs) have become the environment of choice for providing edge connectivity to mobile forces. In particular, next-generation military systems leverage MANET technology to provide information assets to troops. However, MANETs face a number of serious security exposures, a superset of those facing traditional networks. In prior work, we described BITSI, the Biologically-Inspired Tactical Security Infrastructure, which attempts to address these challenges. BITSI uses a variety of techniques inspired by biological systems to provide effect-based security centered on mission enablement. One of these techniques is the application of danger theory to mission continuity. In this paper, we explore different ways of implementing danger detection within BITSI and show how generic approaches that are low-cost, both computationally and in terms of implementation, can provide acceptable results.

  • Survivability through Run-Time Software Evolution

    Publication Year: 2009 , Page(s): 108 - 113

    In this paper we present an architectural framework designed to increase the survivability of software agent nodes in a distributed system. A multi-layered model replaces the original node software. The original computational requirements of the node are retained in the lowest level, while the upper layers of the model provide protective and supportive services. Model components are mutated to create behaviorally identical but structurally distinct software. The lowest level of the framework is composed of replicated components drawn from a pool of mutations. The diverse population enables identification of faulty or compromised components through a voting technique implemented in a higher level of the architecture. Failure recovery automatically creates a different component population. Unless multiple components are simultaneously compromised, the node continues functioning during failure recovery. Preliminary test results of a prototype implementation are given. The tests show that the architecture is feasible and that, in the simulated test environment, overhead is acceptably low. Recovery from both single and multiple component failures is demonstrated.

  • Automating Intrusion Response via Virtualization for Realizing Uninterruptible Web Services

    Publication Year: 2009 , Page(s): 114 - 117

    We present a virtualization-based Web server system, a prototype, and experimental results for providing uninterrupted Web services in the presence of intrusion attacks and software faults. The proposed system utilizes replicated virtual servers managed by a closed-loop feedback controller. Using anomaly and intrusion sensor outputs, the controller calculates cost-weighted actions against threats to ensure Web service continuity. We show that the system can handle broad classes of attacks. Experimental results show that our prototype retains 60% of its peak throughput under 8 DoS attacks per second over extended periods.
