
IWQoS 2009: 17th International Workshop on Quality of Service

Date: 13-15 July 2009


Displaying Results 1 - 25 of 42
  • [Title page]

    Publication Year: 2009, Page(s): 1
  • [Copyright notice]

    Publication Year: 2009, Page(s): 1
  • Paper list

    Publication Year: 2009, Page(s): 1 - 3
  • Passive access capacity estimation for QoS measurement

    Publication Year: 2009, Page(s): 1 - 5

    The passive estimation of Internet access capacity is interesting both from a scientific perspective, because it requires techniques and tools that can extract capacity from a noisy set of traffic measurements, and from an industrial perspective, because it makes it possible, in principle, to measure the service levels of the IP access service offered by Internet service providers. This paper proposes models, techniques and tools aimed at passively estimating the maximum achievable downlink network-layer rate (capacity) of an access link to the Internet from inside a network. We propose a method that extends the well-known packet-dispersion approach to network capacity estimation by considering longer TCP packet sequences, which minimizes the impact of measurement noise and yields reliable estimates without the need for a large amount of data. The proposed approach has been validated in small-scale experiments performed on residential ADSL lines under different interfering traffic conditions.

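    The packet-dispersion principle the paper builds on can be illustrated with a short sketch: a train of back-to-back packets leaves the bottleneck spaced by its transmission time, so capacity is roughly total bits over total dispersion, and longer trains average out noise. The Python sketch below uses made-up timestamps, not data from the paper.

        # Minimal packet-dispersion capacity estimate over a packet train.
        def dispersion_capacity(timestamps, sizes):
            """timestamps: arrival times (s); sizes: packet sizes (bytes)."""
            if len(timestamps) < 2:
                raise ValueError("need at least two packets")
            dispersion = timestamps[-1] - timestamps[0]
            bits = 8 * sum(sizes[1:])    # the first packet only starts the clock
            return bits / dispersion     # bit/s

        # Five 1500-byte packets arriving 1 ms apart -> ~12 Mbit/s.
        ts = [0.000, 0.001, 0.002, 0.003, 0.004]
        print(dispersion_capacity(ts, [1500] * 5) / 1e6, "Mbit/s")
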
  • Cooperative multicast scheduling with random network coding in WiMAX

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (6)

    The Multicast and Broadcast Service (MBS) in WiMAX has emerged as the next-generation wireless infrastructure for broadcasting data or digital video. Multicast scheduling protocols play a critical role in achieving efficient multicast transmissions in MBS. However, the current state-of-the-art protocols, based on the shared-channel single-hop transmission model, do not exploit the potential advantages of channel and cooperative diversity in multicast sessions, even though WiMAX OFDMA provides such convenience. The resulting inefficient multicast transmission leads to under-utilization of scarce wireless bandwidth. In this paper, we revisit the multicast scheduling problem with a new perspective in the specific case of MBS in WiMAX, considering the simultaneous use of multiple OFDMA channels, multiple hops, and multiple paths. Participating users in the multicast session are dynamically enabled as relays and concurrently communicate with others to supply more data. During the transmission, random network coding is adopted, which helps to significantly reduce the overhead. We design practical scheduling protocols by jointly studying the problems of channel and power allocation on relays, which are critical for efficient cooperative communication. Protocols that are theoretically and practically feasible are provided to optimize multicast rates and to efficiently allocate resources in the network. Finally, with simulation studies, we evaluate our proposed protocols to highlight the effectiveness of cooperative communication and random network coding in improving multicast scheduling performance.

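    The coding primitive the protocol adopts, random linear network coding, can be sketched over GF(2), where mixing is plain XOR: relays forward random mixtures, and any set of packets whose coefficient vectors have full rank suffices to decode. This is a generic illustration of the primitive, not of the paper's scheduling or relay-selection logic.

        import random

        def encode(blocks):
            """Code one packet as a random XOR (GF(2)) combination of blocks."""
            coeffs = [random.randint(0, 1) for _ in blocks]
            if not any(coeffs):
                coeffs[random.randrange(len(blocks))] = 1   # avoid the zero vector
            coded = bytes(len(blocks[0]))
            for c, b in zip(coeffs, blocks):
                if c:
                    coded = bytes(x ^ y for x, y in zip(coded, b))
            return coeffs, coded

        def decode(packets, n):
            """Gauss-Jordan elimination over GF(2); assumes the received
            coefficient vectors span GF(2)^n (collect extras otherwise)."""
            rows = [(list(c), bytearray(b)) for c, b in packets]
            for col in range(n):
                pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
                rows[col], rows[pivot] = rows[pivot], rows[col]
                pc, pb = rows[col]
                for i in range(len(rows)):
                    if i != col and rows[i][0][col]:
                        rows[i] = ([a ^ b for a, b in zip(rows[i][0], pc)],
                                   bytearray(x ^ y for x, y in zip(rows[i][1], pb)))
            return [bytes(b) for _, b in rows[:n]]

        source = [b"ABCD", b"EFGH", b"IJKL"]
        received = [encode(source) for _ in range(12)]   # over-collect mixtures
        assert decode(received, 3) == source
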
  • Ensuring data storage security in Cloud Computing

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (89) | Patents (3)

    Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features that set it apart from its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.

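    The flavor of token-based verification can be conveyed with a toy challenge-response over a prime field: the owner precomputes a secret linear combination of its blocks, and a storage server must later reproduce it, so any modified block is caught with high probability. This simplification omits the erasure coding across servers, the error localization, and the dynamic-data support that the paper's actual scheme provides.

        import random

        P = 2**61 - 1          # prime modulus for the linear checksum

        def make_token(blocks, alpha):
            """Owner's precomputed token: sum(alpha^i * b_i) mod P."""
            return sum(pow(alpha, i, P) * b for i, b in enumerate(blocks)) % P

        blocks = [random.randrange(P) for _ in range(100)]   # data as field elements
        alpha = random.randrange(1, P)                        # secret challenge key
        token = make_token(blocks, alpha)

        tampered = list(blocks)
        tampered[42] ^= 1                                     # corrupt one block
        print(make_token(blocks, alpha) == token)             # True
        print(make_token(tampered, alpha) == token)           # False (w.h.p.)
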
  • A calculus for information-driven networks

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (4)

    Information-driven networks include a large category of networking systems in which network nodes are aware of the information delivered and thus may not only forward data packets but also perform information processing. In many situations, the quality of service (QoS) in information-driven networks is provisioned with redundancy in the information. Traditional performance models generally adopt evaluation measures suited to packet-oriented service guarantees, such as packet delay, throughput, and packet loss rate. These performance measures, however, do not align well with the actual needs of information-driven networks. New performance measures and models for information-driven networks, despite their importance, have remained largely unexplored, mainly because information processing is application dependent and cannot be easily captured within a generic framework. To fill this gap, we develop a new performance evaluation framework particularly tailored for information-driven networks, based on recent developments in stochastic network calculus. In particular, our model captures information processing and the QoS guarantee with respect to stochastic information delivery rates, which have not been formally modeled before. This analytical model is very useful in deriving theoretical performance bounds for a large body of systems where QoS is stochastically guaranteed with a certain level of information delivery.

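    For readers new to the area, the deterministic network calculus that the stochastic theory generalizes already yields closed-form bounds: a flow with token-bucket arrival curve b + r*t crossing a rate-latency server R*(t - T)+ has worst-case delay at most T + b/R whenever r <= R. The numbers below are illustrative.

        def delay_bound(b, r, R, T):
            """Deterministic network-calculus delay bound (bits, bit/s, s)."""
            if r > R:
                raise ValueError("unstable: arrival rate exceeds service rate")
            return T + b / R

        # burst 15 kB, sustained 2 Mbit/s, served at 10 Mbit/s, 1 ms latency
        print(round(delay_bound(b=15e3 * 8, r=2e6, R=10e6, T=1e-3) * 1e3, 1), "ms")
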
  • On the fair coexistence of loss- and delay-based TCP

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (2)

    Delay-based TCP variants continue to attract a great deal of attention in the networking community. They offer the potential to use network resources efficiently while achieving low queueing delay and virtually zero packet loss. One major impediment to the deployment of delay-based TCP variants is their inability to coexist fairly with standard loss-based TCP. In this paper we propose a simple strategy that makes fair coexistence possible and ensures that delay-based flows revert to delay-based operation when loss-based flows are no longer present. Analytical and ns-2 simulation results are presented to validate the proposed algorithm.

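    The tension the paper addresses can be made concrete with a toy window update: a delay-based flow backs off as soon as queueing delay builds, so a competing loss-based flow, which only backs off on loss, absorbs the freed bandwidth. The sketch below switches the delay-based flow into loss mode while losses persist and reverts once queues drain; it is a generic illustration with invented thresholds, not the paper's algorithm.

        def update_cwnd(cwnd, rtt, base_rtt, loss, mode, loss_streak,
                        delay_thresh=0.005, streak_limit=3):
            """One toy update step for a hybrid delay/loss-based flow."""
            queueing = rtt - base_rtt
            loss_streak = loss_streak + 1 if loss else 0
            if loss_streak >= streak_limit:
                mode = "loss"            # persistent loss: competitors suspected
            elif not loss and queueing < delay_thresh / 2:
                mode = "delay"           # queues drained: revert to delay mode
            if loss:
                cwnd = max(2.0, cwnd / 2)              # multiplicative decrease
            elif mode == "delay" and queueing > delay_thresh:
                cwnd = max(2.0, cwnd - 1)              # early delay-based backoff
            else:
                cwnd += 1.0 / cwnd                     # additive increase
            return cwnd, mode, loss_streak

        print(update_cwnd(cwnd=10.0, rtt=0.060, base_rtt=0.050,
                          loss=False, mode="delay", loss_streak=0))
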
  • Co-Con: Coordinated control of power and application performance for virtualized server clusters

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (13)

    Today's data centers face two critical challenges. First, customers must be assured that their required service-level agreements, such as response time and throughput, are met. Second, server power consumption must be controlled to avoid failures caused by power capacity overload or system overheating due to increasingly high server density. However, existing work controls power and application-level performance separately and thus cannot simultaneously provide explicit guarantees on both. This paper proposes Co-Con, a novel cluster-level control architecture that coordinates individual power and performance control loops for virtualized server clusters. To emulate current practice in data centers, the power control loop changes hardware power states with no regard to application-level performance. The performance control loop is then designed for each virtual machine to achieve the desired performance even when the system model varies significantly due to the impact of power control. Co-Con configures the two control loops rigorously, based on feedback control theory, for theoretically guaranteed control accuracy and system stability. Empirical results demonstrate that Co-Con can simultaneously provide effective control of both application-level performance and underlying power consumption.

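    The coordination idea can be sketched with two simplified integral control loops: an outer cluster-level loop nudges a frequency scaling knob toward the power budget, and per-VM loops nudge CPU shares toward response-time targets despite the shifting frequency. Gains and set points below are invented for illustration; Co-Con derives its parameters from control theory rather than ad hoc tuning.

        class IController:
            """Integral-only controller with output clamping."""
            def __init__(self, gain, lo, hi, output):
                self.gain, self.lo, self.hi, self.output = gain, lo, hi, output

            def step(self, error):
                self.output = min(self.hi, max(self.lo, self.output + self.gain * error))
                return self.output

        power_loop = IController(gain=0.0005, lo=0.5, hi=1.0, output=1.0)
        perf_loops = {vm: IController(gain=0.5, lo=0.1, hi=1.0, output=0.5)
                      for vm in ("vm1", "vm2")}

        budget_w, measured_w = 800.0, 860.0                 # cluster power (W)
        target_s = {"vm1": 0.10, "vm2": 0.10}               # response-time goals
        latency_s = {"vm1": 0.14, "vm2": 0.08}              # measured latencies

        freq = power_loop.step(budget_w - measured_w)       # over budget: slow down
        shares = {vm: c.step(latency_s[vm] - target_s[vm])
                  for vm, c in perf_loops.items()}
        print(freq, shares)   # frequency drops; vm1 gains CPU share, vm2 cedes some
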
  • Generalized distributed rate limiting

    Publication Year: 2009, Page(s): 1 - 9

    The distributed rate limiting (DRL) paradigm is a recently proposed mechanism for decentralized control of cloud-based services. DRL is a simple and efficient approach to resolving the issues of pricing and resource control/engineering of cloud-based services. Existing DRL schemes focus on very specific performance metrics (such as loss rate and fair share), and their design depends heavily on the assumption that the traffic is generated by elastic TCP sources. In this paper we tackle the DRL problem for general workloads and performance metrics and propose an analytic framework for the design of stable DRL algorithms. The closed-form nature of our results allows simple design rules which, together with extremely low communication overhead, make the presented algorithms practical and easy to deploy, with guaranteed convergence properties under a wide range of possible scenarios.

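    The coordination pattern behind distributed rate limiting is easy to sketch: each limiter measures local demand, demands are exchanged among limiters, and each sets its local limit to a demand-proportional slice of the single global budget. The sketch shows only this pattern, with the communication abstracted away; the paper's contribution is the analytic design of the control laws and their convergence guarantees.

        GLOBAL_LIMIT = 100.0   # e.g. Mbit/s purchased for the whole service

        def local_limits(demands, global_limit=GLOBAL_LIMIT):
            """Demand-proportional split of one global rate budget."""
            total = sum(demands.values())
            if total <= global_limit:
                return dict(demands)              # no throttling needed
            return {site: global_limit * d / total for site, d in demands.items()}

        demands = {"us-east": 70.0, "eu-west": 40.0, "ap-south": 10.0}
        print(local_limits(demands))
        # {'us-east': 58.33..., 'eu-west': 33.33..., 'ap-south': 8.33...}
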
  • Fast Resilient Jumbo frames in wireless LANs

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (5)

    With the phenomenal growth of wireless networks and applications, it is increasingly important to deliver content efficiently and reliably over wireless links. However, wireless performance is still far from satisfactory due to limited wireless spectrum, the inherently lossy wireless medium, and imperfect packet scheduling. While significant research has been done to improve wireless performance, much of the existing work focuses on individual design spaces. We take a holistic approach to optimizing wireless performance and resilience. We propose Fast Resilient Jumbo frames (FRJ), which exploit the synergy between three important design spaces: (i) frame size selection, (ii) partial packet recovery, and (iii) rate adaptation. While these design spaces are seemingly unrelated, we show that there are strong interactions between them and that effectively leveraging these techniques can provide increased robustness and performance benefits in wireless LANs. FRJ uses jumbo frames to boost network throughput under good channel conditions and uses partial packet recovery to efficiently recover packet losses under bad channel conditions. FRJ also utilizes partial-recovery-aware rate adaptation to maximize throughput under partial recovery. Using a real implementation and testbed experiments, we show that FRJ outperforms existing approaches in a wide range of scenarios.

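    The frame-size half of the trade-off reduces to simple arithmetic: a larger frame amortizes per-frame overhead but, without partial recovery, fails whenever any of its bits is corrupted. With per-bit error rate p and h bytes of overhead, an all-or-nothing frame of s payload bytes delivers useful data at rate (1 - p)^(8(s + h)) * s / (s + h). The numbers below are illustrative; FRJ's partial packet recovery is what relaxes this all-or-nothing penalty for jumbo frames.

        HEADER = 40          # per-frame overhead in bytes (illustrative)

        def expected_goodput(payload, ber):
            """Useful fraction of airtime for all-or-nothing frames."""
            success = (1 - ber) ** (8 * (payload + HEADER))
            return success * payload / (payload + HEADER)

        for ber in (1e-7, 1e-6, 1e-5, 1e-4):
            best = max(range(100, 9001, 100),
                       key=lambda s: expected_goodput(s, ber))
            print(f"BER {ber:.0e}: best payload ~ {best} bytes")
        # Clean channels favor jumbo frames; lossy channels favor small ones.
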
  • Tree-structured data regeneration with network coding in distributed storage systems

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (3) | Patents (1)

    Distributed storage systems, built on peer-to-peer networks, can provide large-scale data storage and high data reliability through redundancy schemes such as replication, erasure codes and linear network coding. Redundant data may be lost due to the instability of distributed systems, such as permanent node departures, hardware failures, and accidental deletions. In order to maintain data availability, it is necessary to regenerate new redundant data on another node, referred to as a newcomer. Regeneration is expected to finish as soon as possible, because the regeneration time influences the data reliability and availability of distributed storage systems. It has been acknowledged that linear network coding can regenerate redundant data with less network traffic than replication and erasure codes. However, previous regeneration schemes are all star-structured, in which data are transferred directly from existing storage nodes, referred to as providers, to the newcomer, so the regeneration time is always limited by the path with the narrowest bandwidth between the newcomer and a provider, due to bandwidth heterogeneity. In this paper, we exploit the bandwidth between providers and propose a tree-structured regeneration scheme using linear network coding. In our scheme, data can be transferred from providers to the newcomer through a regeneration tree, defined as a spanning tree covering the newcomer and all the providers. In a regeneration tree, a provider can receive data from other providers, encode the received data with the data it stores, and then send the encoded data to another provider or to the newcomer. We prove that a maximum spanning tree is an optimal regeneration tree and analyze its performance. In a trace-based simulation, the results show that the tree-structured scheme can reduce the regeneration time by 75%-82% and improve data availability by 73%-124%.

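    The paper's key construction, a maximum spanning tree over the newcomer and the providers weighted by pairwise bandwidth, can be sketched with Prim's algorithm using max instead of min selection. Bandwidths below are made up; note how the tree routes around the slow direct link.

        def max_spanning_tree(nodes, bw):
            """bw[(u, v)] = bw[(v, u)] = available bandwidth between u and v."""
            in_tree, edges = {nodes[0]}, []
            while len(in_tree) < len(nodes):
                u, v = max(((u, v) for u in in_tree
                            for v in nodes if v not in in_tree),
                           key=lambda e: bw[e])
                in_tree.add(v)
                edges.append((u, v, bw[(u, v)]))
            return edges

        nodes = ["newcomer", "p1", "p2", "p3"]
        bw = {}
        for a, b, w in [("newcomer", "p1", 2), ("newcomer", "p2", 10),
                        ("newcomer", "p3", 4), ("p1", "p2", 8),
                        ("p1", "p3", 3), ("p2", "p3", 9)]:
            bw[(a, b)] = bw[(b, a)] = w

        print(max_spanning_tree(nodes, bw))
        # A star would pull from p1 at bandwidth 2; here p1 feeds p2 at 8.
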
  • High quality P2P-Video-on-Demand with download bandwidth limitation

    Publication Year: 2009, Page(s): 1 - 9

    This paper investigates a VoD distribution architecture that exploits the increasing uplink and local storage capacities of customer equipment in a peer-to-peer (P2P) manner in order to offload the central video servers and the core network segment. We investigate an environment where (i) the peers' upload speeds vary in time, (ii) a strict bandwidth limit constrains VoD delivery on the subscriber's downlink, (iii) this downlink limit is not significantly higher than the video's own bit rate, and (iv) the subscribers' upload capacities are not cut down. In such an environment, providing quality for a true VoD service requires carefully selected mechanisms. We show how the components (storage policy, uplink speed management) of a P2P-VoD system should be changed to remain feasible under these conditions. The main component of the system determines the minimal required server speed as a function of the prebuffered content, the uploaders' behaviour, and the given playback fault probability. Additionally, using simulation, we investigate the optimal downlink bandwidth limit for a subscriber population with different average upload speeds.

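    The central accounting in such a system is worth making explicit: the server must cover whatever part of the aggregate playback demand the peers' uploads cannot, while prebuffered seconds absorb transient shortfalls. The back-of-envelope sketch below uses invented numbers; the paper's actual component additionally targets a given playback fault probability.

        video_rate = 2.0        # Mbit/s playback rate per viewer
        viewers = 1000
        prebuffer_s = 30        # seconds buffered before playback starts
        peer_upload = [0.8] * 600 + [1.5] * 400   # Mbit/s contributed per peer

        demand = video_rate * viewers
        supply = sum(peer_upload)
        server_rate = max(0.0, demand - supply)
        print(f"steady-state server rate: {server_rate:.0f} Mbit/s")
        # Upload dips shorter than prebuffer_s are absorbed by the buffer
        # and need not be covered by extra server capacity.
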
  • Routing with QoS information aggregation in hierarchical networks

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (3)

    In this paper, we consider the problem of routing with two additive constraints in hierarchical networks, such as the Internet. For scalability, the QoS information supported in hierarchical networks has to be aggregated. We propose a novel method for aggregating the QoS information. To the best of our knowledge, our approach is the first to use area minimization, the de facto optimization objective for QoS information aggregation. We use a set of real numbers to approximate the supported QoS between different domains. The size of the set is predefined so that the advertisement overhead and the space requirement do not grow exponentially as the network size grows. The simulation results show that the proposed method outperforms existing methods.

  • Uncovering global icebergs in distributed monitors

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (1)

    Security is becoming an increasingly important QoS parameter for which network providers should provision. We focus on monitoring and detecting one type of network event, called distributed global icebergs, which is important for a number of security applications such as DDoS attack mitigation and worm detection. While previous work has concentrated on measuring local heavy-hitters using "sketches" in the non-distributed streaming case, or icebergs in the non-streaming distributed case, we focus on measuring icebergs from distributed streams. Since an iceberg may be "hidden" by being distributed across many different streams, we combine a sampling component with local sketches to catch such cases. We provide a taxonomy of the existing sketches and perform a thorough study of the strengths and weaknesses of each of them, as well as the interactions between the different components, using both real and synthetic Internet trace data. Our combination of sketching and sampling is simple yet efficient in detecting global icebergs.

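    A representative of the per-monitor "local sketches" the paper studies is the Count-Min sketch: a small 2-D counter array indexed by independent hashes whose minimum cell gives an overestimate-only count per key. The sketch below is generic background; the paper's contribution is the taxonomy of such sketches and their pairing with sampling to expose icebergs split across monitors.

        import hashlib

        class CountMin:
            def __init__(self, width=1024, depth=4):
                self.w, self.d = width, depth
                self.rows = [[0] * width for _ in range(depth)]

            def _cells(self, key):
                for i in range(self.d):
                    h = hashlib.blake2b(key.encode(), salt=bytes([i]) * 16).digest()
                    yield i, int.from_bytes(h[:8], "big") % self.w

            def add(self, key, count=1):
                for i, j in self._cells(key):
                    self.rows[i][j] += count

            def estimate(self, key):        # never underestimates
                return min(self.rows[i][j] for i, j in self._cells(key))

        cm = CountMin()
        for _ in range(5000):
            cm.add("10.0.0.1")              # a local heavy hitter
        cm.add("192.168.1.7")
        print(cm.estimate("10.0.0.1"), cm.estimate("192.168.1.7"))
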
  • Modeling the interactions of congestion control and switch scheduling

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (3)

    In this paper, we study the interactions of user-based congestion control algorithms and router-based switch scheduling algorithms. We show that switch scheduling algorithms that were designed without taking these interactions into account can exhibit completely different behavior when interacting with feedback-based Internet traffic. Previous papers neglected or mitigated these interactions, and typically found that flow rates reach a fair equilibrium. On the contrary, we show that these interactions can lead to extreme unfairness with temporary flow starvation, as well as to large rate oscillations. For instance, we prove that this is the case for the MWM switch scheduling algorithm, even with a single router output and basic TCP flows. We also show that the iSLIP switch scheduling algorithm achieves fairness among ports, instead of fairness among flows. Finally, we fully characterize the network dynamics for both of these switch scheduling algorithms.

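    The scheduler at the center of the analysis is easy to state: each time slot, MWM serves the input-output matching whose virtual output queue (VOQ) backlogs sum highest. A brute-force version for a tiny switch makes the starvation mechanism visible.

        from itertools import permutations

        def mwm(voq):
            """voq[i][j] = backlog from input i to output j; returns the
            output assigned to each input under maximum weight matching."""
            n = len(voq)
            return max(permutations(range(n)),
                       key=lambda outs: sum(voq[i][outs[i]] for i in range(n)))

        voq = [[9, 1, 0],
               [0, 8, 1],
               [7, 0, 2]]
        print(list(enumerate(mwm(voq))))     # [(0, 0), (1, 1), (2, 2)]
        # Inputs holding long queues keep winning the matching, so a flow
        # behind short queues can wait many slots - the kind of temporary
        # starvation the paper exhibits once TCP feedback is in the loop.
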
  • Optimal multi-path routing and bandwidth allocation under utility max-min fairness

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (2)

    An important goal of bandwidth allocation is to maximize the utilization of network resources while sharing the resources fairly among network flows. To strike a balance between fairness and throughput, a widely studied criterion in the networking community is the notion of max-min fairness. However, the majority of work on max-min fairness has been limited to the case where the routing of flows is already defined, usually as a single fixed routing path per flow. In this paper, we consider the more general problem in which the routing of flows, possibly over multiple paths per flow, is an optimization parameter of the bandwidth allocation problem. Our goal is to determine a routing assignment for each flow so that the bandwidth allocation achieves optimal utility max-min fairness with respect to all feasible routings of flows. We evaluate our proposed multi-path utility max-min fair allocation algorithms on a statistical traffic engineering application and show that significantly higher minimum utility can be achieved when multi-path routing is considered simultaneously with bandwidth allocation under utility max-min fairness, and that this higher minimum utility corresponds to significant application performance improvements.

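    The single-path baseline that the paper generalizes is the classic progressive-filling computation of max-min fair rates: raise all unfrozen flows equally until some link saturates, freeze the flows crossing it, and repeat. The sketch below assumes fixed routes; the paper's algorithms additionally choose (multi-path) routes and operate on utilities rather than raw rates.

        def max_min_rates(flows, capacity):
            """flows: {flow: set of links it crosses}; capacity: {link: Mbit/s}."""
            rate = {f: 0.0 for f in flows}
            active = set(flows)                      # flows not yet bottlenecked
            while active:
                slack = {}
                for link, cap in capacity.items():
                    crossing = [f for f in flows if link in flows[f]]
                    growing = [f for f in crossing if f in active]
                    if growing:
                        used = sum(rate[f] for f in crossing)
                        slack[link] = (cap - used) / len(growing)
                inc = min(slack.values())
                tight = {l for l, s in slack.items() if s == inc}
                for f in list(active):
                    rate[f] += inc
                    if flows[f] & tight:
                        active.remove(f)             # bottlenecked: freeze
            return rate

        flows = {"A": {"L1"}, "B": {"L1", "L2"}, "C": {"L2"}}
        print(max_min_rates(flows, {"L1": 10.0, "L2": 20.0}))
        # {'A': 5.0, 'B': 5.0, 'C': 15.0}
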
  • HOBRP: A hardware optimized packet scheduler that provides tunable end-to-end delay bound

    Publication Year: 2009, Page(s): 1 - 9

    A packet scheduler is a primary component of the improved quality of service (QoS) model for today's Internet. Although many fair packet schedulers have been proposed through theoretical consideration, practical high-speed packet schedulers remain elementary. The disparity arises because existing schedulers either lack the necessary QoS guarantees or carry an unacceptable cost in computation and storage. In this paper, we propose a simple and efficient packet scheduler called the hardware optimized bit reversal permutation (HOBRP) based scheduler. Besides common merits that many well-known schedulers already possess, including low time and space complexity, a bounded end-to-end delay guarantee, and a constant fairness index, HOBRP has two additional features. One is that its end-to-end delay bound is tunable, which makes it flexible enough to provide different levels of delay bounds for diverse types of application flows. The other is that all the operations and structures used by HOBRP are very simple and easy to pipeline and parallelize, which enables an intuitive high-speed hardware design.

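    The primitive in the scheduler's name is concrete enough to show directly: visiting the 2^k slots of a frame in bit-reversed order spreads any contiguous allocation of slots nearly evenly across the frame, which is what underlies the tunable delay bound.

        def bit_reverse(i, k):
            """Reverse the k-bit binary representation of i."""
            out = 0
            for _ in range(k):
                out = (out << 1) | (i & 1)
                i >>= 1
            return out

        k = 4
        perm = [bit_reverse(i, k) for i in range(2 ** k)]
        print(perm)
        # [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
        # A flow granted the first four entries is served in slots
        # {0, 8, 4, 12}: one per quarter of the frame instead of a burst.
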
  • Reliable navigation of mobile sensors in wireless sensor networks without localization service

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (2)

    This paper deals with the problem of guiding mobile sensors (or robots) to a phenomenon across a region covered by static sensors. We present a distributed, reliable and energy-efficient algorithm to construct a smoothed moving trajectory for a mobile robot. The reliable trajectory is realized by first constructing among the static sensors a distributed hop-count-based artificial potential field (DH-APF) with only one local minimum near the phenomenon, and then navigating the robot to that minimum by an attractive force following the reversed gradient of the constructed field. Besides the attractive force towards the phenomenon, our algorithm adopts an additional repulsive force to push the robot away from obstacles, exploiting the fast sensing devices carried by the robot. Compared with previous navigation algorithms that guide the robot along a planned path, our algorithm can (1) tolerate deviations from a planned path, since the DH-APF covers the entire deployment region; (2) mitigate the trajectory oscillation problem; (3) avoid potential collisions with obstacles; and (4) save the precious energy of static sensors by configuring a large moving step size, which is not possible for algorithms that neglect navigation reliability. Our theoretical analysis of the above features considers practical sensor network issues including radio irregularity, packet loss and radio conflict. We implement the proposed algorithm over TinyOS and test its performance on the high-fidelity simulation platform provided by TOSSIM and Tython. Simulation results verify the reliability and energy efficiency of the proposed mobile sensor navigation algorithm.

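    The force combination is easy to illustrate on a toy grid: each cell holds a hop count to the phenomenon (the attractive term) and cells near obstacles incur a repulsive penalty, so a robot greedily descending the combined cost reaches the phenomenon while staying clear of obstacles. The sketch ignores the distributed field construction, radio effects and trajectory smoothing that the paper handles.

        def next_step(pos, hops, obstacles):
            """Move to the 4-neighbor minimizing hop count + obstacle penalty."""
            def cost(cell):
                if cell not in hops:
                    return float("inf")              # off the sensor field
                penalty = sum(2.0 for ob in obstacles
                              if abs(ob[0] - cell[0]) + abs(ob[1] - cell[1]) <= 1)
                return hops[cell] + penalty
            x, y = pos
            return min([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)], key=cost)

        # 5x5 field, phenomenon at (4, 4); hop count = Manhattan distance here.
        hops = {(x, y): (4 - x) + (4 - y) for x in range(5) for y in range(5)}
        pos, obstacles = (0, 0), {(2, 2)}
        while pos != (4, 4):
            pos = next_step(pos, hops, obstacles)
        print("reached", pos)   # the penalty keeps cells beside (2, 2) costly
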
  • On optimal information capture by energy-constrained mobile sensor

    Publication Year: 2009, Page(s): 1 - 5

    A mobile sensor is used to cover a number of points of interest (PoIs) where dynamic events appear and disappear according to given random processes. It has been shown in prior work that, for Step and Exponential utility functions, the quality of monitoring (QoM), i.e., the fraction of information captured about all events, increases as the speed of the sensor increases. That work, however, does not consider the energy of motion, which is an important constraint for mobile sensor coverage. In this paper, we analyze the expected information captured per unit of energy consumption (IPE) as a function of the event type, the event dynamics, and the speed of the mobile sensor. Our analysis uses a realistic energy model of motion, and it allows the sensor speed to be optimized for information capture. We present simulation results to verify and illustrate the analytical results.

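    The trade-off has a simple shape: moving faster captures more events, but motion power grows super-linearly, so information per joule peaks at an intermediate speed. The sketch below assumes a saturating capture curve and a quadratic motion power model purely for illustration; the paper derives IPE from its own event and energy models.

        def qom(v):                    # fraction of event information captured
            return v / (v + 1.0)       # increasing and saturating in speed

        def power(v):                  # watts: sensing floor + motion term
            return 2.0 + 0.5 * v ** 2

        def ipe(v):                    # information captured per joule
            return qom(v) / power(v)

        speeds = [0.1 * i for i in range(1, 100)]
        best = max(speeds, key=ipe)
        print(f"best speed ~ {best:.1f} m/s, IPE = {ipe(best):.4f}")
        # Too slow wastes the idle floor; too fast pays the quadratic cost.
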
  • Rate and delay controlled core networks: An experimental demonstration

    Publication Year: 2009, Page(s): 1 - 9

    Growing demand for streaming, voice and interactive gaming applications emphasizes the importance of quality of service (QoS) provisioning in the Internet, particularly the need for maximum end-to-end delay guarantees. Current methods of QoS provisioning either have scalability concerns or cannot guarantee end-to-end delay with acceptable packet loss unless bandwidth is over-provisioned. While a low-jitter guarantee is required for streaming applications, a maximum end-to-end delay guarantee is also required for VoIP and interactive games. We propose three combined rate and end-to-end delay controls, analyze their stability, and demonstrate their viability. The stability analysis is done on a fluid network model with greedy flows, showing that all controls are globally asymptotically stable without information time lags, and that one of them is also globally asymptotically stable with arbitrary time lags; however, it substantially under-utilizes the network. Another control, which numerically demonstrates stability with arbitrary time lags, is implemented in the edge and core routers of our WAN-in-Lab with long-haul links. The prototype implementation confirms its viability and its advantage over the differentiated services architecture. The viability of the two other controls is shown by detailed NS2 packet-based simulations of an eight-node real core network.

  • Supporting application network flows with multiple QoS constraints

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (1)

    There is a growing need to support real-time applications over the Internet. Real-time interactive applications often have multiple quality-of-service (QoS) requirements which are application specific. Traditional provisioning of QoS in the Internet through IP routing - Intserv or Diffserv - faces many technical challenges and is also deterred by huge deployment issues. As an alternative, application providers often build their own application-specific overlay networks to meet their QoS requirements. In this paper, we present a unified framework which can serve diverse applications with multiple QoS constraints. Our scalable flow route management architecture, called MCQoS, employs a hybrid approach, using a path vector protocol to disseminate aggregated path information combined with on-demand path discovery to find paths that match the diverse QoS requirements. It uses a distributed algorithm to dynamically adapt to an alternate path when the current path fails to satisfy the required QoS constraints. We perform large-scale simulations and analysis to show that our approach is both efficient and scalable, and that it substantially outperforms state-of-the-art protocols in accuracy. Our simulation results show that MCQoS can reduce the false-negative percentage to less than 1%, compared with 5-10% in other approaches, and eliminates false positives, whereas other schemes have false-positive rates of 10-20%, all with minimal increase in protocol overhead. Finally, we implemented and deployed our system on the PlanetLab testbed for evaluation in a real network environment.

  • Integrated control of matching delay and CPU utilization in information dissemination systems

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (2)

    The demand for high-performance information dissemination is increasing in many applications, such as e-commerce and security alerting systems. These applications usually require that the desired information be matched between sources and sinks based on established subscriptions in a timely manner, while maximal system throughput is achieved so that more matched results can be found. Existing work primarily focuses on only one of the two requirements, either timeliness or throughput. This can lead to an unnecessarily underutilized system or poor guarantees on matching delays. In this paper, we propose an integrated solution that controls both the matching delay and the CPU utilization in information dissemination systems, achieving bounded matching delay for high-priority information and maximized system throughput. Our solution is based on optimal control theory for guaranteed control accuracy and system stability. Empirical results on a physical testbed demonstrate that our controllers can guarantee the timeliness requirements while achieving maximized system throughput.

  • Hybrid multi-channel multi-radio wireless mesh networks

    Publication Year: 2009, Page(s): 1 - 5
    Cited by: Papers (3)

    Many efforts have been devoted to maximizing network throughput in multi-channel multi-radio wireless mesh networks. Current solutions are based on either purely static or purely dynamic channel allocation approaches. In this paper, we propose a hybrid multi-channel multi-radio wireless mesh networking architecture, where each mesh node has both static and dynamic interfaces. We first present an Adaptive Dynamic Channel Allocation protocol (ADCA), which optimizes both throughput and delay in the channel assignment. We also propose an Interference and Congestion Aware Routing protocol (ICAR) for the hybrid network with both static and dynamic links, which balances channel usage in the network. Our simulation results show that, compared to previous work, ADCA reduces packet delay considerably without degrading network throughput. Moreover, the hybrid architecture adapts much better to changing traffic than a purely static architecture, without a dramatic increase in overhead.

  • Congestion location detection: Methodology, algorithm, and performance

    Publication Year: 2009, Page(s): 1 - 9
    Cited by: Papers (2)

    We address the following question in this study: can a network application detect not only the occurrence, but also the location of congestion? Answering this question will not only help diagnose network failures and monitor servers' QoS, but also help developers engineer transport protocols with more desirable congestion-avoidance behavior. The paper answers this question through new analytic results on the two underlying technical difficulties: 1) the synchronization effects of loss and delay in TCP, and 2) distributed hypothesis testing using only local loss and delay data. We present a practical congestion location detection (CLD) algorithm that effectively allows an end host to detect, in a distributed fashion, whether congestion happens in the local access link or in more remote links. We validate the effectiveness of the CLD algorithm with extensive experiments.
