Network and Service Management, IEEE Transactions on

Issue 2 • Date June 2011

  • Table of contents

    Page(s): c1
  • Dirichlet-Based Trust Management for Effective Collaborative Intrusion Detection Networks

    Page(s): 79 - 91

    The accuracy of detecting intrusions within a Collaborative Intrusion Detection Network (CIDN) depends on the efficiency of collaboration between peer Intrusion Detection Systems (IDSes) as well as on the security of the CIDN itself. In this paper, we propose Dirichlet-based trust management to measure the level of trust among IDSes according to their mutual experience. An acquaintance management algorithm is also proposed to allow each IDS to manage its acquaintances according to their trustworthiness. Our approach achieves strong scalability properties and is robust against common insider threats, resulting in an effective CIDN. We evaluate our approach on a simulated CIDN, demonstrating improved robustness, efficiency, and scalability for collaborative intrusion detection in comparison with existing models.

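A minimal sketch of the Dirichlet idea described above: feedback from a peer IDS is binned into discrete satisfaction levels, the counts plus a uniform prior give a Dirichlet posterior, and trust is the expected satisfaction under that posterior. The level weights, prior strength, and counts below are illustrative assumptions, not the paper's calibrated values.

```python
# Hypothetical sketch of Dirichlet-based trust estimation (details assumed).
# counts[i] = observed feedback in satisfaction level i;
# weights[i] = value of level i in [0, 1] (0 = unsatisfactory, 1 = satisfactory).

def dirichlet_trust(counts, weights, prior=1.0):
    alpha = [c + prior for c in counts]       # Dirichlet posterior parameters
    total = sum(alpha)
    probs = [a / total for a in alpha]        # posterior mean of each level
    return sum(p * w for p, w in zip(probs, weights))

# A peer with mostly satisfactory feedback vs. one with mostly bad feedback.
good = dirichlet_trust([2, 8, 40], [0.0, 0.5, 1.0])
bad = dirichlet_trust([40, 8, 2], [0.0, 0.5, 1.0])
```

With no observations at all, the uniform prior pulls the trust value toward the neutral midpoint, which is the usual motivation for a Bayesian treatment of new acquaintances.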
  • Scheduling Grid Tasks in Face of Uncertain Communication Demands

    Page(s): 92 - 103

    Grid scheduling is essential to Quality of Service provisioning as well as to efficient management of grid resources. Grid scheduling usually considers the state of the grid resources as well as application demands. However, such demands are generally unknown for highly demanding applications, since these often generate data that will be transferred during their execution. Without an appropriate assessment of these demands, scheduling decisions can lead to poor performance; it is therefore of paramount importance to consider uncertainties in the formulation of a grid scheduling problem. This paper introduces the IPDT-FUZZY scheduler, which considers the demands of grid applications under such uncertainties. The scheduler uses fuzzy optimization, with both computational and communication demands expressed as fuzzy numbers. Its performance was evaluated and shown to be attractive when communication requirements are uncertain. Its efficacy is compared, via simulation, to that of a deterministic counterpart, and the results reinforce its adequacy for dealing with inaccurate estimates of communication demands.

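The abstract above expresses demands as fuzzy numbers. A minimal sketch of one common representation, a triangular fuzzy number with centroid defuzzification, is shown below; the paper's actual membership functions and optimization model are not given here, so these details are assumptions for illustration.

```python
# Hypothetical sketch: an uncertain communication demand as a triangular
# fuzzy number (optimistic / most likely / pessimistic), defuzzified by its
# centroid. This is a generic fuzzy-number construction, not IPDT-FUZZY itself.

class TriFuzzy:
    def __init__(self, lo, mode, hi):
        self.lo, self.mode, self.hi = lo, mode, hi

    def membership(self, x):
        """Degree (0..1) to which x belongs to this fuzzy quantity."""
        if self.lo <= x <= self.mode:
            return (x - self.lo) / (self.mode - self.lo)
        if self.mode < x <= self.hi:
            return (self.hi - x) / (self.hi - self.mode)
        return 0.0

    def centroid(self):
        # Centroid of a triangle: arithmetic mean of its three vertices.
        return (self.lo + self.mode + self.hi) / 3

# e.g. data to transfer (MB): at least 10, most likely 25, at worst 70.
demand = TriFuzzy(10, 25, 70)
```

A crisp scheduler would have to pick one number up front; the fuzzy representation lets the optimizer weigh the whole plausible range, which is the point the abstract makes about uncertain communication demands.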
  • Improving Application Placement for Cluster-Based Web Applications

    Page(s): 104 - 115

    Dynamic application placement for clustered web applications heavily influences system performance and the quality of user experience. Existing approaches strive to maximize throughput, keep resource utilization balanced across servers, and minimize the start/stop cost of application instances. However, they fail to minimize the worst case of server utilization, so their load-balancing performance is not optimal. Moreover, some applications need to communicate with each other; we call these dependent applications, and their network cost should also be taken into consideration. In this paper, we investigate how to minimize the worst-case resource utilization of servers, aiming at improving load balancing among clustered servers. Our contribution is two-fold. First, we propose a new optimization objective: limiting the worst case of each individual server's utilization, formulated as a min-max problem. A novel framework based on binary search is proposed to find an optimal load-balancing solution. Second, we define system cost as the weighted combination of placement-change and inter-application communication cost. By maximizing the number of instances of dependent applications that reside in the same set of servers, the basic load-shifting and placement-change procedures are enhanced to minimize overall system cost. Extensive experiments demonstrate that: 1) the proposed framework achieves a good allocation for clustered web applications, i.e., requests are evenly allocated among servers while throughput is still maximized; 2) the total system cost remains low; and 3) our algorithm approximates an optimal solution in polynomial time and is promising for practical deployment.

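The min-max-by-binary-search framework described above can be sketched as follows: binary-search the smallest worst-case utilization cap such that all demand still fits when no server may exceed the cap. The feasibility test here is a simple greedy first-fit, an assumption for illustration; the paper's actual placement procedure is more elaborate.

```python
# Hypothetical sketch of the binary-search framework for minimizing the
# worst-case server utilization. The greedy feasibility check is an assumed
# stand-in for the paper's placement subroutine.

def feasible(demands, capacities, cap):
    """Can all demands be packed with every server at <= cap * capacity?"""
    free = [cap * c for c in capacities]
    for d in sorted(demands, reverse=True):   # place largest demands first
        free.sort(reverse=True)
        if free[0] < d:
            return False
        free[0] -= d
    return True

def min_worst_utilization(demands, capacities, eps=1e-3):
    """Binary-search the smallest feasible utilization cap in [0, 1].

    Assumes the instance is feasible at cap = 1.0 (total demand fits).
    """
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(demands, capacities, mid):
            hi = mid                          # cap works; try a tighter one
        else:
            lo = mid                          # cap too tight; relax it
    return hi
```

Each iteration halves the search interval, so the number of feasibility tests is logarithmic in the desired precision, which is what makes the min-max objective tractable.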
  • Monitoring the Impact of P2P Users on a Broadband Operator's Network over Time

    Page(s): 116 - 127

    Since their emergence, peer-to-peer (P2P) applications have been generating a considerable fraction of the overall transferred bandwidth in broadband networks. Residential broadband service has moved from one geared towards technology enthusiasts and early adopters to a commodity for a large fraction of households. Thus, the question of whether P2P is still the dominant application in terms of bandwidth usage is highly relevant for broadband operators. In this work, we present an adaptation of a previously published method for classifying broadband users into a P2P and a non-P2P group based on the number of communication partners ("peers") they have within a dedicated timeframe. Based on this classification, we derive their impact on network characteristics such as the number of active users and their aggregate bandwidth. Privacy is assured by anonymizing the data and by not inspecting packet payloads. We apply our method to real operational data collected in 2007 and 2010 from a major German DSL provider's access link, which transported all traffic each user generated and received. In 2010, the fraction of P2P users clearly decreased compared with previous years. Nevertheless, we find that P2P users still contribute a large share of the total traffic, especially in the upstream direction. However, in 2010 the impact of P2P on bandwidth peaks in the busy hours clearly decreased, while other applications had a growing impact, leading to increased bandwidth usage per subscriber in the peak hours. Further analysis also reveals that P2P users' traffic still does not exhibit strong locality. We compare our findings with those available in the literature and propose areas for future work on network monitoring, P2P applications, and network design.

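The classification idea described above, counting distinct communication partners per user within a time window, can be sketched in a few lines. The threshold value and the flow representation below are assumptions for illustration, not the paper's calibrated parameters.

```python
# Hypothetical sketch: label a user as P2P if the number of distinct remote
# peers observed in one time window exceeds a threshold. Note that only
# (user, remote address) pairs are needed; no packet payloads are inspected,
# matching the privacy approach described in the abstract.

from collections import defaultdict

def classify_users(flows, peer_threshold=50):
    """flows: iterable of (user_id, remote_address) pairs from one window."""
    peers = defaultdict(set)
    for user, remote in flows:
        peers[user].add(remote)
    return {u: ("p2p" if len(s) > peer_threshold else "non-p2p")
            for u, s in peers.items()}

# A user talking to 60 distinct peers vs. one talking to a single server.
flows = [("user_a", f"10.0.0.{i}") for i in range(60)] + [("user_b", "10.0.1.1")]
labels = classify_users(flows)
```

The intuition is that P2P swarms fan out to many short-lived peers, while client-server traffic concentrates on a handful of endpoints, so the peer count alone separates the two groups.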
  • Efficient Control of False Negative and False Positive Errors with Separate Adaptive Thresholds

    Page(s): 128 - 140

    Component-level performance thresholds are widely used as a basic means of performance management. As the complexity of managed applications increases, manual threshold maintenance becomes difficult. Complexity arises from the large number of application components and their operational metrics, dynamically changing workloads, and compound relationships between application components. To alleviate this problem, we advocate that component-level thresholds be computed, managed, and optimized automatically and autonomously. To this end, we have designed and implemented a performance threshold management application that automatically and dynamically computes two separate component-level thresholds: one for controlling Type I errors and another for controlling Type II errors. Our solution additionally facilitates metric selection, thus minimizing management overhead. We present the theoretical foundation for this autonomic threshold management application, describe a specific algorithm and its implementation, and evaluate it using real-life scenarios and production data sets. As our study shows, with proper parameter tuning, our online dynamic solution is capable of computing nearly optimal performance thresholds.

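One simple way to realize two separate, data-driven thresholds of the kind described above is to derive each from a different quantile of the metric's recent history: an upper threshold chosen so that healthy traffic rarely exceeds it (bounding false positives) and a lower threshold below which values are treated as suspicious (bounding false negatives). The quantile-based scheme and the rates used here are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical sketch: two separate adaptive thresholds from empirical
# quantiles of a metric's recent samples. fp_rate / fn_rate are assumed
# target error rates, not values from the paper.

def adaptive_thresholds(samples, fp_rate=0.01, fn_rate=0.05):
    s = sorted(samples)
    n = len(s)
    # Upper threshold: exceeded by only ~fp_rate of healthy samples.
    upper = s[min(n - 1, round((1 - fp_rate) * n))]
    # Lower threshold: ~fn_rate of samples fall below it.
    lower = s[min(n - 1, round(fn_rate * n))]
    return lower, upper

# Recompute each window so the thresholds track workload drift.
lower, upper = adaptive_thresholds(list(range(1000)))
```

Recomputing the pair on every monitoring window is what makes the thresholds adaptive: they follow workload changes instead of requiring manual retuning.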
  • Spectral Models for Bitrate Measurement from Packet Sampled Traffic

    Page(s): 141 - 152

    In network measurement systems, packet sampling techniques are usually adopted to reduce the overall amount of data to collect and process. Because they are based on a subset of packets, they introduce estimation errors that have to be counteracted by careful tuning of the sampling strategy and sophisticated inversion methods. This problem has been investigated in depth in the literature, with particular attention to the statistical properties of packet sampling and to the recovery of the original network measurements. Here, we propose a novel approach to predicting the energy of the sampling error in real-time estimation of traffic bitrate, based on spectral analysis in the frequency domain. We first demonstrate that the error introduced by packet sampling can be modeled as an aliasing effect in the frequency domain. We then derive closed-form expressions for the Signal-to-Noise Ratio (SNR) to predict the distortion of traffic bitrate estimates over time. The accuracy of the proposed SNR metric is validated by means of real packet traces. Furthermore, a comparison with an analogous SNR expression derived using classic stochastic tools shows that the frequency-domain approach yields higher accuracy when traffic rate measurements are carried out at fine time granularity.

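The SNR notion used above treats the true bitrate series as the signal and the sampling-induced deviation as noise. A minimal empirical version of that metric, computed directly from a true and a reconstructed rate series, is sketched below with toy numbers; the paper's closed-form frequency-domain expressions are not reproduced here.

```python
# Hypothetical sketch: empirical SNR between a true per-interval bitrate
# series and one reconstructed from sampled packets. Rate values are toy
# numbers, not trace data.

import math

def snr_db(true_rates, est_rates):
    """10 * log10(signal energy / error energy), in decibels."""
    p_sig = sum(x * x for x in true_rates)
    p_err = sum((t - e) ** 2 for t, e in zip(true_rates, est_rates))
    return 10 * math.log10(p_sig / p_err)

true_rates = [100, 120, 80, 100]     # actual bitrate per interval
est_rates = [90, 130, 70, 110]       # estimate inverted from sampled packets
snr = snr_db(true_rates, est_rates)
```

A higher SNR means the sampled estimate tracks the true rate more closely; the paper's contribution is predicting this value analytically from the sampling strategy rather than measuring it after the fact.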
  • Efficient Network Modification to Improve QoS Stability at Failures

    Page(s): 153 - 164

    When a link or node fails, flows are detoured around the failed portion, so the hop count of flows and the link load can change dramatically as a result of the failure. As real-time traffic such as video and voice increases on the Internet, ISPs are required to provide stable quality as well as connectivity upon failure. For ISPs, how to improve the stability of quality upon failure with minimum investment cost is an important issue, and they need to select a limited number of locations at which to add link facilities. In this paper, efficient design algorithms that select the locations for adding link facilities are proposed, and their effectiveness is evaluated using the actual backbone networks of 36 commercial ISPs.


Aims & Scope

IEEE Transactions on Network and Service Management will publish (online only) peer-reviewed archival-quality papers that advance the state of the art and practical applications of network and service management.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief

Rolf Stadler
Laboratory for Communication Networks
KTH Royal Institute of Technology
Stockholm
Sweden
stadler@kth.se