
2011 IFIP/IEEE International Symposium on Integrated Network Management (IM)

Date: 23-27 May 2011


Displaying Results 1 - 25 of 185
  • [Front cover]

    Page(s): c1
    PDF (145 KB) | Freely Available from IEEE
  • [Copyright notice]

    Page(s): 1
    PDF (100 KB) | Freely Available from IEEE
  • Table of contents - IM 2011

    Page(s): 1 - 13
    PDF (180 KB) | Freely Available from IEEE
  • Programme at a glance

    Page(s): 1
    PDF (123 KB) | Freely Available from IEEE
  • Welcome message from the general co-chairs

    Page(s): 1 - 2
    PDF (63 KB) | Freely Available from IEEE
  • Welcome message from the technical program committee co-chairs

    Page(s): 1 - 2
    PDF (74 KB) | Freely Available from IEEE
  • Sponsor logos

    Page(s): 1
    PDF (427 KB) | Freely Available from IEEE
  • IFIP/IEEE IM 2011 Committees

    Page(s): 1 - 6
    PDF (255 KB) | Freely Available from IEEE
  • Keynotes

    Page(s): 1 - 4
    PDF (61 KB)

    These keynote speeches discuss the following: cloud computing; stalking the wily Homunculus and other adventures in network management; managing the next wave of mobile broadband networks; and information-centric networking.

  • Welcome message from the workshop co-chairs

    Page(s): 1 - 3
    PDF (957 KB) | Freely Available from IEEE
  • Welcome message from the workshop co-chairs

    Page(s): 1 - 3
    PDF (866 KB) | Freely Available from IEEE
  • Welcome message from the workshop co-chairs

    Page(s): 1 - 2
    PDF (843 KB) | Freely Available from IEEE
  • Welcome message from the workshop co-chairs

    Page(s): 1 - 4
    PDF (964 KB) | Freely Available from IEEE
  • Welcome message from the workshop co-chairs

    Page(s): 1 - 3
    PDF (869 KB) | Freely Available from IEEE
  • An information plane architecture supporting home network management

    Page(s): 1 - 8
    PDF (931 KB) | HTML

    Home networks have evolved to become small-scale versions of enterprise networks. The tools for visualizing and managing such networks are primitive and continue to require networked systems expertise on the part of the home user. As a result, non-expert home users must manually manage non-obvious aspects of the network (e.g., MAC address filtering, network masks, and firewall rules) using these primitive tools. The Homework information plane architecture uses stream database concepts to generate derived events from streams of raw events. This supports a variety of visualization and monitoring techniques, and also enables construction of a closed-loop, policy-based management system. This paper describes the information plane architecture and its associated policy-based management infrastructure. Exemplar visualization and closed-loop management applications enabled by the resulting system (tuned to the skills of non-expert home users) are discussed.

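The derived-event idea in the abstract above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the event types and fields (`dhcp_lease`, `mac`, `ip`, `new_device`) are invented for the example.

```python
# Sketch: derive higher-level events from a stream of raw events,
# in the spirit of a stream-database query. All names are hypothetical.

def derive_new_device_events(raw_events):
    """Yield a 'new_device' event the first time a MAC address is seen."""
    seen = set()
    for ev in raw_events:
        if ev["type"] == "dhcp_lease" and ev["mac"] not in seen:
            seen.add(ev["mac"])
            yield {"type": "new_device", "mac": ev["mac"], "ip": ev["ip"]}

raw = [
    {"type": "dhcp_lease", "mac": "aa:bb", "ip": "10.0.0.2"},
    {"type": "dhcp_lease", "mac": "aa:bb", "ip": "10.0.0.2"},  # lease renewal: no derived event
    {"type": "dhcp_lease", "mac": "cc:dd", "ip": "10.0.0.3"},
]
derived = list(derive_new_device_events(raw))  # two 'new_device' events
```

A visualization front end, or a policy engine closing the loop, would consume the derived stream rather than the raw one.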
  • Flow signatures of popular applications

    Page(s): 9 - 16
    PDF (940 KB) | HTML

    Network flow data is widely used to analyze the protocol mix forwarded by a router or to identify anomalies that may be caused by hardware and software failures, configuration errors, or intrusion attempts. The goal of our research is to find application signatures in network flow traces that can be used to pinpoint certain applications, such as specific web browsers, mail clients, or media-players. Our starting point is the hypothesis that popular applications generate application-specific flow signatures. In order to verify our hypothesis, we recorded traffic traces of several applications and subsequently analyzed the traces to identify flow signatures of these applications. The flow signatures were formalized as queries of a stream-based flow query language. The queries have been executed on several flow traces in order to evaluate our approach.

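A flow signature of the kind described above is essentially a set of constraints over flow-record fields. The following sketch expresses one as a plain predicate set; the field names and the toy "HTTPS client" signature are invented for illustration and are not taken from the paper's stream-based flow query language.

```python
# Sketch: a flow signature as a query (a conjunction of per-field predicates)
# over flow records. Fields and the signature itself are hypothetical.

def matches_signature(flow, signature):
    """Return True if every constraint in the signature holds for the flow."""
    return all(pred(flow.get(field)) for field, pred in signature.items())

# Toy signature: a TCP flow to port 443 with a small request and a larger response.
https_client = {
    "proto":     lambda p: p == "tcp",
    "dst_port":  lambda p: p == 443,
    "bytes_out": lambda b: b < 2000,
    "bytes_in":  lambda b: b > 10000,
}

flows = [
    {"proto": "tcp", "dst_port": 443, "bytes_out": 900, "bytes_in": 50000},
    {"proto": "udp", "dst_port": 53,  "bytes_out": 80,  "bytes_in": 200},
]
hits = [f for f in flows if matches_signature(f, https_client)]  # matches only the first flow
```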
  • On the merits of popularity prediction in multimedia content caching

    Page(s): 17 - 24
    PDF (1792 KB) | HTML

    In recent years, telecom operators have been moving away from traditional, broadcast-driven television towards IP-based, interactive and on-demand services. Consequently, multicast is no longer a viable solution to limit the amount of traffic in the IP-TV network. In order to counter an explosion in generated traffic, caches can be strategically placed throughout the content delivery infrastructure. As the size of caches is usually limited to only a small fraction of the total size of all content items, it is important to accurately predict future content popularity. Classical caching strategies only take into account the past when deciding what content to cache. Recently, a trend towards novel strategies that actually try to predict future content popularity has arisen. In this paper, we ascertain the viability of using popularity prediction in realistic multimedia content caching scenarios. The use of popularity prediction is compared to classical strategies using trace files from an actual deployed Video on Demand service. Additionally, the interplay between several parameters, such as cache size and prediction window, is investigated.

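The contrast the abstract draws, past-driven versus prediction-driven caching, comes down to the eviction criterion. A minimal sketch, with invented access history and a stand-in prediction function (a real system would forecast future requests from request traces):

```python
# Sketch: classical (recency-based) eviction vs. eviction driven by
# predicted future popularity. Data values are hypothetical.

def evict_least_recently_used(cache, last_access):
    """Classical policy: evict the item whose last access is oldest."""
    return min(cache, key=lambda item: last_access[item])

def evict_least_predicted(cache, predicted_requests):
    """Prediction-based policy: evict the item expected to be requested least."""
    return min(cache, key=lambda item: predicted_requests[item])

cache = {"a", "b", "c"}
last_access = {"a": 10, "b": 20, "c": 30}          # 'a' is coldest by history...
predicted_requests = {"a": 500, "b": 5, "c": 40}   # ...but hottest by prediction

lru_victim = evict_least_recently_used(cache, last_access)      # evicts "a"
pred_victim = evict_least_predicted(cache, predicted_requests)  # evicts "b"
```

When predictions are accurate, the second policy keeps "a" in the cache and avoids the misses the recency-based policy would incur; the paper's question is how often, and under which cache sizes and prediction windows, that advantage materializes on real traces.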
  • An auditing system for multi-domain IP carrying Service Level Agreements

    Page(s): 25 - 32
    PDF (1646 KB) | HTML

    Service Level Agreements (SLAs) are well-known from a large variety of business branches. However, despite the increasing importance of the Internet and electronic communication, a fairly low dissemination of IP carrying SLAs is observed. IP carrying SLAs are offered by Internet Service Providers (ISPs) documenting network service levels such as maximum packet loss, maximum packet delay, or availability. Among the reasons for the low dissemination are certainly the difficult monitoring and measurement possibilities in a multi-provider environment. This paper presents an architecture for a multi-domain auditing system, which is able to monitor the fulfillment of service levels documented in SLAs across several domains. Together with the previously proposed MeSA protocol, this architecture presents a comprehensive solution. We conclude the paper with a detailed assessment, including simulation results that are based on real trace input data, showing the validity of the overall approach.

  • Distributed inter-domain SLA negotiation using Reinforcement Learning

    Page(s): 33 - 40
    PDF (1137 KB) | HTML

    Applications requiring network Quality of Service (QoS) (e.g. telepresence, cloud computing, etc.) are becoming mainstream. To support their deployment, network operators must automatically negotiate end-to-end QoS contracts (also known as Service Level Agreements, SLAs) and configure their networks accordingly. Other crucial needs must be considered: QoS should provide incentives to network operators, and confidentiality on topologies, resource states and committed SLAs must be respected. To meet these requirements, we propose two distributed learning algorithms that allow network operators to negotiate end-to-end SLAs and optimize revenues for several demands while treating requests in real-time: one algorithm minimizes cooperation between providers, while the other requires them to exchange more information. Experimental results show that the second algorithm better satisfies customers and providers, at the cost of worse runtime performance.

  • Impact analysis of BGP sessions for prioritization of maintenance operations

    Page(s): 41 - 48
    PDF (1428 KB) | HTML

    Network operators in large-scale networks are often faced with long lists of maintenance tasks and find it difficult to track the relative importance of these tasks without knowing their impact on the network's operation. As a result, operators may react slowly to critical tasks, increasing network downtime and maintenance costs. We present a system that quantifies the impact of maintenance tasks so that operators can prioritize their reaction according to the estimated impact (i.e., spend more time and effort on avoiding the disruption caused by high-impact maintenance tasks). In particular, the proposed system estimates the amount of traffic loss due to maintenance operations on inter-domain routing sessions, one of the most frequently modified aspects of network configurations. We implement the proposed system and apply it to 372 routing sessions in a nation-wide ISP network. The system identifies sessions with a varying degree of impact: sessions with nearly zero data loss, as well as sessions that can result in more than 1,000 GB of data loss if disrupted without any protection mechanism applied. We also show that predicting the amount of data loss is not straightforward since this amount changes over time, often in unexpected ways (e.g., from 50 GB to 0 over a one-month period). Therefore, the proposed impact analysis system is necessary for network operators to perform periodic audits of the routing sessions' impact and to classify the sessions according to the projected data losses. Operators can then decide the level of protection for each session (e.g., employ more effective and costly methods to protect critical sessions) and thus allocate maintenance costs more efficiently.

  • A data confidentiality architecture for developing management mashups

    Page(s): 49 - 56
    PDF (1771 KB) | HTML

    Mashups are powerful applications created from accessing and composing multiple and distributed information sources. Their ease-of-use and modularity allow users at any skill level to construct, share and integrate their own applications. However, data security concerns remain a hindering factor in their widespread adoption, in particular for network management. In this paper, we propose a novel development methodology and system architecture called Maestro that allows developers to express their data privacy concerns and enforce policies during mashup executions. We evaluated Maestro by building two mashup applications for managing live networks and by running performance tests that show that our runtime has negligible overhead.

  • Managing data retention policies at scale

    Page(s): 57 - 64
    PDF (1376 KB) | HTML

    Compliance with regulatory policies on data remains a key hurdle to cloud computing. Policies such as EU privacy, HIPAA, and PCI-DSS place requirements on data availability, integrity, migration, retention, and access, among many others. This paper proposes a policy management service that offers scalable management of data retention policies attached to data objects stored in a cloud environment. The management service includes a highly available and secure encryption key store to manage the encryption keys of data objects. By deleting the encryption key at a specified retention time associated with the data object, we effectively delete the data object and its copies stored in online and offline environments. To achieve scalability, our service uses Hadoop MapReduce to perform parallel management tasks, such as data encryption and decryption, key distribution and retention policy enforcement. A prototype deployed in a 16-machine Linux cluster currently supports 56 MB/sec for encryption, 76 MB/sec for decryption, 31,000 retention policies/sec read and 15,000 retention policies/sec write.

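The deletion-by-key-destruction mechanism described above (often called crypto-shredding) can be sketched compactly: once the per-object key is removed from the key store, every encrypted copy of the object, online or offline, becomes unreadable. The cipher below is a toy SHA-256 keystream used only to keep the example self-contained; a real system would use a vetted authenticated cipher, and the key-store layout here is invented.

```python
# Sketch of crypto-shredding: deleting a key "deletes" all encrypted copies.
# Toy keystream cipher for illustration only -- NOT a secure construction.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (same call en/decrypts)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key_store = {"object-1": b"per-object-secret-key"}   # hypothetical key store
ciphertext = keystream_xor(key_store["object-1"], b"customer record")

# Retention time reached: destroy the key, not the (possibly many) copies.
del key_store["object-1"]
can_decrypt = "object-1" in key_store   # False: object is effectively deleted
```

The appeal of the approach, as the abstract notes, is that offline copies (backups, tapes) need never be touched: without the key they are indistinguishable from random data.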
  • Run-time performance optimization and job management in a data protection solution

    Page(s): 65 - 72
    PDF (1818 KB) | HTML

    The amount of stored data in enterprise Data Centers quadruples every 18 months. This trend presents a serious challenge for backup management and sets new requirements for performance efficiency of traditional backup and archival tools. In this work, we discuss potential performance shortcomings of the existing backup solutions. During a backup session a predefined set of objects (client filesystems) should be backed up. Traditionally, no information on the expected duration and throughput requirements of different backup jobs is provided. This may lead to an inefficient job schedule and increased backup session time. We analyze historic data on backup processing from eight backup servers in HP Labs, and introduce two additional metrics associated with each backup job, called job duration and job throughput. Our goal is to use this additional information for automated design of a backup schedule that minimizes the overall completion time for a given set of backup jobs. This problem can be formulated as a resource constrained scheduling problem which is known to be NP-complete. Instead, we propose an efficient heuristic for building an optimized job schedule, called FlexLBF. The new job schedule provides a significant reduction in the backup time (up to 50%) and reduced resource usage (up to 2-3 times). Moreover, we design a simulation-based tool that aims to automate parameter tuning for avoiding manual configuration by system administrators while helping them to achieve nearly optimal performance.

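The intuition behind a longest-backup-first schedule can be shown with greedy list scheduling. This is a sketch of the general idea, not the paper's FlexLBF algorithm; the job durations and stream count are hypothetical.

```python
# Sketch: when job durations are known, scheduling longest jobs first onto
# the least-loaded concurrent backup stream shortens the overall session
# (makespan) versus processing jobs in arrival order.

def makespan(durations, streams):
    """Greedy list scheduling: each job goes to the least-loaded stream."""
    loads = [0] * streams
    for d in durations:
        loads[loads.index(min(loads))] += d
    return max(loads)

durations = [7, 1, 4, 6, 2, 8]   # hypothetical job durations (hours)

arrival_order = makespan(durations, streams=2)                   # 17 hours
longest_first = makespan(sorted(durations, reverse=True), 2)     # 14 hours
```

With two streams, arrival order finishes in 17 hours while longest-first reaches the 14-hour optimum here, which is the kind of session-time reduction the abstract reports from exploiting known job durations.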
  • Towards efficient resource management for data-analytic platforms

    Page(s): 73 - 80
    PDF (2110 KB) | HTML

    We present architectural and experimental work exploring the role of intermediate data handling in the performance of MapReduce workloads. Our findings show that: (a) certain jobs are more sensitive to disk cache size than others and (b) this sensitivity is mostly due to the local file I/O for the intermediate data. We also show that a small amount of memory is sufficient for the normal needs of map workers to hold their intermediate data until it is read. We introduce Hannibal, which exploits the modesty of that need in a simple and direct way - holding the intermediate data in application-level memory for precisely the needed time - to improve performance when the disk cache is stressed. We have implemented Hannibal and show through experimental evaluation that Hannibal can make MapReduce jobs run faster than Hadoop when little memory is available to the disk cache. This provides better performance insulation between concurrent jobs.

  • A prototype for in-network management in NaaS-enabled networks

    Page(s): 81 - 88
    PDF (2332 KB) | HTML

    In-network management (INM) is a paradigm for distributed and embedded management for future networks. One of its main design goals is to be used in conjunction with Network-as-a-Service (NaaS)-enabled networks, which need to be managed efficiently, in a way that requires only little manual interaction, and across administrative network domains. In this paper, we present an INM prototype that we have implemented to demonstrate INM's capabilities, based on our previously introduced INM architecture. Using a comprehensive network scenario, we discuss a number of real-time and algorithm measurements to demonstrate how INM can enable efficient management of NaaS-enabled networks.
