
2012 8th International Conference on Network and Service Management (CNSM) and 2012 Workshop on Systems Virtualization Management (SVM)

Date: 22-26 Oct. 2012


Displaying Results 1 - 25 of 69
  • [Front matter]

    Publication Year: 2012, Page(s): i - xxxiv
    Freely Available from IEEE
  • A novel Energy-Saving Management mechanism in cellular networks

    Publication Year: 2012, Page(s): 1 - 9
    Cited by: Papers (2)

    When regional traffic is low, a key issue for ESM (Energy-Saving Management) in cellular networks is how to put several BSs (Base Stations) to sleep while still guaranteeing regional coverage and service quality. Current ESM methods lack an efficient regional coverage compensation method and an accurate evaluation model for ESM algorithms. This paper proposes a novel ESM mechanism to resolve these problems. The mechanism includes selection of sleeping BSs based on TP (Trigonal Pair), a regional energy-saving algorithm corresponding to TP through adjustments of downtilt and transmit power, and an integrated assessment model based on dynamic traffic. We then simulate the mechanism in a WCDMA/HSPA network under urban scenarios with multiple services. Results show that, with acceptable coverage and service quality, the ESM mechanism saves at least 34.83% of the energy consumption of one sleeping BS, and achieves energy-saving gains of 18.01% during sleeping time and 10.01% over the entire simulation time, which is of practical significance.

  • Comfort-aware home energy management under market-based Demand-Response

    Publication Year: 2012, Page(s): 10 - 18

    To regulate energy consumption and enable Demand-Response programs, effective demand-side management at home is key and an integral part of the future Smart Grid. In essence, home energy management is a mix of a discrete appliance-scheduling problem with deadlines and a continuous Heating, Ventilation and Cooling (HVC) device-control problem. In this paper, we present near-optimal algorithm designs for energy management at home that are incentive-compatible with market-based Demand-Response programs under explicit user comfort constraints. Beyond theoretical analysis, we also show the effectiveness of our algorithms through simulation studies based on real energy pricing and consumption data from South Korea.

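The appliance-scheduling half of this problem has a simple greedy core: given hourly prices, run each deferrable appliance in the cheapest slots before its deadline. The sketch below illustrates that idea only; the prices, appliance names, and the assumption that appliances may run concurrently are all made up, and the paper's near-optimal algorithms additionally handle HVC control and comfort constraints.

```python
"""Toy appliance scheduler: place each appliance's required runtime into the
cheapest hourly price slots before its deadline. Assumes appliances may run
concurrently; all data below is hypothetical."""

# Hypothetical hourly prices (cents/kWh) for the next 12 hours.
prices = [21, 19, 14, 11, 10, 12, 18, 25, 27, 24, 17, 13]

# Each appliance needs `hours` slots, finished no later than slot `deadline`.
appliances = [
    {"name": "dishwasher", "hours": 2, "deadline": 8},
    {"name": "laundry", "hours": 3, "deadline": 11},
]

def schedule(appliances, prices):
    plan = {}
    for app in appliances:
        # Candidate slots are those up to the deadline; pick the cheapest ones.
        candidates = sorted(range(app["deadline"] + 1), key=lambda t: prices[t])
        plan[app["name"]] = sorted(candidates[: app["hours"]])
    return plan

print(schedule(appliances, prices))
# e.g. {'dishwasher': [3, 4], 'laundry': [3, 4, 5]}
```
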
  • TEStore: Exploiting thermal and energy storage to cut the electricity bill for datacenter cooling

    Publication Year: 2012, Page(s): 19 - 27
    Cited by: Papers (1)

    The electricity cost of cooling systems can account for 30% of the total electricity bill of operating a data center. While many prior studies have tried to reduce the cooling energy in data centers, they cannot effectively exploit the time-varying power prices in the power market to cut the electricity bill for data center cooling. This is in contrast to the fact that various thermal and energy storage techniques available in today's data centers, such as ice or water-based thermal tanks and UPS batteries, can be used to store energy when the power price is relatively low. The stored energy can then be used to cool the data center when the power price is high. In this paper, we design and evaluate TEStore, a cooling strategy that exploits thermal and energy storage techniques to cut the electricity bill for data center cooling without causing servers in a data center to overheat. The proposed TEStore system watches for low prices in the hour-ahead power market and precools the thermal masses in the data center, which can then absorb heat when the power price rises later. Meanwhile, TEStore also checks the energy level in UPS batteries and exploits it as a complementary method of shifting energy demand for data center cooling. On a longer time scale, TEStore is integrated with auxiliary thermal tanks, recently adopted by some data centers to store energy in the form of ice. We model the impacts of TEStore on server temperatures based on Computational Fluid Dynamics (CFD) to consider the realistic thermal dynamics in a data center with 1,120 servers. We then evaluate TEStore with workload traces from real-world data centers and power price traces from a real power market. Our results show that TEStore can achieve the desired cooling performance with a much lower electricity bill than the current practice.

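The core policy described above can be pictured as a price-threshold loop: bank cooling when the hour-ahead price is low, draw it down when the price is high. Below is a minimal sketch under assumed constants (thresholds, tank capacity, hourly cooling load); the real TEStore additionally respects server temperature limits derived from CFD modeling.

```python
"""Sketch of a price-threshold precooling policy in the spirit of TEStore.
All constants are illustrative assumptions, not values from the paper."""

def plan_cooling(prices, storage_kwh=0.0, capacity_kwh=500.0,
                 hourly_cooling_kwh=100.0, low=25.0, high=45.0):
    """prices: hour-ahead electricity prices ($/MWh). Returns per-hour actions."""
    actions = []
    for p in prices:
        if p <= low and storage_kwh < capacity_kwh:
            # Cheap power: run chillers harder and bank cooling in thermal tanks.
            charge = min(hourly_cooling_kwh, capacity_kwh - storage_kwh)
            storage_kwh += charge
            actions.append(("precool", charge))
        elif p >= high and storage_kwh >= hourly_cooling_kwh:
            # Expensive power: serve the cooling load from storage instead.
            storage_kwh -= hourly_cooling_kwh
            actions.append(("discharge", hourly_cooling_kwh))
        else:
            actions.append(("normal", 0.0))
    return actions

print(plan_cooling([20, 22, 30, 50, 55, 28]))
# [('precool', 100.0), ('precool', 100.0), ('normal', 0.0),
#  ('discharge', 100.0), ('discharge', 100.0), ('normal', 0.0)]
```
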
  • Network-aware impact determination algorithms for service workflow deployment in hybrid clouds

    Publication Year: 2012, Page(s): 28 - 36
    Cited by: Papers (4)

    In recent years, many service providers have started migrating their service offerings to cloud infrastructure. However, parts of a service workflow sometimes cannot be moved to cloud environments, whether due to client policies or because some services are tied to physical client-site devices. The result of the migration is then a hybrid cloud environment, where some services execute within the client network while most of the processing is moved to the cloud. Migration to the cloud enables a more flexible deployment of services, but it also increases the strain on the underlying networks, as most tasks are now partially handled in a remote cloud rather than solely in the local network. An important question providers must answer before deploying a new service workflow is whether they can provide it with sufficient quality of service, and whether the deployment will impact existing service workflows. In this paper we discuss strategies based on multi-commodity flow problems, a subset of graph flow problems, that can be used to determine whether new service workflows can be sufficiently provisioned and whether adding new workflows can negatively impact the performance of existing flows. We evaluate the proposed solution by comparing the performance of three approaches with respect to the number of successful workflows and their execution speed.

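A rough way to see how flow computations answer the admission question is a sequential heuristic: route each workflow's demand with a max-flow computation and subtract the reserved capacity before considering the next workflow. This sketch (using the networkx package, with a toy topology) is only a stand-in for the multi-commodity-flow strategies the paper evaluates.

```python
"""Sequential admission heuristic in the multi-commodity-flow spirit of the
paper: admit a workflow only if a max-flow computation shows its demand fits,
then reserve the used capacity. A simplification, not the authors' algorithms."""

import networkx as nx

def admit_workflows(edges, workflows):
    """edges: (u, v, capacity); workflows: (src, dst, demand). Returns decisions."""
    G = nx.DiGraph()
    G.add_weighted_edges_from(edges, weight="capacity")
    decisions = []
    for src, dst, demand in workflows:
        value, flow = nx.maximum_flow(G, src, dst, capacity="capacity")
        if value < demand:
            decisions.append((src, dst, False))  # would degrade or fail: reject
            continue
        # Reserve capacity along the flow, scaled down to the actual demand.
        scale = demand / value
        for u, nbrs in flow.items():
            for v, f in nbrs.items():
                G[u][v]["capacity"] -= f * scale
        decisions.append((src, dst, True))
    return decisions

edges = [("client", "gw", 10), ("gw", "cloud", 8), ("client", "cloud", 3)]
print(admit_workflows(edges, [("client", "cloud", 6), ("client", "cloud", 7)]))
# [('client', 'cloud', True), ('client', 'cloud', False)]
```
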
  • Efficient and secure data storage operations for mobile cloud computing

    Publication Year: 2012, Page(s): 37 - 45
    Cited by: Papers (10)

    In a mobile cloud computing system, lightweight wireless communication devices extend cloud services into the sensing domain. A common mobile cloud secure data service is querying data from sensing devices. The data may be requested by multiple requesters, which can quickly drain the power of the sensing devices, so an efficient data access control model is desirable. To this end, we present a comprehensive secure data query framework for mobile cloud computing. Our solution focuses on two research directions. First, we present a novel Privacy Preserving Cipher Policy Attribute-Based Encryption (PP-CP-ABE) scheme to protect sensing data; using PP-CP-ABE, lightweight devices can securely outsource heavy encryption and decryption operations to cloud service providers without revealing the data content. Second, we propose an Attribute-Based Data Storage (ABDS) system as a cryptographic group-based access control mechanism. Our performance assessments demonstrate the security strength and efficiency of the presented solution in terms of computation, communication, and storage.

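CP-ABE policies are boolean access structures over attributes: a ciphertext can be decrypted only by keys whose attributes satisfy the structure. The toy evaluator below illustrates just that policy-satisfaction step with a hypothetical policy; the actual PP-CP-ABE construction enforces this cryptographically while outsourcing the heavy operations to the cloud.

```python
"""Tiny evaluator for the kind of attribute-based access structure that
CP-ABE policies express (AND/OR trees over attributes). Illustrates policy
evaluation only; policy and attribute names are hypothetical."""

# A policy node is ("attr", name), ("and", p1, p2, ...) or ("or", p1, p2, ...).
policy = ("and", ("attr", "nurse"), ("or", ("attr", "ward-3"), ("attr", "on-call")))

def satisfies(policy, attrs):
    kind = policy[0]
    if kind == "attr":
        return policy[1] in attrs
    if kind == "and":
        return all(satisfies(p, attrs) for p in policy[1:])
    if kind == "or":
        return any(satisfies(p, attrs) for p in policy[1:])
    raise ValueError(f"unknown node type: {kind}")

print(satisfies(policy, {"nurse", "on-call"}))   # True: this key could decrypt
print(satisfies(policy, {"doctor", "ward-3"}))   # False: attributes don't match
```
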
  • Failure analysis of distributed scientific workflows executing in the cloud

    Publication Year: 2012, Page(s): 46 - 54

    This work presents models characterizing failures observed during the execution of large scientific applications on Amazon EC2. Scientific workflows are used as the underlying abstraction for representing applications. As scientific workflows scale to hundreds of thousands of distinct tasks, failures due to software and hardware faults become increasingly common. We study job failure models for data collected by our Stampede framework from four scientific applications. In particular, we show that a Naive Bayes classifier can accurately predict the failure probability of jobs. The models allow us to predict job failures for a given execution resource and then use these predictions for two higher-level goals: (1) to suggest a better job assignment, and (2) to provide quantitative feedback to workflow component developers about the robustness of their application codes.

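As a flavor of the prediction step, the sketch below fits a Naive Bayes classifier to toy (resource, job type) features and asks for per-resource failure probabilities, which could then drive job assignment. The features, data, and choice of scikit-learn's CategoricalNB are illustrative assumptions, not the authors' exact setup.

```python
"""Minimal sketch of predicting job-failure probability with Naive Bayes,
as the paper does. Toy data; requires scikit-learn."""

from sklearn.naive_bayes import CategoricalNB

# Toy training data: (resource id, job type) encoded as small integers,
# label 1 = job failed, 0 = job succeeded.
X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1], [0, 0], [1, 1]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = CategoricalNB().fit(X, y)

# Failure probability of a job of type 0 on each candidate resource;
# a scheduler could pick the resource with the lowest predicted failure rate.
for resource in (0, 1, 2):
    p_fail = model.predict_proba([[resource, 0]])[0][1]
    print(f"resource {resource}: P(failure) = {p_fail:.2f}")
```
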
  • Distributed oblivious load balancing using prioritized job replication

    Publication Year: 2012, Page(s): 55 - 63

    Load balancing in large distributed server systems is a complex optimization problem of critical importance in cloud systems and data centers. However, any full (i.e., optimal) solution incurs significant, often prohibitive, overhead due to the need to collect state-dependent information. We propose a novel scheme that incurs no communication overhead between the users and the servers upon job arrivals, thus removing any scheduling overhead from the job execution's critical path. Furthermore, our scheme is oblivious, that is, it does not use any state information. Our approach is based on creating, in addition to the regular job requests that are assigned to randomly chosen servers, replicas that are sent to different servers; these replicas are served at low priority, so that they do not add any real burden on the servers. Through analysis and simulations we show that the expected system performance improves by up to a factor of 2 (even under high load conditions) if job lengths are exponentially distributed, and by over a factor of 5 when job lengths adhere to heavy-tailed distributions. We implemented a load balancing system based on our approach and deployed it on the Amazon Elastic Compute Cloud (EC2). Realistic load tests on that system indicate that the actual performance is as predicted.

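The factor-of-2 claim for exponential job lengths has a quick sanity check: the minimum of two independent exponentials with mean m has mean m/2. The Monte Carlo below verifies just that; it deliberately ignores queueing and the low-priority mechanism, so it is a back-of-the-envelope illustration rather than a model of the full scheme.

```python
"""Monte Carlo illustration of why a replica helps: taking whichever of two
copies finishes first halves the expected completion time for exponential
job lengths. Ignores queueing and priorities; illustration only."""

import random

N, MEAN = 100_000, 1.0
single = [random.expovariate(1 / MEAN) for _ in range(N)]
replicated = [min(random.expovariate(1 / MEAN), random.expovariate(1 / MEAN))
              for _ in range(N)]

print(sum(single) / N)       # ~1.0: one copy
print(sum(replicated) / N)   # ~0.5: first of two copies to finish
```
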
  • K-sparse approximation for traffic histogram dimensionality reduction

    Publication Year: 2012, Page(s): 64 - 72

    Traffic histograms play a crucial role in various network management applications such as network traffic anomaly detection. However, traffic histogram-based analysis suffers from the curse of dimensionality. To tackle this problem, we propose a novel approach called K-sparse approximation, which can drastically reduce the dimensionality of a histogram while keeping the approximation error low. K-sparse approximation reorders the traffic histogram and uses the top-K coefficients of the reordered histogram to approximate the original. We find that, after reordering, the widely-used histograms of source port and destination port exhibit a power-law distribution, based on which we establish a relationship between K and the resulting approximation error. Using a set of traces collected from a European network and a regional network, we evaluate K-sparse approximation and compare it with a well-known entropy-based approach. We find that the power-law property holds across different traces and time intervals. In addition, our results show that K-sparse approximation has a property the entropy-based approach lacks: it explicitly exposes a tradeoff between compression level and approximation accuracy, making it easy to select a desired operating point between the two objectives.

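The approximation itself is a few lines: sort the histogram bins in decreasing order, keep the top-K coefficients, and zero the rest. The sketch below does this on synthetic heavy-tailed data (a stand-in for the port histograms the paper studies) and reports the relative error at several K.

```python
"""Sketch of K-sparse approximation: reorder the histogram, keep the top-K
coefficients, measure the error. Synthetic data stands in for port histograms."""

import numpy as np

rng = np.random.default_rng(0)
# Toy traffic histogram: a few heavy bins plus a long light tail, reordered
# (sorted in decreasing order) as the paper prescribes.
hist = np.sort(rng.zipf(1.5, size=1024).astype(float))[::-1]

def k_sparse(hist, k):
    approx = np.zeros_like(hist)
    approx[:k] = hist[:k]          # keep only the top-K reordered coefficients
    err = np.linalg.norm(hist - approx) / np.linalg.norm(hist)
    return approx, err

for k in (8, 32, 128):
    _, err = k_sparse(hist, k)
    print(f"K={k:4d}: relative L2 error = {err:.3f}")
```
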
  • SLA-aware resource over-commit in an IaaS cloud

    Publication Year: 2012, Page(s): 73 - 81
    Cited by: Papers (5)

    The cloud paradigm facilitates cost-efficient elastic computing, allowing workloads to scale on demand. As cloud size increases, the probability that all workloads simultaneously scale up to their maximum demand diminishes. This observation allows cloud resources to be multiplexed among multiple workloads, greatly improving resource utilization. The ability to host virtualized workloads such that the available physical capacity is smaller than the sum of the workloads' maximal demands is referred to as over-commit or over-subscription.

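The statistical observation behind over-commit is easy to reproduce: with many independent workloads, the chance that total demand exceeds a capacity well below the sum of peaks collapses quickly. The Monte Carlo below uses an assumed uniform demand model and a 70% over-commit ratio purely for illustration.

```python
"""Monte Carlo sketch of the over-commit observation: as more workloads share
a pool, overflowing a capacity below the sum of peaks becomes unlikely.
Demand model and numbers are illustrative assumptions."""

import random

def p_overflow(n_workloads, trials=20_000, peak=1.0, overcommit=0.7):
    """Each workload's demand is uniform in [0, peak]; capacity is
    overcommit * (sum of peaks). Returns the fraction of trials that overflow."""
    capacity = overcommit * n_workloads * peak
    hits = sum(
        sum(random.uniform(0, peak) for _ in range(n_workloads)) > capacity
        for _ in range(trials)
    )
    return hits / trials

for n in (2, 10, 50, 200):
    print(f"{n:4d} workloads: P(demand > 70% of peak sum) = {p_overflow(n):.4f}")
```
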
  • Hidden anomaly detection in telecommunication networks

    Publication Year: 2012, Page(s): 82 - 90
    Cited by: Papers (4)

    Nowadays, one of the challenges of telecommunication systems management is to detect, in real time, unexpected or hidden malfunctions in extremely complex environments. In this article, we present an online algorithm that analyzes a flow of messages. More precisely, it is able to highlight hidden abnormal behaviors that existing network management methods would not detect. Our algorithm uses the notion of constraint curves, introduced in Network Calculus theory, to define successive time windows that bound the flow.

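A constraint (arrival) curve alpha(t) = b + r*t bounds how many messages a conforming flow may carry in any window of length t, so a violation in some window flags an abnormal burst. The quadratic-time checker below illustrates the idea on toy timestamps with assumed burst and rate parameters; the paper's algorithm performs this kind of check online.

```python
"""Sketch of checking a message flow against a Network Calculus constraint
curve alpha(t) = burst + rate * t. Parameters and timestamps are toy values;
the paper's algorithm works online over successive windows."""

def violations(timestamps, burst=5, rate=2.0):
    """timestamps: sorted message arrival times. Returns offending windows."""
    bad = []
    for i, start in enumerate(timestamps):
        for j in range(i, len(timestamps)):
            width = timestamps[j] - start
            count = j - i + 1
            if count > burst + rate * width:   # more messages than alpha allows
                bad.append((start, timestamps[j], count))
    return bad

# A burst of 8 messages within ~0.3 s exceeds alpha(0.3) = 5 + 2 * 0.3 = 5.6.
msgs = [0.0, 0.05, 0.1, 0.12, 0.15, 0.2, 0.25, 0.3, 2.0, 3.5]
print(violations(msgs))
```
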
  • Fine-grain diagnosis of overlay performance anomalies using end-point network experiences

    Publication Year: 2012, Page(s): 91 - 99

    Overlay networks were proposed to improve Internet reliability and facilitate rapid deployment of new services. Non-invasive diagnosis of performance problems is a key capability for overlay service management, needed to adapt to dynamic network conditions in a timely manner. Existing overlay diagnosis approaches assume extensive knowledge about the network and require monitoring sensors or active measurements. In this paper, we propose a novel diagnosis technique to localize performance anomalies and determine the packet loss contribution of each network component. Our approach is based purely on end-point packet loss observations, reasoning about the location of observed packet loss without active probing or sensor deployment. We formulate the problem as a constraint-satisfaction problem using constraints derived from network loss invariants and end-user observations. Our solution also guards against insufficient or malicious end-user participation. We evaluate our approach extensively using simulation and experimentation, and demonstrate its accuracy, effectiveness and scalability for various network sizes, participation levels and amounts of spurious observations.

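For small loss rates, a path's loss is approximately the sum of the losses of its links, so end-point observations plus a routing matrix constrain per-link loss. The sketch below solves a tiny instance with non-negative least squares (scipy) as a simplified stand-in for the paper's constraint-satisfaction formulation.

```python
"""Sketch of attributing observed path losses to individual links from
end-point observations only, via non-negative least squares. NNLS is a
simplified stand-in for the paper's constraint-satisfaction approach; the
topology and measurements are made up."""

import numpy as np
from scipy.optimize import nnls

links = ["a-b", "b-c", "b-d"]
# Rows: overlay paths, as link indicator vectors.
A = np.array([
    [1, 1, 0],   # path 1 uses a-b, b-c
    [1, 0, 1],   # path 2 uses a-b, b-d
    [0, 1, 1],   # path 3 uses b-c, b-d
], dtype=float)
observed_path_loss = np.array([0.031, 0.002, 0.030])  # end-point measurements

link_loss, residual = nnls(A, observed_path_loss)
for name, loss in zip(links, link_loss):
    print(f"{name}: estimated loss rate {loss:.4f}")
# b-c stands out as the lossy link (~0.03).
```
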
  • Decentralized detection of SLA violations using P2P technology

    Publication Year: 2012, Page(s): 100 - 107
    Cited by: Papers (4)

    Critical networked services enable significant revenue for network operators and, in turn, are regulated by Service Level Agreements (SLAs). In order to ensure SLAs are being met, service levels need to be monitored. One technique for this involves active measurements, such as IPSLA. However, active measurements are expensive in terms of CPU consumption on network devices. As a result, active measurements usually cover only a fraction of what could be measured, which can lead to SLA violations being missed. Defining which subsets of service paths to measure and configuring the corresponding measurement probes does not scale well, and it results in fairly static measurement setups that do not adapt well to shifting traffic patterns. We propose a solution to increase the detection rate of SLA violations in which devices in a network autonomously and dynamically determine how to place probes in order to detect service level violations. It requires no human intervention, adapts to changes in network conditions, is resilient to networking faults, and is independent of the underlying active measurement technology. Our solution is based on peer-to-peer principles and is characterized by a high degree of decentralized decision making across a network using a self-organizing overlay. Our experiments show that increasing the information used in probe placement decisions decreases the number of missed SLA violations.

  • Planning in the large: Efficient generation of IT change plans on large infrastructures

    Publication Year: 2012, Page(s): 108 - 116
    Cited by: Papers (1)

    Change Management, a core process of the Information Technology Infrastructure Library (ITIL), is concerned with managing changes to networks and services so as to minimize costly disruptions to the business. As part of Change Management, IT changes need to be planned. Previous approaches to automatically generating IT change plans struggle, in terms of scalability, to deal with large Configuration Management Databases (CMDBs). To enable IT change planning in the large, in this paper we discuss and analyze optimizations for refinement-based IT change planning over object-oriented CMDBs. Our optimizations reduce the runtime complexity of several key operations in refinement-based IT change planning algorithms. A sensitivity analysis shows that our optimizations outperform SHOP2 - the winner of a previous comparison among IT change planners - in terms of runtime complexity for several important characteristics of IT changes and CMDBs. A cloud deployment case study of a three-tier application and a virtual network configuration case study demonstrate the feasibility of our approach and confirm the results of the sensitivity analysis: IT change planning has evolved from planning in the small to planning in the large.

  • Predicting response times for the Spotify backend

    Publication Year: 2012, Page(s): 117 - 125
    Cited by: Papers (2)

    We model and evaluate the performance of a distributed key-value storage system that is part of the Spotify backend. Spotify is an on-demand music streaming service, offering low-latency access to a library of over 16 million tracks and currently serving over 10 million users. We first present a simplified model of the Spotify storage architecture in order to make its analysis feasible. We then introduce an analytical model for the distribution of the response time, a key metric in the Spotify service. We parameterize and validate the model using measurements from two different testbed configurations and from the operational Spotify infrastructure. We find that the model is accurate (measurements are within 11% of predictions) within the range of normal load patterns. We apply the model to what-if scenarios that are essential to capacity planning and robustness engineering. The main difference between our work and related research in storage system performance is that our model provides distributions of key system metrics, while related research generally gives only expectations, which is not sufficient in our case.

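To see why distributions matter more than expectations here, consider a textbook M/M/1 response-time model, where P(T <= t) = 1 - e^(-(mu - lambda) t): tail percentiles can sit far above the mean. The numbers and the M/M/1 assumption below are illustrative only; the paper's model of the Spotify backend is more detailed.

```python
"""Why a distribution beats an expectation: M/M/1 response-time percentiles
follow directly from P(T <= t) = 1 - exp(-(mu - lambda) t). Textbook model
with made-up rates, used only to illustrate the kind of output that matters."""

import math

def response_time_percentile(lam, mu, q):
    """q-th percentile of M/M/1 response time with arrival rate lam (req/s)
    and service rate mu (req/s); requires lam < mu."""
    return -math.log(1 - q) / (mu - lam)

lam, mu = 800.0, 1000.0   # illustrative load on one storage node
print(f"mean response time: {1 / (mu - lam) * 1e3:.1f} ms")
for q in (0.5, 0.95, 0.99):
    t = response_time_percentile(lam, mu, q)
    print(f"{int(q * 100)}th percentile: {t * 1e3:.1f} ms")
# The 99th percentile (~23 ms) sits far above the 5 ms mean: an
# expectation-only model would hide this tail.
```
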
  • An autonomous topology management framework for QoS enabled P2P video streaming systems

    Publication Year: 2012, Page(s): 126 - 134
    Cited by: Papers (2)

    Streaming live video over the Internet presents great challenges due to its sheer bandwidth requirements. The client/server model suffers from scalability issues and a high deployment cost for this service. The Peer-to-Peer (P2P) approach provides an excellent alternative due to its potential scalability and ease of deployment. Nonetheless, a major limitation of the P2P approach lies in its high dependency on users. Since content is relayed by peers, which are themselves controlled by users, user behavior has a major impact on the streaming quality users perceive. Indeed, unlike dedicated servers, peers join the system intermittently, which poses great challenges in providing QoS for operated live streaming services. In this paper, we propose an autonomous topology management framework for P2P live streaming architectures that minimizes the impact of peers' frequent departures. It consists of a stabilization strategy for push-based systems that moves unstable peers towards the outskirts of the topology. To validate our approach, we performed experiments on PlanetLab, which show a significant improvement in global service quality compared to an existing system.

  • Path inference in data center networks

    Publication Year: 2012, Page(s): 135 - 139

    This paper presents Chartis, a path inference system for data center networks. Chartis takes router configurations and topology data as input to emulate hop-by-hop path calculation, overcoming the limitations of existing path tracing tools such as traceroute. We verify that the paths inferred by Chartis match the set of devices actual packets traverse, and we demonstrate how Chartis can be used to perform network management tasks such as service troubleshooting.

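Hop-by-hop emulation of this kind boils down to repeatedly applying longest-prefix matching at each router until the destination's subnet is reached. The sketch below walks hypothetical forwarding tables with Python's ipaddress module; Chartis itself derives the tables from real router configurations.

```python
"""Minimal hop-by-hop path emulation in the spirit of Chartis: walk per-router
longest-prefix-match tables from source to destination. Tables and topology
are toy assumptions; the real system parses router configurations."""

import ipaddress

# Per-router forwarding tables: prefix -> next hop ("deliver" = local subnet).
tables = {
    "r1": {"10.0.0.0/8": "r2", "10.1.0.0/16": "r3"},
    "r2": {"10.0.0.0/8": "deliver"},
    "r3": {"10.1.0.0/16": "deliver"},
}

def next_hop(router, dst):
    """Longest-prefix match in one router's table."""
    addr = ipaddress.ip_address(dst)
    matches = [ipaddress.ip_network(p) for p in tables[router]
               if addr in ipaddress.ip_network(p)]
    if not matches:
        return None
    best = max(matches, key=lambda n: n.prefixlen)
    return tables[router][str(best)]

def infer_path(start, dst, max_hops=16):
    path, router = [start], start
    for _ in range(max_hops):          # guard against forwarding loops
        hop = next_hop(router, dst)
        if hop in (None, "deliver"):
            return path
        path.append(hop)
        router = hop
    return path

print(infer_path("r1", "10.1.2.3"))   # ['r1', 'r3']: the /16 beats the /8
```
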
  • Adaptive data management for self-protecting objects in cloud computing systems

    Publication Year: 2012, Page(s): 140 - 144

    While Cloud data services are a growing and successful business and computing paradigm, data privacy and security are major concerns. One critical problem is to ensure that data owners' policies are honored, regardless of where the data is physically stored and how often it is accessed and modified. This scenario imposes an important requirement: data should be managed in accordance with owners' preferences, Cloud providers' service agreements, and any local regulations that may apply. In this work we propose innovative policy enforcement techniques for adaptive sharing of users' outsourced data. We introduce the notion of autonomous security-aware objects, which, by means of object-oriented programming techniques, encapsulate sensitive resources and assure their protection. Our evaluation demonstrates that our approach is effective.

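The object-oriented idea is that the sensitive payload is private to the object and every access is mediated by an owner-supplied policy check. The toy class below (with made-up role and region attributes) illustrates that encapsulation plus auditing; the paper's objects go further and adapt their policies to where they are hosted.

```python
"""Toy self-protecting object: the payload is private and every read goes
through a policy check and is audited. Illustrates the encapsulation idea
only; attribute names and the policy are hypothetical."""

from datetime import datetime, timezone

class ProtectedRecord:
    def __init__(self, payload, allowed_roles, region):
        self._payload = payload          # never exposed directly
        self._allowed_roles = set(allowed_roles)
        self._region = region            # where the owner permits processing
        self._audit = []

    def read(self, requester_role, requester_region):
        allowed = (requester_role in self._allowed_roles
                   and requester_region == self._region)
        self._audit.append((datetime.now(timezone.utc), requester_role, allowed))
        if not allowed:
            raise PermissionError("owner policy forbids this access")
        return self._payload

rec = ProtectedRecord("ssn=123-45-6789", {"auditor"}, region="EU")
print(rec.read("auditor", "EU"))         # allowed by the owner's policy
try:
    rec.read("analytics", "US")          # denied: wrong role and region
except PermissionError as e:
    print("denied:", e)
```
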
  • ACRA: A unified admission control and resource allocation framework for virtualized environments

    Publication Year: 2012, Page(s): 145 - 149
    Cited by: Papers (2)

    Exploiting the benefits of virtualization, web services are consolidated in large data centers. Managing the performance of such complex systems is a critical problem: providers must offer applications high quality of service (QoS) and performance while simultaneously achieving optimal utilization of their infrastructure. Meeting Service Level Objectives (SLOs), such as response time, in a dynamic environment (dense load, variable capacity) while minimizing the energy consumption of the data center is an open research problem. Most of the proposed approaches use either admission control or resource allocation techniques to solve it. We present a unified framework that models the system's dynamic behavior with a group of state-space models, scales between different desired operating points, and uses a set-theoretic control technique to solve the admission control and resource allocation problems as a common decision problem, with stability and robustness guarantees for the system under study.

  • Load balancer discovery for the data center: A holistic approach

    Publication Year: 2012, Page(s): 150 - 154

    Data center migration relies heavily upon the ability to detect critical IT components in the original environment. Not all components are easy to detect using traditional discovery tools. For instance, load balancers are designed to be transparent to network traffic, making them invisible to normal scans. Also, the discovery task must be minimally invasive, since the IT environment to be migrated is usually in a production state until the relocation is complete. Studying the configuration of discovered host machines can reveal the presence of the network appliances they use, allowing a “short list” of probable appliance locations to be made, even if such appliances do not themselves respond to traditional network scans. This paper describes a collection of heuristics for detecting the presence of network devices in data centers, with a specific focus on detecting load balancers. Our approach exploits (1) host-based network data, which is often collected during data center migrations, and (2) knowledge of how load balancer appliances and managed servers are conventionally configured in an enterprise network. The effectiveness of our techniques is evaluated using data from an IBM data center migration project.

  • A new approach to the design of flexible cloud management platforms

    Publication Year: 2012, Page(s): 155 - 158
    Cited by: Papers (2)

    Current cloud management platforms have been designed to deal mainly with computing and storage resources. Networking, on the other hand, is often reduced to ensuring basic connectivity between virtual machines. This means that advanced requirements, such as delay and bandwidth guarantees or handing network control to the customer, are not supported in today's platforms. Another important shortcoming is that resource management strategies are mostly implemented in the core of the platforms, leaving little or no room for customization by the operator or the customer. Therefore, in this paper we present the building blocks of a new conceptual architecture for a cloud platform that aims to add advanced yet robust network configuration support and more flexibility at the core of the platform, to better fit the needs of each cloud environment.

  • Decision model for provisioning virtual resources in Amazon EC2

    Publication Year: 2012, Page(s): 159 - 163
    Cited by: Papers (1)

    Nowadays, computing resources can be acquired from IaaS cloud providers under different purchasing options. Taking Amazon Elastic Compute Cloud (EC2) as an example, there are three purchasing models, each with a different price and a different benefit to clients. The issue we address in this paper is how cloud users can make a provisioning plan for computing resources. We propose a model based on the characteristics of the three purchasing options provided by Amazon EC2, which can be used to guide capacity planning.

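The essence of such a decision model is a break-even computation: a reserved instance's upfront fee pays off only above a certain expected utilization. The sketch below compares two purchasing options under invented prices; the paper's model covers all three EC2 options, including spot instances.

```python
"""Back-of-the-envelope comparison of two EC2-style purchasing options:
reserved capacity (upfront fee plus a lower hourly rate) versus on-demand.
Prices are made up; the paper's decision model is more complete."""

HOURS_PER_YEAR = 8760

def yearly_cost_on_demand(hours_used, hourly=0.10):
    return hours_used * hourly

def yearly_cost_reserved(hours_used, upfront=280.0, hourly=0.045):
    return upfront + hours_used * hourly

for utilization in (0.2, 0.4, 0.6, 0.8):
    h = utilization * HOURS_PER_YEAR
    od, rs = yearly_cost_on_demand(h), yearly_cost_reserved(h)
    better = "reserved" if rs < od else "on-demand"
    print(f"{utilization:.0%} utilization: on-demand ${od:.0f}, "
          f"reserved ${rs:.0f} -> choose {better}")
# Break-even: 280 / (0.10 - 0.045) ~= 5090 hours (~58% utilization).
```
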
  • MobiCloud: A geo-distributed mobile cloud computing platform

    Publication Year: 2012, Page(s): 164 - 168
    Cited by: Papers (2)

    In a cloud computing environment, users prefer to migrate their local processing workloads onto the cloud, where more resources with better performance can be expected. ProtoGENI [1] and PlanetLab [17] have further improved Internet-based resource outsourcing by allowing end users to construct a virtual network system through virtualization and programmable networking technologies. However, to the best of our knowledge, there is no general service or resource provisioning platform designed for mobile devices. In this paper, we present a new design and implementation of MobiCloud, a geo-distributed mobile cloud computing platform. We discuss the system components, infrastructure, management, implementation flow, and service scenarios, followed by an example of how to use the MobiCloud system.

  • CloudDT: Efficient tape resource management using deduplication in cloud backup and archival services

    Publication Year: 2012, Page(s): 169 - 173

    Cloud-based backup and archival services today use large tape libraries as a cost-effective cold tier in their online storage hierarchy. These services leverage deduplication to reduce the disk storage capacity required by their customers' data sets, but they usually re-duplicate the data when moving it from disk to tape.

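Deduplication at this layer rests on content hashing: identical chunks are stored once and files become lists of chunk references. The sketch below uses fixed-size chunks for brevity, whereas production backup systems typically use content-defined chunking; it only illustrates the mechanism CloudDT seeks to preserve when data moves to tape.

```python
"""Minimal content-hash deduplication sketch: fixed-size chunks, each unique
chunk stored once, files represented as lists of chunk hashes. A
simplification of what production backup deduplication does."""

import hashlib

CHUNK = 4096
store = {}                      # chunk hash -> chunk bytes (stored once)

def dedup_write(data):
    """Returns the recipe (list of hashes) needed to rebuild `data`."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)      # only new content consumes space
        recipe.append(h)
    return recipe

def dedup_read(recipe):
    return b"".join(store[h] for h in recipe)

backup1 = b"A" * 8192 + b"B" * 4096
backup2 = b"A" * 8192 + b"C" * 4096    # shares two chunks with backup1
r1, r2 = dedup_write(backup1), dedup_write(backup2)
print(len(store), "unique chunks for", len(r1) + len(r2), "chunk references")
assert dedup_read(r1) == backup1 and dedup_read(r2) == backup2
```
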
  • VMPatrol: Dynamic and automated QoS for virtual machine migrations

    Publication Year: 2012, Page(s): 174 - 178
    Cited by: Papers (4)

    As more and more data centers embrace end-host virtualization and virtual machine (VM) mobility becomes commonplace, we explore its implications for data center networks. Live VM migrations are considered expensive operations because of the additional network traffic they generate, which can impact the network performance of other applications, and because of the downtime that applications running on a migrating VM may experience. Most virtualization vendors currently recommend a separate network for VM mobility. However, setting up an alternate network just for VM migrations can be extremely costly and thus presents a barrier to seamless VM mobility. It is therefore apparent that VM migrations should be orchestrated in a network-aware manner with appropriate QoS controls, such that they do not degrade the network performance of other flows while still being allocated the bandwidth they require for successful completion within the specified timelines. In this context, we present VMPatrol, a QoS framework for VM migrations. VMPatrol uses a cost-of-migration model to allocate the minimal bandwidth for a migration flow such that it completes within the specified time limit while causing minimal interference to other flows in the network. Our implementation and experimental evaluation of VMPatrol on real and virtual software testbeds demonstrate that automated bandwidth reservation can reduce the impact of migrations on other flows to a negligible level.

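A cost-of-migration model of the kind described can be sketched from the pre-copy migration identity: memory dirtied during transfer must be re-sent, so migration time is roughly vm_mem / (bandwidth - dirty_rate), which inverts to a minimal bandwidth for a given deadline. This simple form is an assumption for illustration; the paper's model may differ in detail.

```python
"""Sketch of a cost-of-migration model in the spirit of VMPatrol: solve the
pre-copy relation time ~= vm_mem / (bw - dirty_rate) for the smallest
bandwidth that meets a deadline. Numbers are illustrative."""

def min_bandwidth(vm_mem_gb, dirty_rate_gb_s, deadline_s):
    """Smallest reservation (GB/s) finishing a pre-copy migration in time:
    vm_mem / (bw - dirty_rate) <= deadline  =>  bw >= vm_mem/deadline + dirty_rate."""
    return vm_mem_gb / deadline_s + dirty_rate_gb_s

# A 16 GB VM dirtying 0.05 GB/s of memory, to be migrated within 2 minutes:
bw = min_bandwidth(vm_mem_gb=16, dirty_rate_gb_s=0.05, deadline_s=120)
print(f"reserve at least {bw * 8:.2f} Gbit/s")   # ~1.47 Gbit/s
```
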