IEEE Transactions on Network and Service Management

Issue 1 • March 2014

  • Table of contents

    Page(s): c1
    PDF (48 KB)
    Freely Available from IEEE
  • Guest Editors' Introduction: Special Issue on Management of Cloud Services

    Page(s): 1 - 2
    PDF (108 KB)
    Freely Available from IEEE
  • Integrated Resiliency Planning in Storage Clouds

    Page(s): 3 - 14
    PDF (340 KB) | HTML

    Storage clouds use economies of scale to host data for diverse enterprises. However, enterprises differ in their requirements for their data. In this work, we investigate the problem of resiliency or disaster recovery (DR) planning in a storage cloud. Resiliency requirements vary greatly between different enterprises and also between different datasets of the same enterprise. In this paper we present Resilient Storage Cloud Map (RSCMap), a generic cost-minimizing optimization framework for disaster recovery planning, where the cost function may be tailored to meet diverse objectives. We present fast algorithms that produce a minimum-cost DR plan while meeting all the DR requirements associated with all the datasets hosted on the storage cloud. Our algorithms have strong theoretical properties: a 2-factor approximation for bandwidth minimization and a fixed-parameter constant approximation for the general cost minimization problem. We perform a comprehensive experimental evaluation of RSCMap using models for a wide variety of replication solutions and show that RSCMap outperforms existing resiliency planning approaches.

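    The planning problem described in this abstract can be pictured with a toy model. The sketch below is a minimal, hypothetical illustration, not the paper's approximation algorithm: each dataset is greedily assigned the cheapest replication solution that satisfies its recovery point/time objectives (RPO/RTO). All class names and fields are invented for the example.

        from dataclasses import dataclass

        @dataclass
        class Solution:                 # a candidate replication technology
            name: str
            rpo_minutes: float          # worst-case data-loss window
            rto_minutes: float          # worst-case recovery time
            cost_per_gb: float          # e.g., bandwidth plus storage cost

        @dataclass
        class Dataset:
            name: str
            size_gb: float
            max_rpo_minutes: float      # DR requirement
            max_rto_minutes: float      # DR requirement

        def plan(datasets, solutions):
            """Greedy baseline: cheapest feasible solution per dataset."""
            assignment = {}
            for ds in datasets:
                feasible = [s for s in solutions
                            if s.rpo_minutes <= ds.max_rpo_minutes
                            and s.rto_minutes <= ds.max_rto_minutes]
                if not feasible:
                    raise ValueError(f"no solution satisfies {ds.name}")
                assignment[ds.name] = min(feasible,
                                          key=lambda s: s.cost_per_gb * ds.size_gb)
            return assignment
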
  • Learning Automata-Based QoS Framework for Cloud IaaS

    Page(s): 15 - 24
    PDF (1397 KB) | HTML

    This paper presents a Learning Automata (LA)-based QoS (LAQ) framework capable of addressing some of the challenges and demands of various cloud applications. The proposed LAQ framework ensures that computing resources are used efficiently and are neither over- nor under-utilized by consumer applications. Service provisioning can only be guaranteed by continuously monitoring resources and quantifying various QoS metrics, so that services can be delivered on demand with certain levels of guarantee. The proposed framework helps ensure guarantees on these metrics in order to provide QoS-enabled cloud services. The performance of the proposed system is evaluated with and without LA, and it is shown that the LA-based solution improves system performance in terms of response time and speedup.

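    For readers unfamiliar with learning automata, the sketch below shows the classic linear reward-inaction (L_RI) update that LA-based schemes build on. The abstract does not specify LAQ's exact scheme, so the action set and reward condition here are illustrative only.

        import random

        class LearningAutomaton:
            """Linear reward-inaction (L_RI) automaton over a finite action set."""

            def __init__(self, actions, learning_rate=0.1):
                self.actions = actions
                self.a = learning_rate
                self.p = [1.0 / len(actions)] * len(actions)   # action probabilities

            def choose(self):
                return random.choices(range(len(self.actions)), weights=self.p)[0]

            def reward(self, i):
                # Reinforce action i; on a penalty, L_RI leaves p unchanged.
                for j in range(len(self.p)):
                    if j == i:
                        self.p[j] += self.a * (1.0 - self.p[j])
                    else:
                        self.p[j] -= self.a * self.p[j]

        # Hypothetical use: pick a VM size for a request, reward the choice
        # whenever the observed response time stays within the QoS target.
        la = LearningAutomaton(["small", "medium", "large"])
        chosen = la.choose()
        response_time_ok = True          # stand-in for a real measurement
        if response_time_ok:
            la.reward(chosen)
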
  • Consistency as a Service: Auditing Cloud Consistency

    Page(s): 25 - 35
    PDF (504 KB) | HTML

    Cloud storage services have become commercially popular due to their overwhelming advantages. To provide ubiquitous always-on access, a cloud service provider (CSP) maintains multiple replicas for each piece of data on geographically distributed servers. A key problem of using the replication technique in clouds is that it is very expensive to achieve strong consistency on a worldwide scale. In this paper, we first present a novel consistency as a service (CaaS) model, which consists of a large data cloud and multiple small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency or not. We propose a two-level auditing architecture, which only requires a loosely synchronized clock in the audit cloud. Then, we design algorithms to quantify the severity of violations with two metrics: the commonality of violations, and the staleness of the value of a read. Finally, we devise a heuristic auditing strategy (HAS) to reveal as many violations as possible. Extensive experiments were performed using a combination of simulations and real cloud deployments to validate HAS.

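    As a rough illustration of the "staleness of the value of a read" metric mentioned in the abstract, the snippet below measures, from a log of loosely timestamped operations, how far the value returned by a read lagged behind the newest write to the same key. The record layout and the exact definition are assumptions for the example, not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class Write:
            key: str
            value: str
            ts: float        # loosely synchronized timestamp (seconds)

        @dataclass
        class Read:
            key: str
            value: str       # value actually returned by the data cloud
            ts: float

        def staleness(read, writes):
            """Seconds by which the returned value lagged the freshest write."""
            earlier = [w for w in writes if w.key == read.key and w.ts <= read.ts]
            if not earlier:
                return 0.0
            newest = max(earlier, key=lambda w: w.ts)
            if newest.value == read.value:
                return 0.0                # read saw the freshest value
            returned = [w for w in earlier if w.value == read.value]
            if not returned:
                return 0.0                # value not in log: a different kind of violation
            return newest.ts - max(w.ts for w in returned)
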
  • Multi-Granularity Memory Mirroring via Binary Translation in Cloud Environments

    Page(s): 36 - 45
    PDF (537 KB) | HTML

    As DRAM capacities grow in clusters, memory errors become increasingly common. Current memory availability strategies mostly focus on memory backup and error recovery. Hardware solutions such as mirrored memory need costly peripheral equipment, while existing software approaches reduce the expense but are limited by high overhead in practical use. Moreover, in cloud environments, containers such as LXC can now be used as process- and application-level virtualization to run multiple isolated systems on a single host. In this paper, we present a novel system called Memvisor that provides high-availability memory mirroring. It is a software approach achieving flexible multi-granularity memory mirroring based on virtualization and binary translation. Memory areas can be flexibly set to be mirrored or not, from the process level up to all user-mode applications. All memory write instructions are then duplicated, and data written to memory are synchronized to a backup space at the instruction level. If a memory failure happens, Memvisor recovers the data from the backup space. Compared with traditional software approaches, instruction-level synchronization lowers the probability of data loss and reduces the backup overhead. The results show that Memvisor outperforms state-of-the-art software approaches even in the worst case.

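    The core idea in the abstract, duplicating every memory write into a backup region so a failed read can be recovered, can be modelled in a few lines. Memvisor does this by rewriting native store instructions via binary translation; the Python below only mimics the resulting data flow and is not the system's code.

        class MirroredMemory:
            """Toy model of instruction-level write mirroring."""

            def __init__(self):
                self.primary = {}    # normal memory
                self.backup = {}     # mirror region kept in sync per write

            def store(self, addr, value):
                # What a translated store effectively does: the original
                # write plus a synchronous copy into the mirror.
                self.primary[addr] = value
                self.backup[addr] = value

            def load(self, addr):
                try:
                    return self.primary[addr]     # normal path
                except KeyError:                  # stand-in for a memory error
                    value = self.backup[addr]     # recover from the mirror
                    self.primary[addr] = value
                    return value
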
  • TransCom: A Virtual Disk-Based Cloud Computing Platform for Heterogeneous Services

    Page(s): 46 - 59
    PDF (2041 KB) | HTML

    This paper presents the design, implementation, and evaluation of TransCom, a virtual disk (Vdisk) based cloud computing platform that supports heterogeneous operating system (OS) and application services in enterprise environments. In TransCom, clients store all data and software, including the OS and application software, on Vdisks that correspond to disk images located on centralized servers, while computing tasks are carried out by the clients. Users can boot any client into the desired OS, including Windows, and access software and data services from Vdisks as usual, without having to deal with tasks such as installation, maintenance, and management. By centralizing storage yet distributing computing tasks, TransCom can greatly reduce potential system maintenance and management costs. We have implemented a multi-platform TransCom prototype that supports both Windows and Linux services. Extensive evaluation based on both test-bed and real-usage experiments demonstrates that TransCom is a feasible, scalable, and efficient solution for real-world use.

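    At its simplest, a Vdisk comes down to block-granular reads and writes against a disk image kept on the central server. The helpers below are an illustrative stand-in for that primitive; TransCom's actual on-the-wire protocol, block size, and image layout are not described in the abstract.

        BLOCK_SIZE = 4096    # assumed block size for the example

        def read_block(image_path, block_no):
            """Return one block of a disk image stored on the Vdisk server."""
            with open(image_path, "rb") as img:
                img.seek(block_no * BLOCK_SIZE)
                return img.read(BLOCK_SIZE)

        def write_block(image_path, block_no, data):
            """Write one BLOCK_SIZE-sized block back into the image."""
            with open(image_path, "r+b") as img:
                img.seek(block_no * BLOCK_SIZE)
                img.write(data)
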
  • Security as a Service Model for Cloud Environment

    Page(s): 60 - 75
    PDF (1417 KB) | HTML

    Cloud computing is becoming increasingly important for the provision of services and the storage of data on the Internet. However, there are several significant challenges in securing cloud infrastructures against different types of attacks. The focus of this paper is on the security services that a cloud provider can offer as part of its infrastructure to its customers (tenants) to counteract these attacks. Our main contribution is a security architecture that provides a flexible security as a service model that a cloud provider can offer to its tenants and to the customers of its tenants. While offering baseline security that protects the provider's own cloud infrastructure, the model also gives tenants the flexibility to add security functionalities that suit their requirements. The paper describes the design of the security architecture and discusses how different types of attacks are counteracted by the proposed architecture. We have implemented the security architecture, and the paper discusses analysis and performance evaluation results.

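    One way to picture the tenant-facing side of such a model is a policy that always includes the provider's baseline protections and lets each tenant opt into extras. The service names below are invented for illustration; the paper's actual catalogue of security functions is not reproduced here.

        BASELINE = {"hypervisor_hardening", "tenant_network_isolation"}
        ADDONS = {"intrusion_detection", "deep_packet_inspection",
                  "vm_introspection", "log_auditing"}

        def effective_services(requested_addons):
            """Baseline security plus the add-ons a tenant has selected."""
            unknown = set(requested_addons) - ADDONS
            if unknown:
                raise ValueError(f"unsupported security services: {unknown}")
            return BASELINE | set(requested_addons)

        # e.g. effective_services(["intrusion_detection", "log_auditing"])
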
  • Information Flow Control for Secure Cloud Computing

    Page(s): 76 - 89
    PDF (369 KB) | HTML

    Security concerns are widely seen as an obstacle to the adoption of cloud computing solutions. Information Flow Control (IFC) is a well-understood Mandatory Access Control methodology. The earliest IFC models targeted security in a centralised environment, but decentralised forms of IFC have been designed and implemented, often within academic research projects. As a result, there is potential for decentralised IFC to achieve better cloud security than is available today. In this paper we describe the properties of cloud computing, Platform-as-a-Service clouds in particular, and review a range of IFC models and implementations to identify opportunities for using IFC within a cloud computing context. Because IFC security is linked to the data it protects, tenants and providers of cloud services can agree on security policy in a manner that does not require them to understand, or rely on, the particulars of the cloud software stack in order to effect enforcement.

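    As background for readers new to IFC, the snippet below shows the canonical label check that decentralized IFC systems enforce: data and processes carry secrecy and integrity tags, and a flow is allowed only if it neither leaks secrets nor launders integrity. This is textbook IFC, not code from the paper, and the tag names are invented.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Label:
            secrecy: frozenset = frozenset()
            integrity: frozenset = frozenset()

        def can_flow(src: Label, dst: Label) -> bool:
            """Allow src -> dst iff dst can keep src's secrets and src meets dst's integrity needs."""
            return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

        # Example: data tagged for tenant42 may flow into a process whose
        # secrecy label also contains the tenant42 tag, but not elsewhere.
        data = Label(secrecy=frozenset({"tenant42"}))
        proc = Label(secrecy=frozenset({"tenant42", "billing"}))
        assert can_flow(data, proc)
        assert not can_flow(data, Label())
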
  • Proactive Workload Management in Hybrid Cloud Computing

    Page(s): 90 - 100
    PDF (562 KB) | HTML

    The hindrances to the adoption of public cloud computing services include service reliability, data security and privacy, regulatory compliance requirements, and so on. To address these concerns, we propose a hybrid cloud computing model that users may adopt as a viable and cost-saving methodology to make the best use of public cloud services alongside their privately owned (legacy) data centers. As the core of this hybrid cloud computing model, an intelligent workload factoring service is designed for proactive workload management. It enables federation between on- and off-premise infrastructures for hosting Internet-based applications, and its intelligence lies in the explicit segregation of base workload and flash crowd workload, the two naturally different components of the application workload. The core technology of the intelligent workload factoring service is a fast frequent data item detection algorithm, which enables factoring incoming requests not only by volume but also by data content as application data popularity changes. Through analysis and extensive evaluation with real-trace-driven simulations and experiments on a hybrid testbed consisting of a local computing platform and the Amazon cloud service platform, we show that the proactive workload management technology can enable reliable workload prediction in the base workload zone (with simple statistical methods), achieve resource efficiency (e.g., 78% higher server capacity than in the base workload zone) and reduce data cache/replication overhead (by up to two orders of magnitude) in the flash crowd workload zone, and react quickly (with an X^2 speed-up factor) to changing application data popularity upon the arrival of load spikes.

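    Factoring on data content hinges on spotting suddenly popular items in the request stream. The sketch below uses the standard Misra-Gries heavy-hitter summary as a stand-in; the paper's own "fast frequent data item detection algorithm" may well differ, so treat this only as background.

        def misra_gries(stream, k):
            """Track at most k-1 candidate heavy hitters over a stream of item ids."""
            counters = {}
            for item in stream:
                if item in counters:
                    counters[item] += 1
                elif len(counters) < k - 1:
                    counters[item] = 1
                else:
                    for key in list(counters):    # decrement all, drop zeros
                        counters[key] -= 1
                        if counters[key] == 0:
                            del counters[key]
            return counters    # any item occurring more than n/k times is guaranteed present

        # Requests for items flagged here could be routed to the public-cloud
        # (flash crowd) zone; the remaining base workload stays on-premise.
        hot = misra_gries(["a", "b", "a", "a", "c", "a", "d"], k=3)
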
  • An Advanced MapReduce: Cloud MapReduce, Enhancements and Applications

    Page(s): 101 - 115
    PDF (994 KB) | HTML

    Cloud computing has recently attracted great attention due to its provision of configurable computing resources. MapReduce (MR) is a popular framework for data-intensive distributed computing of batch jobs. MapReduce suffers from the following drawbacks: (1) it processes the Map and Reduce phases sequentially; (2) being cluster based, its scalability is relatively limited; (3) it does not support flexible pricing; and (4) it does not support stream data processing. We describe Cloud MapReduce (CMR), which overcomes these limitations. Our results show that CMR is more efficient and runs faster than other implementations of the MR framework. In addition, we show how CMR can be further enhanced to: (1) support stream data processing in addition to batch data by parallelizing the Map and Reduce phases through a pipelining model; (2) support flexible pricing using Amazon Cloud's spot instances and deal with massive machine terminations caused by spot price fluctuations; (3) improve throughput and speed up processing over traditional MR by more than 30% for large data sets; and (4) provide added flexibility and scalability by leveraging features of the cloud computing model. Click-stream analysis, real-time multimedia processing, time-sensitive analysis, and other stream processing applications can also be supported.

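    The pipelining idea, letting the reduce phase consume mapper output as it is produced instead of waiting for the map phase to finish, is what opens the door to stream processing. The toy word count below illustrates only that dataflow; CMR itself places cloud queues between the phases and runs them on separate workers.

        from collections import defaultdict

        def mapper(lines):
            for line in lines:               # could be an unbounded stream
                for word in line.split():
                    yield word, 1

        def streaming_reduce(pairs):
            totals = defaultdict(int)
            for key, value in pairs:         # consumes map output incrementally
                totals[key] += value
                yield key, totals[key]       # running result per key

        for key, running_total in streaming_reduce(mapper(["a b a", "b c"])):
            print(key, running_total)        # in CMR this would feed an output queue
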
  • A Flexible Architecture for Service Management in the Cloud

    Page(s): 116 - 125
    PDF (1876 KB) | HTML

    Cloud computing is a style of computing in which different capabilities are provided as a service to customers using Internet technologies. The most commonly offered services are Infrastructure (IaaS), Software (SaaS), and Platform (PaaS). This work integrates service management into the cloud computing concept and shows how management can be provided as a service in the cloud. Nowadays, services need to adapt their functionalities across heterogeneous environments with different technological and administrative domains. The complexity this implies can be reduced by a service management architecture in the cloud. This paper focuses on such an architecture, taking into account specific service management functionalities, such as incident management and KPI/SLA management, and provides a complete solution. The proposed architecture is based on a distributed set of agents using semantic-based techniques: a Shared Knowledge Plane, instantiated in the cloud, has been introduced to ensure communication between agents.

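    The Shared Knowledge Plane essentially gives the distributed agents a common place to exchange management facts such as incidents or KPI/SLA breaches. The toy publish/subscribe board below captures only that communication pattern; the paper's plane is semantic and cloud-hosted, and the topic and field names here are invented.

        from collections import defaultdict

        class SharedKnowledgePlane:
            """Minimal pub/sub board standing in for the semantic knowledge plane."""

            def __init__(self):
                self.subscribers = defaultdict(list)

            def subscribe(self, topic, callback):
                self.subscribers[topic].append(callback)

            def publish(self, topic, fact):
                for callback in self.subscribers[topic]:
                    callback(fact)

        plane = SharedKnowledgePlane()
        plane.subscribe("incident", lambda fact: print("incident agent received:", fact))
        plane.publish("incident", {"service": "web-tier", "kpi": "latency", "status": "SLA breached"})
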
  • 2013 Index IEEE Transactions on Network and Service Management

    Page(s): 126 - 132
    PDF (182 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Network and Service Management will publish (online only) peer-reviewed, archival-quality papers that advance the state of the art and practical applications of network and service management.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief

Rolf Stadler
Laboratory for Communication Networks
KTH Royal Institute of Technology
Stockholm
Sweden
stadler@kth.se