
2012 Second Symposium on Network Cloud Computing and Applications (NCCA)

Date: 3-4 Dec. 2012


Displaying Results 1 - 25 of 32
  • [Front cover]

    Page(s): C4
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - vii
  • Message from the Steering Committee Chairs

    Page(s): viii
  • Message from the General Chair

    Page(s): ix
  • Organizing Committee

    Page(s): x
  • Programme Committee

    Page(s): xi
  • An Efficient Fault-Tolerant Algorithm for Distributed Cloud Services

    Page(s): 1 - 8

    Several approaches to fault tolerance in distributed systems have been introduced; however, they require prior knowledge of the environment's operating conditions and/or constant monitoring of these conditions at run time, which allows applications to adjust the load and redistribute tasks when failures occur. These techniques work well when communication delays are low. This is not the case in the Cloud, where data and computation servers are connected over the Internet and distributed across large geographic areas, so they usually exhibit high and dynamic communication delays that make discovering and recovering from failures slow. This paper proposes a delay-tolerant fault-tolerance algorithm that effectively reduces execution time and adapts to failures while minimizing the fault discovery and recovery overhead in the Cloud. Distributed tasks that can use this algorithm include downloading data from replicated servers and executing parallel applications on multiple independent distributed servers in the Cloud. The experimental results show the efficiency of the algorithm and its fault-tolerance features.

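    As an illustration of the kind of task the algorithm targets (a minimal sketch, not taken from the paper; the replica URLs are hypothetical), a client can request the same object from several replicated servers at once and keep whichever copy arrives first, so slow or failed replicas never need explicit detection:

        # Minimal sketch: fetch the same object from several replicas concurrently
        # and keep whichever copy arrives first; slow or failed replicas are simply
        # ignored rather than detected and recovered from.
        from concurrent.futures import ThreadPoolExecutor, as_completed
        from urllib.request import urlopen

        REPLICAS = [  # hypothetical replica URLs
            "http://replica1.example.com/data.bin",
            "http://replica2.example.com/data.bin",
            "http://replica3.example.com/data.bin",
        ]

        def fetch(url, timeout=30):
            with urlopen(url, timeout=timeout) as resp:
                return resp.read()

        def fetch_first_available(urls):
            with ThreadPoolExecutor(max_workers=len(urls)) as pool:
                futures = {pool.submit(fetch, u): u for u in urls}
                for fut in as_completed(futures):
                    try:
                        return futures[fut], fut.result()
                    except Exception:
                        continue  # that replica failed; wait for another
            raise RuntimeError("all replicas failed")

        # source, data = fetch_first_available(REPLICAS)
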
  • Auto-tuning of Cloud-Based In-Memory Transactional Data Grids via Machine Learning

    Page(s): 9 - 16

    In-memory transactional data grids have proved extremely well suited to cloud-based environments, given that they fit the elasticity requirements imposed by the pay-as-you-go cost model. In particular, the non-reliance on stable storage devices simplifies dynamic resizing of these platforms, which typically only involves setting up (or shutting down) some data-cache instance. On the other hand, determining the appropriate number of cache servers to deploy, and the degree of replication of slices of data, in order to optimize reliability/availability and performance trade-offs, is far from a trivial task. As an example, scaling the size of the underlying infrastructure up or down might give rise to scarcely predictable secondary effects in the synchronization protocol adopted to guarantee data consistency while supporting transactional accesses. In this paper we investigate the use of machine learning approaches with the aim of providing a means for automatically tuning the data grid configuration, achieved via dynamic selection of both the appropriate number of cache servers and the appropriate degree of replication of the data objects. The final target is to determine configurations that are able to guarantee specific throughput or latency values (such as those established by an SLA) under a specific workload profile/intensity, while at the same time minimizing the cost of the cloud infrastructure. Our proposal has been integrated within an operating environment relying on the well-known Infinispan data grid, a mainstream open-source product by the Red Hat JBoss division. Experimental data supporting the effectiveness of our proposal, obtained by deploying the data platform on top of Amazon EC2, are also provided.

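    As a rough illustration of this style of auto-tuning (a sketch with assumed names and toy data, not the paper's implementation), a regressor can be trained to predict throughput from a candidate configuration and workload, and the cheapest configuration whose prediction meets the SLA is then selected:

        # Sketch: learn throughput as a function of (servers, replication degree,
        # workload intensity), then choose the cheapest configuration predicted to
        # meet an SLA threshold. Training data and cost model are hypothetical.
        from itertools import product
        from sklearn.ensemble import RandomForestRegressor

        # (num_servers, replication_degree, workload_tx_per_s) -> observed throughput
        X = [[2, 1, 500], [2, 2, 500], [4, 2, 1000], [8, 3, 2000], [8, 2, 1500]]
        y = [480, 450, 950, 1900, 1450]
        model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

        def cheapest_config_meeting_sla(workload, sla_throughput, cost_per_server=1.0):
            candidates = product(range(2, 13), range(1, 4))   # servers, replication
            feasible = []
            for servers, repl in candidates:
                predicted = model.predict([[servers, repl, workload]])[0]
                if predicted >= sla_throughput:
                    feasible.append((servers * cost_per_server, servers, repl))
            return min(feasible) if feasible else None

        # print(cheapest_config_meeting_sla(workload=1200, sla_throughput=1100))
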
  • An Energy Aware Network Management Approach Using Server Profiling in 'Green' Clouds

    Page(s): 17 - 24

    Clouds and data centres are significant consumers of power. There are, however, opportunities for optimising carbon cost here, as resource redundancy is provisioned extensively. Data centre resources, and subsequently the clouds which support them, are traditionally organised into tiers; switch-off activity when managing redundant resources therefore follows an approach which exploits the cost advantages of closing down entire network portions. We suggest, however, an alternative approach to optimise cloud operation while maintaining application QoS: simulation experiments identify that network operation can be optimised by selecting servers which process traffic at a rate that more closely matches the packet arrival rate, and resources which provision excess capacity beyond that required may be powered off for improved efficiency. This recognises that there is a server speed at which performance is optimised, and operation above or below this rate will not achieve optimisation. A series of policies is defined in this work for integration into cloud management procedures; performance results from their implementation and evaluation in simulation show improved efficiency when selecting servers based on these relationships.

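    The selection idea can be pictured with a small sketch (hypothetical service rates, not the paper's policies): pick the server whose processing rate most closely matches, while still covering, the observed arrival rate, leaving the rest as candidates for power-off:

        # Sketch: choose the server whose service rate best matches the arrival rate
        # (while still covering it); servers providing excess capacity can stay off.
        def select_server(servers, arrival_rate):
            """servers: dict name -> service_rate (packets/s); returns best match."""
            adequate = {n: r for n, r in servers.items() if r >= arrival_rate}
            if not adequate:
                return None  # no single server can absorb the load
            return min(adequate, key=lambda n: adequate[n] - arrival_rate)

        servers = {"s-small": 800.0, "s-medium": 1500.0, "s-large": 4000.0}
        active = select_server(servers, arrival_rate=1200.0)
        idle = [n for n in servers if n != active]   # candidates for power-off
        # print(active, idle)   # -> s-medium, ['s-small', 's-large']
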
  • Choosing a Local or Remote Cloud

    Page(s): 25 - 30

    Energy consumption in ICT, as well as quality of service (QoS), is an important consideration in the choice of computational resources. In Cloud Computing, users who work within an organisation will increasingly have access to their own cloud service (the "local cloud") and will also be able to access external cloud services. Choices between these options will be made based on security, cost, data and software protection, and resilience, but also in terms of technical choices regarding QoS and energy consumption. This paper addresses only the technical choice between a local and a remote cloud service, and discusses how this choice can be formulated as an optimisation problem. After providing some experimental measurements regarding the energy and performance of servers, we formulate the optimisation problem, describe its solution and present some numerical examples.

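    A toy version of such a formulation (illustrative only; the weights and measurements are invented) scores each option as a weighted sum of energy and response time and dispatches the job to the cheaper one:

        # Sketch: choose local vs remote cloud by minimizing a weighted combination
        # of energy consumption and response time. All figures are hypothetical.
        def cost(energy_joules, response_s, alpha=0.5):
            # alpha trades off energy (normalized) against QoS (response time)
            return alpha * energy_joules + (1 - alpha) * response_s

        options = {
            "local":  {"energy_joules": 120.0, "response_s": 0.8},
            "remote": {"energy_joules": 60.0,  "response_s": 1.6},
        }

        def choose(options, alpha=0.5):
            return min(options, key=lambda name: cost(**options[name], alpha=alpha))

        # print(choose(options, alpha=0.7))  # energy-dominated -> likely "remote"
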
  • A Model-Based Approach for Optimizing Power Consumption of IaaS

    Page(s): 31 - 39

    Virtual Machine Image (VMI) provisioning is an important process in the Infrastructure-as-a-Service delivery model for providing virtual images in Cloud Computing. The power consumption and energy efficiency of the VMI provisioning process depend not only on the hardware infrastructure, but also on the VMI's configuration, which helps to compose, configure and deploy VMIs in Cloud Computing environments. The major issue in improving the energy efficiency of the VMI provisioning process is how to reduce power consumption while ensuring the compatibility of the software components installed in a virtual machine image. This paper describes a model-driven approach to improving the energy efficiency of VMI provisioning in Cloud Computing. The approach considers virtual images as product lines and uses feature models to represent their configurations. It uses model-based techniques to handle VMI specialization, automatic deployment and reconfiguration. The approach aims at minimizing the amount of unneeded software installed in VMIs, and thus reducing the power consumption of VMI provisioning as well as the data transferred through the network.

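    To give a flavour of the feature-model idea (a sketch with invented feature names and constraints, not the paper's models), a VMI configuration can be checked against simple requires/excludes rules so that only needed, compatible software is installed:

        # Sketch: represent a VMI configuration as a set of features and check simple
        # requires/excludes constraints so only needed, compatible software is
        # installed. Feature names and constraints are invented for illustration.
        REQUIRES = {"wordpress": {"php", "mysql"}, "php": {"apache"}}
        EXCLUDES = {"apache": {"nginx"}}

        def valid(selection):
            for feature in selection:
                if not REQUIRES.get(feature, set()) <= selection:
                    return False, f"{feature} is missing a required feature"
                if EXCLUDES.get(feature, set()) & selection:
                    return False, f"{feature} conflicts with the selection"
            return True, "ok"

        # print(valid({"apache", "php", "mysql", "wordpress"}))  # (True, 'ok')
        # print(valid({"nginx", "apache"}))                      # conflict
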
  • Dynamic Data Replication Scheme in the Cloud Computing Environment

    Page(s): 40 - 47

    In the cloud computing environment, data replication strategies (DRS) are used to improve data access. Related studies have proposed various data replication strategies, but their performance is closely tied to users' access patterns, and each works optimally only for a particular data access pattern. However, as data access patterns become more flexible and unpredictable, it is difficult to manage them with traditional replication strategies. Given this circumstance, this paper proposes an algorithm that detects changes in a user's data access pattern and dynamically applies an optimal replication strategy. The proposed algorithm has the advantage of maintaining optimal performance by responding to various data access patterns. We tested the proposed algorithm and validated its effectiveness.

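    A minimal sketch of the switching idea (invented thresholds and strategy names, not the paper's algorithm): classify the recent access pattern and hand requests to the replication strategy suited to it:

        # Sketch: watch recent accesses, classify the pattern, and switch to the
        # replication strategy that suits it. Thresholds and strategies are invented.
        from collections import Counter, deque

        class AdaptiveReplicator:
            def __init__(self, window=1000):
                self.recent = deque(maxlen=window)
                self.strategy = "uniform"          # default strategy

            def record_access(self, block_id):
                self.recent.append(block_id)
                self.strategy = self._classify()

            def _classify(self):
                counts = Counter(self.recent)
                if not counts:
                    return "uniform"
                hottest_share = counts.most_common(1)[0][1] / len(self.recent)
                # skewed pattern -> replicate hot blocks aggressively
                return "hotspot" if hottest_share > 0.2 else "uniform"

        # r = AdaptiveReplicator()
        # for b in ["a", "a", "b", "a", "c"]: r.record_access(b)
        # print(r.strategy)
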
  • Distributed Ontology Cloud Storage System

    Page(s): 48 - 52

    There is dramatically increasing interest from both academia and industry in cloud computing. Cloud computing depends on the idea of computing on demand, providing, supporting and delivering computing services with stable and large data space. Our research is concerned with improving the search process in cloud storage by avoiding the bottleneck of a central ontology cloud storage system, in which all data chunks in the cloud must be indexed by a master ontology server. The contribution of this paper is a new cloud storage architecture based on a distributed ontology, one of the main semantic technologies. This architecture provides better scalability, fault tolerance and enhanced performance for searching cloud storage, avoiding the central bottleneck.

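    One simple way to picture removing the central index bottleneck (an illustration, not the paper's architecture) is to partition the ontology index across several servers by hashing the concept name, so lookups go directly to the responsible node:

        # Sketch: partition an ontology index across servers by hashing concept names,
        # so no single master server must index every data chunk. Node names invented.
        import hashlib

        NODES = ["onto-node-1", "onto-node-2", "onto-node-3"]

        def responsible_node(concept):
            digest = hashlib.sha1(concept.encode("utf-8")).hexdigest()
            return NODES[int(digest, 16) % len(NODES)]

        # print(responsible_node("Patient"))        # the node that indexes "Patient"
        # print(responsible_node("BloodPressure"))
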
  • Public Cloud Extension for Desktop Applications -- Case Study of a Data Mining Solution

    Page(s): 53 - 64

    The paper describes challenges and obstacles encountered when developing a cloud extension for a computationally expensive desktop application to perform computation tasks in a public cloud. In a case study we highlight this step by step with a real-world data mining application and present solutions to realize this scenario. Amazon's S3, EC2, RDS, SES, IAM and STS services are utilized in a complex setup in order to realize a completely dynamic cloud layer architecture, which is needed to implement the extension. This includes the creation of the infrastructure and the task execution in the cloud, as well as its termination afterwards, which is described alongside the data mining application itself. Today's major challenges regarding cloud computing, such as data security and privacy, are taken into account. Additionally, a multi-criteria optimization across different cloud setups is considered in order to guarantee transparency concerning the runtime-cost tradeoff and to rule out suboptimal setups. The approach is based on benchmarks that have to be performed. The effectiveness of this setup is illustrated by example application instances. The results show under which circumstances it is beneficial to use the cloud to perform computing tasks.

  • Flexible Integration of Eventually Consistent Distributed Storage with Strongly Consistent Databases

    Page(s): 65 - 72

    In order to design distributed business applications or services, the common practice is to set up a multi-tier architecture on top of a relational database. Due to the recent evolution of scalability and availability needs in cloud environments, the design of the data access layer has become significantly more complicated because of the trade-off decisions between consistency, scalability and availability that have to be made in accordance with the CAP theorem. An interesting compromise in this context is to offer some flexibility at the consistency level, in order to allow multi-tier architectures to support partition tolerance flexibly while guaranteeing availability. This paper introduces a flexible data layer that guarantees availability and gives developers the ability to easily select the required execution context, by integrating eventually consistent storage with strongly consistent databases. A given query can either be executed in an eventually consistent but very scalable context or in a strongly consistent context with limited scalability. The benefits of the proposed framework are validated in a real-world use case.

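    The execution-context idea can be sketched as follows (invented interfaces, not the paper's framework): the data layer exposes one query entry point and dispatches to an eventually consistent store or a strongly consistent database depending on a per-call flag:

        # Sketch: a data layer that routes each query either to an eventually
        # consistent, highly scalable store or to a strongly consistent database,
        # chosen per call by the developer. Both back ends are stubbed out here.
        class FlexibleDataLayer:
            def __init__(self, eventual_store, strong_db):
                self.eventual = eventual_store     # e.g. a key-value/NoSQL store
                self.strong = strong_db            # e.g. a relational database

            def execute(self, query, consistency="eventual"):
                if consistency == "strong":
                    return self.strong.run(query)   # limited scalability, strong C
                return self.eventual.run(query)     # scalable, eventually consistent

        # layer = FlexibleDataLayer(eventual_store=nosql, strong_db=rdbms)
        # layer.execute("get cart 42")                              # fast path
        # layer.execute("debit account 42", consistency="strong")   # correctness path
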
  • How the Dataweb Can Support Cloud Federation: Service Representation and Secure Data Exchange

    Page(s): 73 - 79

    Cloud Computing and federation enable new and challenging business scenarios. However, the technology driving federation is at an early stage, and many issues have to be overcome. On the one hand, an important problem is how to logically map the virtual resources hosted in federated cloud data centres. On the other hand, another issue is how to exchange data in a secure way so that a cloud can access only the allowed external resources. In this paper, we address both issues by adopting the concept of the "Dataweb" and XDI technology. More specifically, using the Higgins framework, a use case including several federated IaaS clouds is discussed and evaluated, with a specific focus on security.

  • Secure System Development for Integrated Cloud Applications

    Page(s): 80 - 87

    Companies that use a Software-as-a-Service (SaaS) application do so mainly either to replace an existing IT solution or as an extension to other applications. However, both the data and the system may be exposed to greater threats when a SaaS application is integrated into an existing IT infrastructure. Many firms rate security as a critical issue when moving to the cloud, but only a few know how to secure their data. In an attempt to solve this problem, this paper takes both the technical view and the management view to help enterprises enhance their security capability when using SaaS. From the technical point of view, the risks of different SaaS applications and the proposed security enhancement controls are presented. From the management point of view, a model is proposed that guides a firm in developing a secure SaaS-integrated system while fulfilling the business requirements. Checklists and reference tools are provided to enable efficient and effective execution. To prove the utility of the proposed model, we arranged for two firms to apply it; it was found that enterprises using the proposed model when adopting a SaaS solution can enhance the protection level of their data.

  • Cloud Computing for Global Name-Resolution in Information-centric Networks

    Page(s): 88 - 94

    Information-Centric Networking (ICN) is a novel paradigm for future Internet architectures. It exploits the current trend in Internet usage, which mostly involves information dissemination. ICN architectures based on the publish/subscribe model use names for information in order to route requests and data, as well as to facilitate in-network caching, anycasting and multicasting for efficient content delivery. However, the number of named information objects is expected to be huge in the future Internet, raising serious concerns with respect to a global-scale deployment of ICN. Routing and forwarding will require vast amounts of state, which pushes storage, maintenance and processing demands to the limit. In this paper we discuss the feasibility of deploying the Data-Oriented Network Architecture (DONA) by leveraging cloud computing facilities. We identify the exact scalability concerns for DONA based on simulations over a realistic model of the current Internet topology and find that registrations for information objects lead to a state explosion. We then discuss how cloud facilities can assist DONA deployment, focusing on various options for deploying DONA in the cloud and their suitability for different areas of the inter-network.

  • An Android-Enabled Mobile Framework for Ubiquitous Access to Cloud Emergency Medical Services

    Page(s): 95 - 101

    Recently, there has been a remarkable upsurge in activity surrounding the adoption of Personal Health Records (PHRs). Since PHRs contain global patient information rather than just the pieces collected by individual healthcare providers, they can be used as basic infrastructure for building and operating several important systems for both healthcare and taxpayers. Emergency medical systems (EMS) are among the most crucial of these, as they involve a variety of activities performed from the time of a call to an ambulance service until the patient's discharge from the emergency department of a hospital; these activities are closely interrelated, so collaboration and coordination become vital issues for patients and for emergency healthcare service performance. The integration of leading-edge technologies, such as cloud-based services and mobile technology, with PHRs can prove important in emergency care delivery, as it can facilitate authorized access to comprehensive and unified health information at any point of care or decision making through familiar environments such as Google's Android. This paper is concerned with the development of a PHR-based EMS in a cloud computing environment. The proposed EMS is accessible from Android-enabled mobile devices and incorporates a customized asynchronous notification feature whereby caregivers are notified of critical data updates in a way that achieves efficient utilization of mobile device resources. This feature draws upon a cloud-based push messaging mechanism, Google Cloud Messaging, a lightweight mechanism which enables servers to communicate asynchronously with mobile applications running on the Android operating system.

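    To illustrate the push mechanism in isolation (a server-side sketch with placeholder credentials, not the paper's system), a caregiver notification for a critical PHR update could be sent through GCM's HTTP interface as it existed at the time, roughly like this:

        # Sketch: notify subscribed Android devices of a critical PHR update via
        # Google Cloud Messaging's HTTP interface (as it existed at the time).
        # API key and registration IDs are placeholders.
        import json
        from urllib.request import Request, urlopen

        GCM_URL = "https://android.googleapis.com/gcm/send"
        API_KEY = "YOUR_SERVER_API_KEY"

        def notify_caregivers(registration_ids, patient_id, message):
            payload = {
                "registration_ids": registration_ids,
                "data": {"patient_id": patient_id, "alert": message},
            }
            req = Request(
                GCM_URL,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json",
                         "Authorization": "key=" + API_KEY},
            )
            with urlopen(req) as resp:
                return json.loads(resp.read().decode("utf-8"))

        # notify_caregivers(["device-reg-id-1"], patient_id="p42",
        #                   message="critical vitals update")
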
  • Design and Architecture of Cloud-Based Mobile Phone Sensing Middleware

    Page(s): 102 - 109

    Smartphones are now in widespread use and have large storage space and processing power. Thus, smartphone-based networks with a cloud server can be used as a cost-efficient sensing platform highly capable of processing complex, cooperative tasks just in time. However, low-level implementation of cloud-based mobile phone applications requires a great deal of human effort and leaves a considerable gap from the high-level requirements given by application developers. To fill this gap, we propose a support middleware for executing cloud-based mobile sensing applications. Since, in our previous work, we proposed a language to describe high-level specifications of cooperative applications on WSNs, we extend this concept to manage and control the multiple smartphones that participate in the system. We show some example descriptions of high-level specifications and have implemented a prototype system to confirm its usefulness.

  • Supporting Apps in the Personal Cloud: Using WebSockets within Hybrid Apps

    Page(s): 110 - 115

    Cloud Computing [1,2] is a "utility computing model" (John McCarthy, 1961) that allows the purchase of virtualized hardware (Infrastructure as a Service, IaaS), software platforms (Platform as a Service, PaaS) or applications/functionality (Software as a Service, SaaS) in a pay-as-you-go manner, comparable to the metered purchase of electricity, gas or water. Seen in the past primarily as a means for organizations to increase their flexibility, cloud computing has begun to enter the consumer space by offering solutions to personal computing needs that are based on virtualization, e.g. cloud storage. Accessing such cloud services with tablets and smartphones enables user- and context-aware cloud services, resulting in a personal(ized) cloud. This personal cloud allows users to access data and services via their mobile devices, which in turn allows time-, location- and device-independent computing. This paper focuses on the infrastructure challenges of enabling the personal cloud and presents an evaluation of our consumer-centric cloud portal (C3P).

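    As a tiny illustration of the kind of persistent channel such hybrid apps rely on (a generic sketch, not the C3P portal code; it assumes the third-party websockets package), a server can push updates to a connected web view over a WebSocket:

        # Sketch: a minimal WebSocket endpoint that a hybrid app's web view could keep
        # open to receive pushed updates. Requires the third-party 'websockets'
        # package (a recent version); host/port are arbitrary.
        import asyncio
        import websockets

        async def handler(connection):
            async for message in connection:          # echo-style request/response
                await connection.send("update for: " + message)

        async def main():
            async with websockets.serve(handler, "0.0.0.0", 8765):
                await asyncio.Future()                # serve forever

        # asyncio.run(main())
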
  • Dynamic Topology Orchestration for Distributed Cloud-Based Applications

    Page(s): 116 - 123

    This paper describes a specification language and architecture for dynamically managing distributed software and the mapped compute, storage and network infrastructure services, beyond the state of the art in cloud computing. This is referred to as dynamic application topology orchestration, where the mapping and configuration of distributed, interconnected, interdependent application services and infrastructure resources are dynamically adjusted according to guarantees in Service Level Agreements (SLAs) and operational constraints. The viability and benefits of this architectural approach are compared against simpler strategies, to establish the technical and business cases for the associated engineering effort.
