
2012 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM)

Date: 11-12 Oct. 2012


Displaying Results 1 - 25 of 43
  • IEEE Cloud Computing for Emerging Markets (CCEM2012) [Front cover]

    Publication Year: 2012 , Page(s): i
    PDF (144 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2012 , Page(s): i
    PDF (118 KB)
    Freely Available from IEEE
  • Welcome message

    Publication Year: 2012 , Page(s): i
    PDF (71 KB) | HTML
    Freely Available from IEEE
  • Welcome message 2

    Publication Year: 2012 , Page(s): i
    PDF (69 KB) | HTML
    Freely Available from IEEE
  • Keynote Speakers

    Publication Year: 2012 , Page(s): i - ii
    PDF (190 KB)

    Provides an abstract for each of the keynote presentations and a brief professional biography of each presenter. The complete presentations were not made available for publication as part of the conference proceedings.

  • Sponsor page

    Publication Year: 2012 , Page(s): i
    PDF (359 KB)
    Freely Available from IEEE
  • Committee page

    Publication Year: 2012 , Page(s): i
    PDF (359 KB)
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2012 , Page(s): i
    PDF (103 KB)
    Freely Available from IEEE
  • Author index

    Publication Year: 2012 , Page(s): i - ii
    PDF (62 KB)
    Freely Available from IEEE
  • 1 * N Trust Establishment within Dynamic Collaborative Clouds

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (398 KB) | HTML

    Federation of security entities in cloud environments poses crucial challenges in terms of the policy reservations required for each multi-tenancy request. In collaborative clouds the problem is compounded because the tenancy requester may be completely unaware of the end cloud provider. As tenancy requesters have not established any negotiation terms with the end cloud provider, ensuring dependability in terms of trust, privacy and security of data exchanges is a complex challenge. Existing approaches require the establishment of point-to-point trust. However, in the larger context of possible collaborative cloud providers, there is a need for simplified trust management for tenants: asking tenants to exclusively establish trust relationships with each other hinders the choice of providers, thereby restricting dynamic collaborations. We propose an approach based on a model wherein a single security mediator is responsible for sharing the required trust with all involved cloud providers within the collaboration. This mediator acts as a hub for all participant tenants, and the tenants establish negotiation terms with the mediator. Whenever a tenancy request needs to be satisfied by a subsequent cloud provider, trust is first established between the mediator and the new cloud provider. Once this level of trust is accepted and confirmed by the tenancy requester, the provider is added as a trusted provider and tenancy requests can be satisfied by that specific cloud provider.
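The hub-and-spoke trust model this abstract describes can be sketched as follows; the class and method names are illustrative assumptions, not taken from the paper.

```python
class TrustMediator:
    """Hub that holds tenant terms and brokers trust with providers."""

    def __init__(self):
        self.tenant_terms = {}        # tenant -> negotiated terms
        self.trusted_providers = set()
        self.pending = {}             # provider -> tenant awaiting confirmation

    def negotiate(self, tenant, terms):
        """Tenant establishes negotiation terms with the mediator only."""
        self.tenant_terms[tenant] = terms

    def request_provider(self, tenant, provider):
        """Mediator first establishes trust with the new provider."""
        if provider in self.trusted_providers:
            return True
        self.pending[provider] = tenant   # trust proposed, awaiting tenant
        return False

    def confirm(self, tenant, provider):
        """Tenant accepts the mediator-provider trust; provider is admitted."""
        if self.pending.get(provider) == tenant:
            self.trusted_providers.add(provider)
            del self.pending[provider]
            return True
        return False

m = TrustMediator()
m.negotiate("tenant-a", {"privacy": "strict"})
m.request_provider("tenant-a", "cloud-x")   # not yet trusted -> False
m.confirm("tenant-a", "cloud-x")            # tenant confirms -> trusted
```

The point of the hub is that the tenant negotiates once: admitting a new provider only needs a mediator-provider handshake plus the tenant's confirmation, not a fresh pairwise negotiation.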

  • A CIM (Common Information Model) Based Management Model for Clouds

    Publication Year: 2012 , Page(s): 1 - 5
    Cited by:  Papers (1)
    PDF (339 KB) | HTML

    The recently emerged Cloud Computing paradigm poses new management challenges because of its complex, heterogeneous infrastructure. A cloud contains infrastructure (servers, storage, networks) and applications (web apps, databases, backup, etc.) from various vendors. Generally, different vendors' products are managed (discovery, provisioning, monitoring, etc.) by their own proprietary management software, and today there is no standard way to manage cloud infrastructure and applications using a single management framework. This makes cloud management a complex task and creates interoperability issues; the cloud infrastructure cannot easily be replaced because of the dependency on its management software. In this paper we present the various independent CIM (Common Information Model) based management models available today, their applicability to cloud infrastructure, and their advantages.

  • A Hybrid Approach to Live Migration of Virtual Machines

    Publication Year: 2012 , Page(s): 1 - 5
    Cited by:  Papers (1)
    PDF (136 KB) | HTML

    We present, discuss and evaluate a hybrid approach to live migrating a virtual machine across hosts in a Gigabit LAN. Our hybrid approach takes the best of both traditional methods of live migration, pre-copy and post-copy. In pre-copy, the CPU state and memory are transferred before spawning the VM on the destination host, whereas post-copy is exactly the opposite and spawns the VM on the destination right after transferring the processor state. In our approach, in addition to the processor state, we bundle further useful state information: the devices and the frequently accessed pages of the VM, a.k.a. the working set. This drastically reduces the number of page faults over the network while we actively transfer memory. Additionally, on every page fault over the network we transfer a set of pages in its locality in addition to the page itself. We propose a prototype design on KVM/QEMU and present a comparative analysis of pre-copy, post-copy and our hybrid approach.
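A minimal simulation of the hybrid scheme's fault behavior (the locality window size and the page model are assumptions made for illustration; the authors' prototype runs on KVM/QEMU):

```python
LOCALITY = 4  # pages prefetched around a faulting page (assumed value)

def hybrid_migrate(all_pages, working_set, access_trace):
    """Count network page faults after a hybrid migration.

    The working set (plus CPU/device state) is pushed up front as in
    pre-copy; remaining pages are pulled post-copy, prefetching a window
    of neighbouring pages on each fault.
    """
    dest = set(working_set)          # pages already at the destination
    faults = 0
    for page in access_trace:        # VM now running on the destination
        if page not in dest:
            faults += 1
            # pull the faulting page plus its locality window
            for p in range(page, min(page + LOCALITY, all_pages)):
                dest.add(p)
    return faults

# With the working set pre-copied, most accesses hit locally:
trace = [0, 1, 2, 50, 51, 52, 53, 1]
print(hybrid_migrate(100, {0, 1, 2}, trace))  # 1 fault: page 50 prefetches 50-53
```

Pure post-copy would fault on every cold page here; the working-set push plus locality prefetch collapses the trace to a single network fault.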

  • A New Scheme for Mobile Based CAPTCHA Service on Cloud

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (544 KB) | HTML

    CAPTCHAs are popular techniques for distinguishing humans from automated applications. Such techniques are often useful in banking transactions, email account creation, online surveys, data downloads, etc. From the very primitive stage of presenting users with a simple alphabetical string, through asking users to do complex calculations, CAPTCHAs have come a long way in the sophistication of human-bot distinction. In this process, however, CAPTCHAs have lost their human friendliness, either because of the noise added to the CAPTCHA tests or because of the complexity of the challenges thrown at the user, resulting in a bad experience. Traditional CAPTCHAs also fail to take into account the unique needs of ubiquitous mobile devices, which have limitations such as a small display area, limited display resolution, display color combinations and processing power, alongside unique advantages such as touch-sensitive input, voice input and voice output. In this paper, we present a scheme for a CAPTCHA service on the cloud that is specific to mobile applications. Our implementation duly considers the usability and presentation required for a mobile device. The proposed CAPTCHA framework offers scalable and flexible implementation opportunities in many verticals and domains. Another unique feature of our framework is the facility for distributed verification, which greatly improves the efficiency of CAPTCHA generation and reduces the response time for authenticating the user as human.
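One way the distributed-verification idea could work is a stateless challenge token, sketched below; the HMAC token scheme is an assumption for illustration, not the paper's design. Because the token carries everything a verifier needs, generation and verification can run on different cloud nodes.

```python
import hashlib
import hmac
import secrets

KEY = b"shared-service-key"   # shared by generator and verifier nodes

def generate_challenge(answer: str):
    """Issue a challenge token; no per-challenge server state is kept."""
    nonce = secrets.token_hex(8)
    tag = hmac.new(KEY, (nonce + answer).encode(), hashlib.sha256).hexdigest()
    return nonce, tag          # nonce + tag accompany the challenge image

def verify(nonce: str, tag: str, user_answer: str) -> bool:
    """Any verifier node holding KEY can check the user's answer."""
    expect = hmac.new(KEY, (nonce + user_answer).encode(),
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, tag)

nonce, tag = generate_challenge("7h3x")
print(verify(nonce, tag, "7h3x"))   # True
```

The statelessness is what buys the distribution: no lookup back to the generating node is needed, so verification latency stays low for the mobile user.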

  • A Novel Authentication Service for Hadoop in Cloud Environment

    Publication Year: 2012 , Page(s): 1 - 6
    Cited by:  Papers (1)
    PDF (321 KB) | HTML

    Authentication remains a significant security challenge in the Hadoop environment. Hadoop does not strongly authenticate the client; as a result, data nodes can be accessed using block locations. This paper suggests using the fundamental properties of a triangle and dual servers to improve the security level of Hadoop clusters. The password given by the user is interpreted and split into more than one unit by the authentication server and stored in multiple backend servers along with the corresponding username. The authentication server uses the values stored in the multiple backend servers to authenticate the user; the authentication and backend servers work together. The registration and authentication processes are hosted as a web service to authenticate users before they log into the Hadoop cluster. This paper suggests three approaches for security enhancement in the Hadoop environment based on triangle properties, and also presents an analysis of the security level and complexity of these approaches.
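The paper's triangle-based interpretation is not reproduced here; as a generic stand-in, XOR secret sharing illustrates the split-across-dual-servers idea, where no single backend server ever holds the whole password.

```python
import os

def split_password(password: bytes):
    """Split the password into two units for the dual backend servers."""
    pad = os.urandom(len(password))
    share_a = pad                                         # backend server A
    share_b = bytes(p ^ q for p, q in zip(password, pad)) # backend server B
    return share_a, share_b

def authenticate(attempt: bytes, share_a: bytes, share_b: bytes) -> bool:
    """Authentication server recombines the units to verify the user."""
    recombined = bytes(a ^ b for a, b in zip(share_a, share_b))
    return attempt == recombined

a, b = split_password(b"s3cret")
print(authenticate(b"s3cret", a, b))   # True
print(authenticate(b"wrong!", a, b))   # False
```

Either share alone is indistinguishable from random bytes; compromise of one backend server reveals nothing about the password.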

  • Always On: Architecture for High Availability Cloud Applications

    Publication Year: 2012 , Page(s): 1 - 5
    PDF (407 KB) | HTML

    With the shift towards cloud computing, application availability becomes a valid concern. Cloud applications have to be taken down to address various hardware and software problems. Even though redundancy insulates applications from hardware errors, software updates and patches still require the service to be taken down for the period of the update; organizations generally use low-activity hours for software updates. This paper proposes an architecture for constructing high-availability cloud applications that can be updated without shutting down the service. The paper discusses how the components of an application can be structured into request receiver nodes and request processor nodes: receivers receive the request and processors act on it. We provide a mechanism for detecting whether any worker or responder node is not working optimally, allowing the user to save its state and restart that particular node. Finally, we present scenarios wherein these nodes can be upgraded either individually or in batches. This reduces dependency on administrators and operations engineers in the data center, thereby reducing cost, which can directly benefit cost-sensitive emerging markets. Furthermore, it improves the service level agreement (SLA), since the application no longer has to be taken down for updates.
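The batched zero-downtime upgrade scenario can be sketched as follows; node names, batch size and the drain/restart steps are illustrative assumptions, not the paper's implementation.

```python
def rolling_upgrade(nodes, batch_size):
    """Upgrade nodes in batches; nodes outside the batch keep serving."""
    in_service = list(nodes)
    for i in range(0, len(nodes), batch_size):
        batch = nodes[i:i + batch_size]
        serving = [n for n in in_service if n not in batch]  # stay up
        # drain `batch` (save state), apply the update, restart, rejoin
        yield batch, serving

# Receiver nodes upgraded two at a time; two always remain in service:
for batch, serving in rolling_upgrade(["rx1", "rx2", "rx3", "rx4"], 2):
    print("upgrading", batch, "serving", serving)
```

The same loop applies to processor nodes; with a batch size smaller than the fleet, the service never drops to zero capacity during the update.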

  • An Optimized Resource Allocation Approach for Data-Intensive Workloads Using Topology-Aware Resource Allocation

    Publication Year: 2012 , Page(s): 1 - 4
    Cited by:  Papers (1)
    PDF (313 KB) | HTML

    This paper proposes an optimized resource allocation mechanism for Infrastructure-as-a-Service (IaaS) based cloud systems. The performance of distributed data-intensive applications is impacted significantly because current IaaS systems are usually unaware of the hosted application's requirements and hence allocate resources independently of its needs. To address this resource allocation problem and optimize the allocation, we enhance an architecture that adopts a "what if" methodology to guide the allocation decisions taken by the IaaS. The architecture uses a prediction engine with a lightweight simulator to estimate the performance of a given resource allocation, and an evolutionary algorithm, combining an evolution strategies algorithm and a genetic algorithm, to find an optimized solution in the large search space.
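A minimal (1+1) evolution strategy illustrates the "what if" loop; the cost function below is a stand-in for the paper's lightweight simulator, not its actual model, and the search variable (VMs per rack) is an invented example.

```python
import random

random.seed(1)

def simulate(vms_per_rack):
    """Stand-in simulator: penalize cross-rack traffic and over-packing."""
    cross_rack = max(0, 16 - vms_per_rack)       # pairs split across racks
    return cross_rack * 3 + vms_per_rack * 1.0   # predicted job runtime

def optimize(steps=200):
    """(1+1)-ES: mutate the allocation, keep it if the simulator approves."""
    best = random.randint(1, 32)
    for _ in range(steps):
        cand = min(32, max(1, best + random.choice([-2, -1, 1, 2])))
        if simulate(cand) <= simulate(best):     # "what if" check
            best = cand
    return best

print(optimize())
```

Every candidate allocation is scored by the simulated prediction before the IaaS would commit to it, which is the essence of the "what if" methodology; the real system searches a much larger space, hence the evolutionary machinery.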

  • Application of Artificial Neural Networks in Capacity Planning of Cloud Based IT Infrastructure

    Publication Year: 2012 , Page(s): 1 - 4
    PDF (294 KB) | HTML

    The cloud is gaining popularity as a means of saving the cost of IT ownership and accelerating time to market, thanks to ready-to-use, dynamically scalable computing infrastructure and software services offered on a pay-per-use basis. There is an important change in the way these infrastructures are assembled, configured and managed. In this research we consider the problem of managing computing infrastructure acquired from Infrastructure-as-a-Service (IaaS) providers to support the execution of web applications whose workload experiences huge fluctuations over time. The operating state of web applications on the cloud is determined by the workload, service rate and utility gain of the web services. As these parameters change dynamically, the exact relationship between them cannot be obtained using conventional methods; instead, the back-propagation training algorithm of artificial neural networks can be used. By training an artificial neural network with past data, we can estimate future numbers. In this paper we propose an artificial neural network (ANN) based model for guiding the capacity planning activity, and report an investigation of the application of ANNs to capacity planning of cloud-based infrastructure. A multi-layer feed-forward ANN with error back-propagation learning is proposed for calculating the number of reserved instances needed for future use. The Matlab Neural Network Toolbox is used to simulate the required ANN, considering Amazon Web Services as the IaaS provider.
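A toy version of the proposed model, using NumPy in place of the Matlab toolbox; the training data, network size and instance scaling are all invented for illustration. A one-hidden-layer feed-forward net is trained with error back-propagation to map a normalized workload level to a normalized count of reserved instances.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])   # past workload (normalized)
y = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])   # instances used (normalized)

W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    out = h @ W2 + b2
    err = out - y                            # back-propagate squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict_instances(load, max_instances=20):
    """Estimate reserved instances for a forecast workload level."""
    h = np.tanh(np.array([[load]]) @ W1 + b1)
    return round(float((h @ W2 + b2)[0, 0]) * max_instances)
```

On this toy data a forecast load of 0.6 yields roughly 12 reserved instances; the paper's model is trained on real historical workload traces instead.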

  • Approaches Towards Energy-Efficiency in the Cloud for Emerging Markets

    Publication Year: 2012 , Page(s): 1 - 6
    Cited by:  Papers (1)
    PDF (439 KB) | HTML

    With the growing importance of the cloud computing paradigm, it is a challenge for cloud providers to keep the operational costs of data centers in check, especially in the emerging markets, alongside catering to the customers' needs. It becomes essential to increase the operational efficiency of the data centers to be able to maximize VM (virtual machine) offerings at minimal cost. To that end, the energy-efficiency of the servers plays a critical role, as it influences the electrical and cooling costs which constitute a major part of the total cost of operating a data center. Power savings can be achieved at several different levels in a system: processors, memory, devices, and system-wide (involving powering down multiple components of a host all at once). At the processor level, depending on workload trends, we can exploit technologies like DVFS (Dynamic Voltage and Frequency Scaling) or P-states when the CPU is running, and CPU sleep states (C-states) when the CPU is idle, to save power. Memory standards such as DDR3 have provisions for putting idle memory banks into low-power states. At the device level, individual devices can be put into low-power states, controlled and coordinated by a run-time power management framework in the operating system. This paper outlines the state of the art in power-management technology on server hardware and describes how these raw features can be abstracted into a set of energy policies. We then explain how these policies or energy profiles can be used to run a cloud datacenter energy-efficiently. Further, this paper highlights some of the challenges involved in running cloud infrastructures in the emerging markets optimally despite some unique energy constraints.
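The policy abstraction can be sketched as a mapping from observed load to P-/C-states; the thresholds and the "balanced" profile below are assumptions for illustration, not values from the paper.

```python
def select_cpu_state(utilization, idle_ms):
    """Map observed core load to a power state under a 'balanced' profile."""
    if utilization == 0:
        # idle core: choose a deeper C-state the longer it has been idle,
        # trading wake-up latency for power savings
        return "C6" if idle_ms > 50 else "C1"
    if utilization < 0.3:
        return "P2"      # low frequency/voltage via DVFS
    if utilization < 0.7:
        return "P1"
    return "P0"          # full frequency for heavy load

print(select_cpu_state(0.0, 100))  # C6
print(select_cpu_state(0.9, 0))    # P0
```

An energy profile in the paper's sense would bundle many such mappings (CPU, memory, devices) so the cloud operator selects one policy rather than tuning each raw hardware feature.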

  • Building Resilient Cloud over Unreliable Commodity Infrastructure

    Publication Year: 2012 , Page(s): 1 - 5
    PDF (135 KB) | HTML

    Cloud Computing has emerged as a successful computing paradigm for efficiently utilizing managed compute infrastructure such as high-speed rack-mounted servers connected with high-speed networking and reliable storage. Usually such infrastructure is dedicated, physically secured and has reliable power and networking. However, much of our idle compute capacity is present in unmanaged infrastructure: idle desktops, lab machines, physically distant server machines, and laptops. We present a scheme to utilize this idle compute capacity on a best-effort basis and provide high availability even in the face of failure of individual components or facilities. We run virtual machines on the commodity infrastructure and present a cloud interface to our end users. The primary challenge is to maintain availability in the presence of node failures, network failures, and power failures. We run multiple copies of a virtual machine (VM) redundantly on geographically dispersed physical machines to achieve availability; if one running copy of a VM fails, we seamlessly switch over to another running copy. We use VM record/replay capability to implement this redundancy and switchover. So far, we have implemented VM record/replay for uniprocessor machines over Linux/KVM and are currently working on VM record/replay for shared-memory multiprocessor machines. We report initial experimental results based on our implementation.
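The record/replay redundancy idea, reduced to a log of non-deterministic inputs, can be sketched as below; this is a drastic simplification of real VM record/replay, and the class is an illustration, not the authors' KVM implementation.

```python
class VM:
    """Toy VM whose state depends only on logged non-deterministic inputs."""

    def __init__(self):
        self.state, self.log = 0, []

    def input_event(self, value):
        """Primary records each non-deterministic input as it executes."""
        self.log.append(value)
        self.state += value

    def replay(self, log):
        """Backup consumes the recorded log to reach the identical state."""
        for value in log:
            self.state += value

primary, backup = VM(), VM()
for v in (3, 4, 5):
    primary.input_event(v)
backup.replay(primary.log)             # backup catches up from the log
print(backup.state == primary.state)   # True
```

Because replay reproduces the primary's state exactly, the backup copy can take over transparently if the primary's node, network or power fails.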

  • Cloud Enabling Data Centers for Optimized Development and Test Operations

    Publication Year: 2012 , Page(s): 1 - 5
    PDF (379 KB) | HTML

    Development organizations today are required to reduce capital expense and maximize existing investment while delivering high-quality IT services, with better visibility and control, to large development and testing teams that are geographically dispersed. As such, cloud computing, with its widely touted benefits, makes a convincing case for adoption. The imperative is to consolidate, virtualize and cloud-enable data centers in order to reduce escalating costs and free up resources to support growth and innovation.

  • Cloud Monitor: Monitoring Applications in Cloud

    Publication Year: 2012 , Page(s): 1 - 4
    PDF (197 KB) | HTML

    With the advent of cloud computing applications, monitoring becomes a valid concern. Monitoring for failures in a cloud application is difficult because of multiple failure points spanning both hardware and software. Moreover, the clustered nature of a cloud application increases the scope of failure and makes failures even harder to detect. This paper presents Cloud Monitor, a scalable framework for monitoring cloud applications. Cloud Monitor monitors cluster nodes for errors and supports dependent monitors, redundancy, multiple notification levels and auto-healing, in a flexible architecture where users can add custom monitors and associated self-heal actions.
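The dependent-monitor and auto-heal behavior might look like this sketch; the API is an illustrative assumption, not Cloud Monitor's actual interface.

```python
class Monitor:
    """A check with an optional self-heal action and an optional parent."""

    def __init__(self, name, check, heal=None, depends_on=None):
        self.name, self.check, self.heal = name, check, heal
        self.depends_on = depends_on
        self.status = "unknown"

    def run(self):
        if self.depends_on and self.depends_on.status != "ok":
            self.status = "skipped"      # parent failing: don't alert twice
            return self.status
        if self.check():
            self.status = "ok"
        elif self.heal:
            self.heal()                  # auto-heal, then re-check once
            self.status = "ok" if self.check() else "failed"
        else:
            self.status = "failed"
        return self.status

# A db monitor depends on the disk monitor and heals by remounting:
state = {"disk": False}
disk = Monitor("disk", check=lambda: True)
db = Monitor("db", check=lambda: state["disk"],
             heal=lambda: state.update(disk=True), depends_on=disk)
disk.run()
print(db.run())   # heal flips the state, so "ok"
```

Skipping children of a failed parent is what keeps a single root cause from fanning out into a storm of notifications across the cluster.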

  • Cloud Service Costing Challenges

    Publication Year: 2012 , Page(s): 1 - 6
    Cited by:  Papers (4)
    PDF (291 KB) | HTML

    Today's smarter consumers and enterprises are not looking to spend much up front or to get tied down to a vendor or brand. This benefits consumers to a large extent by reducing their capital expenses and operational costs, and also allows them to focus on their core business and leave support needs to the service provider. This challenge is resolved to an extent by a highly competitive service market. From the provider's end, however, enabling any service incurs cost, and this expense needs to be recovered, with a gain on the investment, by defining the right pricing model. Cloud computing is no different: cloud enablement itself involves significant cost, which has to be recovered from the consumer by adopting a competitive pricing model. Hence, proper costing of a cloud offering is always important to lead in the market. In this paper, the different capital and operational expenditures for enabling a cloud service are detailed. The cost depreciation factors that need to be considered for a cloud model are also assessed with a practical scenario. The paper further covers sample costing models for an IT profit center and a cost center for a private cloud.
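The capex/opex recovery arithmetic can be made concrete with a small worked example; all figures and the straight-line depreciation choice are invented for illustration, not taken from the paper.

```python
def monthly_cost_per_vm(capex, depreciation_years, monthly_opex, vms,
                        margin=0.2):
    """Straight-line depreciation of capex plus opex, spread across VMs.

    `margin` is the target gain on investment for a profit center;
    a cost center would set it to 0.
    """
    monthly_capex = capex / (depreciation_years * 12)
    base = (monthly_capex + monthly_opex) / vms
    return round(base * (1 + margin), 2)

# e.g. $360k of servers depreciated over 3 years, $5k/month power + staff,
# recovered across 100 tenant VMs at a 20% margin:
print(monthly_cost_per_vm(360_000, 3, 5_000, 100))  # 180.0
```

The profit-center vs cost-center distinction the paper draws reduces, in this sketch, to whether the margin term is applied on top of the recovered cost.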

  • Data Migration Using Active Cloud Engine

    Publication Year: 2012 , Page(s): 1 - 4
    PDF (226 KB) | HTML

    With information being the key advantage in today's world, its growth rate and volume require big data analysis, which is a key challenge. The collection and retention of such data results in massive growth, which creates the need for infrastructure expansion, replacement and proper disposition of existing data. This important data should not be scrapped or forgotten; instead it should be massaged and tailored for the new system, leading into the world of data migration. Organizations are looking for a cloud-based storage solution that relies on a highly efficient storage infrastructure to support rapid large-scale operations without losing access to any data. That is where IBM Active Cloud Engine (ACE) [10] comes into the picture: it enhances the process of data migration by caching world-wide data and making it available locally, with zero downtime.

  • Detecting Workload Hotspots and Dynamic Provisioning of Virtual Machines in Clouds

    Publication Year: 2012 , Page(s): 1 - 4
    PDF (271 KB) | HTML

    One of the primary goals of cloud computing is to provide reliable QoS. Users may access their cloud applications from any region, so the cloud infrastructure must be elastic enough to meet QoS requirements. To provide reliable QoS, the infrastructure must be able to detect potential workload hotspots for the various cloud applications across regions and take appropriate measures. This paper presents an approach to detecting workload hotspots in the cloud based on application access patterns, and shows how the existing VDN-based virtual machine provisioning approach [1] can be used to provision new virtual appliances at the detected hotspots dynamically and efficiently to improve QoS.
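The access-pattern idea can be sketched as a threshold on each region's share of recent requests; the threshold value and the region names are assumptions, not the paper's method in detail.

```python
def detect_hotspots(access_counts, threshold=0.4):
    """Return regions whose share of an application's recent requests
    exceeds the threshold, i.e. candidates for provisioning a new
    virtual appliance nearby."""
    total = sum(access_counts.values())
    return sorted(r for r, n in access_counts.items()
                  if total and n / total >= threshold)

counts = {"us-east": 10, "ap-south": 55, "eu-west": 35}
print(detect_hotspots(counts))   # ['ap-south']
```

A detected hotspot would then feed the provisioning step: spin up a virtual appliance in (or near) the hot region so subsequent requests are served locally.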

  • Enterprise Compatible Cloud Object Storage and Synchronization Service

    Publication Year: 2012 , Page(s): 1 - 6
    PDF (1116 KB) | HTML

    With the kind of growth that enterprises are witnessing, their enterprise data, especially unstructured data, is growing like never before. This trend has led to increased competition among cloud storage services, which provide a wide range of options for storing data online. However, the access APIs offered by most cloud storage service providers are rarely in accord with each other. Sooner or later a company may need to find a new provider to store its data, e.g., if a new provider offers better service or a cheaper cost, and this lack of a common cloud API standard makes it difficult to move data from one service provider to another, becoming a barrier to adoption for some customers. A common answer to this vendor lock-in lies in adopting common standards. The most promising direction is SNIA's Cloud Data Management Interface (CDMI), which describes the semantics of handling containers and data objects, including their metadata. In this paper we present our prototype implementation (COSS, Cloud Object Storage and Synchronization) of a CDMI-compatible enterprise cloud object storage service. We also present a synchronization service, an application that runs over the object store, and explore how cloud-based clients might take advantage of it.
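A CDMI-style data-object create, following the spec's application/cdmi-object convention, can be sketched as below; the container name, object name and metadata are examples, and the helper only assembles the HTTP pieces rather than talking to a real COSS endpoint.

```python
import json

def cdmi_put_object(container, name, value, metadata=None):
    """Build the HTTP method, path, headers and body for creating a
    CDMI data object inside a container."""
    headers = {
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
        "X-CDMI-Specification-Version": "1.0.2",
    }
    body = json.dumps({
        "mimetype": "text/plain",
        "metadata": metadata or {},
        "value": value,
    })
    return "PUT", f"/{container}/{name}", headers, body

method, path, headers, body = cdmi_put_object("backups", "report.txt", "hello")
print(method, path)   # PUT /backups/report.txt
```

Because any CDMI-compatible store accepts the same request shape, a client written against this interface can move its data between providers without code changes, which is exactly the lock-in relief the abstract argues for.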
