
Fourth ChinaGrid Annual Conference, 2009 (ChinaGrid '09)

Date: 21-22 Aug. 2009


Displaying Results 1 - 25 of 49
  • [Front cover]

    Publication Year: 2009 , Page(s): C1
  • [Title page i]

    Publication Year: 2009 , Page(s): i
  • [Title page iii]

    Publication Year: 2009 , Page(s): iii
  • [Copyright notice]

    Publication Year: 2009 , Page(s): iv
  • Table of contents

    Publication Year: 2009 , Page(s): v - viii
  • Preface

    Publication Year: 2009 , Page(s): ix
  • Conference Committee

    Publication Year: 2009 , Page(s): x - xi
  • Market-oriented cloud computing: vision, hype, and reality of delivering computing as the 5th utility

    Publication Year: 2009 , Page(s): xii - xv

    Computing is being transformed to a model consisting of services that are commoditised and delivered in a manner similar to utilities such as water, electricity, gas, and telephony. In such a model, users access services based on their requirements, without regard to where the services are hosted. Several computing paradigms have promised to deliver this utility computing vision, including Grid computing, P2P computing, and, more recently, Cloud computing. The latter term denotes the infrastructure as a "Cloud" from which businesses and users can access applications on demand from anywhere in the world. Cloud computing delivers infrastructure, platform, and software (applications) as subscription-based, pay-as-you-go services; in industry these are referred to respectively as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). To realize Cloud computing, vendors such as Amazon, HP, IBM, and Sun are starting to create and deploy Clouds in various locations around the world. In addition, companies with global operations require faster response times, and thus save time by distributing workload requests to multiple Clouds in various locations at the same time. This creates the need for a computing environment that can dynamically interconnect and provision Clouds from multiple domains within and across enterprises. There are many challenges involved in creating such Clouds and Cloud interconnections.
    This keynote talk (1) presents the 21st-century vision of computing and identifies various IT paradigms that promise to deliver the vision of computing utilities; (2) defines an architecture for creating market-oriented Clouds and such a computing environment by leveraging technologies such as VMs; (3) offers thoughts on market-based resource management strategies encompassing both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out in our recent cloud computing initiative, called Megha: (i) Aneka, a software system providing PaaS within private or public Clouds and supporting market-oriented resource management, (ii) internetworking of Clouds for the dynamic creation of federated computing environments to scale elastic applications, (iii) third-party Cloud brokering services for content delivery network and e-Science applications, deployed on the capabilities of IaaS providers such as Amazon and Nirvanix along with Grid mashups, and (iv) CloudSim, which supports modelling and simulation of Clouds for performance studies; and (5) concludes with the need for convergence of competing IT paradigms to deliver our 21st-century vision, along with pathways for future research.

  • Reliability Analysis Approach of Grid Monitoring Architecture

    Publication Year: 2009 , Page(s): 3 - 9

    Grid monitoring, an important part of any grid system, is needed to query the state of grid resources and to match user requirements with available grid resources. To ensure the availability of grid monitoring, the reliability impact of software or hardware failures, which occur with unpredictable probability, must be assessed. This paper studies the reliability analysis of grid monitoring in the context of the Grid Monitoring Architecture (GMA), which has become the de facto standard in many areas of grid computing. Failure types and contributing factors in GMA are analyzed; these are likely to arise in the constituent components, channels, or process behaviors. The corresponding evaluation equations are then derived via Markov processes, queueing models, and probability theory. Furthermore, the reliability of hierarchical GMA is discussed based on four basic architectural relations. Numerical examples illustrate the proposed equations, and the results show that the approach is feasible.
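    The kind of Markov-based component reliability analysis the abstract mentions can be illustrated with a small sketch. This is not the paper's actual model: it treats each GMA component as a two-state (up/down) continuous-time Markov chain with hypothetical failure and repair rates, and composes components in series.

    ```python
    # Hypothetical sketch: steady-state availability of a monitoring component
    # modelled as a two-state (up/down) Markov chain, and serial composition
    # of several such components. Rates are illustrative, not from the paper.

    def component_availability(failure_rate: float, repair_rate: float) -> float:
        """Steady-state availability A = mu / (lambda + mu)."""
        return repair_rate / (failure_rate + repair_rate)

    def serial_availability(components):
        """A serial chain is available only if every component is up."""
        result = 1.0
        for lam, mu in components:
            result *= component_availability(lam, mu)
        return result

    # Example: a producer, a registry, and a consumer in series.
    parts = [(0.01, 1.0), (0.005, 0.5), (0.02, 2.0)]
    print(round(serial_availability(parts), 4))  # → 0.9706
    ```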

  • FCCS: File Classification Caching Service Based on RAM Grid

    Publication Year: 2009 , Page(s): 10 - 15

    Memory-intensive and I/O-intensive applications often suffer from the poor performance of disk swapping when memory is inadequate. RAM Grid, which combines network memory, service-oriented computing, and grid computing technology, aims to solve these problems. Although it excels at providing shared memory to improve system performance, RAM Grid cannot guarantee load balance on busy nodes or optimal efficiency of the remote file cache. After studying the data placement policies of large-scale network storage systems, we propose a File Classification Caching Service (FCCS) based on RAM Grid, which promises fairness and high availability and addresses these shortcomings. Experimental results show that FCCS improves system performance greatly.

  • To Improve Throughput via Multi-pathing and Parallel TCP on Each Path

    Publication Year: 2009 , Page(s): 16 - 21
    Cited by:  Papers (1)  |  Patents (2)

    Parallel TCP, which opens multiple TCP connections over a single direct path, and Multi-Pathing, which concurrently uses multiple disjoint paths to transfer data, have both proved to be effective methods for improving end-to-end throughput. How much throughput can ultimately be achieved between a source and a destination if we use multiple overlay paths and open multiple TCP connections on each path? To find all overlay paths of good quality between a source and a destination, the destination starts a path probing process similar to the path discovery protocol of IEEE 802.5. A probing packet (a TCP connection request followed by padding data) is flooded across an overlay between the destination and the source; intermediate overlay nodes selectively accept and forward probing packets, and if a probing packet is accepted, a corresponding TCP connection is created. Trade-offs are then made between reducing the probing traffic and keeping multiple TCP connections on each path. The source stripes data into small packets and adaptively assigns them to the selected overlay paths according to the changing quality of each path. The proposed data transfer technique is evaluated on an overlay of 15 servers on the Internet in China, spanning 3 different autonomous systems. Experiments show that with this technique, 54% of the measured samples yield a throughput above 60 Mb/s, which is 60% of the bandwidth that could possibly be obtained (the access bandwidth is 100 Mb/s for all servers). By comparison, with a direct path and with Parallel TCP, fewer than 1% and 25% of the measured samples, respectively, reach the same level of throughput.
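    The adaptive assignment step described in the abstract, striping data across paths according to each path's changing quality, can be sketched roughly as follows. This is an illustrative approximation, not the authors' code; path names and throughput figures are made up.

    ```python
    # Illustrative sketch: stripe data chunks across several overlay paths in
    # proportion to each path's recently measured throughput (Mb/s).

    def assign_chunks(num_chunks, throughputs):
        """throughputs: path name -> measured throughput; returns chunk counts."""
        total = sum(throughputs.values())
        counts = {p: int(num_chunks * t / total) for p, t in throughputs.items()}
        # Hand leftover chunks (from rounding down) to the fastest path.
        leftover = num_chunks - sum(counts.values())
        fastest = max(throughputs, key=throughputs.get)
        counts[fastest] += leftover
        return counts

    paths = {"pathA": 60.0, "pathB": 30.0, "pathC": 10.0}  # hypothetical
    print(assign_chunks(100, paths))  # → {'pathA': 60, 'pathB': 30, 'pathC': 10}
    ```

    In a real transfer the throughput estimates would be refreshed continuously, so the assignment tracks changing path quality.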

  • Double Redundant Fault-Tolerance Service Routing Model in ESB

    Publication Year: 2009 , Page(s): 22 - 27
    Cited by:  Papers (3)

    With the development of Service-Oriented Architecture (SOA), the Enterprise Service Bus (ESB) is becoming more and more important for managing large numbers of services. Its main function is service routing, which focuses on delivering messages among different services. Several routing patterns have been implemented to carry out this messaging, but they all rely on statically configured service routing: once a service fails during operation, the system cannot detect the fault, and the overall business function eventually fails as well. To solve this problem, we present a double redundant fault-tolerant service routing model. The model's double redundant fault-tolerance mechanism and algorithm guarantee that if the original service fails, a replica service with the same function automatically returns the response message instead. The service requester receives the response transparently, without needing to know where it came from. In addition, the state of the failed service is recorded for service management. Finally, we evaluate the performance of the model. Our analysis shows that introducing double redundancy noticeably improves the fault-tolerance of service routing, overcoming the limitations of existing static service routing and ensuring reliable messaging in SOA.
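    The failover behaviour the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's ESB implementation: the router tries the primary service, falls back to a replica with the same function, and records the failure for service management.

    ```python
    # Minimal failover sketch: invoke the primary service; on failure, retry
    # a replica transparently and record the failed service's name.

    failed_services = []  # state of failed services, kept for management

    def route(request, services):
        """services: ordered list of (name, endpoint) pairs, primary first."""
        for name, endpoint in services:
            try:
                return endpoint(request)
            except Exception:
                failed_services.append(name)
        raise RuntimeError("all replicas failed")

    def primary(req):   # hypothetical service that always fails
        raise ConnectionError("primary down")

    def replica(req):   # replica offering the same function
        return f"ok:{req}"

    print(route("msg", [("primary", primary), ("replica", replica)]))  # → ok:msg
    print(failed_services)  # → ['primary']
    ```

    The requester only sees the response; which replica produced it stays hidden, matching the transparency property claimed in the abstract.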

  • A Resource Allocation Method for Computational Grids Based on On-line Reverse Auction

    Publication Year: 2009 , Page(s): 28 - 31
    Cited by:  Papers (1)

    Resource allocation and task scheduling are two key technologies in grid computing systems, and market-based resource allocation models are considered promising. In this paper, an on-line reverse auction method for resource allocation in computational grids is proposed, addressing resource management in light of the dynamic characteristics of computing resources in the computational grid environment and the advantages of economic mechanisms. In this method, the current price is set using earlier bids; bidders arrive one by one, and the on-line buyer must decide immediately on each bid as it is received. We prove that the algorithm is incentive compatible and simulate the auction protocol in GridSim to evaluate its communication demand.
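    The on-line decision rule can be illustrated with a toy sketch. The threshold rule used here (accept a bid that undercuts the mean of all bids seen so far) is an assumption for illustration, not the paper's exact mechanism.

    ```python
    # Hypothetical on-line reverse auction: sellers bid one at a time and the
    # buyer must accept or reject each bid immediately, using a price
    # threshold derived from former bids (here: their running mean).

    def online_reverse_auction(bids):
        accepted, seen = [], []
        for price in bids:
            if not seen or price < sum(seen) / len(seen):
                accepted.append(price)
            seen.append(price)  # the current price is set using former bids
        return accepted

    print(online_reverse_auction([10.0, 8.0, 12.0, 7.0, 11.0]))  # → [10.0, 8.0, 7.0]
    ```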

  • Load Balancing on the Exchanged Hypercube

    Publication Year: 2009 , Page(s): 32 - 35

    The exchanged hypercube is an interconnection network obtained by systematically removing some links from a binary hypercube. In parallel systems, load balancing is a very important factor affecting the performance of the whole system, so distributing tasks evenly across processors is essential for multiprocessor computing systems. Based on the classical dimension-exchange (DE) algorithm, in this paper we propose a load balancing algorithm for the exchanged hypercube architecture. We also theoretically prove the correctness of the proposed algorithm and use a case study to further illustrate it.
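    For context, the classical DE scheme the paper builds on can be sketched on a full binary hypercube: loads are averaged pairwise along one dimension at a time. The exchanged-hypercube variant (which must cope with removed links) is not reproduced here.

    ```python
    # Sketch of classical dimension-exchange (DE) load balancing on a
    # d-dimensional binary hypercube of 2^d nodes.

    def dimension_exchange(loads):
        n = len(loads)
        d = n.bit_length() - 1
        assert 1 << d == n, "node count must be a power of two"
        loads = loads[:]
        for dim in range(d):
            for node in range(n):
                nbr = node ^ (1 << dim)   # neighbour across this dimension
                if node < nbr:            # balance each link exactly once
                    avg = (loads[node] + loads[nbr]) / 2
                    loads[node] = loads[nbr] = avg
        return loads

    print(dimension_exchange([8, 0, 4, 4]))  # → [4.0, 4.0, 4.0, 4.0]
    ```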

  • CampusWare: An Easy-to-Use, Efficient and Portable Grid Middleware for Compute-Intensive Applications

    Publication Year: 2009 , Page(s): 36 - 43

    This paper describes the design and implementation of CampusWare, a lightweight grid middleware for compute-intensive applications in campus environments. Although many grid middleware systems are available, they are unnecessarily complex for the campus environment, where the main applications are compute-intensive and the main requirements are convenience and efficiency in cluster management and job submission. To address this, CampusWare proposes a "fast job" concept and provides a two-layer middleware architecture as well as a three-layer user account hierarchy. Compared with existing grid middleware, its deployment, configuration, and usage are all simplified, making it easy to use, efficient, and portable.

  • GridDEV: A Platform for Service Grid Evaluation

    Publication Year: 2009 , Page(s): 44 - 50

    GridDEV is a DEVS-based platform for evaluating service grids. Through an analysis of SOA, we characterize the important features of a service grid and select four components as the basis for grid modeling. Using DSDEVS, this paper models the selected basic components as well as the relationships between them. The resulting model fully formalizes the grid system and hence facilitates the design and optimization of grid services. Based on this model, we map the DEVS components onto elements of the simulator development library SimGrid and construct the GridDEV platform for evaluating grid performance and reliability.

  • Multi-objective Optimization Approaches Using a CE-ACO Inspired Strategy to Improve Grid Jobs Scheduling

    Publication Year: 2009 , Page(s): 53 - 58

    Grid scheduling is one of the most crucial issues in a grid environment because it strongly affects the performance of the whole system. Since allocating jobs to resources is a combinatorial optimization problem, and NP-complete, several heuristics have been proposed to provide good performance. The approach proposed in this paper uses a stochastic optimization technique called the cross-entropy (CE) method. The CE method efficiently tackles the initialization-sensitivity problem of the ant colony optimization (ACO) algorithm for multi-objective scheduling, accelerating the convergence rate and improving the ability to find an optimal solution. Simulation shows that it outperforms ACO on the combined performance metrics.

  • Backward Planning: A Simple and Efficient Method to Improve the Performance of List Scheduling Algorithms

    Publication Year: 2009 , Page(s): 59 - 66

    Scheduling task graphs represented by DAGs (directed acyclic graphs) in heterogeneous environments is not a new problem, yet nearly all existing heuristics use forward planning. This paper introduces a novel backward planning (BWP) procedure for this field. Quite unlike forward planning methods, BWP delays each task's start time as late as possible, in an attempt to delay the start times of the entry nodes and ultimately shorten the makespan of the schedule. Combining BWP with traditional list scheduling, the paper proposes a two-pass forward/backward (F/B) scheduling technique that can significantly improve the performance of the original algorithm. Using the well-known HEFT algorithm as an example, we evaluate and compare two scheduling techniques: our F/B technique and the traditional insertion-based technique used by HEFT and several other list scheduling algorithms. The experimental results indicate that the F/B technique outperforms the other on all metrics considered in our evaluation.
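    The core backward-planning computation can be sketched as a latest-start-time pass over a DAG: given a target makespan, each task is pushed as late as its successors allow. The task graph and weights below are made up for illustration; communication costs and processor assignment, which the full algorithm must handle, are omitted.

    ```python
    # Illustrative backward pass: latest start time of a task equals the
    # minimum latest start time among its successors, minus the task's own
    # execution time; exit tasks finish exactly at the makespan.

    def latest_start_times(succ, weight, makespan):
        """succ: task -> list of successors; weight: task -> execution time."""
        memo = {}
        def lst(t):
            if t in memo:
                return memo[t]
            if not succ[t]:                      # exit task
                memo[t] = makespan - weight[t]
            else:
                memo[t] = min(lst(s) for s in succ[t]) - weight[t]
            return memo[t]
        return {t: lst(t) for t in succ}

    succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    weight = {"A": 2, "B": 3, "C": 1, "D": 2}
    print(latest_start_times(succ, weight, makespan=7))
    # → {'A': 0, 'B': 2, 'C': 4, 'D': 5}
    ```

    Note how task C (latest start 4) is delayed well past its earliest possible start, which is exactly the slack BWP exploits.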

  • MPICH-G-DM: An Enhanced MPICH-G with Supporting Dynamic Job Migration

    Publication Year: 2009 , Page(s): 67 - 76

    Grids are attracting more and more attention for their massive computational capacity, and tools such as the Globus Toolkit and MPICH-G2 have been developed to help scientists with their research. As a Grid-enabled implementation of MPI, MPICH-G2 helps developers port parallel applications to cross-domain environments. Since current computationally intensive parallel applications, especially long-running tasks, require high availability as well as a high-performance computing platform, dynamic job migration in Grid environments has become an essential issue. In this study, we present MPICH-G-DM, a version of MPICH-G2 with dynamic job migration. We use the Virtual Job Model (VJM) to reserve resources for migrating jobs in advance, improving system efficiency. An Asynchronous Migration Protocol (AMP) is proposed to enable migrating sub-jobs to checkpoint/restart and update their new addresses concurrently without global synchronization. To reduce the communication overhead of job migration, MPICH-G-DM keeps the number of control messages among domains to O(N). Experimental results show that MPICH-G-DM is effective and reliable.

  • Ant Algorithm with Execution Quality Based Prediction in Grid Scheduling

    Publication Year: 2009 , Page(s): 77 - 83

    Task scheduling is an important job in Grid computing and also a hard, complex problem, and many task scheduling algorithms have been proposed in past research. The ant algorithm is a heuristic algorithm whose inherent parallelism and scalability suit the requirements of complex task scheduling in Grid computing. In this paper, we propose an improved ant algorithm for our power grid environment, with several improvements to the pheromone calculation and the task issue method. The improved algorithm is more responsive to the power grid environment and more robust under heavy workload.

  • The Priority Tasks Scheduling Algorithm Based on Grid Resource Prediction

    Publication Year: 2009 , Page(s): 84 - 87

    Based on the dependences and deadlines of grid workflow tasks and on the effective degrees and MIPS ratings of grid resources, a new algorithm, the priority tasks scheduling algorithm based on grid resource prediction, is presented. The algorithm uses a DAG to find the critical path, obtain the deadline of every task, and compute each task's priority (PRI). The algorithm takes the following into consideration: the user's request, the type of resources, and the re-scheduling of failed tasks. The results show that the algorithm is effective.

  • Research on Multi-QoS On-line Scheduling Based on Fuzzy Theory in the Grid

    Publication Year: 2009 , Page(s): 88 - 92

    As grid applications come into wide use, users are no longer satisfied with the current grid infrastructure's "do my best" services and hope the grid system can provide services with guaranteed quality. Furthermore, as the user base expands, the service should become much easier to use and should match applications' particular requirements. Users' QoS (Quality of Service) requirements are subjective, fuzzy, and incompletely specified, so to address these problems this paper proposes a method named FuzzyQoS. It handles users' multi-QoS requirements in on-line scheduling using fuzzy decision-making theory based on their preferences. The method not only normalizes the multi-QoS requirements, with their subjectivity and fuzziness, but also highlights the QoS requirements the user prefers. It is shown that, for the same system makespan, the degree of user satisfaction is close to that of other multi-QoS methods and significantly improved over one-dimensional QoS scheduling. Most importantly, the method does not require users to be professionals: it is simple, effective, and sensitive to users' preferences.

  • Distributed Metadata Management Based on Hierarchical Bloom Filters in Data Grid

    Publication Year: 2009 , Page(s): 95 - 101
    Cited by:  Patents (1)

    Distributed metadata management is an important issue in the design and implementation of Data Grids. The key challenges lie in the strategies for metadata synchronization and the representation of the distributed metadata. We have designed a Hierarchical Bloom Filter, consisting of two levels of Bloom filters, to facilitate metadata management: a Recent Bloom Filter at the top level is built from the list of recently accessed files, while a Summary Bloom Filter at the bottom level represents the entire set of files. Furthermore, we propose a novel update scheme to keep the Recent Bloom Filters synchronized among metadata servers. Each metadata server can use the Hierarchical Bloom Filters to reduce the update frequency and the network overhead. The experimental results show that Hierarchical Bloom Filters markedly improve the performance and scalability of the Data Grid.
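    The two-level lookup described in the abstract can be sketched as follows. Filter sizes, the hash construction, and the file paths are illustrative choices, not the paper's parameters.

    ```python
    # Sketch of a two-level Bloom filter: a small "Recent" filter over
    # recently accessed files is consulted first, falling back to a larger
    # "Summary" filter over the entire file set.

    import hashlib

    class BloomFilter:
        def __init__(self, size, hashes=3):
            self.size, self.hashes, self.bits = size, hashes, 0

        def _positions(self, item):
            for i in range(self.hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p

        def __contains__(self, item):
            return all(self.bits >> p & 1 for p in self._positions(item))

    class HierarchicalBloomFilter:
        def __init__(self):
            self.recent = BloomFilter(256)    # recently accessed files
            self.summary = BloomFilter(4096)  # entire file set

        def add(self, path, recent=True):
            self.summary.add(path)
            if recent:
                self.recent.add(path)

        def __contains__(self, path):
            # Cheap check on the small recent filter first, then fall back.
            return path in self.recent or path in self.summary

    hbf = HierarchicalBloomFilter()
    hbf.add("/data/run1.dat")
    print("/data/run1.dat" in hbf)  # → True
    ```

    Because only the small Recent filter needs frequent synchronization, updates between metadata servers stay cheap, which is the point of the hierarchy.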

  • STBucket: A Self-Tuning Bucket Index in DAS Paradigm

    Publication Year: 2009 , Page(s): 102 - 109

    In the Database-As-a-Service (DAS) paradigm, data owners outsource their data to a third-party service provider. Since the service provider is untrusted, the data must be encrypted before being outsourced. Various approaches have been proposed for querying encrypted data, among which bucket-based methods are effective. However, previous research considers the data distribution only with respect to a given workload, which is ineffective when workload behavior changes. In this paper, we propose a self-tuning bucket scheme, STBucket. By gathering and analyzing query feedback, STBucket adapts to the workload through online bucket splitting and merging. Experimental results show that STBucket is workload-aware and performs well with reasonable overhead.
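    The underlying bucket idea can be illustrated with a toy sketch: plaintext values are mapped to coarse bucket tags stored alongside the ciphertext, so the untrusted server can pre-filter range queries without seeing the data. The fixed boundaries below are an assumption for illustration; STBucket's feedback-driven splitting and merging of buckets is not reproduced.

    ```python
    # Illustrative bucketization for the DAS setting: values map to bucket
    # indices, and a range query translates to a set of bucket tags the
    # server must return for client-side post-filtering.

    def bucket_tag(value, boundaries):
        """Return the index of the bucket containing value."""
        for i, upper in enumerate(boundaries):
            if value < upper:
                return i
        return len(boundaries)

    def buckets_for_range(lo, hi, boundaries):
        """Bucket tags the server must return for a range query [lo, hi]."""
        return list(range(bucket_tag(lo, boundaries), bucket_tag(hi, boundaries) + 1))

    bounds = [25, 50, 75]          # buckets: [..,25), [25,50), [50,75), [75,..)
    print(bucket_tag(42, bounds))           # → 1
    print(buckets_for_range(30, 80, bounds))  # → [1, 2, 3]
    ```

    Coarser buckets leak less about the data but force the client to filter more false positives; tuning that trade-off to the workload is what a self-tuning scheme automates.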

  • A Data-Intensive Workflow Scheduling Algorithm for Grid Computing

    Publication Year: 2009 , Page(s): 110 - 115

    Data-intensive workflows in scientific and enterprise grids have gained popularity in recent times. A data-intensive workflow needs to access, process, and transfer large datasets, each of which may be replicated on different data hosts. Because of the large datasets, the execution time is bounded by the cost of data transfer, so minimizing the time spent transferring these datasets to the computational resources where the workflow tasks execute requires selecting appropriate computational and data resources. In this paper, we introduce MDTT, an algorithm that selects the resource set to which each task should be mapped. Our experiments show that the algorithm minimizes both the total makespan of a data-intensive workflow and the data transfer time.
