
e-Science and Grid Computing, IEEE International Conference on

Date: 10-13 Dec. 2007


Displaying Results 1 - 25 of 84
  • Third IEEE International Conference on e-Science and Grid Computing - Cover

    Publication Year: 2007 , Page(s): c1
    PDF (166 KB)
    Freely Available from IEEE
  • Third IEEE International Conference on e-Science and Grid Computing - Title

    Publication Year: 2007 , Page(s): i - iii
    PDF (118 KB)
    Freely Available from IEEE
  • Third IEEE International Conference on e-Science and Grid Computing - Copyright

    Publication Year: 2007 , Page(s): iv
    PDF (72 KB)
    Freely Available from IEEE
  • Message from the Conference Chairs

    Publication Year: 2007 , Page(s): xii - xiii
    PDF (270 KB) | HTML
    Freely Available from IEEE
  • Message from the Program Chairs

    Publication Year: 2007 , Page(s): xiv
    PDF (126 KB) | HTML
    Freely Available from IEEE
  • Program Committee

    Publication Year: 2007 , Page(s): xv - xvii
    PDF (148 KB)
    Freely Available from IEEE
  • Reviewers

    Publication Year: 2007 , Page(s): xviii - xix
    PDF (25 KB)
    Freely Available from IEEE
  • Large-Scale ATLAS Simulated Production on EGEE

    Publication Year: 2007 , Page(s): 3 - 10
    PDF (646 KB) | HTML

    In preparation for first data at the LHC, a series of Data Challenges of increasing scale and complexity has been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable continuous production, as is required in the immediate run-up to first data, and thereafter. Here, we discuss the experience of the production done on EGEE resources, using submission based on the gLite WMS, CondorG, and a system using Condor glide-ins. The overall walltime efficiency of around 90% is largely independent of the submission method, and the dominant source of wasted CPU time is data-handling issues. The efficiency of grid job submission is significantly worse than this, and the glide-in method benefits greatly from factorising this out.
  • CDF Monte Carlo Production on LCG Grid via LcgCAF Portal

    Publication Year: 2007 , Page(s): 11 - 16
    PDF (735 KB) | HTML

    The improvements in the luminosity of the Tevatron Collider require large increases in computing requirements for the CDF experiment, which has to be able to increase proportionally the amount of Monte Carlo data it produces. This is, in turn, forcing the CDF collaboration to move beyond the use of dedicated resources and to exploit grid resources. CDF has been running a set of CDF Analysis Farms (CAFs), which are submission portals to dedicated pools, and LcgCAF is essentially a reimplementation of the CAF model designed to access grid resources using the LCG/EGEE middleware components. By means of LcgCAF, CDF users can submit analysis jobs with the same mechanism adopted for the dedicated farms, while the grid resources are accessed without any specific software requirements at the sites. This is achieved using Parrot for experiment code distribution and Frontier for run-condition database availability on the worker nodes. Currently many sites in Italy and across Europe are accessed through this portal to produce Monte Carlo data, and in one year of operations we expect about 100,000 grid jobs submitted by CDF users. We review here the setup used to submit jobs and retrieve the output, including the CDF-specific configuration of the grid components. The batch and interactive monitoring tools developed to allow users to verify job status during their lifetimes in the grid environment are described. We analyze the efficiency and typical failure modes of the current grid infrastructure, reporting the performance of the different parts of the system.
  • Rapid Prototyping Capabilities for Conducting Research of Sun-Earth System

    Publication Year: 2007 , Page(s): 17 - 24
    PDF (324 KB) | HTML

    This paper describes the requirements, design, and implementation progress of an e-Science environment to enable rapid evaluation of potential uses of NASA research products and technologies to improve future operational systems for societal benefit. The project is intended to be a low-cost effort focused on integrating existing open source, public domain, and/or community-developed software components and tools. Critical for success is a carefully designed implementation plan allowing for incremental enhancement of the scale and functionality of the system while maintaining an operational system and hardening its implementation. This has been achieved by rigorously following the principles of separation of concerns, loose coupling, and service-oriented architecture, employing Portlet (GridSphere), Service Bus (ServiceMix), and Grid (Globus) technologies, as well as introducing a new layer on top of the THREDDS data server. In the current phase, the system provides data access through a data explorer that allows the user to view the metadata and provenance of the datasets; invoke data transformations such as subsampling, reprojection, format translation, and de-clouding of selected data sets or collections; and generate simulated data sets approximating the data feeds from future NASA missions.
  • An Integrated Grid Portal for Managing Energy Resources

    Publication Year: 2007 , Page(s): 25 - 33
    PDF (773 KB) | HTML

    The discovery and management of energy resources, especially at locations in the Gulf of Mexico, requires an economical but technically advanced infrastructure. Research teams from Louisiana State University, University of Louisiana at Lafayette, and Southern University Baton Rouge are engaged in a collaborative effort to create a ubiquitous computing and monitoring system (UCoMS) for the discovery and management of energy resources. The UCoMS team has successfully addressed two difficult issues in this research: (1) the computational challenges faced by compute-intensive simulations for reservoir uncertainty analysis, which require thousands of simulations and deal with terabytes, and even petabytes, of data; and (2) the development of a prototype wireless sensor network (WSN) infrastructure to collect and process real-time data from production locations. While the former requires the intensive computational power of the UCoMS grid resources, the latter requires efficient interfacing between the WSN and the grid. A unified workflow analysis has been performed to ensure smooth operation of both efforts, and a unified portal has been created. This paper integrates the above two workflows and portals into a single platform. It illustrates the need for such integration for users with similar (but not identical) goals and describes how to partition users into groups with different access rights to ensure security within subgroups. Such a system can easily integrate future UCoMS sub-projects into a unified whole. Hence, our portal prototype serves as a good example of the benefits that may accrue from integrated workflows.
  • A Dynamic Critical Path Algorithm for Scheduling Scientific Workflow Applications on Global Grids

    Publication Year: 2007 , Page(s): 35 - 42
    Cited by:  Papers (14)  |  Patents (1)
    PDF (392 KB) | HTML

    Effective scheduling is a key concern for the execution of performance-driven grid applications. In this paper, we propose a dynamic critical path (DCP) based workflow scheduling algorithm that determines an efficient mapping of tasks by calculating the critical path in the workflow task graph at every step. It assigns priority to the task on the critical path that is estimated to complete earlier. Using simulation, we have compared the performance of our proposed approach with other existing heuristic and meta-heuristic scheduling strategies for workflows of different types and sizes. Our results demonstrate that the DCP-based approach can generate better schedules for most workflow types, irrespective of their size, particularly when resource availability changes frequently.
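    The critical-path computation at the heart of the DCP approach can be sketched as follows. This is an illustrative simplification, not the authors' algorithm; the workflow, task names, and runtime estimates are hypothetical.

    ```python
    # Sketch: find the critical (longest) path in a workflow task DAG,
    # the quantity a DCP-style scheduler recomputes after each mapping step.
    def critical_path(tasks, deps):
        """tasks: {name: estimated_runtime}; deps: {name: [predecessors]}.
        Returns (length, path) of the longest path through the DAG."""
        memo = {}

        def longest_to(t):
            # Longest path from any entry task up to and including t.
            if t not in memo:
                best_len, best_path = 0.0, []
                for p in deps.get(t, []):
                    plen, ppath = longest_to(p)
                    if plen > best_len:
                        best_len, best_path = plen, ppath
                memo[t] = (best_len + tasks[t], best_path + [t])
            return memo[t]

        return max((longest_to(t) for t in tasks), key=lambda x: x[0])

    # Hypothetical four-stage workflow with estimated runtimes.
    workflow = {"stage_in": 2, "simulate": 10, "analyze": 4, "stage_out": 1}
    deps = {"simulate": ["stage_in"], "analyze": ["simulate"],
            "stage_out": ["analyze"]}
    length, path = critical_path(workflow, deps)
    # length == 17; path == ["stage_in", "simulate", "analyze", "stage_out"]
    ```

    A scheduler in this style would map the head of the returned path first, then recompute the path as runtime estimates change.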
  • Semantic-Based On-demand Synthesis of Grid Activities for Automatic Workflow Generation

    Publication Year: 2007 , Page(s): 43 - 50
    Cited by:  Papers (2)
    PDF (229 KB) | HTML

    On-demand synthesis of grid activities can play a significant role in automatic workflow composition and in improving the quality of grid resource provisioning. However, in the grid, synthesis of activities has been largely ignored due to the limited expressiveness of representations of activity capabilities and the lack of resource management mechanisms adapted to take advantage of such activity synthesis. This paper introduces a new mechanism for automatic synthesis of available activities in the grid by applying ontology rules. Rule-based synthesis combines multiple primitive activities to form new compound activities. The synthesized activities can be provisioned as new or alternative options for negotiation as well as advance reservation. This is a major advantage compared to other approaches that focus only on resource matching and brokerage. Furthermore, the newly synthesized activities provide aggregated capabilities that otherwise may not be available, leading towards automatic generation of grid workflows. We developed a prototype to demonstrate the advantages of our approach.
  • Peer-to-Peer Based Grid Workflow Runtime Environment of SwinDeW-G

    Publication Year: 2007 , Page(s): 51 - 58
    Cited by:  Papers (5)
    PDF (573 KB) | HTML

    Nowadays, grid and peer-to-peer (p2p) technologies have become popular solutions for large-scale resource sharing and system integration. For e-science workflow systems, the grid is a convenient way of constructing new services by composing existing services, while p2p is an effective approach to eliminating performance bottlenecks and enhancing the scalability of such systems. However, existing workflow systems focus either on p2p or on grid environments and therefore cannot take advantage of both technologies. It is desirable to incorporate the two technologies in workflow systems. SwinDeW-G (Swinburne Decentralised Workflow for Grid) is a novel hybrid decentralised workflow management system that facilitates both grid and p2p technologies. It is derived from the former p2p-based SwinDeW system but redeveloped as grid services, with communication between peers conducted in a p2p fashion. This paper describes the system design and the functions of the runtime environment of SwinDeW-G.
  • The Data Playground: An Intuitive Workflow Specification Environment

    Publication Year: 2007 , Page(s): 59 - 68
    Cited by:  Papers (2)
    PDF (375 KB) | HTML

    Workflow systems are steadily finding their way into the work practices of scientists. This is particularly true in the in silico science of bioinformatics, where biological data can be processed by Web services. In this paper we investigate the potential of evolving users' interaction with workflow environments so that it more closely relates to the mode in which their day-to-day work is carried out. We present the Data Playground, an environment designed to encourage the uptake of workflow systems in bioinformatics through more intuitive interaction, by focusing the user on their data rather than on the processes. We implement a prototype plug-in for the Taverna workflow environment and show how it can promote the creation of workflow fragments by automatically converting the users' interactions with data and Web services into a more conventional workflow specification.
  • Intelligent Selection of Fault Tolerance Techniques on the Grid

    Publication Year: 2007 , Page(s): 69 - 76
    PDF (521 KB) | HTML

    The emergence of computational grids has led to an increased reliance on task schedulers that can guarantee the completion of tasks executed on unreliable systems. There are three common techniques for providing task-level fault tolerance on a grid: retrying, replicating, and checkpointing. While these techniques are variously successful at providing resilience to faults, each presents a tradeoff between performance and resource cost. As such, tasks with unique urgency requirements would ideally be placed using the most suitable technique; for example, urgent tasks are likely to prefer the replication technique, which guarantees timely completion, whereas low-priority tasks should not incur any extra resource cost in the name of fault tolerance. This paper introduces a placement and selection strategy which, by computing the utility of each fault tolerance technique in relation to a given task, finds the set of allocation options that optimizes the global utility. Heuristics which take into account the value offered by a user, the estimated resource cost, and the estimated response time of an option are presented. Simulation results show that the resulting allocations have improved fault tolerance, runtime, and profit, and allow users to prioritize their tasks.
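    The utility-based selection among the three techniques named in the abstract can be sketched like this. The utility function and all cost/time figures are invented for illustration only; the paper's actual heuristics are more elaborate.

    ```python
    # Sketch: pick the fault-tolerance technique (retry, replicate,
    # checkpoint) that maximizes a simple utility for a given task.
    def utility(value, cost, expected_time, deadline):
        """Value earned only if the task is expected to finish by its
        deadline, minus the resource cost of the chosen technique."""
        return (value if expected_time <= deadline else 0.0) - cost

    def choose_technique(task_value, deadline, options):
        """options: {technique: (resource_cost, expected_response_time)}.
        Returns the technique with the highest utility for this task."""
        return max(options,
                   key=lambda t: utility(task_value, *options[t], deadline))

    # An urgent, high-value task: replication costs more but finishes in time.
    options = {"retry": (1.0, 30.0),
               "replicate": (3.0, 12.0),
               "checkpoint": (2.0, 20.0)}
    best = choose_technique(task_value=10.0, deadline=15.0, options=options)
    # best == "replicate": only replication meets the 15 s deadline here.
    ```

    A low-priority task (small value, loose deadline) would instead favor the cheapest option under the same rule.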
  • From Monitoring Data to Experiment Information – Monitoring of Grid Scientific Workflows

    Publication Year: 2007 , Page(s): 77 - 84
    Cited by:  Papers (4)
    PDF (1762 KB) | HTML

    Monitoring of running scientific workflows (experiments) is important not only for observing their execution status, but also for collecting provenance, improving performance, extracting knowledge, and more. We propose an ontology model of experiment information which describes the execution of an experiment using well-defined semantics and aggregates various aspects of workflow execution, including provenance, performance, and resource information. Such multi-aspect, semantically rich information is indispensable for building knowledge services on top of it. We describe a grid workflow monitoring architecture which is necessary to collect and correlate workflow monitoring data. The process of aggregating monitoring data into experiment information is presented. Our approach is validated on a drug resistance ranking application running in the ViroLab virtual laboratory for infectious diseases.
  • A SLA-Oriented Management of Containers for Hosting Stateful Web Services

    Publication Year: 2007 , Page(s): 85 - 92
    Cited by:  Papers (4)
    PDF (569 KB) | HTML

    Service-oriented architectures provide integration and interoperability for independent and loosely coupled services. Web services and associated new standards such as WSRF are frequently used to realise such service-oriented architectures. In such systems, the autonomic principles of self-configuration, self-optimisation, self-healing and self-adaptation are desirable to ease management and improve robustness. In this paper we focus on extending the self-management and autonomic behaviour of a WSRF container connected by a structured P2P overlay network to monitor and rectify its QoS in order to satisfy its SLAs. The SLA plays an important role during two distinct phases in the life-cycle of a WSRF container: first, during service deployment, when services are assigned to containers in such a way as to minimise the threat of SLA violations; and second, during maintenance, when violations are detected and services are migrated to other containers to preserve QoS. In addition, as the architecture has been designed and built using standardised modern technologies and with high levels of transparency, conventional Web services can be deployed with the addition of an SLA specification.
  • Tracing Resource Usage over Heterogeneous Grid Platforms: A Prototype RUS Interface for DGAS

    Publication Year: 2007 , Page(s): 93 - 101
    Cited by:  Papers (1)
    PDF (376 KB) | HTML

    Tracing resource usage by Grid users is of utmost importance, especially in the context of large-scale scientific collaborations such as the High Energy Physics (HEP) community, to guarantee fairness of resource sharing, but many difficulties can arise when tracing the resource usage of distributed applications over heterogeneous Grid platforms. These difficulties are often related to a lack of interoperability of the accounting components across middlewares. This paper briefly describes the architecture and workflow of the Distributed Grid Accounting System (DGAS) [1] and evaluates the possibility of extending it with a Resource Usage Service (RUS) [2, 3] interface, according to the Open Grid Forum (OGF) specification, that allows OGF Usage Records (URs) [4, 5] to be stored and retrieved via Web Services. In this context the OGF RUS and UR specifications are critically analyzed. Furthermore, a prototype of a RUS interface for DGAS (DGAS-RUS) is presented, and the most recent test results towards full interoperability between heterogeneous Grid platforms are outlined.
  • Arts and Humanities e-Science From Ad Hoc Experimentation to Systematic Investigation

    Publication Year: 2007 , Page(s): 103 - 110
    PDF (264 KB) | HTML

    This paper will explain the role, activities, and context of the arts and humanities e-Science initiative in the UK, which is funded by the AHRC, EPSRC and JISC. It will first present last year's pioneering phase of ad hoc experiments by the early adopters. Second, the award-holding projects of the major funding scheme for Arts and Humanities e-Science will be described as they start their work in autumn 2007. This second phase can be seen as one of systematic investigation, in which specific experiments will deliver parts of an e-Infrastructure for the arts and humanities.
  • Croquet Based Virtual Museum Implementation with Grid Computing Connection

    Publication Year: 2007 , Page(s): 111 - 117
    Cited by:  Papers (1)
    PDF (1013 KB) | HTML

    3D computation technology in the form of Virtual Reality enables users to access ancient artifacts and facilitates a feeling of presence. Virtual Reality consumes a lot of computing resources, and grid computing can be used to manage distributed computation resources to perform the computational processes. A Croquet application is used in this work to provide a virtual museum that stores an ancient Javanese manuscript. Croquet is a virtual machine which can be programmed for collaborative three-dimensional applications, and collaboration in the virtual world can be conducted among multiple users. In this work, we have created a virtual museum with 3D modelling support from 3D Studio Max. The Croquet application has been connected to a grid computing system based on Globus and to a JOGL-based manuscript system through Virtual Network Computing. A user acceptance test was conducted, and the results indicated that users were satisfied with the application's performance, although Croquet is still rarely used despite its usefulness for Virtual Reality. The connection between the VR world and the Globus-based grid computing system for a 3D manuscript has been successfully implemented, despite slow processing in the system.
  • Constructing a Web Service System for Large-Scale Meteorological Grid Data

    Publication Year: 2007 , Page(s): 118 - 124
    Cited by:  Papers (1)
    PDF (1964 KB) | HTML

    In this paper we report on a Web service system constructed on top of the GPV/JMA Archive. The archive offers the daily operational weather forecasting data provided by the Japan Meteorological Agency (JMA). Currently, it serves several kinds of GPV data, such as global spectral model data, regional spectral model data, and meso-scale non-hydrostatic model data. In order to make those data more usable from the Net, we have constructed a Web service system consisting of several Web services, such as metadata generation, searching, and rendering, and a workflow engine that allows users to execute BPEL-based scientific workflows. In particular, we have designed a dedicated metadata format for GPV data and implemented a metadata extractor to enable automatic metadata extraction for incoming GPV data. In this paper, the entire system and the details of the constructed services are described.
  • The Ring Buffer Network Bus (RBNB) DataTurbine Streaming Data Middleware for Environmental Observing Systems

    Publication Year: 2007 , Page(s): 125 - 133
    Cited by:  Papers (10)
    PDF (434 KB) | HTML

    The environmental science and engineering communities are actively engaged in planning and developing the next generation of large-scale sensor-based observing systems. These systems face two significant challenges: heterogeneity of instrumentation and complexity of data stream processing. Environmental observing systems incorporate instruments across the spectrum of complexity, from temperature sensors to acoustic Doppler current profilers to streaming video cameras. Managing these instruments and their data streams is a serious challenge. Critical infrastructure requirements common to all of these sensor-based observing systems are reliable data transport, the promotion of sensors and sensor streams to first-class objects, a framework for the integration of heterogeneous instruments, and a comprehensive suite of services for data management, routing, synchronization, monitoring, and visualization. In this paper we present the RBNB DataTurbine, an open-source streaming-data middleware system, and discuss how it satisfies the critical cyberinfrastructure requirements of these sensor-based observing systems. The discussion includes results from real-world deployments.
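    The ring-buffer idea in the system's name can be illustrated in a few lines: a fixed-capacity channel keeps only the most recent samples, so slow consumers read a recent window instead of stalling the producer. This is a toy sketch of the general technique, not the DataTurbine API; the channel class and sample values are hypothetical.

    ```python
    # Sketch: a fixed-capacity ring-buffer channel for streaming sensor data.
    from collections import deque

    class RingChannel:
        def __init__(self, capacity):
            # deque with maxlen silently drops the oldest sample on overflow.
            self.samples = deque(maxlen=capacity)

        def put(self, timestamp, value):
            self.samples.append((timestamp, value))

        def latest(self, n):
            """Return up to the n most recent (timestamp, value) samples."""
            return list(self.samples)[-n:]

    temp = RingChannel(capacity=3)
    for t in range(5):                 # five samples into a 3-slot channel
        temp.put(t, 20.0 + t)
    # Only the newest three survive:
    # temp.latest(3) == [(2, 22.0), (3, 23.0), (4, 24.0)]
    ```

    A real streaming middleware adds network transport, per-channel metadata, and time-indexed requests on top of this basic structure.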
  • Towards "Chemical" Desktop Grids

    Publication Year: 2007 , Page(s): 135 - 142
    Cited by:  Papers (1)
    PDF (368 KB) | HTML

    This paper introduces the application of an unconventional approach to Grid programming. The proposed programming model is based on the chemical metaphor for expressing the coordination of large-grain computations. A chemical program can be seen as a set of chemical reactions, representing computations, that transform a set of floating molecules, representing data, within a chemical solution until an inert solution is reached. This model is intrinsically parallel and possesses autonomic properties that are desirable for programming Grids. We illustrate this novel programming model with a simple ray-tracing application and describe an implementation in the context of Desktop Grids.
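    The reaction-until-inert model described in the abstract can be sketched with a classic textbook example: a rule that consumes two number molecules and produces the larger one, so the solution converges to its maximum. This is an illustration of the general chemical metaphor, not the paper's ray-tracing application.

    ```python
    # Sketch: apply a reaction rule to random pairs of molecules until
    # the solution is inert (here, until one molecule remains).
    import random

    def react(solution, rule):
        """solution: list of molecules; rule(a, b) -> list of products."""
        solution = list(solution)
        while len(solution) > 1:
            a, b = random.sample(range(len(solution)), 2)
            products = rule(solution[a], solution[b])
            # Remove the two reactants, add the products.
            for i in sorted((a, b), reverse=True):
                del solution[i]
            solution.extend(products)
        return solution

    # Reaction: two numbers react, leaving only the larger molecule.
    max_rule = lambda x, y: [max(x, y)]

    print(react([4, 1, 7, 3, 7, 2], max_rule))  # -> [7]
    ```

    Because any pair may react in any order, the same program admits a highly parallel execution, which is the property the chemical model exploits for Grids.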
  • Design and Implementation of Network Performance Aware Applications Using SAGA and Cactus

    Publication Year: 2007 , Page(s): 143 - 150
    Cited by:  Papers (5)
    PDF (702 KB) | HTML

    This paper demonstrates the use of appropriate programming abstractions - SAGA and Cactus - that facilitate the development of applications for distributed infrastructure. SAGA provides a high-level programming interface to Grid functionality; Cactus is an extensible, component-based framework for scientific applications. We show how SAGA can be integrated with Cactus to develop simple, useful and easily extensible applications that can be deployed on a wide variety of distributed infrastructure, independent of the details of the resources. Our model application can gather and analyze network performance data and migrate across heterogeneous resources. We outline the architecture of our application and discuss how it provides important features required of e-Science applications. As a proof of concept, we present details of the successful deployment of our application over distinct and heterogeneous Grids and present the network performance data gathered. We also discuss several interesting use cases for such an application, which can be used either as a stand-alone network diagnostic agent or in conjunction with more complex scientific applications.