Third International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP '09)

Date: 11-16 Oct. 2009

  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - vii
  • Preface

    Page(s): viii - ix
  • Committee

    Page(s): x - xiii
  • List of Reviewers

    Page(s): xiv - xvi
  • Multi-objective Optimization of Graph Partitioning Using Genetic Algorithms

    Page(s): 1 - 6

    Graph partitioning is an NP-hard problem with multiple conflicting objectives: the partitioning should minimize inter-partition relationships while maximizing intra-partition relationships, and the load should be evenly distributed over the partitions. It is therefore a multi-objective optimization problem. There are two approaches to multi-objective optimization using genetic algorithms: weighted cost functions and finding the Pareto front. We use the Pareto front method to find a suitable curve of non-dominated solutions, composed of a large number of solutions. To improve performance, the proposed methods inject the best solutions of previous runs into the first generation of subsequent runs and store the non-dominated set of previous generations to combine with later generations' non-dominated sets. These improvements prevent the GA from getting stuck in local optima, make the search more efficient, and increase the probability of finding more optimal solutions. Finally, a simulation study is carried out to investigate the effectiveness of the proposed algorithm. The simulation results confirm the effectiveness of the proposed multi-objective GA method.
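    The archive-and-merge improvement described in the abstract can be sketched in a few lines. The following is a minimal illustration only; the function names and the minimization convention are assumptions, not the paper's code:

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Filter a list of objective vectors down to the Pareto front."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

def merge_fronts(previous_front, current_generation):
    """Combine the archived front with a new generation and re-filter,
    mirroring the paper's idea of carrying non-dominated sets forward."""
    return non_dominated(previous_front + current_generation)
```

    Applying `merge_fronts` each generation keeps non-dominated solutions alive even if a later population would otherwise lose them.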

  • Workflow Resiliency for Large-Scale Distributed Applications

    Page(s): 7 - 12

    Large-scale simulation and optimization are demanding applications that require high-performance computing platforms. Because their economic impact is fundamental to the industry, they also require robust, seamless and effective mechanisms to support dynamic user interactions, as well as fault-tolerance and resiliency on parallel computing platforms. Distributed workflows are considered here as a means to support large-scale dynamic and resilient multiphysics simulation and optimization applications, such as multiphysics aircraft simulation.

  • Applying Inductive Logic Programming to Self-Healing Problem in Grid Computing: Is it a Feasible Task?

    Page(s): 13 - 16

    Grid computing systems are extremely large and complex, so manually dealing with their failures becomes impractical. Recently, it has been proposed that the systems themselves should manage their own failures or malfunctions; this is referred to as self-healing. To address this challenge, it is necessary to predict and control the process through a number of automated learning and proactive actions. In this paper, we propose inductive logic programming, a relational machine learning method, for prediction and root-cause analysis, making the development of a self-healing component possible.

  • An Application of Data Mining to Identify Data Quality Problems

    Page(s): 17 - 22

    Modern information systems consist of many distributed computer and database systems. The integration of such distributed data into a single data warehouse system is confronted with the well known problem of low data quality. In this paper we present an approach that facilitates a dynamic identification of spurious and error-prone data stored in a large data warehouse. The identification of data quality problems is based on data mining techniques, such as clustering, subspace clustering and classification. Furthermore, we present via a case study the applicability of our approach on real data. The experimental results show that our approach efficiently identifies data quality problems.
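    As a toy illustration of the clustering idea (not the paper's actual techniques), records that fall into unusually small clusters can be flagged as suspect. The tiny 1-D k-means and the threshold below are assumptions chosen for clarity:

```python
def assign(values, centroids):
    """Assign each value to the index of its nearest centroid."""
    return [min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            for v in values]

def kmeans_1d(values, k, iters=20):
    """Very small 1-D k-means returning the final centroids."""
    centroids = sorted(values)[:k]  # naive initialization
    for _ in range(iters):
        labels = assign(values, centroids)
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids

def flag_suspect(values, k=2, min_share=0.2):
    """Flag values landing in clusters that hold less than min_share of
    all records -- a crude proxy for spurious, error-prone data."""
    centroids = kmeans_1d(values, k)
    labels = assign(values, centroids)
    sizes = {j: labels.count(j) for j in set(labels)}
    return [v for v, l in zip(values, labels)
            if sizes[l] / len(values) < min_share]
```

    On real warehouse data, the paper's subspace clustering and classification would replace this toy detector, but the flag-small-clusters intuition is the same.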

  • A Shape-Function Grammar Approach for the Synthesis and Modelling of Pixel-Microstrip-Antennas

    Page(s): 23 - 28

    Classical microstrip antenna models are either empirically derived mathematical formulas or full-wave numerical solutions. The former type is limited to specific simple geometries, is very fast to compute, and to some extent relates dimensional attributes to electromagnetic properties. The latter, on the other hand, is applicable to any arbitrary geometrical shape and is very accurate, but is computationally intensive and does not explain how the device works. In this paper a novel approach to modelling microstrip antennas that addresses this gap is proposed. The model makes use of a coupled shape-function grammar to yield an estimate of the electromagnetic properties of arbitrarily shaped microstrip antennas and also relates shape attributes to electromagnetic properties. The model is demonstrated on a pixel microstrip antenna structure in both analysis and synthesis.

  • A General Enumeration Method for Models of Crystal Structures

    Page(s): 29 - 34

    A graph-based method is described that generates models of crystal structures for a given set of parameters. It uses symmetry-labeled periodic graphs and is complete insofar as all possible topologies are enumerated. Parameters and information on symmetries are taken into account as early as possible in order to significantly reduce the number of graphs in intermediate results.

  • Reliable Banknote Classification Using Neural Networks

    Page(s): 35 - 40

    We present a method based on principal component analysis (PCA) for increasing the reliability of banknote recognition. The system is intended for classifying any kind of currency, but in this paper we examine only US dollars (six different bill types). The data was acquired through an advanced line sensor and, after preprocessing, the PCA algorithm was used to extract the main features of the data and to reduce its size. A learning vector quantization (LVQ) network was applied as the main classifier of the system. By defining a new method for validating reliability, we evaluated the reliability of the system on 1,200 test samples. The results show that reliability increases to up to 95% when the numbers of PCA components and LVQ codebook vectors are chosen properly. To compare classification results, we also applied hidden Markov models (HMMs) as an alternative classifier.
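    A bare-bones sketch of the PCA-plus-nearest-prototype idea described above. This is illustrative only: the codebook vectors here are simply given, not trained as in LVQ, and all names are assumptions:

```python
import numpy as np

def pca_project(X, n_components):
    """Project the rows of X onto the top n_components principal axes."""
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the covariance matrix (ascending eigenvalues)
    _, vecs = np.linalg.eigh(np.cov(Xc.T))
    components = vecs[:, ::-1][:, :n_components]  # largest eigenvalues first
    return Xc @ components, components

def nearest_codebook(sample, codebooks):
    """Return the label of the closest codebook vector (dict: label -> vector)."""
    return min(codebooks,
               key=lambda label: np.linalg.norm(sample - codebooks[label]))
```

    In a real LVQ system the codebook vectors would be iteratively pulled toward (or pushed away from) training samples rather than fixed in advance.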

  • Building the Trident Scientific Workflow Workbench for Data Management in the Cloud

    Page(s): 41 - 50

    Scientific workflows have gained popularity for modeling and executing in silico experiments by scientists for problem solving. These workflows primarily perform computation and data transformation tasks for scientific analysis in the Science Cloud. Increasingly, workflows are also being used to manage scientific data as it arrives from external sensors and is prepared to become science-ready and available for use in the Cloud. While not directly part of the scientific analysis, these workflows, operating behind the Cloud on behalf of the "data valets", play an important role in the end-to-end management of scientific data products. They share several features with traditional scientific workflows: both are data intensive and use Cloud resources. However, they also differ in significant respects, for example, in the reliability required, the scheduling constraints, and the use of the provenance collected. In this article, we investigate these two classes of workflows - Science Application workflows and Data Preparation workflows - and use them to derive common and distinct requirements for workflow systems for eScience in the Cloud. We use workflow examples from two collaborations, the NEPTUNE oceanography project and the Pan-STARRS astronomy project, to draw out our comparison. Our analysis of these workflow classes can guide the evolution of workflow systems to support emerging applications in the Cloud; the Trident Scientific Workbench is one such workflow system that has directly benefited from this analysis to meet the needs of these two eScience projects.

  • A Fault Tolerant Adaptive Method for the Scheduling of Tasks in Dynamic Grids

    Page(s): 51 - 56

    An essential issue in distributed high-performance computing is how to allocate the workload efficiently among the processors. This is especially important in a computational Grid, where resources are heterogeneous and dynamic. Algorithms like Quadratic Self-Scheduling (QSS) and Exponential Self-Scheduling (ESS) are useful for obtaining good load balance while reducing communication overhead. Here, a fault-tolerant adaptive approach for scheduling tasks in dynamic Grid environments is proposed. The aim of this approach is to optimize the list of chunks that QSS and ESS generate, that is, the way the tasks are scheduled. To that end, when the environment changes, new optimal QSS and ESS parameters are obtained to schedule the remaining tasks optimally while maintaining good load balance. Moreover, failed tasks are rescheduled. The results show that the adaptive approach achieves good performance for both QSS and ESS even in a highly dynamic environment.
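    The chunk-list idea can be illustrated generically. The decay rule below is only a stand-in for the actual quadratic/exponential QSS and ESS formulas given in the paper, and all names are assumptions:

```python
def chunk_list(total_tasks, first_chunk, decay=0.7, min_chunk=1):
    """Split total_tasks into a list of decreasing chunk sizes."""
    chunks, remaining, size = [], total_tasks, float(first_chunk)
    while remaining > 0:
        take = max(min_chunk, min(remaining, int(size)))
        chunks.append(take)
        remaining -= take
        size *= decay  # each chunk smaller than the last
    return chunks

def reschedule(chunks, done_chunks, first_chunk):
    """When the environment changes or tasks fail, rebuild the chunk list
    for whatever work remains, as the adaptive approach does."""
    remaining = sum(chunks) - sum(done_chunks)
    return chunk_list(remaining, first_chunk)
```

    Large early chunks cut communication overhead while the small trailing chunks smooth out load imbalance across heterogeneous processors.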

  • Embedding Existing Heterogeneous Monitoring Techniques into a Lightweight, Distributed Integration Platform

    Page(s): 57 - 62

    In the computer-aided engineering field, high-performance computing clusters are essential for today's work. In order to use them efficiently, monitoring systems are required. There are many software systems for different monitoring purposes, and managing all of them at the same time usually becomes very complex because they all have different administration requirements. In order to reduce this complexity and thus increase the manageability of high-performance computing clusters for engineering applications, we propose a solution that bundles all monitoring activities into a single monitoring environment based upon an integration platform. As the base of the monitoring environment, the distributed integration platform RCE (Reconfigurable Computing Environment) is chosen; thereby, many features such as distribution and privilege management are already available. Since reusing existing monitoring techniques is a requirement, two techniques are examined as examples, and a concept is developed for embedding them into the Reconfigurable Computing Environment. In this paper we describe the concept of an embedded, multi-purpose monitoring environment that reuses existing monitoring techniques.

  • Design and Implementation of a Distributed Metascheduler

    Page(s): 63 - 72

    The paper describes a metascheduler for high-performance computing (HPC) grids that is built upon a distributed architecture. It is modelled around cooperating peers represented by local proxies deployed by the participating sites. These proxies exchange job descriptions among themselves with the aim of improving user-, administration-, and grid-defined metrics; relevant metrics include, e.g., reduced job runtimes, improved resource utilization, and increased job turnover. The metascheduler uses peer-to-peer algorithms to discover under-utilized resources and unserviced jobs. A selection is made based on a simplified variant of the Analytic Hierarchy Process that we adapted to the special requirements imposed by the Grid. It enables geographically distributed stakeholders to participate in the decision and supports dynamic evaluation of the necessary utility values. Finally, we identify four intrinsic problems that obstruct the implementation of metaschedulers in general.
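    The selection step can be pictured as a weighted utility ranking. This sketch of a simplified AHP-style score is an assumption about the general shape of such a scheme, not the paper's algorithm:

```python
def ahp_score(utilities, weights):
    """Weighted sum of per-criterion utilities, with weights normalized."""
    total = sum(weights.values())
    return sum(utilities[c] * w / total for c, w in weights.items())

def select_resource(candidates, weights):
    """Pick the candidate resource with the highest AHP-style score.
    candidates maps resource name -> {criterion: utility in [0, 1]}."""
    return max(candidates,
               key=lambda name: ahp_score(candidates[name], weights))
```

    In a full AHP the weights themselves would be derived from stakeholders' pairwise comparison matrices rather than supplied directly.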

  • Scientific Applications Running at IFIC Using the GRID Technologies within the e-Science Framework

    Page(s): 73 - 76

    Projects and research lines of the grid computing group at IFIC are presented. These projects can be divided into two main groups according to the subject of research. The first is related to physics projects, in particular to the GRID infrastructure needed to serve the highly CPU- and storage-demanding experiments related to the Large Hadron Collider at CERN. Among these experiments there is a special relationship with the ATLAS experiment, one of the four main experiments devoted to searching for new physics at the LHC. The other research line is related to Medical Physics, in particular the use of GRID technologies for specific applications in radiotherapy and the recent field of hadrontherapy.

  • Metaheuristic Approaches for the Minimum Vertex Guard Problem

    Page(s): 77 - 82

    We address the problem of stationing guards at vertices of a simple polygon in such a way that the whole polygon is guarded and the number of guards is minimum. This problem is NP-hard and has relevant practical applications. In this paper we propose three metaheuristic approaches to this problem. Together with the proposed genetic-algorithm strategy, these four approximation algorithms have been implemented and compared. The experimental evaluation of the hybrid strategy shows a significant improvement in the number of guards compared to theoretical bounds.

  • Improvement of Link Process in 4D CAD Viewer by Using Interface Board for Construction Project Management

    Page(s): 83 - 88

    Interference checking in construction schedule management has become as important as traditional scheduling analysis in project management. To visualize the interference-checking process, 4D CAD viewers are gradually being adopted in practical construction projects. However, current 4D CAD systems still need easier-to-use functions for practical application on construction sites, in particular a simple system for linking 3D objects with schedule activities. The link methodologies in current 4D CAD viewers should be improved based on an understanding of the characteristics of each project type. This study suggests an improved and practical link methodology in a 4D CAD system for plant project management. A plant project usually includes a great number of detailed elements in each functional facility; in that case, the ease of mapping 3D objects to schedules in the link process is critical, because the 3D drawing of each element must be linked with its schedule to visualize the 4D object. This study presents a new link process in which objects can be linked starting from either the 3D side or the schedule side; the procedure is simple because the priority in the link process can be either the 3D object or the schedule object, depending on the element type. To develop the improved link system, this study introduces the concept of an interface board that visualizes the detailed link phases, including all objects in the link process. Finally, the suggested link process is verified in a developed 4D CAD viewer.

  • Scribble Vectorization Using Concentric Sampling Circles

    Page(s): 89 - 94

    In this paper we introduce a path extraction algorithm for multi-stroke scribbled paths, making use of path-centred concentric sampling circles. Circle and line geometry is then exploited to efficiently obtain piece-wise linear models of the multi-stroke segments in the drawing. Parzen window estimation is used to obtain the probability distribution of the grey-level profile of the sampling circles, to determine the intersecting angle of the sampling circle with the stroke segments, and hence to determine the line model parameters. The results show that the algorithm identifies the line models accurately while considerably reducing the computational time required to obtain them.

  • Object Detection in Flatland

    Page(s): 95 - 100

    Given a rectangle with emitters and receivers on its perimeter, one can detect objects in it by determining which of the line segments between emitters and receivers are blocked by objects. The problem of object detection can be formulated as the problem of finding all non-empty n-wedge intersections, where a wedge is defined by a consecutive set of blocked line segments from the same emitter. We show that for a given set of wedges, one emanating from each emitter, we can determine the intersection (i.e., the convex polygon) in time linear in the number of wedges, assuming some given ordering of the wedges. We present two algorithms that efficiently determine all non-empty n-wedge intersections, assuming that objects are sufficiently large.

  • Automation of Aircraft Pre-design Using a Versatile Data Transfer and Storage Format in a Distributed Computing Environment

    Page(s): 101 - 104

    In the aerospace field one often has to deal with a host of highly specialized software applications that need to be orchestrated into one optimization process to produce, e.g., an optimized aircraft model. This optimization process combines engineering knowledge from fields as diverse as aerodynamics, aeroelasticity, engine design, environmental impact assessment, materials science and structural analysis. For each field there are high-performance problem solvers available, but they do not share the same data exchange formats. The specialized knowledge of each institution involved in a larger project cannot easily be leveraged by other project partners, calling for a software and data integration solution that enables global interconnection of local tools. At the German Aerospace Center (DLR), several projects have been carried out to address this challenge. We briefly present the building blocks for data interchange as well as an example of workflow building between DLR's institutes (work in progress).
