
IEEE International Conference on Information Reuse and Integration (IRI 2008)

Date: 13-15 July 2008


Displaying results 1-25 of 94
  • Message from Program Co-Chairs

    Publication Year: 2008, Page(s): i
  • Foreword

    Publication Year: 2008, Page(s): ii
  • Conference organizers

    Publication Year: 2008, Page(s): iii
  • International Technical Program Committee

    Publication Year: 2008, Page(s): iv - vii
  • Computation with imprecise probabilities

    Publication Year: 2008, Page(s): viii - ix
    Cited by: Papers (2)
  • Inventing the future of neurology: Integrated wavelet-chaos-neural network models for knowledge discovery and automated EEG-based diagnosis of neurological disorders

    Publication Year: 2008, Page(s): x - xi
  • Panel: The role of information search and retrieval in economic stimulation

    Publication Year: 2008, Page(s): xii
    Cited by: Papers (1)
  • Table of contents

    Publication Year: 2008, Page(s): xiii - xxii
  • Author index

    Publication Year: 2008, Page(s): xxiii - xxix
  • [Copyright notice]

    Publication Year: 2008, Page(s): xxx
  • Reengineering XML into object-oriented database

    Publication Year: 2008, Page(s): 1 - 6
    Cited by: Papers (1)

    This paper handles the conversion of an existing XML Schema into an object-oriented database. The major motivation for this work is to store XML Schema data in an object-oriented database. The object-oriented model and XML share many common features, which makes it attractive to map XML into an object-oriented database; such a mapping preserves database specifics. To achieve the mapping, what we call the object graph is derived from the characteristics of the XML Schema; it summarizes and includes all complex and simple elements and the links, which are the basics of the XML Schema. Then, the links are simulated in terms of nesting to obtain a simulated object graph. This way, everything in a simulated object graph is directly representable in an object-oriented database. Finally, we handle the mapping of the actual data from XML document(s) into the corresponding object-oriented database.

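    As a rough illustration of the nesting idea (this is not the authors' algorithm, and the element names are invented), the sketch below maps an XML document into nested Python objects of the kind an object-oriented database could store directly:

        # Illustrative sketch: links between elements are simulated as nesting,
        # so each XML element becomes a nested "object" (here, a dict).
        import xml.etree.ElementTree as ET

        def element_to_object(elem):
            """Recursively turn an XML element into a nested dict ('object')."""
            obj = {"_type": elem.tag, **elem.attrib}
            text = (elem.text or "").strip()
            if text:
                obj["_value"] = text                      # simple element content
            for child in elem:                            # complex element content
                obj.setdefault(child.tag, []).append(element_to_object(child))
            return obj

        doc = ET.fromstring(
            "<library><book id='1'><title>IRI 2008</title>"
            "<author>Doe</author></book></library>")
        print(element_to_object(doc))
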
  • Modeling role-based access control using a relational database tool

    Publication Year: 2008, Page(s): 7 - 10

    Traditional access control schemes have certain inherent weaknesses. As a promising alternative to traditional access control schemes, role-based access control has received special attention for its unique flexibility. In this paper, we use a database tool called WinRDBI to study the behavior of a role-based access control model. A detailed discussion of role-based access control behaviors and policies is then presented.

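    For readers unfamiliar with modelling RBAC relationally, here is a minimal sketch of the core relations and a permission query; it uses SQLite purely for illustration (the paper itself uses WinRDBI), and all table, user, and role names are invented:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users(uid TEXT PRIMARY KEY);
            CREATE TABLE roles(rid TEXT PRIMARY KEY);
            CREATE TABLE perms(pid TEXT PRIMARY KEY);
            CREATE TABLE user_role(uid TEXT, rid TEXT);   -- user-to-role assignment
            CREATE TABLE role_perm(rid TEXT, pid TEXT);   -- permission-to-role assignment
        """)
        conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])
        conn.executemany("INSERT INTO roles VALUES (?)", [("clerk",), ("manager",)])
        conn.executemany("INSERT INTO perms VALUES (?)", [("read",), ("approve",)])
        conn.executemany("INSERT INTO user_role VALUES (?,?)",
                         [("alice", "manager"), ("bob", "clerk")])
        conn.executemany("INSERT INTO role_perm VALUES (?,?)",
                         [("clerk", "read"), ("manager", "read"), ("manager", "approve")])

        # "Which permissions does alice hold?" -- answered through her roles.
        rows = conn.execute("""
            SELECT DISTINCT p.pid FROM user_role ur
            JOIN role_perm rp ON ur.rid = rp.rid
            JOIN perms p      ON rp.pid = p.pid
            WHERE ur.uid = ?""", ("alice",)).fetchall()
        print(sorted(r[0] for r in rows))                 # ['approve', 'read']
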
  • Using a search engine to query a relational database

    Publication Year: 2008, Page(s): 11 - 16

    While search engines are the most popular way to find information on the web, they are generally not used to query relational databases (RDBs). This paper describes a technique for making the data in an RDB accessible to standard search engines. The technique involves using a URL to express queries and creating a wrapper that can then process the URL-query and generate web pages that contain the answer to the query as well as links to additional data. By following these links, a crawler is able to index the RDB along with all the URL-queries. Once the content and the corresponding URL-queries have been indexed, a user may submit keyword queries through a standard search engine and receive up-to-date database information. The system was then tested to determine whether it could return results similar to those obtained using SQL. We also examined whether a standard search engine such as Google could index the database content appropriately.

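    A minimal sketch of the wrapper idea, under assumptions (the table, columns, and URL scheme are invented, and the real system serves pages over HTTP): a URL-query is translated into SQL, and the generated page embeds further URL-queries for a crawler to follow:

        import sqlite3
        from urllib.parse import parse_qs, urlencode

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE employee(name TEXT, dept TEXT)")
        conn.executemany("INSERT INTO employee VALUES (?,?)",
                         [("Ann", "sales"), ("Bo", "sales"), ("Cy", "it")])

        def answer_url_query(query_string):
            """Render rows matching ?dept=... plus links to related URL-queries."""
            dept = parse_qs(query_string).get("dept", ["sales"])[0]
            rows = conn.execute("SELECT name, dept FROM employee WHERE dept = ?",
                                (dept,)).fetchall()
            items = "".join(f"<li>{name} ({d})</li>" for name, d in rows)
            # Links expose the rest of the database content to the crawler.
            links = " ".join(
                f'<a href="/employee?{urlencode({"dept": d})}">{d}</a>'
                for (d,) in conn.execute("SELECT DISTINCT dept FROM employee"))
            return f"<html><body><ul>{items}</ul>{links}</body></html>"

        print(answer_url_query("dept=sales"))
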
  • Route selection algorithm based on integer operation Ant Colony Optimization

    Publication Year: 2008, Page(s): 17 - 21

    This paper discusses a new route selection algorithm which combines integer-operation Ant Colony Optimization (ACO) with the Dijkstra algorithm. For the calculation of the selection probability, the local update rule, and the global update rule, the proposed ACO adopts integer arithmetic instead of conventional floating-point arithmetic. Compared with the conventional floating-point approach, the proposed integer-operation approach, which is hardware-oriented, achieves not only a reduction in calculation cost and gate size but also improvements in latency and clock frequency. Moreover, experiments using actual map data demonstrate the effectiveness of the proposed route selection algorithm.

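    The paper's exact update rules are not reproduced here, but the following sketch shows the general flavor of keeping ACO edge selection in integer (fixed-point) arithmetic; the scaling factor and toy values are assumptions:

        import random

        SCALE = 1 << 10                      # fixed-point scaling factor (assumption)

        def edge_weights_int(pheromone, heuristic, alpha=1, beta=2):
            """Integer edge weights proportional to tau^alpha * eta^beta."""
            return [(tau ** alpha) * (eta ** beta)
                    for tau, eta in zip(pheromone, heuristic)]

        def roulette_pick_int(weights, rand_int):
            """Pick an edge index; rand_int is uniform in [0, sum(weights))."""
            acc = 0
            for i, w in enumerate(weights):
                acc += w
                if rand_int < acc:
                    return i
            return len(weights) - 1

        pheromone = [3 * SCALE, 1 * SCALE, 2 * SCALE]   # pheromone kept as scaled ints
        heuristic = [2, 5, 1]                           # e.g. scaled 1/distance values
        w = edge_weights_int(pheromone, heuristic)
        print(roulette_pick_int(w, random.randrange(sum(w))))
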
  • Optimization of controllers in the thermal system using initial pheromone distribution in Ant Colony Optimization

    Publication Year: 2008, Page(s): 22 - 27
    Cited by: Papers (2)

    Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proved effective in various fields. However, the choice of the first route and the initial distribution of pheromone are among the toughest yet most crucial factors determining the performance of process optimization. According to the literature we surveyed, almost all existing ACO methods set the initial pheromone to the same constant on all routes. In that case, however, the search process may be misled or become stuck in local optima. In this article, a new method is proposed to optimize the parameter search process, particularly for thermal objects, by setting the initial pheromone distribution according to a set of formulas derived from extensive observations and practical tests. We used MATLAB as the program design platform. The experiments show that this method is satisfactory. Moreover, it can be applied to other intelligent algorithms, such as the Genetic Algorithm, which also require setting initial parameters and ranges of values.

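    The paper's formulas for the thermal system are not reproduced here; the sketch below only illustrates the underlying idea of seeding the initial pheromone non-uniformly from a prior estimate of each candidate's quality (all candidate names and scores are invented):

        def initial_pheromone(candidates, prior_score, tau_min=0.1, tau_max=1.0):
            """Scale each candidate's initial pheromone by its prior score."""
            lo, hi = min(prior_score.values()), max(prior_score.values())
            span = (hi - lo) or 1.0
            return {c: tau_min + (tau_max - tau_min) * (prior_score[c] - lo) / span
                    for c in candidates}

        # Hypothetical controller-gain candidates with rough prior scores,
        # e.g. from a coarse pre-run on the thermal model.
        candidates = ["Kp=1", "Kp=2", "Kp=4", "Kp=8"]
        prior = {"Kp=1": 0.2, "Kp=2": 0.9, "Kp=4": 0.7, "Kp=8": 0.1}
        print(initial_pheromone(candidates, prior))
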
  • Portfolio selection using an artificial immune system

    Publication Year: 2008, Page(s): 28 - 33

    This paper presents a novel heuristic method for solving a generalized Markowitz mean-variance portfolio selection model. The generalized model includes two types of constraints: bounds on holdings and cardinality constraints. The former guarantee that the amount invested (if any) in each asset is between its predetermined upper and lower bounds, while the latter ensure that the total number of assets selected for the portfolio equals a predefined number. The generalized model is thus classified as a quadratic 0/1 integer programming model, necessitating the use of efficient heuristics to find the solution. Heuristic methods based on Genetic Algorithms, Simulated Annealing, Tabu Search, and Neural Networks have been reported in the literature. In this paper, we propose a novel heuristic based on an artificial immune system. The proposed approach is illustrated and compared with other methods using five sample data sets used by other researchers. The computational results show that the proposed approach can effectively solve large-scale problems.

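    To make the two constraint types concrete, here is a minimal sketch (toy numbers; the artificial immune system search itself is not shown) that checks feasibility and evaluates the mean-variance objective for one candidate portfolio:

        def feasible(weights, k, lower, upper):
            """Cardinality, full-investment, and bounds-on-holdings checks."""
            held = [i for i, w in enumerate(weights) if w > 0]
            return (len(held) == k
                    and abs(sum(weights) - 1.0) < 1e-9
                    and all(lower <= weights[i] <= upper for i in held))

        def objective(weights, mu, cov, risk_aversion=1.0):
            """Markowitz trade-off: penalize variance, reward expected return."""
            ret = sum(w * m for w, m in zip(weights, mu))
            var = sum(weights[i] * cov[i][j] * weights[j]
                      for i in range(len(weights)) for j in range(len(weights)))
            return risk_aversion * var - ret

        mu  = [0.10, 0.12, 0.07]
        cov = [[0.04, 0.01, 0.00],
               [0.01, 0.09, 0.02],
               [0.00, 0.02, 0.03]]
        w = [0.6, 0.4, 0.0]                   # candidate holding exactly k=2 assets
        print(feasible(w, k=2, lower=0.1, upper=0.9), objective(w, mu, cov))
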
  • Web service composition based on integrated substitution and adaptation

    Publication Year: 2008, Page(s): 34 - 39

    Traditional web service composition is often generated statically using knowledge available at design time. In practice, however, composed web services must cope with a changing environment at run time to keep the current composition plan usable. In this paper, we propose a solution for web service composition based on adaptation to avoid the problem of such invalidated running composition plans. In addition, we add another layer of processing between adaptation and composition, called substitution, in the system architecture. The objective of the research is to show that adaptation and substitution complement each other, and that their integration in the system provides the best flexibility and efficiency for web service composition and execution at design time and run time, respectively.

  • Reachability analysis of Web service interfaces

    Publication Year: 2008, Page(s): 40 - 45

    We use WCFA (Web Service Interface Control Flow Automata) to model web service interfaces. The global behaviors of web service compositions are captured by an abstract reachability graph (ARG). A polynomial-time algorithm for the construction of the ARG is presented. The algorithm uses a reachability analysis to verify both safety and call stack inspection properties. Both kinds of properties are expressed by assertions at control points of the ARG. Each control point is equipped with a state formula and a call stack. Verification is done by a SAT solver, which checks whether the assertions are logical consequences of the state formulas (or the call stacks).

  • Modeling and synthesis of service composition using tree automata

    Publication Year: 2008, Page(s): 46 - 51
    Cited by: Papers (1)

    We revisit the problem of synthesis of service composition in the context of service-oriented architecture from a tree automata perspective. Compared to existing finite-state-machine and graph-based approaches to the problem of service composition, tree automata offer a more flexible and faithful model of multi-input services and their admissible compositions. In our framework, tree automata are used to express both the type signature constraints of individual services and constraints on the order in which services must be invoked. To synthesize service compositions, users may provide optional specifications of the desired composite service. The user specifications are also expressed as tree automata. We employ a combination of tree automata algorithms to compute the set of all possible valid service compositions that satisfy the user specifications.

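    As a toy illustration of the modeling device (not the paper's synthesis algorithm; the service names and states are invented), a bottom-up tree automaton can accept exactly those composition trees whose services are applied to outputs of the right types:

        def run(tree, delta):
            """State reached on a tree, given delta[(symbol, child_states)] -> state."""
            symbol, children = tree
            child_states = tuple(run(c, delta) for c in children)
            return delta.get((symbol, child_states))

        delta = {
            ("fetch",  ()): "qData",
            ("clean",  ("qData",)): "qClean",
            ("report", ("qData", "qClean")): "qOK",
        }
        accepting = {"qOK"}

        composition = ("report", [("fetch", []), ("clean", [("fetch", [])])])
        print(run(composition, delta) in accepting)       # True: admissible composition
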
  • AMSQM: Adaptive multiple super-page queue management

    Publication Year: 2008, Page(s): 52 - 57
    Cited by: Papers (1)

    Super-pages have been around for more than a decade. Some operating systems support super-paging, and some recent research papers present interesting ideas on how to integrate them intelligently; however, today's operating system page replacement mechanisms still use the old Clock algorithm, which gives the same priority to small and large pages. In this paper we present a technique that enhances the page replacement mechanism into an algorithm that is based on more parameters and is suitable for a super-paging environment.

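    The paper's algorithm is not reproduced here; as a rough sketch of what "more parameters" can mean, the following Clock-style scan also weighs page size when choosing a victim, so an unreferenced super-page is preferred over an unreferenced small page (all sizes and the policy itself are assumptions):

        class Page:
            def __init__(self, pid, size_kb):
                self.pid, self.size_kb, self.referenced = pid, size_kb, False

        def evict(pages, hand):
            """Clock scan with second chances; among unreferenced pages seen in a
            sweep, evict the largest. Returns (victim, new_hand)."""
            n, best, best_i = len(pages), None, None
            for step in range(2 * n):                     # at most two sweeps
                i = (hand + step) % n
                p = pages[i]
                if p.referenced:
                    p.referenced = False                  # classic Clock second chance
                elif best is None or p.size_kb > best.size_kb:
                    best, best_i = p, i
                if step >= n - 1 and best is not None:
                    break                                 # a full sweep found a victim
            return best, (best_i + 1) % n

        pages = [Page("a", 4), Page("b", 2048), Page("c", 4)]
        pages[0].referenced = True
        victim, hand = evict(pages, hand=0)
        print(victim.pid)                                 # 'b': the idle super-page
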
  • Data warehouse architecture and design

    Publication Year: 2008, Page(s): 58 - 63
    Cited by: Papers (3)

    A data warehouse is attractive as the main repository of an organization's historical data and is optimized for reporting and analysis. In this paper, we present the process of data warehouse architecture development and design. We highlight the different aspects to be considered in building a data warehouse, ranging from data store characteristics to data modeling and the principles to be considered for an effective data warehouse architecture.

  • Mining knowledge flow for modeling the information needs of task-based groups

    Publication Year: 2008, Page(s): 64 - 69

    Knowledge is the most important resource for creating core competitive advantages for an organization. Such knowledge is circulated and accumulated by a knowledge flow (KF) in an organization to support workers' tasks. Workers may cooperate and participate in several task-based groups to fulfill their needs. In this paper, we propose a group-based knowledge flow mining algorithm which integrates information retrieval and data mining techniques for mining and constructing the group-based KF (GKF) for task-based groups. The GKF is expressed as a directed knowledge graph to represent the knowledge referencing behavior of a group of workers with similar task needs. Frequent knowledge referencing paths are identified from the knowledge graph to indicate the frequent knowledge flows of the workers. We also implement a prototype GKF mining system to demonstrate the effectiveness of our proposed method.

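    A simplified sketch of the mining step (not the paper's algorithm; the reference sequences are invented): build a directed knowledge graph from each worker's sequence of knowledge references and keep the flows whose support meets a threshold:

        from collections import Counter
        from itertools import pairwise          # Python 3.10+

        # Hypothetical reference sequences, one per worker in the task-based group.
        sequences = [
            ["spec", "design", "code_review"],
            ["spec", "design", "test_plan"],
            ["design", "code_review"],
        ]

        def frequent_flows(sequences, min_support=2):
            """Count directed edges (length-2 flows) and keep the frequent ones."""
            counts = Counter(edge for seq in sequences for edge in pairwise(seq))
            return {edge: c for edge, c in counts.items() if c >= min_support}

        print(frequent_flows(sequences))
        # {('spec', 'design'): 2, ('design', 'code_review'): 2}
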
  • An empirical study of supervised learning for biological sequence profiling and microarray expression data analysis

    Publication Year: 2008, Page(s): 70 - 75

    Recent years have seen increasing quantities of high-throughput biological data available for genetic disease profiling, protein structure and function prediction, and new drug and therapy discovery. High-throughput biological experiments output high-volume and/or high-dimensional data, which makes it challenging for molecular biologists and domain experts to digest and interpret the data properly and rapidly. In this paper, we provide simple background knowledge for computer scientists to understand how supervised learning tools can be used to solve biological challenges, with a primary focus on two types of problems: biological sequence profiling and microarray expression data analysis. We employ a set of supervised learning methods to analyze four types of biological data: (1) gene promoter site prediction; (2) splice junction prediction; (3) protein structure prediction; and (4) gene expression data analysis. We argue that although existing studies favor one or two learning methods (such as Support Vector Machines), such conclusions might be biased, mainly because of the inadequacy of the measures employed in those studies. A range of learning algorithms should be considered in different scenarios, depending on the objective and requirements of the application, such as the system running time or the prediction accuracy on minority-class examples.

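    In the spirit of the multi-measure comparison the paper argues for, here is a small sketch using scikit-learn on a stand-in dataset (the paper's biological data sets and exact list of methods are not reproduced):

        import time
        from sklearn.datasets import load_breast_cancer          # placeholder data
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_breast_cancer(return_X_y=True)
        models = {"SVM": SVC(), "DecisionTree": DecisionTreeClassifier(),
                  "NaiveBayes": GaussianNB(), "kNN": KNeighborsClassifier()}

        for name, model in models.items():
            start = time.perf_counter()
            scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
            elapsed = time.perf_counter() - start
            # Report more than one measure: accuracy and running time.
            print(f"{name:12s} accuracy={scores.mean():.3f}  time={elapsed:.2f}s")
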
  • An unsupervised protein sequences clustering algorithm using functional domain information

    Publication Year: 2008, Page(s): 76 - 81

    In this paper, we present a novel unsupervised approach for clustering protein sequences by incorporating functional domain information into the clustering process. In the proposed framework, the domain boundaries predicted by the ProDom database are used to provide a better measurement for calculating sequence similarity. In addition, we use an unsupervised clustering algorithm as the kernel that includes hierarchical clustering in the first phase to pre-cluster the protein sequences, and partitioning clustering in the second phase to refine the clustering results. More specifically, we perform agglomerative hierarchical clustering on the protein sequences in the first phase to obtain initial clustering results for the subsequent partitioning clustering, and a profile Hidden Markov Model (HMM) is then built for each cluster to represent its centroid. In the second phase, HMM-based k-means clustering is performed to refine the cluster results into protein families. The experimental results show that our model is effective and efficient in clustering protein families.

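    A simplified stand-in for the two-phase kernel (profile HMMs and ProDom-based similarity are replaced here by plain numeric feature vectors): phase 1 pre-clusters with agglomerative hierarchical clustering, phase 2 refines with k-means seeded from the pre-cluster centroids:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Toy "sequence features": three loose groups in two dimensions.
        X = np.vstack([rng.normal(c, 0.3, size=(20, 2))
                       for c in ((0, 0), (3, 3), (0, 4))])

        # Phase 1: agglomerative hierarchical pre-clustering.
        labels0 = fcluster(linkage(X, method="average"), t=3, criterion="maxclust")

        # Phase 2: k-means refinement seeded with the pre-cluster centroids.
        centroids = np.array([X[labels0 == k].mean(axis=0) for k in (1, 2, 3)])
        labels = KMeans(n_clusters=3, init=centroids, n_init=1).fit_predict(X)
        print(np.bincount(labels))            # roughly 20 members per refined cluster
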
  • Outcome prediction in traumatic pelvic injuries using maximum similarity and quality measures

    Publication Year: 2008, Page(s): 82 - 85
    Cited by: Papers (1)

    Traumatic pelvic injury is frequently life-threatening due to its association with severe hemorrhage and a high risk of complications. Immediate medical treatment is therefore of utmost importance; however, treatment decisions are often very difficult to make due to the amount and complexity of patient information. A computer-aided decision-making system that helps trauma surgeons assess the severity of a patient's condition and make more reliable and rapid treatment decisions could improve caregiving standards and reduce the cost of trauma care. This paper focuses on creating such a system based on rules derived through CART and C4.5. The system is designed to predict the eventual outcome of a trauma case (home or rehab) by using maximum similarity, a measure of discrimination, specificity, and sensitivity to form a reliable decision regarding a patient's condition.

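    An illustrative sketch with synthetic data (the paper uses real trauma records, both CART and C4.5, and additional similarity measures): fit a CART-style decision tree and report the sensitivity and specificity used to judge the resulting rules:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 5))                     # stand-in patient features
        # Hypothetical binary outcome: 1 = "home", 0 = "rehab".
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)

        tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
        sensitivity = tp / (tp + fn)                      # true positive rate
        specificity = tn / (tn + fp)                      # true negative rate
        print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
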