
2009 International Conference of Soft Computing and Pattern Recognition (SOCPAR '09)

Date 4-7 Dec. 2009


Displaying Results 1 - 25 of 153
  • [Front cover]

    Publication Year: 2009 , Page(s): C1
  • [Title page i]

    Publication Year: 2009 , Page(s): i
  • [Title page iii]

    Publication Year: 2009 , Page(s): iii
  • [Copyright notice]

    Publication Year: 2009 , Page(s): iv
  • Table of contents

    Publication Year: 2009 , Page(s): v - xv
  • Welcome from the General Chairs

    Publication Year: 2009 , Page(s): xvi
  • Welcome from the Program Chairs

    Publication Year: 2009 , Page(s): xvii
  • Conference Committees

    Publication Year: 2009 , Page(s): xviii - xix
  • Program Committee

    Publication Year: 2009 , Page(s): xx - xxiv
  • Additional reviewers

    Publication Year: 2009 , Page(s): xxv
  • Special Sessions Committees

    Publication Year: 2009 , Page(s): xxvi
  • Technical Support and Sponsors

    Publication Year: 2009 , Page(s): xxvii
  • Plenary Keynotes

    Publication Year: 2009 , Page(s): xxviii - xxxiv

    Provides an abstract for each of the plenary presentations and a brief professional biography of each presenter. The complete presentations were not made available for publication as part of the conference proceedings.

  • Industrial Talk

    Publication Year: 2009 , Page(s): xxxv

    Multi-level approaches are becoming widely applicable to complex tasks of data analysis, wherein complexity may refer to what we try to learn from data, to technical details that prevent users from understanding data mining algorithms, or to data volumes that are too large for standard algorithms. Levels (or layers) can be understood in various ways, as in artificial neural networks, ensembles of classifiers, granular computing, layered learning, et cetera. In this talk, we attempt to categorize the meanings of levels in data mining and processing approaches. We then focus on two examples: (1) execution of basic database operations that stop being basic for terabytes of data, and (2) extraction of data dependencies in a way that is clear to users and, at the same time, remains useful for constructing classifiers. The first example relates to the database technology developed by Infobright, where SQL queries can be executed over large data volumes stored on a standard machine, with no need for advanced database tuning or administration. Performance is achieved using a two-level model of data storage and processing wherein, at the higher layer, we operate with rough rows, each corresponding to a set of 2^16 original rows. Rough rows are automatically labelled with compact information related to the values of the corresponding rows. This way, we create a new information system, where objects correspond to rough rows and attributes to various types of compact information. Data operations are supported at this rough level, with access to original rows still possible whenever compact information turns out to be insufficient to continue query execution. We implement a number of algorithms that use compact information to minimize and optimize access to original data stored in compressed form on disk.
    The second example relates to a two-level methodology for extracting multi-attribute dependencies approximately holding in data, wherein the higher layer corresponds to their intuitive representation, while the lower layer hides away the details of how the degrees of their satisfaction in data are actually computed. Extraction of multi-attribute dependencies is an important phase in data mining, e.g., in feature selection, classification, or construction of mechanisms for reasoning about data. On the other hand, users usually want to interpret such dependencies without needing to understand in what mathematical sense and to what specific degree they hold in the data. In this talk, we outline a framework for representing, extracting, and reasoning about multi-attribute dependencies that works the same whether their degrees of satisfaction in data are expressed using, e.g., statistical estimates, information measures, rough sets, or fuzzy sets, by referring to common mathematical properties of all those approaches. We believe that the proposed framework is both convenient for users and efficient in adjusting lower-level technical details to particular data types and particular goals of data analysis.
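
The rough-row mechanism from the first example can be sketched in a few lines. The pack size of 2^16 rows and the skip / count-in-full / scan decision mirror the talk's description; the min/max statistics, function names, and the example query are our illustrative assumptions, not Infobright's actual implementation.

```python
import numpy as np

PACK_SIZE = 2 ** 16  # each rough row summarizes 2^16 original rows

def build_rough_rows(column):
    """Split a column into packs and record (min, max) for each pack."""
    packs = [column[i:i + PACK_SIZE] for i in range(0, len(column), PACK_SIZE)]
    return [(int(p.min()), int(p.max()), p) for p in packs]

def count_greater_than(rough_rows, threshold):
    """Answer SELECT COUNT(*) ... WHERE col > threshold from pack metadata.

    Packs whose max <= threshold cannot contribute and are skipped;
    packs whose min > threshold match entirely; only the remaining
    'suspect' packs need their original rows scanned."""
    total = 0
    for lo, hi, pack in rough_rows:
        if hi <= threshold:
            continue                            # irrelevant pack
        if lo > threshold:
            total += len(pack)                  # fully relevant pack
        else:
            total += int((pack > threshold).sum())  # suspect pack: scan rows
    return total
```

Only the suspect packs touch the original (compressed) data, which is the point of the two-level model.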

  • BiSim: A Simple and Efficient Biclustering Algorithm

    Publication Year: 2009 , Page(s): 1 - 6

    Analysis of gene expression data includes classification of the data into groups and subgroups based on similar expression patterns. Standard clustering methods for the analysis of gene expression data identify only the global models while missing the local expression patterns. To recover these missed patterns, the biclustering approach has been introduced, and various biclustering algorithms have been proposed. Among them, the binary inclusion-maximal algorithm (BiMax) forms biclusters from gene expression data through a divide-and-conquer approach. The worst-case running-time complexity of BiMax is O(nmb) for matrices containing disjoint biclusters and O(nmb min{n, m}) for arbitrary matrices, where b is the number of all inclusion-maximal biclusters in the matrix. In this paper we present an improved biclustering algorithm, BiSim, which is simple and avoids the complex computations of BiMax. The complexity of our approach is O(n*m) for n genes and m conditions, i.e., a matrix of size n*m. It also avoids extra computations within the same complexity class and does not miss any biclusters.
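
To make the objects being counted concrete, the sketch below tests whether a candidate (row set, column set) pair is an all-ones bicluster of a binary matrix, and whether it is inclusion-maximal, i.e. cannot be grown by any further row or column. The helper names are ours; this illustrates the definitions only and reproduces neither BiMax nor BiSim.

```python
import numpy as np

def is_bicluster(M, rows, cols):
    """True if the submatrix of M induced by `rows` and `cols` is all ones."""
    return bool(M[np.ix_(rows, cols)].all())

def is_inclusion_maximal(M, rows, cols):
    """True if (rows, cols) is an all-ones bicluster that cannot be grown
    by adding any further row or column."""
    if not is_bicluster(M, rows, cols):
        return False
    # a bicluster is maximal when no outside row/column is all ones on it
    extra_row = any(M[r, cols].all() for r in range(M.shape[0]) if r not in rows)
    extra_col = any(M[rows, c].all() for c in range(M.shape[1]) if c not in cols)
    return not (extra_row or extra_col)
```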

  • CBGP: Classification Based on Gradual Patterns

    Publication Year: 2009 , Page(s): 7 - 12

    In this paper, we address the issue of mining gradual classification rules. In general, gradual patterns refer to regularities such as "the older a person, the higher his salary". Such patterns are extensively and successfully used in control systems, especially in fuzzy control applications. However, in such applications, gradual patterns are assumed to be known and/or provided by an expert, which is not always realistic in practice. In this work, we aim at mining such gradual patterns from a given training dataset for the generation of gradual classification rules. Gradual classification rules are thus rules whose antecedent is a gradual pattern and whose conclusion is a class value.
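
One common way to score such a pattern on a training set (an assumption on our part; the paper's exact measure may differ) is the fraction of comparable object pairs ordered the same way on both attributes, in the spirit of Kendall's tau:

```python
from itertools import combinations

def gradual_support(objects, attr_a, attr_b):
    """Support of 'the greater attr_a, the greater attr_b', measured as the
    fraction of comparable object pairs ranked the same way on both
    attributes (a Kendall-tau-style measure, used here for illustration)."""
    concordant = comparable = 0
    for x, y in combinations(objects, 2):
        da = x[attr_a] - y[attr_a]
        db = x[attr_b] - y[attr_b]
        if da == 0 or db == 0:
            continue                  # tied pairs carry no gradual evidence
        comparable += 1
        if (da > 0) == (db > 0):
            concordant += 1
    return concordant / comparable if comparable else 0.0

# Toy data for "the older a person, the higher his salary"
people = [{"age": 25, "salary": 30}, {"age": 40, "salary": 45},
          {"age": 55, "salary": 60}, {"age": 60, "salary": 50}]
```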

  • Initial Result of Clustering Strategy to Euclidean TSP

    Publication Year: 2009 , Page(s): 13 - 18
    Cited by:  Papers (2)

    There has been growing interest in studying combinatorial optimization problems by clustering strategies, with a special emphasis on the traveling salesman problem (TSP). Since the TSP naturally arises as a subproblem in many transportation, manufacturing, and logistics applications, it has caught the attention of mathematicians and computer scientists. A clustering strategy decomposes the TSP into subgraphs, forming clusters and thereby reducing the TSP graph to smaller problems. The primary objective of this research is to produce a better clustering strategy that fits the Euclidean TSP. The general approach is to produce an algorithm that generates clusters and can handle large clusters, then to produce a Hamiltonian path algorithm, followed by an inter-cluster connection algorithm that forms the global tour. The significance of this research is a solution error of less than 10% compared with the best known solutions (TSPLIB), and an improvement to the hierarchical clustering strategy to fit the Euclidean TSP.

  • A New Clustering Method Based on Weighted Kernel K-Means for Non-linear Data

    Publication Year: 2009 , Page(s): 19 - 24
    Cited by:  Papers (1)

    Clustering is the process of gathering objects into groups based on the similarity of their features. In this paper, we concentrate on the weighted kernel k-means (WKM) method for its capability to manage non-linear separability and high dimensionality in the data. A slight modification of the WKM algorithm is proposed and tested on real rice data. The results show that the accuracy of the proposed algorithm is higher than that of other well-known clustering algorithms, and confirm that WKM is a good solution for real-world problems.
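
For reference, here is a minimal sketch of standard weighted kernel k-means on a precomputed kernel matrix; the paper's specific modification is not reproduced, and the deterministic initialization is an arbitrary choice of ours.

```python
import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iter=100):
    """Weighted kernel k-means on a precomputed kernel matrix K (n x n)
    with per-point weights w, standard formulation: each point is assigned
    to the cluster whose (implicit) feature-space centroid is nearest."""
    n = K.shape[0]
    labels = np.arange(n) % k              # simple deterministic initialization
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            if not mask.any():
                continue                   # empty cluster stays unattractive
            wc = w[mask]
            sw = wc.sum()
            # ||phi(x_i) - m_c||^2, dropping the constant K_ii term
            cross = K[:, mask] @ wc / sw
            within = wc @ K[np.ix_(mask, mask)] @ wc / sw ** 2
            dist[:, c] = -2.0 * cross + within
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```

Because only the kernel matrix is touched, the same loop handles any non-linear kernel (RBF, polynomial, etc.).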

  • A Review of Recent Alignment-Free Clustering Algorithms in Expressed Sequence Tag

    Publication Year: 2009 , Page(s): 25 - 30

    Expressed sequence tags (ESTs) are short, single-pass sequence reads derived from cDNA libraries. They have been used for gene discovery, detection of splice variants, gene expression studies, and transcriptome analysis. Clustering of ESTs is a vital step before they can be processed further. Many EST clustering algorithms are currently available, and they can be generalized into two broad approaches: alignment-based and alignment-free. The former is reliable but inefficient in terms of running time, while the latter is gaining popularity and is under rapid development due to its faster speed and acceptable results. In this paper, we propose a taxonomy for sequence comparison algorithms and another for EST clustering algorithms. In addition, we highlight the peculiarities of recently introduced alignment-free EST clustering algorithms by focusing on their features, distance measures, advantages, and disadvantages.

  • Parameter Study and Optimization of a Color-Based Object Classification System

    Publication Year: 2009 , Page(s): 31 - 36

    Typical computer vision systems usually include a set of components, such as a preprocessor, a feature extractor, and a classifier, that together represent an image processing pipeline. For each component, different operators are available, and each operator has a different number of parameters with individual parameter domains. The challenge in developing a computer vision system is the optimal choice of the available operators and their parameters to construct the appropriate pipeline for the problem at hand. The task of finding the optimal combination and setting depends strongly on the definition of the term optimal: optimality can range from minimal computational time to maximal recognition rate of a system. Using the example of a color-based object classification system, this contribution presents a comprehensive approach for finding an optimal system by defining the required image processing pipeline, defining the optimization problem for the classification, and improving the optimization by taking parameter studies into consideration. This approach produces a color-based classification system with an illuminant-independent structure.
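
The operator-and-parameter search described above can be illustrated, under heavily simplified assumptions, as an exhaustive scan over a small hypothetical search space; every operator name, parameter, and domain below is invented for illustration and is not taken from the paper.

```python
from itertools import product

# Hypothetical search space: every pipeline stage offers a few operators,
# each with its own parameter domain.
SPACE = {
    "preprocess": [("median_blur", {"ksize": k}) for k in (3, 5)]
                + [("gaussian_blur", {"sigma": s}) for s in (0.5, 1.0)],
    "features":   [("color_hist", {"bins": b}) for b in (8, 16, 32)],
    "classifier": [("knn", {"k": k}) for k in (1, 3, 5)],
}

def best_pipeline(evaluate):
    """Score every operator/parameter combination with `evaluate` and keep
    the best; here 'optimal' means maximal recognition rate."""
    best_score, best_cfg = float("-inf"), None
    for choice in product(*SPACE.values()):
        cfg = dict(zip(SPACE.keys(), choice))
        score = evaluate(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```

In practice the paper's point is that this search space explodes combinatorially, which is why parameter studies are used to prune it before optimizing.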

  • Arrhythmia Beat Classification Using Pruned Fuzzy K-Nearest Neighbor Classifier

    Publication Year: 2009 , Page(s): 37 - 42
    Cited by:  Papers (3)

    In this paper, a pruned fuzzy k-nearest neighbor (PFKNN) classifier is proposed to classify the different types of arrhythmia beats present in the MIT-BIH Arrhythmia database. We have tested our classifier on ~103,100 beats for six beat types present in the database. Fuzzy KNN (FKNN) can be implemented very easily, but the large number of training examples used for classification can be very time consuming and requires large storage space. Hence, we propose a time-efficient pruning algorithm, especially suitable for FKNN, that maintains good classification accuracy with an appropriate retained ratio of training data. By using the pruning algorithm with fuzzy KNN, we achieve a beat classification accuracy of 97% and a geometric mean of sensitivity of 94.5% with only 19% of the total training examples. The accuracy and sensitivity are comparable to those of FKNN when all the training data are used.
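
A minimal sketch of the underlying fuzzy k-NN membership computation (the classical Keller-style formulation with crisp training labels; the paper's pruning of the training set is a separate step not shown here) might look like:

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2):
    """Fuzzy class memberships of query x from its k nearest neighbors.

    Each neighbor votes with weight 1/d^(2/(m-1)), so closer beats
    contribute more; memberships over all classes sum to 1."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    # inverse-distance weights; guard against an exact duplicate (d == 0)
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1))
    return {c: float(w[y_train[idx] == c].sum() / w.sum())
            for c in np.unique(y_train)}
```

The pruned variant keeps only a subset of training beats, which shrinks the distance computation without changing this membership rule.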

  • League Championship Algorithm: A New Algorithm for Numerical Function Optimization

    Publication Year: 2009 , Page(s): 43 - 48
    Cited by:  Papers (4)

    Inspired by the competition of sports teams in a sports league, an algorithm is presented for optimizing nonlinear continuous functions. A number of individuals, as sports teams, compete in an artificial league for several weeks (iterations). Based on the league schedule, each week teams play in pairs, and the outcome is determined in terms of win or loss, given the team's playing strength (fitness value) resulting from a particular team formation (solution). In the recovery period, each team devises the required changes in its formation/playing style (a new solution) for the next week's contest, and the championship goes on for a number of seasons (stopping condition). Performance of the proposed algorithm is tested against that of the particle swarm optimization (PSO) algorithm on finding the global minimum of a number of benchmark functions. Results show that the new algorithm performs well on all test problems, exceeding or matching the best performance obtained by PSO. This suggests that further developments and practical applications of the proposed algorithm would be worth investigating.
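
The league metaphor can be caricatured in a few lines. The skeleton below mirrors only the abstract's narrative of weekly pairwise matches, with losers revising their formations toward winners; the update rule, bounds, and parameters are our invention and are not the authors' equations.

```python
import random

def league_optimize(fitness, dim, n_teams=8, n_seasons=50, seed=1):
    """Toy skeleton of a league-style search (lower fitness is better).

    Teams (candidate solutions) are paired each week by a shuffled
    schedule; the loser of each match moves toward the winner with some
    random perturbation ('devising a new formation')."""
    rng = random.Random(seed)
    teams = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_teams)]
    for _ in range(n_seasons):
        order = list(range(n_teams))
        rng.shuffle(order)                      # this week's league schedule
        for a, b in zip(order[::2], order[1::2]):
            if fitness(teams[a]) > fitness(teams[b]):
                a, b = b, a                     # ensure a is the winner
            teams[b] = [tb + rng.random() * (ta - tb) + rng.gauss(0, 0.1)
                        for ta, tb in zip(teams[a], teams[b])]
    return min(teams, key=fitness)

# Example: 2-D sphere function; the true minimum is at the origin.
best = league_optimize(lambda v: sum(x * x for x in v), dim=2)
```

Since winners are never modified, the best formation found so far always survives the season.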

  • An Improved Discrete Particle Swarm Optimization in Evacuation Planning

    Publication Year: 2009 , Page(s): 49 - 53
    Cited by:  Papers (3)

    In any flash flood evacuation operation, vehicle assignment at the inundated areas is vital to help eliminate loss of life. Vehicles of various types and capacities are used to evacuate victims to relief centers. This paper examines a combinatorial optimization approach whose objective is to assign a specified number of vehicles, carrying the maximum number of evacuees, to the potential inundated areas. Discrete particle swarm optimization (DPSO) and an improved DPSO are proposed and experimented with, and the results are presented and compared. The improved DPSO with the proposed min-max approach yields better performance in all four test categories. Experimenting with a larger number of evacuees could further improve the performance of the DPSO.

  • Cat Swarm Optimization for Clustering

    Publication Year: 2009 , Page(s): 54 - 59
    Cited by:  Papers (1)

    Cat swarm optimization (CSO) is a recent heuristic optimization algorithm based on swarm intelligence. Previous research shows that this algorithm performs better than other heuristic optimization algorithms, namely particle swarm optimization (PSO) and weighted PSO, on function minimization. In this research, a new CSO algorithm for the clustering problem is proposed and tested on four different datasets. The CSO formula is modified to obtain better results, and the accuracy of the proposed algorithm is compared to those of k-means and PSO clustering. The modification of the CSO formula improves the performance of CSO clustering, and the comparison indicates that CSO clustering can be considered a sufficiently accurate clustering method.

  • Implementing Particle Swarm Optimization to Solve Economic Load Dispatch Problem

    Publication Year: 2009 , Page(s): 60 - 65
    Cited by:  Papers (3)

    Economic load dispatch (ELD) is an important optimization task that provides an economic operating condition for a power system. In this paper, particle swarm optimization (PSO), an effective and reliable evolutionary approach, is proposed to solve the constrained economic load dispatch problem. The proposed method determines the output power of all generation units so that the total constrained cost function is minimized. A piecewise quadratic function is used to represent the fuel cost equation of each generation unit, and the B-coefficient matrix is used to represent transmission losses. The feasibility of the proposed method, and its performance in solving and managing constrained problems, is demonstrated on four power system test cases consisting of 3, 6, 15, and 40 generation units, with losses neglected in two of the cases. The PSO results are compared with genetic algorithm (GA) and quadratic programming (QP) based approaches. These results show that the proposed method obtains higher-quality solutions while offering mathematical simplicity, fast convergence, and robustness on hard optimization problems.
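
A generic global-best PSO applied to a toy quadratic-cost dispatch with a power-balance penalty illustrates the setup. The unit data, penalty weight, and PSO parameters below are illustrative assumptions, not the paper's test systems, and transmission losses (the B-coefficient term) are neglected in this sketch.

```python
import numpy as np

# Hypothetical 3-unit system: cost_i(P) = a_i + b_i*P + c_i*P^2.
A = np.array([500.0, 400.0, 200.0])
B = np.array([5.3, 5.5, 5.8])
C = np.array([0.004, 0.006, 0.009])
PMIN = np.array([100.0, 100.0, 50.0])
PMAX = np.array([450.0, 350.0, 225.0])
DEMAND = 800.0  # MW; losses neglected here

def dispatch_cost(P):
    """Total fuel cost plus a penalty enforcing the power balance."""
    fuel = (A + B * P + C * P ** 2).sum()
    return fuel + 1e4 * abs(P.sum() - DEMAND)

def pso(obj, lo, hi, n_particles=30, n_iter=300, seed=0):
    """Plain global-best PSO with an inertia weight (generic settings)."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    x = rng.uniform(lo, hi, (n_particles, d))
    v = np.zeros((n_particles, d))
    pbest = x.copy()
    pbest_f = np.array([obj(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # respect generator limits
        f = np.array([obj(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

best = pso(dispatch_cost, PMIN, PMAX)
```

Clipping to the unit limits keeps every candidate dispatch feasible with respect to generator capacities; the demand constraint is handled by the penalty term instead.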
