
2010 International Conference of Soft Computing and Pattern Recognition (SoCPaR)

Date: 7-10 Dec. 2010


Displaying Results 1 - 25 of 87
  • [Front matter]

    Page(s): i - vii
    Freely Available from IEEE
  • Optimization of a fuzzy decision trees forest with artificial ant based clustering

    Page(s): 1 - 5

    In recent years, forests of decision trees have attracted increasing interest from the machine learning community, since they make it possible to aggregate the decisions of a set of decision trees into one robust answer. However, this approach suffers from two well-known limits: first, performance depends on the number of trees, so finding the right forest size and the right way to aggregate decisions can be very difficult; second, large forests lose the interpretability of a single decision tree. In this paper, we propose a new approach in which the decision trees of a forest are clustered, in order to simplify the overall decision process while maintaining a large number of decision trees, and to facilitate the interpretation of the results. The preliminary results presented in this paper show the effectiveness of our approach.

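The aggregation step this abstract refers to — combining the decisions of many trees into one robust answer — can be sketched as a plain majority vote. This is only the baseline scheme the paper improves on (its clustering refinement is not shown), and the labels are hypothetical:

```python
from collections import Counter

def forest_vote(tree_predictions):
    """Aggregate per-tree class labels into one decision by majority vote.

    tree_predictions -- list of predicted labels, one per tree in the forest.
    """
    counts = Counter(tree_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Example: five trees vote on a class label.
print(forest_vote(["a", "b", "a", "a", "b"]))  # -> a
```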
  • Describing acceptable objects by means of Sugeno integrals

    Page(s): 6 - 11

    Objects are usually described by combinations of properties. Logic-based descriptions offer compact representations for binary properties. In addition, Sugeno integrals are well known as a powerful qualitative aggregation tool in multiple-criteria decision making, which is applicable to gradual properties and takes into account positive synergies between properties. The paper investigates the potential use of Sugeno integrals as a representation tool, lays bare their relation to possibilistic logic representations, and discusses the handling of negative synergies in this setting using a pair of Sugeno integrals.

  • Classification of brand names based on n-grams

    Page(s): 12 - 17

    Supervised classification has been extensively addressed in the literature, as it has many applications, especially text categorization and web content mining, where data are organized through a hierarchy. The automatic analysis of brand names can be viewed as a special case of text management, although such names are very different from classical data: they are often neologisms and cannot easily be handled by existing NLP tools. In our framework, we aim at automatically analyzing such names and determining to what extent they are related to concepts that are hierarchically organized. The system is based on character n-grams. The targeted system is meant to help, for instance, automatically determine whether a name sounds related to ecology.

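A minimal sketch of the character n-gram representation such a system could build on. The boundary-padding convention and the example name are assumptions for illustration, not details from the paper:

```python
def char_ngrams(name, n=3, pad="_"):
    """Extract overlapping character n-grams from a brand name.

    Padding marks word boundaries so that prefixes and suffixes
    form their own, distinguishable grams.
    """
    s = pad + name.lower() + pad
    return [s[i:i + n] for i in range(len(s) - n + 1)]

# A hypothetical eco-sounding neologism:
print(char_ngrams("Ecova"))  # -> ['_ec', 'eco', 'cov', 'ova', 'va_']
```

The resulting gram lists can then be compared against grams of concept-related vocabulary (e.g., ecology terms) to score the name.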
  • Iris features extraction using dual-tree complex wavelet transform

    Page(s): 18 - 22

    This paper presents an iris recognition method based on the two-dimensional dual-tree complex wavelet transform (2D-CWT) and support vector machines (SVM). The 2D-CWT has significant properties such as approximate shift invariance, high directional selectivity, and computational efficiency, which are very useful for invariant iris recognition. An SVM is used as the classifier, and several kernel functions are tested in the experiments. The experimental results show that the proposed approach enhances the classification accuracy; the results are also compared with k-NN and naïve Bayes classifiers to demonstrate the efficacy of the proposed technique.

  • Fuzzy methods for forensic data analysis

    Page(s): 23 - 28

    In this paper we describe a methodology and an automatic procedure for inferring accurate and easily understandable expert-system-like rules from forensic data. The methodology is based on fuzzy set theory. The algorithms we used are described in detail and were tested on forensic data sets. We also present in detail some examples that are representative of the obtained results.

  • A new weighted rough set framework for imbalance class distribution

    Page(s): 29 - 34

    Jaundice is the most common condition requiring medical attention in newborns. Although most newborns develop some degree of jaundice, a high bilirubin level puts a newborn at risk of bilirubin encephalopathy and kernicterus, which are rare but still occur in Egypt. In this paper, a new weighted rough set framework is introduced for early intervention and prevention of neurological dysfunction and kernicterus, the catastrophic sequelae of neonatal jaundice. The obtained results show that the weighted rough set provides significantly more accurate and reliable predictions than well-known algorithms such as weighted SVM and decision trees.

  • Multi stereo camera data fusion for fingertip detection in gesture recognition systems

    Page(s): 35 - 40

    In this paper we present our results on fingertip detection for an automatic gesture recognition system using a multi-stereo-camera setup. The online framework automatically detects the hands and face of the user based on depth and color information. To estimate the spatial positions of the fingers and their joints, a 3D hand model was generated, and the Iterative Closest Point (ICP) algorithm was used to calculate the distance error between the model and the 3D input data. In addition, a separation and evaluation of hand and fingertip movements was implemented. To address the general problem of self-occlusion, we developed a multi-stereo-camera system to increase the information density; the required calibration is performed using the ICP algorithm and a genetic algorithm.

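The ICP distance error mentioned in this abstract can be illustrated by its core computation: the mean closest-point distance between a model point set and the 3D input data. This is a sketch of one error evaluation only — the alignment update of full ICP and the real hand model are omitted, and the point sets are toy values:

```python
import math

def icp_error(model_pts, data_pts):
    """Mean closest-point distance between a 3-D model point set and
    3-D input data -- the error term one ICP iteration evaluates."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    # For each data point, find its nearest model point; average the distances.
    return sum(min(dist(d, m) for m in model_pts) for d in data_pts) / len(data_pts)

model = [(0, 0, 0), (1, 0, 0)]
data = [(0, 0, 1), (1, 0, 1)]
print(icp_error(model, data))  # each data point is exactly 1 unit from its nearest model point
```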
  • Recognition of signed expressions using visually-Oriented subunits obtained by an immune-Based optimization

    Page(s): 41 - 46

    The paper considers automatic visual recognition of signed expressions. The proposed method models gestures with subunits, similarly to modeling speech by means of phonemes. The subunits are defined by a data-driven procedure that partitions time series extracted from video into subsequences forming homogeneous groups; the cut points are determined by an immune optimization procedure based on quality assessment of the resulting clusters. In the paper the problem is formulated, a solution method is proposed, and it is experimentally verified on a database of 100 Polish words. The results show that our subunit-based classifier outperforms its whole-word-based counterpart, which is particularly evident when new words are recognized on the basis of a small number of examples.

  • Integration of gesture and posture recognition systems for interpreting dynamic meanings using particle filter

    Page(s): 47 - 50

    This paper proposes a novel approach to determining the integration criteria for decision-level fusion of hand gesture and posture recognition systems using a particle filter. Decision-level fusion requires the classification of hand gesture and posture symbols: an HMM and an SVM are used to classify the alphabets and numbers from the gesture and posture recognition systems, respectively. These classification results are input to the integration framework to compute contribution weights. For this purpose, the Condensation algorithm is employed to approximate the optimal a posteriori probability from the a priori probability and a Gaussian likelihood function, making the weights independent of classification ambiguities. Treating recognition as a problem of regular grammar, we developed production rules based on a context-free grammar (CFG) for a restaurant scenario; on the basis of the contribution weights, we map the recognized outcome over the CFG rules and infer meaningful expressions. Experiments conducted on 500 different combinations of restaurant orders give an overall inference accuracy of 98.3%, demonstrating the significance of the proposed approach.

  • Partial approximative set theory: A generalization of the rough set theory

    Page(s): 51 - 56

    There are close links between mathematical morphology and rough set theory; both are successfully applied, among other fields, to image processing and pattern recognition. This paper presents a new generalization of classical rough set theory, called the partial approximative set theory (PAST). In Pawlak's classic rough set theory, the vagueness of a subset of a finite universe is defined by the difference between its upper and lower approximations with respect to an equivalence relation on the universe. There are two natural ways to generalize this idea: the equivalence relation is replaced either by another type of binary relation on the universe or by an arbitrary covering of the universe. In this paper, our starting point is an arbitrary family of subsets of an arbitrary universe; we assume neither that it covers the universe nor that the universe is finite. We give some reasons why this new approach is worth studying, and place our discussion within an overall treatment called the general approximation framework.

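For readers unfamiliar with the classical notions being generalized here, Pawlak's lower and upper approximations can be computed directly from the equivalence classes. This sketches only the classical case (not PAST itself), on a toy universe:

```python
def approximations(classes, target):
    """Pawlak lower/upper approximations of `target` with respect to a
    partition of the universe into equivalence classes.

    lower: union of classes entirely contained in the target set.
    upper: union of classes that intersect the target set.
    """
    target = set(target)
    lower, upper = set(), set()
    for c in classes:
        c = set(c)
        if c <= target:
            lower |= c
        if c & target:
            upper |= c
    return lower, upper

classes = [{1, 2}, {3, 4}, {5}]
lo, up = approximations(classes, {1, 2, 3})
print(lo, up)  # lower = {1, 2}, upper = {1, 2, 3, 4}
```

The difference `up - lo` (here `{3, 4}`) is the boundary region that captures the set's vagueness.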
  • Possibilistic contextual skylines with incomplete preferences

    Page(s): 57 - 62

    We propose a possibility-theory-based approach to the treatment of missing user preferences in skyline queries. To compensate for this lack of knowledge, we show how a set of plausible preferences suitable for the current context can be derived either in a case-based reasoning manner or using an extended possibilistic logic setting. Uncertain dominance relationships are defined in a possibilistic way, and the notion of a possibilistic contextual skyline is introduced; this kind of skyline returns the tuples that are non-dominated with high certainty. The paper also includes a structured overview of the different types of “fuzzy” skylines.

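The classical (non-possibilistic) skyline operator underlying this work keeps exactly the non-dominated tuples. A minimal sketch for the certain, minimisation case — the possibilistic extension with uncertain dominance is not shown, and the hotel data are hypothetical:

```python
def dominates(p, q):
    """p dominates q if p is no worse on every criterion (minimisation)
    and strictly better on at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Return the tuples not dominated by any other tuple."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (price, distance-to-beach) -- both criteria to minimise
hotels = [(50, 8), (60, 2), (40, 9), (70, 1), (55, 8)]
print(skyline(hotels))  # (55, 8) is dominated by (50, 8) and drops out
```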
  • An uncertain database model and a query algebra based on possibilistic certainty

    Page(s): 63 - 68

    In this paper, we consider relational databases containing uncertain attribute values, in the situation where some knowledge is available about the more or less certain value (or disjunction of values) that a given attribute in a tuple can take. We propose a possibility-theory-based model suited to this context and extend the operators of relational algebra to handle such relations in a “compact”, thus efficient, way. It is shown that the proposed model is a strong representation system for the whole relational algebra. An important result is that the data complexity of the extended operators is the same as in the classical database case, which makes the approach highly scalable.

  • An efficient method for real-time activity recognition

    Page(s): 69 - 74

    Real-time feature extraction is a key component of any action recognition system that claims to be truly real-time. In this paper we present a conceptually simple and computationally efficient method for real-time human activity recognition based on simple statistical features. Such features are very cheap to compute and form a relatively low-dimensional feature space in which classification can be carried out robustly. On the Weizmann dataset, the proposed method achieves encouraging recognition results with an average rate of up to 97.8%, in good agreement with the literature. Further, the method achieves real-time performance and can thus offer timing guarantees to real-time applications.

  • Mining web videos for video quality assessment

    Page(s): 75 - 80

    Correlating estimates of objective measures related to the presence of different coding artifacts with video quality as perceived by human observers is a non-trivial task. Thanks to the Internet and websites such as YouTube™, there is no shortage of data to learn from; however, little has been done in the research community to use such resources to advance our understanding of perceived video quality. The problem is that it is not easy to obtain the Mean Opinion Score (MOS), a standard measure of perceived video quality, for more than a handful of videos. The paper presents an approach to determining the quality of a relatively large number of videos obtained randomly from YouTube™. Several measures related to motion, saliency, and coding artifacts are calculated for the frames of each video. Programmable graphics hardware is used to perform clustering: first to create an artifacts-related signature of each video, then to cluster the videos according to their signatures. The MOS is obtained for representative videos closest to the cluster centers and is used as an estimate of the quality of all other videos in each cluster. Results based on 2,107 videos containing some 90,000,000 frames are presented in the paper.

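The final propagation step — rating only the representative video of each cluster and reusing its MOS for the rest — can be sketched as follows. The signatures, centers, and scores are hypothetical toy values, not the paper's data:

```python
import math

def propagate_mos(signatures, centers, center_mos):
    """Estimate each video's MOS as the MOS rated for the representative
    video of its nearest cluster center."""
    estimates = []
    for sig in signatures:
        # Index of the nearest cluster center in feature space.
        k = min(range(len(centers)), key=lambda i: math.dist(sig, centers[i]))
        estimates.append(center_mos[k])
    return estimates

centers = [(0.1, 0.2), (0.8, 0.9)]   # cluster centers in signature space
mos = [4.2, 2.1]                     # MOS rated for the two representatives
sigs = [(0.15, 0.25), (0.7, 0.95), (0.05, 0.1)]
print(propagate_mos(sigs, centers, mos))  # -> [4.2, 2.1, 4.2]
```

`math.dist` requires Python 3.8+.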
  • Crowd behavior detection by statistical modeling of motion patterns

    Page(s): 81 - 86

    The behaviors of individuals in crowded places pose unique and difficult challenges and limit the scope of conventional surveillance systems. In this paper, we investigate crowd behaviors and localize anomalies due to the abrupt dissipation of individuals. The novelty of the proposed approach is threefold. First, we introduce block-clips by sectioning video segments into non-overlapping spatio-temporal patches, to marginalize the arbitrarily complicated and dense flow field. Second, we treat the flow field in each block-clip as a 2D distribution of samples and parameterize it with a mixture of Gaussians, keeping the generality of the flow field intact; the K-means algorithm is employed to initialize the mixture model, followed by Expectation Maximization for optimization. These Gaussian mixtures yield distinct flow patterns, precisely a sequence of dynamic patterns for each block-clip. Third, a bank of Conditional Random Field models is employed, one per block-clip; each model is learned from the sequence of dynamic patterns and classifies its block-clip as normal or abnormal. We conduct experiments on two challenging benchmark crowd datasets, PETS 2009 and University of Minnesota, and the results show that our method achieves higher recognition rates in detecting specific and overall crowd behaviors. In addition, the proposed approach shows dominating performance in comparisons with similar crowd behavior detection approaches.

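The K-means-initialised EM fit of a Gaussian mixture described in this abstract can be sketched in one dimension. This is a toy version under stated assumptions: the paper fits 2D flow samples per block-clip, whereas this sketch is 1-D with two components and a simple min/max spread for the k-means initialisation:

```python
import math

def fit_gmm_1d(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM,
    initialised k-means-style (K-means init followed by EM)."""
    # --- crude k-means init for the means (k=2: spread from min/max) ---
    mus = [min(xs), max(xs)]
    for _ in range(10):
        groups = [[], []]
        for x in xs:
            groups[0 if abs(x - mus[0]) <= abs(x - mus[1]) else 1].append(x)
        mus = [sum(g) / len(g) if g else mus[j] for j, g in enumerate(groups)]
    sigmas = [1.0, 1.0]
    pis = [0.5, 0.5]
    # --- EM iterations ---
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in xs:
            ps = [pis[j] / (sigmas[j] * math.sqrt(2 * math.pi))
                  * math.exp(-((x - mus[j]) ** 2) / (2 * sigmas[j] ** 2))
                  for j in range(2)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: re-estimate weights, means, and variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = max(1e-3, math.sqrt(var))
            pis[j] = nj / len(xs)
    return mus, sigmas, pis

xs = [0.1, -0.2, 0.05, 0.3, 9.8, 10.1, 10.3, 9.9]
mus, _, _ = fit_gmm_1d(xs)
print(sorted(mus))  # one mean near 0, one near 10
```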
  • Ultra fast fingerprint indexing for embedded system

    Page(s): 87 - 92

    A novel fingerprint indexing scheme for embedded systems is presented in this paper. Our approach is model-based: it efficiently retrieves correct hypotheses using novel rotation-invariant features formed by the core and minutiae of the fingerprint image. Unlike most existing fingerprint indexing approaches, the proposed algorithm is suitable for embedded systems because it is not time-consuming and does not require large amounts of memory. Experiments were conducted on the FVC2002 database and on our private database. Results on the FVC2002 database show that identification with the proposed algorithm is about 50 times faster than with conventional approaches.

  • Speech recognition by indexing and sequencing

    Page(s): 93 - 98

    Recognition by Indexing and Sequencing (RISq) is a general-purpose method for classification of temporal vector sequences. We developed an advanced version of RISq and applied it to isolated-word speech recognition, a task most commonly performed with Hidden Markov Models (HMMs) or Dynamic Time Warping (DTW). RISq is substantially different from both of these methods and presents several advantages over them: robust recognition can be achieved with only a few samples from the input sequence, and training can be carried out with one or more examples per class. This enables much faster training and also makes it possible to recognize speech with a variety of accents. A two-step classification algorithm is used: first, the training samples closest to each input sample are identified and weighted with a parallel algorithm (indexing); then, a maximum-weight bipartite graph matching is found between the input sequence and a training sequence, respecting an additional temporal constraint (sequencing). We discuss the application of RISq to speech recognition and compare its architecture and performance with those of Sphinx, a state-of-the-art speech recognizer based on HMMs.

  • Iris recognition based on multi-block Gabor statistical features encoding

    Page(s): 99 - 104

    Iris recognition has recently received greater attention for human identification and is an increasingly active research topic. This paper presents a personal identification method based on the iris, comprising three steps. First, the eye image is segmented and normalized by applying an integrodifferential operator, the Hough transform, and a polar transformation. Second, the texture of the iris image is analyzed with Gabor filters. We then propose a novel encoding method, based on extracting invariants from local regions of the iris, to create an iris code of 144 bytes; different statistical descriptors of the filtered image are studied. A modified Hamming distance between templates measures the similarity between irises. The method is tested on the CASIA v3 database. The experimental results illustrate the effectiveness of this coding in both biometric modes: a 100% rank-one recognition rate and a 1.97% equal error rate in verification. The coding thus achieves more satisfactory and convincing results than traditional statistics-based approaches, with low storage requirements, making it an interesting alternative to Gabor phase coding.

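The template comparison step can be illustrated with a masked Hamming distance, a common "modified" variant that compares only bit positions both codes mark as valid. The paper's exact modification may differ, and the 8-bit codes below are toy values rather than real 144-byte iris codes:

```python
def modified_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bit positions both masks flag as valid."""
    valid = [ma and mb for ma, mb in zip(mask_a, mask_b)]
    n = sum(valid)
    disagree = sum(1 for a, b, v in zip(code_a, code_b, valid) if v and a != b)
    return disagree / n

a = [1, 0, 1, 1, 0, 1, 0, 0]
b = [1, 1, 1, 0, 0, 1, 1, 0]
ma = [1, 1, 1, 1, 1, 1, 0, 0]   # 1 = usable bit (no occlusion by lid/lash)
mb = [1, 1, 1, 1, 0, 1, 1, 1]
print(modified_hamming(a, b, ma, mb))  # -> 0.4 (2 disagreements over 5 valid bits)
```

Distances below a decision threshold are declared a match; the threshold position trades off false accepts against false rejects.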
  • Investigating analysis of speech content through text classification

    Page(s): 105 - 110

    The field of text mining has evolved over the past years to analyze textual resources, but it can also be used in several other applications. In this research, we are particularly interested in applying text mining techniques to audio materials, after transcribing them into text, in order to detect the speakers' emotions. We describe our overall methodology and present our experimental results, focusing in particular on the different feature selection and classification methods used. Our results lead to interesting conclusions that open up new horizons in the field and suggest promising directions for future work.

  • TIGER: Querying large tables through criteria extension

    Page(s): 111 - 118

    Sales on the Internet have increased significantly during the last decade, so it is crucial for companies to retain customers on their websites. Among the strategies toward this goal, providing customers with a flexible search tool is a crucial issue. In this paper, we propose an approach, called TIGER, for handling such flexibility automatically. More precisely, if the search criteria of a given query to a relational table or a Web catalog are too restrictive, our approach computes a new query combining extensions of the criteria. This new query maximizes the quality of the answer while remaining as close as possible to the original query. Experiments show that our approach improves the quality of queries in this sense.

  • Toward ontology-based personalization of a recommender system in social network

    Page(s): 119 - 122

    Personalized search, navigation, and content delivery techniques have attracted interest in recommender systems as a means to decrease search ambiguity and return the results most relevant to a particular user's preferences. In this paper, we study the effect of incorporating a semantic user profile, derived from the user's past behavior and preferences, on the accuracy of a recommender system. We present preliminary work that aims at tackling the main technical issues raised by integrating an ontology-based semantic user profile into a hybrid recommender system based on our earlier guided recommender algorithm. A semantic user profile context is represented as an instance of a reference domain ontology in which concepts are annotated with interest scores.

  • Background subtraction for object detection under varying environments

    Page(s): 123 - 126

    Background subtraction is widely used for extracting the unusual motion of objects of interest in video images. In this paper, we propose a fast and flexible approach to object detection based on an adaptive background subtraction technique that also effectively eliminates shadows, using the color constancy principle in RGB color space. The approach can be used in both outdoor and indoor environments. Our background subtraction method uses a multiple-thresholding technique to detect objects of interest in any given scene. Once a moving object has been detected against the complex background, shadows are detected and eliminated by considering some environmental parameters.

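The adaptive background model at the heart of such an approach can be sketched as a per-pixel running average with a deviation threshold. This is a simplified sketch under assumptions: grey-level pixels and a single threshold here, whereas the paper works in RGB with multiple thresholds and adds shadow elimination:

```python
def update_background(bg, frame, alpha=0.05):
    """Adaptive background model: per-pixel exponential running average.
    alpha controls how quickly the model absorbs scene changes."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30):
    """Mark pixels whose deviation from the background exceeds the threshold."""
    return [abs(f - b) > threshold for f, b in zip(frame, bg)]

bg = [100.0, 100.0, 100.0, 100.0]    # grey-level background (flattened pixels)
frame = [102.0, 98.0, 180.0, 100.0]  # a bright object enters at pixel 2
print(foreground_mask(bg, frame))    # -> [False, False, True, False]
bg = update_background(bg, frame)    # slowly fold the new frame into the model
```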
  • Peculiar image search by Web-extracted appearance descriptions

    Page(s): 127 - 132

    Conventional keyword-based Web image search engines can now return plenty of acceptable images of a target object given just its name. However, because the search results rarely include uncommon images of the object, we often obtain only its common images and cannot easily gain exhaustive knowledge about its appearance (look and feel). As a next step for Web image search, it is very important to discriminate between “typical images” and “peculiar images” among the acceptable images and, moreover, to collect as many different kinds of peculiar images as possible. This paper proposes a novel method to precisely retrieve peculiar images of a target object using its typical/peculiar appearance descriptions (e.g., color names) extracted from the Web and/or its typical/peculiar image features (e.g., color features) converted from them, as a solution to this first next step of Web image retrieval.

  • A discrete particle swarm optimization with random selection solution for the shortest path problem

    Page(s): 133 - 138

    This article proposes a discrete particle swarm optimization (DPSO) algorithm for solving the shortest path problem (SPP). The proposed DPSO adopts a new solution mapping that incorporates graph decomposition and random selection of priority values; the purpose of this mapping is to reduce the particles' search space, leading to better solutions. Detailed descriptions of the new solution mapping and the DPSO algorithm are given. Computational experiments involve an SPP dataset from previous research and a road network from Malaysia. The DPSO is compared with a genetic algorithm (GA) using both the naive and the new solution mapping. The results indicate that the proposed DPSO is highly competitive and performs well in both fitness value and processing time.

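One common way priority values map a particle to a path — repeatedly moving from the current node to the unvisited neighbour with the highest priority — is sketched below. This is only the basic priority-based encoding; the paper's mapping adds graph decomposition and random selection on top of such an idea, and the small graph and priorities are hypothetical:

```python
def decode_path(graph, priorities, src, dst):
    """Decode a particle's node priorities into a path: from the current
    node, always move to the unvisited neighbour with the highest priority."""
    path, node, visited = [src], src, {src}
    while node != dst:
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:
            return None  # dead end: this particle encodes an infeasible path
        node = max(candidates, key=lambda n: priorities[n])
        visited.add(node)
        path.append(node)
    return path

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
priorities = {0: 0.1, 1: 0.9, 2: 0.4, 3: 0.8}
print(decode_path(graph, priorities, 0, 3))  # -> [0, 1, 3]
```

The swarm then optimises the priority vectors, while the decoder turns each candidate vector into a concrete path whose length serves as the fitness.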