
2012 IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA)

Date: 2-4 July 2012


Displaying Results 1 - 25 of 29
  • [Front cover]

    Page(s): c1
    Freely Available from IEEE
  • [Copyright notice]

    Page(s): ii
    Freely Available from IEEE
  • Table of contents

    Page(s): iii - vi
    Freely Available from IEEE
  • [Front matter]

    Page(s): vii - ix
    Freely Available from IEEE
  • The partitioned kernel machine algorithm for online learning

    Page(s): 1 - 6

Kernel machines have been successfully applied to many engineering problems requiring pattern recognition and regression. They form a family of machine learning algorithms that includes support vector machines (SVM) [1], the kernel least mean squares (KLMS) adaptive filter [2], and the kernel recursive least squares (KRLS) adaptive filter [3], to name a few. In this paper we present the partitioned kernel machine (PKM) algorithm for online learning in virtual environments. The PKM algorithm enhances the accuracy of the computationally efficient KLMS algorithm. It is an iterative update procedure that focuses on a subset of the stored vectors in the kernel machine buffer, using a similarity measure to select vectors so that more common vectors are updated more frequently and outlier vectors less frequently. We validate the increased accuracy of our novel algorithm in two separate experimental settings.

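The KLMS baseline that PKM refines keeps a growing buffer of centers and appends each new sample with an LMS-style coefficient. A minimal sketch follows; the kernel width, step size, and toy sine-tracking task are our own illustrative choices, and the PKM partition-selection step itself is not reproduced here:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel between two sample vectors
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Kernel least mean squares filter, the baseline PKM builds on."""
    def __init__(self, eta=0.5, sigma=0.5):
        self.eta, self.sigma = eta, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        # Kernel expansion over all stored centers
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        # Append the new sample with a coefficient proportional to the error
        e = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.eta * e)
        return e

# Learn d = sin(x) online from a stream of noise-free samples
rng = np.random.default_rng(0)
filt = KLMS()
errs = [abs(filt.update(np.array([x]), np.sin(x)))
        for x in rng.uniform(-3.0, 3.0, 200)]
early, late = float(np.mean(errs[:20])), float(np.mean(errs[-20:]))
print(early, late)  # the late error should be the smaller of the two
```

PKM would additionally revisit and re-update a similarity-selected partition of `centers` rather than touching each stored vector only once.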
  • Structural damage detection using artificial neural networks and wavelet transform

    Page(s): 7 - 11

With the ever-increasing demand for the safety and functionality of civil infrastructure, structural health monitoring (SHM) has become more and more important. Recent developments in computational intelligence and digital signal processing offer great potential for developing more efficient, reliable, and robust structural damage identification systems. In this paper, the application of artificial neural networks and wavelet analysis is investigated to develop an intelligent and adaptive structural damage detection system. The proposed approach is tested on an IASC (International Association for Structural Control)-ASCE (American Society of Civil Engineers) SHM benchmark problem, and satisfactory computer simulation results are obtained.

  • Holonic granularity in intelligent data analysis: A case study implementation

    Page(s): 12 - 17

A granule is any atomic element that is not distinguishable from its peers by manifest features, but only by the fact that it represents a singleton (possibly overarching a subset of elements) among other singletons. The importance of granules in Computational Intelligence (CI) is attested by the recent development of Granular Computing (GrC), whose aim is to provide computational methodologies and tools to properly handle information processing at different granularity levels. One important aspect, sometimes dismissed by mainstream research in GrC, is the way interpretations are hidden in observational data at multiple granule scales. It is often the case, in fact, that patterns showing coarse statistical evidence at a given observation level have a number of well-defined rules of interpretation at a finer granule level. Currently available CI tools seem to fall short on this point. This work reports on the experience gained in developing a CI tool for data analysis named H-GIS (Holonic-Granularity Inference System). The tool is specifically conceived to focus on measurement data interpretation at multiple granularity scales by employing the modeling framework of so-called holonic systems.

  • Probabilistic Differential Evolution for optimal design of LQR weighting matrices

    Page(s): 18 - 23

Differential Evolution algorithms have recently come into play because they provide a good tradeoff between solution quality and the computational effort required to determine a satisfactory approximation of the optimal solution. In this paper, an optimal design of LQR weighting matrices using a Probabilistic Differential Evolution algorithm for a constrained state feedback problem is presented. The weighting matrices are designed according to the desired performance set by the designer. The method proves efficient on the proposed problem and can be considered competitive with methods derived from a complicated mathematical formulation. Simulation results for an aircraft landing system show the effectiveness of the proposed method.

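The abstract names a Probabilistic Differential Evolution variant; the classic DE/rand/1/bin loop it builds on can be sketched as follows. The placeholder quadratic stands in for the paper's LQR performance criterion, and the population size, F, and CR values are our assumptions:

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=150, seed=0):
    """Classic DE/rand/1/bin; each LQR weighting entry would be one
    decision variable (the probabilistic variant is not reproduced)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct population members other than i
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # differential mutation
            cross = rng.random(dim) < CR               # binomial crossover
            cross[rng.integers(dim)] = True            # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Placeholder objective: minimize a 4-variable quadratic
x_best, f_best = differential_evolution(lambda v: float(np.sum(v ** 2)),
                                        np.array([[-5.0, 5.0]] * 4))
print(f_best)
```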
  • A genetic simulated annealing hybrid algorithm for relay nodes deployment optimization in industrial wireless sensor networks

    Page(s): 24 - 28

With the development of wireless sensor networks, low-cost and high-reliability industrial wireless sensor networks have become feasible. Industrial wireless sensor networks should be designed to withstand the failure of some nodes in harsh environments. In order to minimize the installation cost of relay nodes in fault-tolerant hierarchical network planning, we propose a genetic simulated annealing hybrid algorithm. The algorithm determines the number of relay nodes, along with their locations, so that each sensor node is covered by at least two relay nodes and the network of relay nodes is 2-connected. The result produced by the presented algorithm within a limited number of iterations is reasonable, and the algorithm leads to improvements over a genetic algorithm and an integer linear program (ILP).

  • Weight estimation from frame sequences using computational intelligence techniques

    Page(s): 29 - 34

Soft biometric techniques can perform fast and unobtrusive identification within a limited set of users, be used as a preliminary screening filter, or be combined to increase the recognition accuracy of biometric systems. Weight is a soft biometric trait that offers a good compromise between distinctiveness and permanence, and is frequently used in forensic applications. However, traditional weight measurement techniques are time-consuming and have low user acceptability. In this paper, we propose a method for contactless, low-cost, unobtrusive, and unconstrained weight estimation from frame sequences depicting a walking person. The method uses image processing techniques to extract a set of features from a pair of frame sequences captured by two cameras. The features are then processed using a computational intelligence approach to learn the relations between the extracted characteristics and the weight of the person. We tested the proposed method using frame sequences covering eight different walking directions, captured in uncontrolled light conditions. The results show that the proposed method is feasible and can achieve view-independent weight estimation without the need to compute a complex model of the body parts.

  • Modelling of survival curves in food microbiology using adaptive fuzzy inference neural networks

    Page(s): 35 - 40

The development of accurate models to describe and predict the pressure inactivation kinetics of microorganisms is very beneficial to the food industry for the optimization of process conditions. The need for “intelligent” methods to model highly nonlinear systems is long established. The architecture and learning scheme of a novel fuzzy logic system implemented in the framework of a neural network are proposed. The objective of this research is to investigate the capability of the proposed scheme to predict the survival curves of Listeria monocytogenes inactivated by high hydrostatic pressure in UHT whole milk. The network constructs its initial rules by clustering, while the final fuzzy rule base is determined by competitive learning. The performance of the proposed scheme is compared against the neural network and partial least squares models usually used in food microbiology.

  • Facial emotional expressions recognition based on Active Shape Model and Radial Basis Function Network

    Page(s): 41 - 46

Facial emotional expression recognition (FEER) is an important research field in affective computing, studying how human beings react to their environments. With the rapid development of multimedia technology, especially image processing, researchers in facial emotional expression recognition have achieved many useful results. To recognize a human's emotion from a facial image, features of the facial image must be extracted. The Active Shape Model (ASM) is one of the most popular methods for facial feature extraction. The accuracy of ASM depends on several factors, such as brightness, image sharpness, and noise; to obtain better results, ASM is combined with a Gaussian pyramid. In this paper we propose a facial emotional expression recognition method based on ASM and a Radial Basis Function Network (RBFN). First, facial features are extracted to obtain emotional information from the facial region, applying the ASM method to the reconstructed facial shape. The second stage classifies the facial emotional expressions from this emotional information: after several iterations, the model matched to the facial feature outline is used to recognize the facial emotional expression with the RBFN. The experimental results from the RBFN classifiers show a recognition accuracy of 90.73% for facial emotional expressions using the proposed method.

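An RBFN classifier of the kind used in the final stage can be sketched with least-squares output weights. The toy two-cluster data, the random choice of centers, and the kernel width below are illustrative stand-ins, not the paper's setup (their inputs would be ASM shape coordinates):

```python
import numpy as np

def rbf_features(X, centers, sigma):
    # Gaussian radial basis activation for each (sample, center) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def train_rbfn(X, Y, centers, sigma, ridge=1e-6):
    """Solve for RBFN output weights by ridge-regularized least squares."""
    Phi = rbf_features(X, centers, sigma)
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]),
                           Phi.T @ Y)

# Toy two-class problem standing in for expression classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
Y = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])
centers = X[rng.choice(100, 10, replace=False)]  # centers picked from data
W = train_rbfn(X, Y, centers, sigma=1.0)
pred = rbf_features(X, centers, 1.0) @ W
acc = float(np.mean(pred.argmax(1) == Y.argmax(1)))
print(acc)
```

In practice the centers would come from clustering the training shapes rather than random selection.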
Analysis of how the choice of Machine Learning algorithms affects the prediction of a clinical outcome prior to minimally invasive treatments for Benign Prostatic Hyperplasia (BPH)

    Page(s): 47 - 52

Benign Prostatic Hyperplasia (BPH) is estimated to affect 50% of men by the age of 50, and 75% by the age of 80. Predicting a clinical outcome prior to minimally invasive treatments for BPH would be very useful, but has not been reliable in spite of multiple assessment parameters such as symptom indices and flow rates. In our prior work we showed the impact that feature selection has on the prediction of BPH clinical outcomes. In this work we take an in-depth look at how changes to the artificial intelligence and machine learning methods affect how well the process predicts the outcomes of the patients in the testing group. The effect of which classifier is used to predict BPH surgical outcomes is investigated, to see whether certain classifiers perform better on the data. The effect of which metric is selected for analyzing the performance of the classifier prediction is also observed, as is the effect of which features, and how many, are selected to train and predict on the data. Finally, the effect of using the original, unchanged data versus a discretized version of the data is observed. The objective of this paper is to determine which of the above-mentioned factors affect the outcome of the predictive models, so that the best factor can be selected in each case and the best predictive method of BPH outcomes for this data can be determined. In particular, the data is analyzed to determine whether some of these factors have a larger effect on the outcome than others. Through experimental results we show which factors are found to have no real influence on clinical outcome prediction, and how in some other cases there are a few equally good choices. Four machine learning algorithms, namely Decision Tree, Naïve Bayes, LDA, and AdaBoost, are selected and used in the comparison. For comparing prediction performance we use the Area Under the Curve (AUC), Accuracy (ACC), and the Matthews Correlation Coefficient (MCC). Both internal cross-validation and external validation are used to analyze the performance and results of the predictive models considered.

  • Modified corticomuscular coherence measurement and computation under static force output of human-machine interaction

    Page(s): 53 - 57

Beta-range electroencephalogram (EEG)-EMG coupling has been extensively investigated under different force output tasks of human-machine interaction over the past decades. By applying corticomuscular coherence (CMC), beta-range (15-30 Hz) coherence has been well investigated under static as well as dynamic force conditions. However, the traditional CMC method restricts the two different signals to the same frequency band, so a large portion of useful frequency information may be lost. The present study addresses this problem by applying a modified CMC. The experimental results of 4 static force outputs with 8 subjects showed that with the traditional CMC, as the force output increased, the dominant peak of EEG-EMG coherence spread from the alpha and beta bands to the gamma (30-45 Hz) band, while with the modified method, the highest EEG-EMG coherence value remained focused in the beta band and a notable increasing tendency of coherence was observed in the gamma band as the force value increased. With the modified method, more information is obtained under static force conditions instead of modulated force output, so the method may find a promising application in neurophysiological studies of motor control during human-machine interaction.

  • A novel probabilistic fuzzy set for uncertainties-based integration inference

    Page(s): 58 - 62

The probabilistic fuzzy set and the related probabilistic fuzzy logic system are designed for handling uncertainties with both stochastic and fuzzy features. In this paper, after a review of the probabilistic fuzzy logic system, a novel probabilistic fuzzy set is proposed. It models random variation of the center of the triangular membership function. The work presented will broaden the potential applications of probabilistic fuzzy sets.

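One reading of "random variation from center in the triangular membership function" is a Gaussian-distributed center. The Monte Carlo sketch below illustrates that interpretation only; the paper's exact formulation is not reproduced, and the spread parameters are our assumptions:

```python
import numpy as np

def triangular(x, center, half_width):
    # Standard symmetric triangular membership function
    return max(0.0, 1.0 - abs(x - center) / half_width)

def probabilistic_membership(x, center, half_width, center_std,
                             n_samples=10000, seed=0):
    """Expected membership when the center is N(center, center_std^2):
    a Monte Carlo stand-in for a probabilistic fuzzy set evaluation."""
    rng = np.random.default_rng(seed)
    centers = rng.normal(center, center_std, n_samples)
    return float(np.mean([triangular(x, c, half_width) for c in centers]))

crisp = triangular(0.0, 0.0, 1.0)                      # 1.0 at the apex
blurred = probabilistic_membership(0.0, 0.0, 1.0, 0.3)
print(crisp, round(blurred, 3))
```

Jittering the center lowers the expected membership at the apex and smooths it elsewhere, which is one way a stochastic feature enters an otherwise fuzzy description.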
  • An Interconnected Dynamical System composed of dynamics-based Reinforcement Learning agents in a distributed environment: A case study

    Page(s): 63 - 68

This paper presents a case study of an Interconnected Dynamical System (IDS) composed of intelligent Reinforcement Learning (RL) agents and characterized by a hybrid P2P/master-slave architecture. In particular, we extend our previously proposed non-dynamics-based RL work to make it an IDS. Furthermore, we study how the addition of motion constraints, knowledge sharing between agents, and distributed computing affects the overall performance of the system. In addition, we introduce a new dynamics-based reward mechanism for reinforcement learning agents.

  • Human gait recognition based on hybrid-dimensional features from infrared motion images

    Page(s): 69 - 72

Gait recognition, also called gait-based human identification, is a relatively new research direction in biometrics. It aims to discriminate individuals by the way they walk. This paper describes a human recognition algorithm that combines three-dimensional and two-dimensional features of infrared gait. The similarity between the human models and the image was measured using a pose evaluation function that includes boundary and region characteristics. A hierarchical search strategy was used to extract the lower-body joint angles, and the peak values of the Radon transform of the 2D human silhouettes were also obtained. Finally, we carry out human infrared gait recognition based on an SVM using the hybrid-dimensional features. Multiple-feature fusion is also performed at the feature level, and the recognition results demonstrate that the performance of multiple features is better than that of any single feature.

  • Combining concepts of inertia weights and constriction factors in particle swarm optimization

    Page(s): 73 - 76

A particle swarm optimization algorithm with a global star topology, designed by combining the concepts of the inertia weight and the constriction factor, is proposed in this paper. A linearly decreasing inertia weight enhances the global search ability at the beginning while slowing the local search when the particles are near a local minimum. Applying the constriction factor with a value of 0.729 scales down the velocity step sizes, so that the particles can move without large overshoots at the beginning and smoothly approach the goals with a series of small steps when they are near the area of the optimal solution toward the end of the iterations. For quick convergence, the global star topology is chosen in this algorithm. Simulations performed on 2 well-known benchmark functions over 50 runs indicate that the proposed algorithm, with a population size of only 20 particles, can achieve the goals quickly and accurately.

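The velocity rule described, a linearly decreasing inertia weight applied inside the constriction factor χ = 0.729, can be sketched as follows. The acceleration coefficients c1 = c2 = 2.05, the weight range 0.9 → 0.4, and the sphere test function are our assumptions, not values stated in the abstract:

```python
import numpy as np

def pso(cost, dim=2, n_particles=20, iters=200, seed=0):
    """Global-star PSO combining a linearly decreasing inertia weight
    with the constriction factor chi = 0.729 on the whole velocity."""
    rng = np.random.default_rng(seed)
    chi, c1, c2 = 0.729, 2.05, 2.05
    w_max, w_min = 0.9, 0.4
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()          # global (star) best
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)  # linear decrease
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = x + v
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

best, f_best = pso(lambda p: float(np.sum(p ** 2)))
print(f_best)
```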
  • Application of wavelet transform and principal component analysis in mineral oil's 3D fluorescence spectra compression

    Page(s): 77 - 81

Wavelet transform combined with principal component analysis (WT-PCA) is designed and applied to the compression of 3D fluorescence spectra of mineral oil. In the first stage, WT is used to improve the quality of the fluorescence information. Through extensive experiments, it was found that the wavelet basis function db3 does well at eliminating spectral noise and irrelevant redundancy in 3D fluorescence spectra. The compressed score (CS) and the recovery score (RS) are used to evaluate the noise-inhibiting effect of WT. In the second stage, PCA is used for data compression, with the data compression ratio and the root mean square error (RMSE) as compression criteria. The WT-PCA method was applied to 10 kinds of spectra; CS and RS are above 90%. At the same cumulative variance (98%), the compression ratio is improved by 1.25~2.33 times compared to PCA alone, and the RMSE is less than 3.8%. The main characteristic peaks in the reconstructed and original spectra are almost the same, and their correlation coefficients are higher than 0.9, a high degree of linear correlation considering the noise and redundancy eliminated. The method thus achieves a good compression effect: pre-filtering irrelevant information by WT ensures that PCA works better, with correct and reliable results.

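The PCA-stage criteria, compression ratio and RMSE at a 98% cumulative-variance cutoff, can be computed as below. The synthetic low-rank matrix stands in for the unfolded 3D fluorescence spectra, and the WT pre-filtering stage is omitted:

```python
import numpy as np

def pca_compress(X, var_kept=0.98):
    """Keep enough principal components for the target cumulative
    variance, then report the compression ratio and reconstruction RMSE."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(var), var_kept) + 1)
    scores = Xc @ Vt[:k].T                 # compressed representation
    X_rec = scores @ Vt[:k] + mu           # reconstruction
    rmse = float(np.sqrt(np.mean((X - X_rec) ** 2)))
    # Stored values: scores + retained loadings + mean vector
    ratio = X.size / (scores.size + Vt[:k].size + mu.size)
    return k, float(ratio), rmse

# Synthetic stand-in: near-rank-3 data plus a little noise
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 60))
X = base + 0.01 * rng.normal(size=base.shape)
k, ratio, rmse = pca_compress(X)
print(k, round(ratio, 2), rmse)
```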
  • Improving the optimization performance of NSGA-II algorithm by experiment design methods

    Page(s): 82 - 85

NSGA-II is an effective multi-objective optimization algorithm, and how to further improve its optimization performance is an interesting but difficult problem. The Orthogonal Array (OA) method and the Taguchi method are two important experimental design methods. In this paper, the classical genetic operators in NSGA-II are replaced by these experimental design methods to generate new individuals. This results in two hybrid NSGA-II algorithms, whose optimization ability is confirmed by experiments on typical multi-objective test functions; the algorithm combined with the Taguchi method is better than the one with OA, although the computational complexity of the former is a little higher. In fact, the only differences between NSGA-II and the two hybrid algorithms are the steps that generate new individuals; the hybrid algorithms do not change any other operations of NSGA-II, which makes them easy to implement.

Multi-objective controller adjustment using the Ants System metaheuristic for nonlinear systems described by TS fuzzy models

    Page(s): 86 - 90

Artificial intelligence has attracted more and more attention in recent years. In this paper, we exploit the Ants System (AS) metaheuristic to adjust the parameters of a controller for nonlinear systems described by Takagi-Sugeno (TS) fuzzy models. The controller parameters are adjusted based on the desired performance set by the designer. Moreover, the Ants System metaheuristic is exploited to identify Pareto optimal solutions and find a suitable trade-off between the performance criteria. An application example is presented to evaluate the competence of the proposed method.

  • Gamelan music onset detection using Elman Network

    Page(s): 91 - 96

Gamelan, one of Indonesia's traditional musical instruments, generates signals that vary in fundamental frequency, amplitude, and signal envelope, due to its handmade construction and playing style. Therefore onset detection, which is crucial for gamelan music analysis, suffers several shortcomings when using spectral and temporal features. This paper investigates a machine learning approach to capturing the statistical variations in gamelan signals that are relevant to onsets. The method uses an Elman network with one hidden layer. The input units come from the power spectrogram of the signals and its positive first-order difference, with context units fed by the output of each hidden unit one step back in time. The spectrogram was built using the short-time Fourier transform and converted to the log of the Mel scale. A fixed threshold was used to select among the local peaks, and the result is treated as a binary classification of the signal at each time instant. The network was trained on a set of gamelan signals consisting of synthetic and real recordings of single-instrument playing. The performance reached an F-measure of 93%.

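The hand-crafted pipeline the network improves on, a positive first-order spectrogram difference followed by fixed-threshold local-peak picking, can be sketched as follows. Frame size, hop, and threshold are illustrative choices; the Mel conversion and the Elman network itself are omitted:

```python
import numpy as np

def onset_peaks(signal, frame=256, hop=128, threshold=2.0):
    """Spectral-flux style onset cue: sum the positive first-order
    difference of a magnitude spectrogram, then pick local peaks
    above a fixed threshold."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.maximum(np.diff(spec, axis=0), 0).sum(axis=1)  # positive diff
    return [i for i in range(1, len(flux) - 1)
            if flux[i] > flux[i - 1] and flux[i] >= flux[i + 1]
            and flux[i] > threshold]

# Two isolated clicks in silence should yield two detected onsets
sig = np.zeros(8000)
sig[2000], sig[6000] = 1.0, 1.0
peaks = onset_peaks(sig)
print(len(peaks))
```

A learned classifier replaces the fixed `threshold` stage, which is exactly where gamelan's envelope variability defeats hand-set rules.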
  • Performance evaluation of Particle Swarm Optimization and Solid Isotropic Material with Penalization in topology optimization

    Page(s): 97 - 101

This paper presents a comparison between the Solid Isotropic Material with Penalization (SIMP) approach and the Particle Swarm Optimization (PSO) method in continuous structural topology optimization. SIMP is a mature gradient-based algorithm with stable performance and fast convergence in various topology optimization applications. PSO is a relatively new evolutionary algorithm inspired by the behavior of bird flocking, and its applications to topology optimization have only been reported in recent years. In this paper, the mechanisms of these two algorithms are introduced first, and their performance in continuous topology optimization is compared through an example. From these comparisons, directions for improving PSO in topology optimization are outlined.

  • Fractional power NARX model identification using a harmony search algorithm

    Page(s): 102 - 107

A novel type of discrete-time fractional-power nonlinear autoregressive with exogenous input (FPNARX) model is introduced for system identification, modeling, and prediction. Parameter estimation for such a model is a nonlinear optimization problem, and a harmony search algorithm is applied to fit these fractional models. Examples with both simulated and real data are provided.

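The harmony search loop used for parameter estimation can be sketched as below, with a placeholder quadratic objective in place of the FPNARX fitting error; the HMCR, PAR, and bandwidth values are our assumptions:

```python
import numpy as np

def harmony_search(cost, lo, hi, dim=3, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=3000, seed=0):
    """Core harmony search: improvise a new harmony from memory,
    pitch-adjust it, and replace the worst stored harmony if better."""
    rng = np.random.default_rng(seed)
    hm = rng.uniform(lo, hi, (hms, dim))          # harmony memory
    fit = np.array([cost(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:               # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:            # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1) * (hi - lo)
            else:                                 # random re-initialization
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        f_new = cost(new)
        worst = int(np.argmax(fit))
        if f_new < fit[worst]:                    # replace worst harmony
            hm[worst], fit[worst] = new, f_new
    best = int(np.argmin(fit))
    return hm[best], float(fit[best])

# Placeholder objective standing in for the FPNARX prediction error
x_best, f_best = harmony_search(lambda v: float(np.sum((v - 0.5) ** 2)),
                                -2.0, 2.0)
print(f_best)
```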
  • Measuring intelligent false alarm reduction using an ROC curve-based approach in network intrusion detection

    Page(s): 108 - 113

Currently, network intrusion detection systems (NIDSs) are widely deployed in various network environments to defend against network attacks. However, these systems can generate a large number of alarms, especially false alarms, during detection, which greatly decreases their effectiveness and efficiency. To mitigate this issue, we have developed an intelligent false alarm filter that filters out false alarms by periodically selecting, from an algorithm pool, the machine learning algorithm with the best performance. To evaluate the best single-algorithm performance among several machine learning schemes, we used two measures (classification accuracy and precision on false alarms) to determine the best algorithm. In this paper, we mainly study the application of an ROC curve-based approach with cost analysis in our intelligent filter to further improve decision quality. The experimental results show that by incorporating our ROC curve-based measure, namely relative expected cost, the developed filter can achieve a better outcome in terms of cost.

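One common formulation of relative expected cost, in the style of Gaffney and Ulvila's ROC cost analysis, normalizes the expected misclassification cost of an operating point by the cheaper of the two trivial policies (flag everything / flag nothing). Whether this matches the paper's exact definition is an assumption on our part, and the cost ratios and rates below are purely illustrative:

```python
def relative_expected_cost(tpr, fpr, p_pos, c_fn=1.0, c_fp=1.0):
    """Expected cost of an ROC operating point divided by the cost of
    the best trivial policy; values below 1 beat both trivial policies."""
    expected = p_pos * (1 - tpr) * c_fn + (1 - p_pos) * fpr * c_fp
    trivial = min(p_pos * c_fn, (1 - p_pos) * c_fp)  # best do-nothing policy
    return expected / trivial

# Compare two hypothetical filters when 20% of alarms are true
good = relative_expected_cost(0.95, 0.10, 0.20)
bad = relative_expected_cost(0.60, 0.40, 0.20)
print(round(good, 3), round(bad, 3))  # prints 0.45 2.0
```

Under this measure the first filter is worth running (cost 0.45 of the trivial baseline) while the second is worse than doing nothing, which is the kind of comparison the filter's periodic algorithm selection relies on.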